Article

Subjective Intelligence: A Framework for Generative AI in STEM Education

by Greses Pérez 1,2,3,*, Trevion Henderson 2,3, Takeshia Pierre 3, G. R. Marvez 3, Alejandra Vasquez 3, Philippa Eshun 1 and Ymbar Polanco Pino 1
1 Department of Civil and Environmental Engineering, Tufts University, Medford, MA 02155, USA
2 Department of Mechanical Engineering, Tufts University, Medford, MA 02155, USA
3 Department of Education, Tufts University, Medford, MA 02155, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(12), 1571; https://doi.org/10.3390/educsci15121571
Submission received: 2 July 2025 / Revised: 30 September 2025 / Accepted: 4 November 2025 / Published: 21 November 2025
(This article belongs to the Special Issue Generative AI in Education: Current Trends and Future Directions)

Abstract

Generative artificial intelligence (GenAI) is increasingly transforming science and engineering education through prompt-based interactions. While it promises to transform how students learn engineering, GenAI’s increasing presence raises concerns about misinformation, bias, academic integrity, and inequity in learning environments, especially in the absence of clear guidelines for fair and appropriate access and use. This position paper advances a conceptual framework for the use of GenAI in science and engineering through the lens of students’ identities and subjectivities, which we call subjective intelligence, including students’ varied linguistic resources as well as gender and cultural identities. Our subjective intelligence framework investigates the emerging role of GenAI in shaping socio-academic engagement and pedagogical practices in STEM higher education contexts while examining its implications for equity and ethics. Our work draws on our first-hand experiences from an engineering undergraduate course, a graduate STEM seminar, and an engineering design task to illustrate how this framework can foster innovative STEM education. The framework comprises three core tenets: (1) cognitive and moral development towards ethical engagement in data practices, (2) identification and interrogation of potential human biases, and (3) multilingual/multidialectal support for design considerations. Across cases, the framework enables inclusive and reflective teaching strategies, while also surfacing new tensions and possibilities around GenAI’s limitations and misuses.

1. Introduction

The integration of generative artificial intelligence (GenAI) into science and engineering education presents new opportunities for individualized learning and challenges for learning as a social practice. As GenAI tools become more accessible, scholars have highlighted the opportunities these emerging technologies offer for improving efficiency in tasks, aiding comprehension, and supporting self-paced learning, while also raising concerns about overreliance on technology that may diminish critical thinking and limit human interaction in the learning process (Chan & Hu, 2023). These debates around GenAI’s potential benefits and limitations are particularly pressing in STEM disciplines, where students are expected to engage in sociotechnical and socioscientific competencies. For example, some scholars have described the capacity of GenAI to improve STEM instruction through adaptive and personalized learning experiences that respond to individual student needs, while also providing an environment for students to learn how to receive and give feedback (Corbin et al., 2025; Guettala et al., 2024), suggesting that GenAI has the potential to support identity-affirming and equity-oriented pedagogies. These scholars explored GenAI tools for their role in shaping new forms of engagement and learning in STEM education. The scholarship on GenAI for learning informs the re-thinking of learning environments with a focus on the cognitive and cultural resources of students and communities, suggesting an important shift in how researchers and practitioners conceptualize engineering and science learning (Bura & Myakala, 2024).
While the promises of GenAI could be transformative for STEM teaching and learning, GenAI also raises complex questions about bias, academic integrity, and inequity in learning environments. For instance, Kozlowski et al. (2022) found that name inference algorithms under-identify Black authors by 19 percent and over-identify White authors by 16 percent in bibliometric datasets, suggesting that AI tools can reproduce systemic demographic biases in scholarly metrics. The findings reveal how students relying on AI-supported literature reviews may inherit skewed notions of who contributes to scientific and technical knowledge (Bender et al., 2021). Similar biases are present in AI-generated instructional materials, for which Chinta et al. (2024) have discussed the development of algorithmic fairness frameworks to identify and address systemic bias while also joining other scholars in surfacing new challenges around misinformation, biases, and social inequalities (Al-kfairy et al., 2024). Some scholars argue that GenAI could democratize access to research and level the playing field, which might broaden participation in STEM; however, without careful guardrails, GenAI risks reinforcing existing power imbalances (Bender et al., 2021).
As a technology developed by scientists and engineers with their own cognitive biases, GenAI is not a neutral technology but rather an artifact that translates and expedites the biases of developers. Its development is shaped by large-scale training data and computational models, reflecting dominant cultural and linguistic norms (Ji et al., 2023; Qadir, 2023), often marginalizing the lived experiences and language practices of multilingual, multidialectal, historically underserved, and underrepresented communities. This marginalization can lead to lower-quality outputs and less effective feedback for students from these groups, reinforcing patterns of exclusion in STEM education (Nyaaba & Zhai, 2024). However, the exclusion of communities in the design of these technologies is only part of the problem. Ignoring or trying to remove GenAI tools from engineering and science learning environments poses other challenges and would be short-sighted given the prevalence of GenAI tools in our everyday life. Students are increasingly using GenAI tools to support writing, problem-solving, and idea generation in STEM (Ravšelj et al., 2025). Blocking their use or ignoring the limited representation in the design of these technologies would only hamper the development of essential digital literacies and leave educators ill-equipped to guide students in using GenAI effectively in STEM education (El Fathi et al., 2025). Addressing GenAI’s challenges from a student- and community-centered perspective (e.g., mitigating algorithmic bias, safeguarding academic integrity, and closing digital access gaps) allows researchers and practitioners to develop equitable, innovative learning experiences that center the identities of students, their communities (e.g., multilingual or historically marginalized groups), and the GenAI systems themselves in STEM education.
In particular, the digital access gap within GenAI highlights a need to consider the students’ academic, social, and cultural environments. Recent studies have shown that GenAI compounds existing differences in cultural and technological familiarity, widening an emerging “AI–literacy divide” for students who identify as first-generation and minoritized (Shoval, 2025). Yet, much of the current scholarship centers on surface-level questions of academic use, often overlooking how GenAI interacts with students’ cultural identities, values, and lived experiences (Kangwa et al., 2025). Instead, studies typically cluster around three reductive themes. The first is academic integrity, where systematic reviews document both rising misconduct risks and the development of detection tools (Bittle & El-Gayar, 2025). Second is the idea of efficiency, with meta-analyses showing that ChatGPT can streamline grading, feedback, and boost academic performance in STEM courses (Wang & Fan, 2025). Last, studies focus on access, considering students’ lived realities only broadly, with attention to unequal subscription costs and bandwidth gaps that restrict who can use advanced models, potentially reinforcing digital inequities (Shoval, 2025). However, the question of how students’ identities—the personal and socially constructed understandings of who they are and how they are perceived (Avraamidou, 2019; Luyckx et al., 2011)—influence their perceptions of and engagement with GenAI tools remains underexplored. Understanding how students take up or resist GenAI must be grounded in their cultural, social, and academic contexts, where identity actively shapes access, use, and interpretation within STEM education.
While these studies point to underexplored dimensions of identity and GenAI, our teaching and collaborative research further illustrate how these dynamics unfold in real-world STEM learning environments. We have observed students expressing identity-informed concerns about fairness, risk, and the broader societal implications of GenAI. Others have also treated GenAI as a creative sandbox with a focus on outputs and limited regard for people’s identities. For example, Mai et al. (2024) found that embedding ChatGPT into real-world problem-solving tasks boosted students’ creative outputs and critical thinking in STEM courses. While this highlights GenAI’s potential to enhance engagement, the study pays limited attention to how students’ identities shape their interaction with the tool. Similarly, C. Zhu et al. (2023) recommend integrating structured reflection prompts and “AI transparency” badges into laboratory and design classes to foster ethical engagement and critical evaluation of model outputs. However, their framework does not address how such reflection might differ across students with varied cultural, linguistic, or educational backgrounds. Munaye et al. (2025) take a step further by calling for participatory co-design with underrepresented student groups to develop culturally responsive prompt libraries and shared-authorship policies that formally acknowledge both human and GenAI contributions. Our work builds on this momentum by proposing a more explicit framework––subjective intelligence––that centers students’ identities and subjectivities as core to understanding how GenAI is interpreted, resisted, or embraced in STEM learning environments. This position paper addresses the following central and unanswered research questions raised by the scholarship:
  • What role do people’s identities and subjectivities––including ethical value systems and students’ cognitive and moral development about fair and appropriate use of GenAI––play in the use of GenAI in engineering and science learning?
  • What are the opportunities and limitations of thinking about identity in the use of GenAI in STEM learning environments?
  • How do students’ perceptions of others’ identities (communities or technology) influence GenAI use and its impact on people and communities?
To address these questions, this manuscript proposes a conceptual framework, subjective intelligence, for analyzing and thinking about identity in the use of GenAI for learning in engineering and science education. Identity plays a crucial role in supporting learning with GenAI by shaping how students frame and interpret these technologies in ways that connect to their cultural and linguistic backgrounds (Johri, 2020; Valeri et al., 2025). If we overlook identity in students, communities, or the technology itself, GenAI can reinforce bias and exclusion instead of supporting learning. In developing our conceptual framework, we discuss the role of identity and subjectivity for learning in engineering and science education. This discussion highlights identity as dynamic and relational, attending to how students interpret and navigate learning technologies and artifacts in ways that are shaped by their lived experiences, cultural backgrounds, and positions within broader systems. We then expand on innovations of GenAI use and possibilities of Agentic AI, particularly the ideas behind the design of these systems and the potential pitfalls for learning. We highlight how design choices can guide or limit what students notice and question, and how thoughtful design can help students think critically about bias, ethics, and social impacts. Rather than viewing identity and technology as separate categories, we consider how they intersect and shape one another in educational contexts. To illustrate how GenAI might support or constrain equitable and identity-affirming learning, we then provide subjective examples of learning experiences from an undergraduate engineering course, a graduate STEM seminar, and an engineering design task. At the end, we discuss the contributions of this framework as an entry point for STEM educators, researchers, and practitioners to consider.
The framework invites future inquiry into how emerging technologies can be critically and responsibly integrated into STEM learning environments.

2. Literature Review

Few studies in STEM education connect identity research with AI tools for learning. This section provides an overview of (1) how scholars have engaged with who people are and how others may perceive them in STEM learning environments and (2) how artificial intelligence tools have been used for teaching and learning in higher education. We also examine the ways that students’ identities and subjectivities are embedded in, and connected to, their decisions to adopt AI tools for learning. Finally, we explore the affordances and limitations of GenAI for learning and identity development in STEM.

2.1. Identity Formation

Researchers have defined identity from an individual and interactional perspective. Marcia (1980) described identity as an “internal, self-constructed, dynamic organization of drives, abilities, beliefs, and individual history” (Marcia, 1980, p. 159). Others take an interactional lens by defining identity as a multilayered construct that is influenced by cognitive, moral, cultural, and social factors with varying dimensions (Erikson, 1950). Identity is dynamic and constantly evolves over time as a response to an individual’s personal experiences and social contexts (Schachter, 2005).
Different perspectives have considered identity development as shaped by an individual’s goals and values, community and social relations, and structures (Erikson, 1950). Identity development is a reflective and constructive process where an individual constantly reshapes their values and beliefs, forming them into goal-oriented practices that align with these ideas (Grotevant, 1987). For example, a change in their home environment, such as a divorce, or a major life event, such as a graduation, may cause the individual to re-evaluate their current goals and pursue other possible life paths (Grotevant, 1987). Furthermore, the concept of identity development is described as integrating an individual’s childhood identities (e.g., personal, cultural) with a person’s set of ideals, values, and goals (Erikson, 1950, 1968). An individual’s values and goals, paired with their interactions and their social environment, shape a person’s identity. In essence, assimilation and accommodation occur during the identity development process to evolve the person’s values and beliefs, ultimately forming the person’s identity (Luyckx et al., 2011). Both the individual’s values and goals play an essential role in identity development; however, the process is also strongly influenced by their social environment. For example, one’s values, goals, and commitments may differ across social contexts and environments, leading one to identify differently across contexts. Moreover, how one wishes to communicate their identities may differ across social contexts (e.g., one may call themselves, or be labeled by others as, “Latinx” in one environment and “Colombiano” in another environment).
Other researchers examined community as a factor of identity development, emphasizing that feedback from the individual’s social relations and context is constantly at play (Kerpelman et al., 1997). Within STEM fields, race, ethnicity, and gender have affected students’ sense of belonging and identity within their discipline (Meyers et al., 2012). For example, Avraamidou (2019) highlights a case study where Amina, a White Muslim, aspires to be a physics professor and experiences challenges based on her religion, social class, and gender. Although she is able to enroll in a physics major, she experiences microaggressions ranging from the professor’s negative comments to the institution’s ban on wearing religious attire, all of which contribute to her sacrificing her religious identity in order to develop her science knowledge and identity (Avraamidou, 2019). This case is not only related to her culture but can be viewed as political because of the government’s regulation of religious attire on university campuses. On a broader scale, historically marginalized groups such as Black computer science students tend to display grit and resilience in their disciplinary environment (Mensah & Pierre, 2025) due to the underlying sociopolitical structures present in STEM fields. Indigenous groups, such as the Māori population, may have underlying conflicts with fields such as engineering, where historic colonization and cultural conflict against their group have resulted in fewer students enrolling in these fields. The political agenda within institutions often makes it difficult for students to develop a sense of belonging in a school community that does not include mentors, peers, or teachers from their cultural or ethnic community (Leydens et al., 2017), and this weakened sense of belonging can further lead to a weakened sense of a disciplinary identity.
Community contributes to STEM identity formation as well. Researchers have seen middle school students who, although leaders within their teams in science, refuse to describe themselves as scientists due to perceived negative stereotypes about scientists and how others in their school community may perceive them (Calabrese Barton & Tan, 2010). In ethnic groups such as Latinx undergraduate students, familial support and recognition can provide the emotional support needed to pursue computing careers, thus fostering a sense of responsibility to give back to their communities (Ramirez et al., 2024). Recognition, especially by others in the field, is a key aspect of developing their disciplinary identity. In the science field, recognition of their scientific identity by scientific individuals is crucial for an individual’s self-perception as a scientist. For example, positive recognition, such as receiving fellowships or a teaching assistant role, confirms a student’s science identity in comparison to being excluded from lab groups or having individual skills be misjudged by a professor due to race or ethnicity (Carlone & Johnson, 2007). In a study with Latinx computer science undergraduates, participants were motivated to provide computing lessons to neighboring elementary schools to raise awareness and be an example of representation in the computing field (Ramirez et al., 2024).
Godwin (2016) has argued that engineering undergraduate students have three identities: social, personal, and engineering. The engineering identity is composed of the student’s performance and competence, their interest in the subject, and their perceptions that others recognize them as a competent engineer. Similarly, Carlone & Johnson (2007) conceptualized science identity as shaped by three factors: performance of science practices, competence in understanding science knowledge, and recognition from family, teachers, and communities. The individual’s racial, ethnic, and gender identities interact with recognition from others as a science person, putting into perspective the influence role models can have on an individual forming a strong and positive science identity (Carlone & Johnson, 2007).
Schachter (2005) similarly argues that identity is shaped by an individual’s goals and values, but views identity as a structure. Cultural context affects a person’s identity development by affecting their identity’s structural properties and its developmental process—that is, the person’s environment introduces new factors that the person would otherwise not encounter, drastically influencing the person’s identity formation (Schachter, 2005). Both the individual’s values and self-perceptions, and the individual’s environment, influence identity development.
Additionally, both personal and collective identities influence an individual’s expectations and subjective values (i.e., what matters to the individual). When a task aligns closely with an individual’s identities and values, it increases the individual’s motivation to persevere in the task (Eccles, 2009). That is, who the individual is determines their actions and what they strive to do.
It is important to acknowledge the confusion that may arise in this process. Identity confusion is a person’s inability to develop a set of goals and commitments, resulting in the lack of a stable foundation for the person’s identity; this is especially important in adolescents who are still heavily developing their values and goals (Erikson, 1950, 1968). Furthermore, artificial intelligence (AI) created by individuals and institutions also shapes these same groups’ knowledge of computer science. AI and machine learning (ML) have been used to legitimize researchers’ findings in many fields; however, in scientific and social research, AI and ML often reinforce existing social prejudices, especially in disability and LGBTQ+ spaces (Keyes et al., 2021). Although seen as a neutral and objective instrument, AI and ML affect users by reinforcing these negative biases.
Artifacts, or objects made by people and used in everyday life, are often given significance by the people who created or used them (LeCompte & Ludwig, 2007). These objects may hold an individual’s identity, a person’s assimilated identity, or an intersection of the former identities meant to aid the individual in developing relationships with others (Oring, 1994). As technology is man-made and maintains sociopolitical systems, technology can be considered an artifact. For example, television has evolved the concept of community within the trans community by providing exposure to what it means to be trans, expanding the meaning of gender, and introducing a sense of community within these individuals (D. B. Hill, 2005). In this manner, television serves as an artifact that has shaped individuals’ personal and communal identities. An artifact’s qualities extend past its physical characteristics and embody contextual characteristics (Klenk, 2020).
Technology itself can behave as a political object that furthers anyone’s political agenda (Winner, 1980). Just as technology affects our everyday lives, GenAI is both shaped by and can shape society’s future technology, social practices, and rhetorical and cultural narratives (Keyes et al., 2021). The phrase “human in the loop” captures technology’s role within society: technology depends on humans to make the critical decisions when training models, and it can have long-lasting implications by upholding sociopolitical structures; at the same time, people can also disrupt this to create more objective and inclusive models (Keyes & Creel, 2022).
Keyes agrees with Winner’s (1980) argument about the political nature of technology, holding that technology is influenced by the social environment and sociopolitical factors (Keyes et al., 2021). As much as society is influenced by technology, technology itself forms society. In Keyes’ words, “the creation of new technologies of measurement has always led to—and in many cases been driven by—the opportunity to change conceptions of what is being measured” (2021, p. 163). Therefore, technology sustains and enforces perceived biases and sociopolitical systems that its creators and users are accustomed to.

2.2. Identity in Teamwork Perceptions

Teamwork in fields such as engineering is especially complex given the existing sociopolitical and cultural factors present in the field. While some members of the engineering disciplines may claim it to be objective and depoliticized, undergraduate design teams tend to follow the sociopolitical landscape—that is, historically marginalized groups tend to feel discouraged from participating in team efforts, as they often consider the social aspects rather than exclusively favoring the more technical ones (Henderson, 2024a). Furthermore, certain student characteristics (i.e., race, sex, ethnicity) display “patterns of idea contributions and idea enactments” present in team projects (Henderson, 2024b). In this social context, students may feel that their identities are unwelcome in the environment, creating negative personal experiences for them.
Human perceptions of AI, especially when AI is a teammate, are greatly affected by people’s emotions (Flathmann et al., 2023). For example, studies have shown that teams are more willing to adopt an AI teammate’s knowledge when they perceive warmth and competence in the AI; perceived competence in particular emerges from teammates’ impressions of the AI’s intelligence, skill, and efficiency (Harris-Watson et al., 2023). How well an AI teammate meets its responsibilities also shapes human teammates’ views of it: human teammates generally perceive the AI more favorably when it fulfills its responsibilities. However, this changes when the AI significantly increases its workload, as human teammates can feel threatened by the technology’s perceived ability to take over their work (e.g., concerns about job security) (Flathmann et al., 2023). Because AI can mimic human speech and text, people may also feel their identity is threatened by the technology’s imitation of human cognition (Alessandro et al., 2025).

3. AI Tools for Learning

Generative AI, or GenAI, is a type of machine learning system that is trained on very large datasets, often scraped from text and pictures available on the Internet, in order to generate new content that is similar to the content from the datasets (Zewe, 2023). Older machine learning models often focused on smaller unlabeled and labeled datasets with the purpose of creating classification tools (see Tang et al., 2015), such as recognizing written addresses on an envelope for the purposes of mail sorting (S. Basu et al., 2010).
Unlike these older models, GenAI systems can generate large bodies of text, images, audio, and video content (Zewe, 2023). For example, OpenAI tools can be used to create video from text and photo inputs in a variety of formats that can be used to simulate many different kinds of environments (Qin et al., 2024; OpenAI, 2024). In educational settings, students have used tools like ChatGPT for essay refinement and programming assistance (AlAfnan et al., 2023; Teng, 2024; Yilmaz & Karaoglan Yilmaz, 2023). GenAI also can be used in a variety of STEM contexts, such as creating and evaluating benchmark datasets or assisting with experimental design (Reddy & Shojaee, 2024).
While AI and other automated learning tools have existed in schools for some time, GenAI tools largely entered public consciousness with the release of ChatGPT in 2022. Unlike other tools used in education, such as Smart Boards or Google Classroom, that were adopted by district leaders and teachers into classrooms, GenAI tools have arrived largely through student introduction (Reich, 2020). This has meant that educators, in some ways, have had to catch up to students’ familiarity with these tools to properly leverage them in the classroom (Klopfer et al., 2024). AI tools for learning have potential benefits, such as personalized learning, rapid feedback, and predictive modeling, but also notable downsides, including possible academic conduct violations and the perpetuation of biases found in the datasets of LLMs (Alier et al., 2024; Heeg & Avraamidou, 2023).
For example, research into African American English (AAE) found that large language models are more likely to exhibit negative stereotypes about speakers of AAE than speakers of Standard American English (SAE) (Hofmann et al., 2024). To examine this dialect prejudice, researchers showed various LLMs text with the same meaning in both AAE and SAE (“I be so happy…” vs. “I am so happy…”) and found that the LLMs were more likely to rate texts with AAE as “lazy” or “stupid” than those with SAE (Hofmann et al., 2024). Additionally, LLMs associated speakers of AAE with less prestigious careers and harsher judicial sentencing, showing how LLMs used in decision making may replicate human biases in insidious ways (Hofmann et al., 2024).
Baker and Hawn (2022) also reviewed which populations are most studied in the design of artificial intelligence for education (AIED). They found that across papers, certain subgroups were less likely to be included, such as students with disabilities, international students, and students from low socioeconomic status. Even across commonly studied larger groups, like race and ethnicity, groups of learners were excluded, such as Indigenous learners, or inappropriately grouped, such as in the grouping of all Latinx students under one identity without accounting for the variation within these groups. Without concerted efforts to improve data collection and increased consideration about what datasets these models are trained on, there is a potential risk of developing models that cannot be broadly applied to all students (Baker & Hawn, 2022). In light of these known biases in both design and implementation, it is important to consider how GenAI use in the classroom could perpetuate stereotypes if utilized poorly by teachers and students.

GenAI Usage and Identity

Though GenAI tools may feel near-ubiquitous, most people do not use them. In a report updated for 2025, Pew found that 34% of Americans have used ChatGPT, and young people (58%) and those with a postgraduate degree (52%) continue to be the largest user base (Sidoti & McClain, 2025). Around 20% of adults have still never heard of ChatGPT, and this percentage is higher (34%) among those with a high school education or less. There are clear adoption trends among these groups over the last two years as well. Given that younger people and people with advanced education appear to be leading the charge on GenAI tools, this raises the questions of why these groups use the tools more than others and what the implications are for preparing students to use these tools in their future careers.
Other studies focused on higher education have found similar results. Drawing on survey and interview data, Brown et al. (2025) reported that 25% of students and 31% of staff had never used AI tools. According to the authors, both students and staff vastly overestimated the AI usage of their peers. Additionally, students and staff who identify as male and high-socioeconomic-status reported more frequent GenAI use (Brown et al., 2025). The majority of participants expressed concerns about a lack of clarity in AI use guidelines at their institution (Brown et al., 2025).
A gender gap in GenAI adoption should also be noted. In a study of adults in Asia and Africa holding at least an undergraduate degree, women lagged behind in GenAI tool usage, with 68% of women reporting that they were unaware of any GenAI tools (Ahmad et al., 2024). Though the authors note that men are more likely to have access to computing tools, this figure is more than double the percentage of men who were unaware of GenAI tools (31%) (Ahmad et al., 2024).
Prior investigations have shown that GenAI use can be discipline-dependent (Qu & Wang, 2025). Though general use in applied disciplines, such as engineering, is higher across routine and creative tasks, students in pure disciplines, such as mathematics, more often consider the ethical implications of AI use in their work and are less likely to justify or rationalize their use of GenAI (Qu & Wang, 2025).
There are also cultural reasons why certain groups may be reluctant to interact with GenAI. In a literature review of GenAI adoption, Kelly et al. (2023) noted that in some scenarios traditional and cultural practices cannot be replaced by GenAI tools, because people in those contexts prioritize human contact.

4. Conceptual Framework: Concerns and Potential Applications of GenAI in Classroom Practice

The advent of GenAI in formal and informal engineering learning experiences represents a promising learning opportunity for fostering students’ cognitive and moral development regarding ethical engagement with GenAI and data practices in higher education. However, fostering such learning outcomes requires instructors to carefully conceptualize the nature of learning outcomes, students’ developmental progress, and learning activities inside and outside the classroom.
Drawing on King’s (2009) framing of the discussion of moral development theories around the aims of higher education, we can examine the educational goals of the curricular and co-curricular learning experiences that students participate in. While some scholars frame the goal of higher education around economic return on investment, workforce development, and economic competitiveness (Hutcheson, 2007), others have long posited that a core aim of higher education is fostering engaged citizenship, reducing inequality in American society and globally, fostering public service, and improving other outcomes associated with public engagement (Bowen, 2018; The National Task Force on Civic Learning and Democratic Engagement, 2012). Herein, we joined other scholars in positing that GenAI represents both a threat to, as well as an opportunity for, ethics—academic integrity and fairness in higher education and public welfare (Zlotnikova et al., 2025), particularly in the education of engineers and scientists.
We note a tendency for discourse about GenAI in education, both internally at our institution and externally (e.g., at professional conferences, on social media, and in legacy media), to center on academic misconduct. Sensational headlines, like “Everyone is Cheating Their Way Through College” (Walsh, 2025), have bolstered widespread skepticism about the utility of higher education and undermined the potential for educators to use GenAI as an opportunity for cognitive and moral development. Mismatches between universities’ policies on GenAI usage and student and teacher understandings of these policies also drive confusion about what usage of GenAI tools, if any, is appropriate in the classroom. Younger students tend to be more enthusiastic about incorporating GenAI tools, while educators from older generations are more wary about its potential implications for current education systems (Chan & Lee, 2023). Research has suggested that students and teachers hold contrasting views on the purpose of GenAI in the classroom and what should count as academic misconduct (Duah & McGivern, 2024). Students tended to see GenAI tools as important writing companions, while educators framed students’ GenAI use as a violation of academic integrity (Duah & McGivern, 2024), a broad concern in higher education about the ethical use of these technological tools (Zlotnikova et al., 2025). Additionally, current tools for GenAI detection in written work are fraught with issues and possible exploits, which further confuses students and teachers (Klopfer et al., 2024; Oravec, 2023; Reich, 2020; Sadasivan et al., 2025). That is, as the popular association between GenAI and academic misconduct grows, faculty may be more likely to institute blanket prohibitions on the use of GenAI in learning activities. Said simply, we believe this to be a pedagogical mistake.
Rather than forego opportunities to learn from and with the technology, we suggest the careful and purposeful integration of GenAI into students’ learning activities to accomplish educational goals related to social responsibility. This approach entails more than delivering content about what is right, just, democratic, appropriate, or fair. Instead, King (2009) argues that cognitive and moral development entails the “reorganization of skills” that “allow individuals to manage more complex units of information, perspectives, and tasks” (p. 598). For example, Dewan et al. (2025) found, from surveys of college faculty with teaching responsibilities, that those who integrate AI into their curriculum may have more opportunities to speak with students about ethical concerns and the use of these tools. We posit that rather than placing blanket bans on the use of GenAI that frame the use of these technologies as flagrant academic misconduct, educators must develop learning activities that reorganize students’ competencies, including their ways of thinking, knowing, and doing for analyzing socio-academic contexts to make determinations about fair and appropriate use of GenAI in their learning activities, as well as their professional work after college.
Current research has shown numerous avenues for the thoughtful integration of GenAI in classrooms, mostly with applications in cognitive domains and to a lesser extent in behavioral and affective domains (Ariza et al., 2025). Common themes include personalized learning opportunities wherein students can access materials in ways they prefer, the use of AI tools as teaching assistants in large classes, and automated assessment (Alasadi & Baiz, 2023). A study of automated grading of students’ written short responses found that GPT-4’s scores were on par with those of expert human graders, which could allow students to engage in more formative assessment without unduly increasing grading burdens on teachers (Henkel et al., 2025). However, educators must be wary of using these tools in the creation of multiple-choice assessments, as the outputs have the potential to violate major multiple-choice question writing guidelines and introduce measurement error (May et al., 2025). Additionally, while LLMs are improving over time, these tools still struggle with grading high-level mathematics assessments (Gandolfi, 2025), making it important for teachers to understand the domains in which GenAI can be useful.
Using AI tools as writing aids has also been a key feature of this research. Novice writers find these tools helpful for drafting initial ideas, and teachers can use them as a way to talk with students about thinking critically about the tools’ output (Dumin, 2024). Used as writing assistants, these tools can also raise self-efficacy and motivation to write among English-language learners (Teng, 2024).
Outside of writing assistants for students and researchers (Stokel-Walker & Van Noorden, 2023), GenAI tools have been explored as coding assistants as well. As an example, in an introductory robotics class, instructors encouraged undergraduate students to use AI tools to help them write Python programs and generate documentation (Xu et al., 2024). Through this course, students’ attitudes towards AI tools shifted from largely negative to more positive, and they gained greater awareness of the potential drawbacks of GenAI for learning and engineering (Xu et al., 2024). This work suggests that students’ attitudes towards these tools are malleable and can be affected by how instructors position the usefulness and pitfalls of AI. However, there is also the potential for an over-reliance on these tools that hinders students’ understanding of core programming concepts (Kazemitabaar et al., 2023). How instructors position these tools, and the extent of their usefulness, is critical for students’ understanding of AI’s limitations.
Theories of cognitive and moral development frequently locate students’ reasoning along a continuum where “lower-level” reasoning competencies suggest binary, concrete ways of thinking, knowing, and doing, while “higher-level” competencies reflect an understanding of knowledge as socially negotiated, context-sensitive, and evolving (Richardson, 2013). For example, in Perry’s (1970) Scheme, students begin from a position of dualism, characterized by beliefs that knowledge, rules, and facts descend from Authorities (e.g., professors, parents, experts) and that ideas and even people can be categorized as “good/right” or “bad/wrong.” Similarly, in Stage 1 of King and Kitchener’s (2012) Reflective Judgment Model, which was developed “to account for the complex monitoring that is involved when…adults are faced with ill-structured problems” (p. 37), knowledge is assumed to be concrete. Instructors who assume students’ cognitive development to be located at lower levels might implement course policies and practices that reflect binary, concrete, yes–no/right–wrong policies.
The mismatch between instructional practices and students’ cognitive and moral developmental levels poses problems for instructors and students alike. If instructors assume students view course expectations and policies as concrete, implementing blanket prohibitions on the use of GenAI, but students understand the appropriate use of GenAI to be the result of ongoing negotiations and “interpretive considerations” (King & Kitchener, 2012), students may come to see their learning experiences as incompatible with course policies. Here, we argue that higher education has not yet developed pedagogical strategies for engaging students about fair, ethical use of GenAI in engineering learning and engineering work in a developmental way.

4.1. GenAI Design and Identity

There exists a general misconception that tools like ChatGPT search each user query against a large database and synthesize an answer from recent information, or that they can “think” about the answer in a traditional sense. As a simplified text generation example, a machine learning model could output the next word that makes the most sense based on the text it has already seen in its dataset. For example, the next likely phrase in the sentence “Students go ____” might be “to school” or “to see the teacher,” because students, school, and teacher are all lexically tied together. The next words are almost certainly not “to the casino,” because casino and students are unlikely to appear in the same context or piece of the dataset with enough frequency to be meaningful.
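This next-word intuition can be sketched with a toy bigram model. The corpus, function name, and counting scheme below are our own illustration (real LLMs use vastly more sophisticated neural architectures, not raw bigram counts), but the sketch shows why “go” is followed by “to” and never by “casino”:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a large training dataset (illustrative only).
corpus = (
    "students go to school . students go to see the teacher . "
    "the teacher talks to students . students go to class ."
).split()

# Count how often each word follows each preceding word (bigram counts).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most frequently after `word`."""
    return next_counts[word].most_common(1)[0][0]

print(most_likely_next("go"))        # "to" is the only word following "go" here
print(most_likely_next("students"))  # "go" is the most frequent follower
```

Because “casino” never follows any word in the corpus, the model can never produce it; the same frequency logic, scaled up, is why outputs mirror whatever the training data contains.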
Additionally, LLMs are trained on these large datasets, often scraped from the Internet, meaning that the training data can include anything from factual encyclopedic knowledge to false social media posts to proprietary and trademarked information. This situation makes GenAI tools prone to misinformation, disinformation, and hallucinations (Bandara, 2024; Jančařík & Dusek, 2024). Because these datasets reflect the Internet and people’s biases and stereotypes, biased and inaccurate information can appear in outputs and result in misuse. These AI biases and inaccuracies are dangerous because they reinforce human biases over time and create feedback loops in which new LLMs are trained on this data (Kidd & Birhane, 2023). To this end, it is important to consider who has a hand in both designing and implementing GenAI systems. Only 26% of all mathematics and computer science PhDs are earned by women, with comparable figures for master’s (35%) and undergraduate degrees (26%) (National Center for Science and Engineering Statistics [NCSES], 2023). Across ethnic groups, Latino and Black students make up 9% and 6%, respectively, of all students enrolled in mathematics and computer science PhD programs (NCSES, 2023). Given these enrollment numbers, it stands to reason that GenAI designers could be engineers and scientists who do not reflect the broader population, suggesting that not all biases may be considered in the design of these tools, as in the AAE example above (Hofmann et al., 2024). This becomes especially troublesome when these tools are used to make final decisions without human input (e.g., medical insurance claims, admissions, identity profiling).

4.2. GenAI and Multilingual Support in STEM Education

Scholars have increasingly examined the benefits of adopting GenAI in connection with language identity. In particular, researchers and practitioners have used GenAI tools in STEM education, workplace contexts, and with multilingual/multidialectal students to improve communications between linguistically diverse groups (i.e., automated translation) and facilitate language acquisition (Creely, 2024; Getto et al., 2025; Gilstrap et al., 2024; Law, 2024; Yang, 2024). However, the LLMs within GenAI are most adept at processing commands in English (Choudhury, 2023; Kshetri, 2024). For instance, Zhang et al. (2023) tested a series of commands in GPT from least to most demanding language-dependent tasks based on the categories of reasoning, knowledge access, and articulation. The least language-dependent task, reasoning, revolved around content that is consistent across languages, such as mathematical operations (i.e., What is 100 + 100?) and universal scientific principles (i.e., What are the three laws of thermodynamics?). Knowledge access refers to the capability of the LLM to parse through the training data that was utilized to develop the model, retrieve the relevant information, and accordingly generate an accurate answer. For example, this could be responding to word problems in STEM education that are based on procedural or more advanced conceptual knowledge (i.e., Explain how fluid flow through a solid impacts the solid’s resulting malleability, viscosity, and density). The final category, articulation, referred to tasks that involve the generation of some written communication; these tasks, such as writing an essay on world history or creating a technical report based on some provided information, require a deeper interlinguistic understanding to produce material in a culturally respectful manner (Abdou, 2017; Liu, 2007; Nordgren & Johansson, 2014; P. Zhu, 2023).
Zhang et al. (2023) found that generated results for each of the categories were most accurate when tasks were presented in English, but the accuracy gap differed by the type of task. While math reasoning resulted in an average accuracy gap of 10% between English and other languages, pun translations from English to other globally dominant languages (an articulation task), including Spanish, French, and Chinese, resulted in accuracy gaps of over 50%. Therefore, ChatGPT (the authors did not specify the version used, noting only that “we use Chat-GPT, via the official web application, due to its availability”) is most accurate and linguistically accessible for monolingual English speakers or compound bilingual English speakers, meaning individuals who are as fluent in English as they are in another language. As a result, GenAI tools discriminate against coordinate and subordinate multilingual speakers who engage in translanguaging practices—“translanguaging is the deployment of a bi/multilingual speakers’ full semiotic repertoire, including multimodal, multisensory and multilingual, without regard for socially and politically constructed boundaries of named languages, registries and/or modalities” (Pérez et al., 2025a, p. 3)—and/or minoritized languages in engineering and science (i.e., Spanglish), as well as those who are less fluent in English than in another language (Choudhury, 2023). Furthermore, there are discrepancies in the accuracy of GenAI results within languages labeled by Zampieri et al. (2020) as pluricentric due to their “multiple interacting standard forms in different countries” (p. 596). Language forms deemed standardized across a nation (i.e., American English, British English) are thus privileged above minoritized or “non-standardized language varieties”, such as African American Vernacular English, in natural language processing (Zampieri et al., 2020, p. 597).
In an era where researchers, practitioners and technology developers seek to advance multimodal representations and diverse perspectives of scientific and technical reasoning through GenAI tools (Reddy & Shojaee, 2024), dominant languages remain preeminent in GenAI while the majority of instructional materials in STEM education (i.e., textbooks) are written in English and ill-translated to other selected languages (McDermott, 2023; Sravanthi, 2024). Multilingual and multidialectal students thus may see themselves as limited in their ability to leverage their full linguistic resources in science and engineering learning environments, as integrating their semiotic repertoires in multimodal communication could provide additional opportunities for students to describe phenomena with more context, description, and specificity (Pierson et al., 2021). This poses unique challenges to multilingual and multidialectal students who attempt to access accurate STEM information through GenAI. P. Basu and Mohanty (2024) present a GenAI-based process to create a glossary of STEM terms in non-English languages to improve the STEM education of indigenous populations, but research has been limited overall in this area. Given that language can serve as an index of social groups, such as race and ethnicity (Ashcroft, 2001), and status (Roberts, 2013), GenAI can perpetuate existing gaps in STEM education that have contributed to the underrepresentation of minoritized groups (Barry & Stephenson, 2025; James & Andrews, 2024).

5. The Cases of Ethics, Biases and Linguistic Identity in GenAI

5.1. Case I: Engineering Students Making Decisions Based on Data: The Case of Gender and Sexual Orientation in Learning from the Design of AI Tools

As an instructor, Henderson (Author 2, who narrates this case in the first person) has used the advent of GenAI to foster students’ cognitive and moral development, particularly as it relates to fair and appropriate use of GenAI, as well as thinking critically about the impacts of GenAI on people and communities. For example, a series of student interactions surrounding their use of GenAI in our Introduction to Computing course, delivered to first-year engineering students, positioned me to offer students opportunities to think about equitable access, appropriate use during learning experiences, and the ways GenAI is already influencing systems of inequality in the United States. To begin, course policies, which were crafted to reflect discourse around GenAI use, as well as the specific context of an introductory engineering computing course, explicitly forbade the use of GenAI for crafting solutions to coding assignments (Danahy, n.d.). The course policy, as presented to students in the syllabus, is below:
Policy on AI
Artificial Intelligence (AI) has recently gained academic attention both for its ability to facilitate cheating and its potential to facilitate learning. Tufts University does not have an institution-wide policy on AI use in classes, on assignments, etc. Therefore, each class will have its own policies; it is your responsibility to be aware of the differing policies amongst your classes.
“Generative Artificial Intelligence” (GAI) includes, but is not limited to: Bing Chat Enterprise, ChatGPT, Google Bard, any other Large Language Model (LLM), DALL-E, Midjourney, any other stable diffusion method, and other algorithms/models/methods that can generate text, images, video, music, voice, program code, or other things. Submitting work created by a Generative AI as your own in any assignment is considered plagiarism, and therefore an academic integrity violation, just the same as copying work from any other source. (The only exception to this is if the assignment instructions explicitly tell you to.)
While there is potential for GAI to benefit learning, just as you would in collaboration with peers (brainstorming ideas, getting feedback, revising or editing your work, etc.), the concern is the output of GAI replacing your own voice and thoughts, reducing your ability to analyze ideas, and shortcoming the learning process. Because of the difficulty in self-determination of when GAI is facilitating-vs-hampering your own learning, the current rule in this class is to NOT allow the use of Generative AI on assignments. If a more refined approach is determined, this statement will be updated and an announcement will be made in class.
Much of the literature situates the discussion of students’ adoption of GenAI in the context of social and cultural norms around academic integrity. In our class, most students, drawing on long-held social and cultural mandates about plagiarism, understood that copying code segments directly from ChatGPT was a form of plagiarism. As a result, few students attempted such forms of misconduct.
However, students used GenAI in other ways that called into question the efficacy of the instructional team’s blanket prohibition on the use of GenAI in assignments, as well as the role of students’ subjectivities, including their cognitive and moral development, in their use of GenAI to complete class assignments. For example, one student used GenAI to write code that would complete a similar, yet different, coding task. Specifically, the student acknowledged using ChatGPT to query a similar coding activity that helped them solve the graded course activity, which we refer to as the “gradebook assignment” herein. Whereas the gradebook assignment asked students to assign letter grades to rows in a Pandas DataFrame, the student admitted to querying ChatGPT on how to assign a value to rows in a DataFrame based on other rows, copied the code from ChatGPT’s output, and modified the code to complete the task in the gradebook assignment. During my one-on-one discussion with the student, the student described the appropriateness of such approaches in the context of social and cultural norms, arguing that such practices were common amongst other students and that, as a result, they had not believed the practice would be considered academic misconduct. In this scenario, the students’ subjectivities were based on social norms and the behaviors of other students around them.
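To make the “gradebook assignment” concrete, a task of this kind can be sketched in a few lines of Pandas. The column names, data, and grade cutoffs below are illustrative assumptions on our part, not the actual course assignment:

```python
import pandas as pd

# Hypothetical gradebook; column names and cutoffs are illustrative only.
grades = pd.DataFrame({
    "student": ["Ana", "Ben", "Cal"],
    "score": [93, 84, 71],
})

def to_letter(score):
    """Map a numeric score to a letter grade (illustrative cutoffs)."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    return "F"

# Assign a letter grade to each row based on the value in another column.
grades["letter"] = grades["score"].apply(to_letter)
print(grades)
```

A query to ChatGPT about the general pattern (assigning a value to each row based on other values) would return code structurally similar to this, which a student could then adapt to the graded task, illustrating why the boundary between learning from an example and outsourcing the solution felt ambiguous to students.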
Another student acknowledged writing their own journal responses for a written reflection assignment but using GenAI to clarify ideas in their final submissions. During our one-on-one discussion, the student argued that, since the assignment was evaluated not on writing proficiency but on the completeness of ideas, using GenAI to correct grammatical errors or find other writing flaws did not violate the spirit of the learning activity. This student situated their decision on their understanding of the learning activity, as well as their subjective, context-sensitive beliefs about what constitutes academic misconduct. We asked, “Did such uses of GenAI violate course policies? And, if so, how, if at all, could instructors respond to such violations of course policies?”
In response to this situation, and after discussions with the instructional team, I decided to adopt a developmental view of students’ use of GenAI, choosing to advance activities that supported the learning aims of higher education by reorganizing students’ knowledge, skills, and ways of thinking about fair and appropriate use of GenAI. Rather than implement a strictly punitive response (i.e., reporting student academic misconduct to the Dean’s Office), I chose, instead, to ask students to reflect on their behaviors, explicate whether they believed their use of GenAI constituted fair use in context, and, if not, work collaboratively with me to think of appropriate responses. I began one class session with the following general announcement to the class:
The website that we use to teach the course implements approaches to detecting plagiarism, including the use of AI. As we discussed at the start of the semester, the use of AI on assignments is prohibited because we think it is important for you to engage with and struggle through these problems. That struggle is an important part of learning. Rather than think punitively, though, I want to give you all an opportunity to both show your integrity but also discuss our AI policy with me, so I am choosing to take a developmental lens. If you have used GenAI in your submissions, I will give you an opportunity to discuss it with me. You should email me which assignments you used GenAI on, how you used GenAI, and what you think an appropriate grade should be to rectify the issue.
Admittedly, I knew that such an announcement might cause a panic amongst students due, in part, to my prior experience as a student when my professor made a similar announcement. However, in adopting a developmental frame for the issue, my priority was to allow students to consider their own behaviors and subjectivities, as well as their cognitive and moral development, to engage in a discussion with me about fair use of GenAI in context. Rather than tell them individually what was “right” and “wrong,” positioning myself as an Authority from which knowledge about rules descended, I instead positioned students to think with me collaboratively.
Of course, students’ initial concerns were about potential punitive measures (e.g., “Will I get in some form of trouble?”). I noted strategically worded emails—messages that admitted no wrongdoing but queried me about my perspective on fair use of GenAI in context, seemingly to confirm that their behaviors did not run afoul of my expectations. In a team discussion, the instructional team concluded that students were more concerned about potential consequences than they were about defending their reasoning that their use of GenAI was appropriate. Indeed, several students appeared in my office in tears, concerned about the potential academic and social consequences of admitting wrongdoing, even when they articulated reasonable arguments that they believed their use of GenAI was fair and appropriate. We decided to reframe the discussion from one that decided collaboratively how to punish students to one that allowed students to discuss their beliefs about fair and appropriate use.
To be clear, I never had any intention of reporting even the most flagrant use of GenAI due, in part, to my belief that the rules around GenAI were both ambiguous and inconsistent across academic contexts. That is, I was aware that some of my colleagues allowed the use of GenAI in ways that my course policies did not, and that such inconsistencies across students’ courses might be shaping what students viewed as fair and appropriate use of GenAI. Instead, I hoped to engage students through their ideas about fair use, offering feedback and my perspectives as they moved forward.
What occurred following this reframing was remarkable in its depiction of students’ moral reasoning. Approximately 50% of the class, several of whom had not been detected by the GenAI detector, came forward in meetings with me, with incredibly forthcoming information about the ways they had utilized GenAI. One student even got my attention at the end of the semester, noting that we had never gotten the chance to talk, but that he wanted to hear about my ideas about GenAI in education. Our discussions became centered on three issues: (a) Do you believe what you did was acceptable, fair, and appropriate? (b) If you believe what you did was acceptable, fair, and appropriate, explain why. (c) If you do not believe what you did was acceptable, fair, and appropriate, explain why you erred.
What occurred during these meetings was a range of student issues that are at the center of this paper. Some students articulated the use of GenAI to solve simple problems or instruct them on some aspect of the task as a legitimate apparatus for learning. They noted that their use of GenAI cleared confusion and allowed them to make progress on assignments (e.g., “I tried X, Y, and Z, but I could not get this small issue to work, so I threw it into ChatGPT.”) Others noted that they had used ChatGPT to check their answers. While their submission did not match the code produced by GenAI, they submitted responses only after checking their code against that produced by ChatGPT.
Other students mentioned structural issues about the nature of engineering education. Time constraints and the need to turn their attention to other courses limited students’ willingness to expend time on small coding issues (e.g., “The assignment was due on Sunday and the next office hours were on Tuesday. I couldn’t wait three days to make progress on this because of my other classes.”). Other students spoke about the nature of the engineering curriculum, noting that while the class required written assignments about sociotechnical issues, the writing training in engineering was limited: some core courses focused on writing in general, and their engineering courses focused on technical writing (e.g., report writing), but the sociotechnical writing was new, causing concerns.
The instructional team then began to think about ways to implement this developmental approach to discussing the development and use of GenAI beyond academic contexts. For instance, during the “Computing in the World” component of Henderson’s Introduction to Computing in Engineering course, students discussed the role of computing in shaping systems of inequality in our broader society. In one of the sessions, the students discussed GenAI, algorithmic bias, and how GenAI could reify systems of inequality (e.g., racism/sexism, homophobia) in our society. Figure 1 shows example slides from the GenAI Computing in the World activity.
On one slide (Slide C), the students began discussing the use of AI tools for facial recognition, including a series of studies on “AI gaydar” technology that claimed to be able to determine one’s sexual orientation based on facial features. Students initially questioned the mathematics behind such algorithms, including the claims about the accuracy of the tools. However, others began to question the very nature of identity judgments in general—both those discerned from AI tools and those that occur between people and communities. Henderson recognized that the activity engaged students in right–wrong thinking, but students instead understood the utility and appropriateness of such tools to be more complex. As a result, Henderson engaged the students in a discussion about the nature and meaning of identity and what it meant for any algorithm to determine someone’s identity. The class began with the moral and ethical question of naming another person’s identity without their knowledge or input. Henderson (a phenotypically Black man) continued with a discussion of racial identity, asking a White male student, Bradley (all student names are pseudonyms), how he might describe his race.
Bradley: I’m White.
[Henderson then pointed to a second White male student for confirmation]
Henderson: Okay, Cameron. Bradley says that he’s White. Would you agree?
Cameron: Yeah.
Henderson: And how do you know?
Cameron: Because he said that he’s White.
Henderson: So, if I walked into the room and said “Hello everyone. My name is Dr. Henderson and I am a White man.” Would that be sufficient?
[The class laughed at the absurdity of the claim, but quickly returned to the idea of identity.]
Henderson: So, identity is of course what you call yourself, but identity may also be related to something else. Why did you all laugh? What was wrong with me calling myself White?
Chelsea: Well, you’re obviously not White. You don’t look White?
Henderson: There’s lots wrapped up in that comment. There is the “how I look” part. The word we use for that is “phenotype.” And there’s how you identified me. So, my racial identity is some unique combination of how I describe myself, how I look, and by extension, how others would describe me.
As a class, we then discussed the story of Tyler Clementi, a Rutgers University student who died by suicide in 2010 after his college roommate used a web camera to capture Clementi in an intimate, private encounter that was then viewed and shared online. We noted that many news outlets referred to Tyler Clementi as a “gay man” who had been “outed” by his roommate. These news outlets perhaps made such claims based on the intimate moments captured in the videos or the stories from Clementi’s family about his recent process of “coming out” the summer before his death. “Was it appropriate for these news outlets to label Tyler Clementi as gay?” we asked. We observed that it is possible for one’s identity to evolve over time or across contexts. To return to the racial identity example, a person might call themselves Black in one context but Nigerian in another, a change with significant social meaning across contexts. It was possible, then, that Tyler Clementi may have come to describe himself differently before his death, or that he might have used a different label to describe himself to the New York Times than he would have with his friends and family. While some felt comfortable drawing conclusions about Tyler Clementi’s sexual identity based on the video, his family’s stories, or other sources, others argued that only Tyler was fit to describe his identity.
The purpose of the exercise was not to turn the computing class into a sociology session, nor was the goal to tell students unilaterally that it was wrong for engineers to develop algorithms for determining individuals’ identities. Indeed, the discussion also elevated the role and accuracy of mathematical models, the potential to improve those models, and the opportunities such algorithms might have in important sociopolitical applications. Instead, the goal was to position students to think critically about and problematize the ideas they were enacting and embodying, at times, without explicitly realizing it. That is, the purpose of the activity was to position students to consider their own cognitive and moral reasoning about the use of AI tools in our society, and the role of their subjectivities in reifying or challenging the use of such tools. The exercise was a step towards questioning the appropriateness of algorithms for making social judgments, like claims about people’s identities. The discussions also elicited critical conversations about students as engineers and their role in developing, deploying, and utilizing emerging technologies in their work, such as GenAI.
The complexity of the conversation grew as students considered their own cognitive and moral reasoning, the role of GenAI in producing and reifying inequities in our society, and their own subjective commitments, such as their beliefs about the role and work of engineers, the purpose of developing new technologies, and the appropriateness of making social decisions based purely on mathematics and science. Students asked questions about the usefulness of GenAI tools, wondering if such tools might be useful to people if engineers could improve their accuracy. Students also discussed the acceptable level of error, given that error could cause serious harm to individuals and communities. Such a discussion, we contend, constitutes an important learning activity in students’ cognitive and moral development about the nature and use of emerging technologies, one that elevates the role of students’ subjectivities, as well as their cognitive and moral reasoning, in their beliefs about fair use of GenAI.
Thus, we contend that, rather than positioning students to think dualistically, engaging students in such discussions as a mechanism for fostering cognitive and moral development positions them to state their judgments explicitly. As a result, educators might facilitate more appropriate avenues for fostering students’ educational development, with implications for their longer-term ways of thinking, knowing, and doing. Engaging students about their ways of thinking, rather than explaining appropriate ways of thinking, may be a more promising way of fostering students’ cognitive and moral development about fair and appropriate use of GenAI.

5.2. Case II: What Can Be Learned from STEM’s Past? Students Conceptualize the Potential Implications of Biases in the Development of GenAI Tools

As an educator, Pierre wanted to discuss the use of GenAI tools with her students and urge them to explore how subjectivities could influence their use, thinking directly about how identity may present opportunities or limitations. During the spring semester of 2025, she designed and taught a course titled “Cultural and Critical Perspectives in STEM Education” to hold space for students to reflect on how these subjectivities (e.g., unknown and known biases) can take shape when considering innovation for the future of STEM education. The class mostly included students who were current STEM educators and/or earning graduate degrees in various STEM disciplines. Students were also newly introduced to conducting and conceptualizing research studies. Pierre facilitated conversations about how individuals can shape their work to be socially just and about the biases we hold as humans in STEM disciplines, despite those disciplines being perceived as objective (Pierre et al., forthcoming; Robinson & Shankar, 2025). Invited scholars, all with careers in STEM disciplines, visited throughout the semester to discuss how they had succeeded in ethically incorporating socially just practices into their research and teaching as early-career university professors. Students were tasked with writing reflective journals about topics discussed in class, in addition to the conversations generated following lectures. The topics within this section are a mixture of discourses that occurred among students during in-class discussions and a synthesis of thoughts shared in their reflective journals throughout the semester.
To ground students’ critical thinking throughout the semester, Pierre engaged students in discussions of how STEM has historically harmed marginalized groups (Pierre et al., forthcoming). By guiding students’ engagement with historical scholarship, Pierre aimed to encourage them to conceptualize how past actions in STEM history could be repeated. Learning from the invited scholars and from the origin stories of biases in research and medical practices in the U.S., students were challenged to serve as change agents in their respective fields. Specifically, they were challenged to consider the ethical and moral dilemmas of introducing topics and practices in STEM education, given the historical ways STEM has been weaponized against vulnerable populations. One example of this challenge was considering how history could inform the use of GenAI, especially with respect to racial identity.
With each lesson focused on a different discipline every two weeks, students were encouraged to consider engineering and technology topics, with Pierre stimulating discussions about the ethical considerations and responsibilities students held as potential innovators. In facilitating a topic on GenAI, Pierre drew on historical examples of biases in science and technology and engaged students in interrogating GenAI tools in the present. First, students read foundational literature about U.S. history, particularly the disturbing origins of current legislation related to scientific research. Specifically, students read about the history of eugenics, including a case in the U.S. Northeast (Lombardo, 2024), the Tuskegee syphilis experiment (Tobin, 2022), and the mishandling of cells taken from Henrietta Lacks (Baptiste et al., 2022). These readings set the stage for students to grapple with the biases that might follow the use of GenAI in practice. The following week, Pierre facilitated a discussion on how biases and discrimination occurred in these cases and asked students to consider how, similarly, these biases could influence GenAI tools, which claim to be innovative and objective (Muldoon & Wu, 2023). Topics generated from the discussion inspired students to raise concerns and propose solutions. For instance, students made connections with other course readings on how racial and ethnic discrimination have transcended other tools and technologies in our everyday lives, highlighting how automated sensors for bathroom faucets and soap dispensers often fail to recognize people with darker skin (Ren & Heacock, 2022).
The course dialog sparked conversations about ethics, surveillance, immigration, and harm across class discussions and student reflective journals. As tensions over immigration grew in the United States during the spring semester of 2025, exacerbated by the abduction of Rümeysa Öztürk, a fifth-year doctoral student in the Eliot-Pearson Child Study and Human Development program at Tufts University (Patel et al., 2025), many students felt that a dark turn could take place with the use of GenAI. Students discussed the potential for increased surveillance of immigrants living in the United States through GenAI tools. Their concerns have proven well founded: recent studies have detailed an increase in surveillance through GenAI tools to track and deport undocumented migrants (see Nalbandian, 2022), and government reports showcase newly employed satellite imagery that documents migration patterns of what are perceived as potential border threats (Center for Accelerating Operational Efficiency, 2025). Additionally, it is reported that the use of facial recognition devices in public areas to target alleged undocumented immigrants will be ramped up (Reuters, 2025) and that the “Catch and Revoke” effort has been launched to cancel the visas of foreign-born students whose social media accounts are deemed suspicious (Caputo, 2025). Echoing students’ reactions to increased surveillance through these technologies, scholars are concerned that, as with any other tool developed by humans, the accuracy of these GenAI tools may be compromised (Reuters, 2025). Discriminatory biases may translate into these tools, and marginalized populations can bear the brunt of errors made by GenAI tools, as in the case of facial recognition (Almeida et al., 2021; K. Hill, 2024).
Additionally, as students learned from the literature that foreign-born professionals make up a large share of STEM and health professionals in the U.S. (National Science Board & National Science Foundation, 2020; New American Economy Research Fund, 2018), further concerns emerged about the possible limits to discovery and innovation that could follow from deportations made in error. Indeed, the literature lends this student-generated concern plausibility: it has been reported that U.S. immigration policies serve as a potential block to innovation and to access to foreign-born AI talent (Arnold et al., 2019).
Following robust and thoughtful conversations with students, Pierre drew important conclusions from class dialogs and student reflections. While it would be easy to characterize these lessons as merely a search for biases or a discussion of politicized topics, the true purpose was to emphasize that, as we develop technologies, we must strive to minimize harm. It is also important to urge the next generation of innovators to include diverse thinkers as they create and to study and learn from the past. Historically, marginalized individuals have been harmed by STEM-specific biases and by the technologies that have been developed. Given the pervasiveness of GenAI in our lives as an innovative technology, attention should be paid to the potential harms and opportunities of applying these technologies in society with respect to identities. Attending to the biases and opportunities offered by new AI tools as they are incorporated will allow us to address their implications, particularly the disparities they pose for historically marginalized groups. Ultimately, while there are limitations to the current use of GenAI tools (e.g., discrimination and surveillance), there are also opportunities to invite varied cultural perspectives through their use, including the support and inclusion of diverse communities and perspectives, and conversations with the next generation of thinkers.
If faculty and researchers develop and facilitate learning opportunities for students to understand the affordances of GenAI tools, we can strive to mitigate the biases produced by unidimensional and positivist perspectives on the role these technologies play. Those learning to become scientists and engineers, and those designing learning opportunities for them, may fall into the trap of assuming that the STEM disciplines are objective. This can lead them to incorporate GenAI into learning contexts without being intentional about who designs GenAI tools and what biases those designers carry.

5.3. Case III: Problem Scoping in Engineering Design: The Case of Linguistic Identities and GenAI Tools

During the fall of 2024, Pérez attended STEM4girls at the University of Massachusetts Dartmouth, an event with over 280 middle school girls and numerous scholars discussing STEM pathways. After she presented her research on problem scoping in engineering design (Pérez & Marvez, 2024; Pérez & Sheppard, 2024), including a study of undergraduate students’ design considerations when addressing the humanitarian and political challenges at the US–Mexico border, Prof. Stroup asked Pérez, “Have you tried to pose the same question to GenAI?” People, not technology, were at the center of this conversation. In thinking about people as agents of change and students and teachers as creators of technologies, Pérez wondered about the opportunities and perils of human, computer, and non-human interactions. She wondered how educators, engineers, and researchers can design engineering learning environments of human–GenAI augmentation to learn about developing solutions to ill-defined and socially oriented problems in undergraduate education. In reflecting on this conversation about the US–Mexico border design task (see Pérez [Jöhnk], 2021 for details about this activity), we seek to illustrate and (re)imagine how people use GenAI in engineering design education through the perspectives of their cultural and linguistic identities. In particular, we highlight the role of language and culture—understood as the set of experiences, values, and practices of a community—in engagement with GenAI tools for developing sociotechnical solutions to large-scale societal challenges, from climate change to food insecurity.
Inspired by Atman and colleagues’ work on problem scoping (Atman et al., 2008), the US–Mexico border task is a design task created to engage students in thinking about community resources—language practices, values, experiences, and cultural understandings (Pérez [Jöhnk], 2021). The activity is a problem scoping task in engineering design (Pérez et al., 2024) where students are asked to think about their language and cultural identities and those of their communities (who students are in relation to how they speak and their experiences stemming from their racial/ethnic groups). Through this example, we imagine possibilities related to the challenges and opportunities of thinking about the potential role of students’ language practices and cultural understandings in the use of GenAI tools. First, we imagine the affordances of GenAI tools to support students’ thinking in the problem scoping task about the US–Mexico border. Then, we present a thought exercise about what would happen if an undergraduate student shared information about their identity (or the identity of communities involved in the design) in the development of engineering solutions to ill-defined problems. Finally, drawing on current research on GenAI tools for learning (Wang & Fan, 2025), we propose ideas about the potential use of GenAI tools as thought partners, intelligent tutors, and educational tools in engineering design.
GenAI tools present affordances and challenges when supporting students’ thinking and identity during problem definition activities in engineering design, particularly when provided with information about the identity of designers and communities. Because GenAI tools produce output dependent on the input (Borah et al., 2024), we asked the GenAI to take on the identity of a fictional undergraduate student majoring in engineering. We provided information about this persona to the AI (e.g., age, major, community, racial and ethnic background, and language), mimicking the practice engineers engage in when creating a target user in design (Karolita et al., 2023). Then, we asked the GenAI tools to respond to the task assuming the role of that persona. For instance, we described this hypothetical person as Yokasta, a 19-year-old undergraduate student of mechanical engineering who identifies as Afro-Latina and has ancestry from the Dominican Republic. Her family lives in Linwood, CA, and she grew up in San Ysidro, a town in San Diego directly bordering Tijuana, Mexico. Additionally, we described Yokasta as a Spanish speaker who interacts in Caribbean Spanglish at home and with friends while speaking mostly English at the university she attends (a private, top research institution in the Northeast). After providing context about this hypothetical student, we asked the GenAI tool to engage in translanguaging when answering the problem scoping task. Before presenting the task, we provided a definition of translanguaging as described by García (2025) and Pérez et al. (2025a): the dynamic use of people’s semiotic systems—multilingual, multimodal, and multisensory—by minoritized individuals to make meaning, engage in social interaction, and achieve goals. We presented the task, written in a mix of Spanish and English, as was done for one of the groups in the original study. Then, we asked the GenAI tool to answer the task.
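The persona-conditioning procedure above can be sketched as a prompt-construction step. The sketch below is illustrative only: the helper name, prompt wording, and persona fields are our assumptions, not the authors’ exact protocol, and the resulting messages could be passed to any chat-style GenAI interface.

```python
# Illustrative sketch of the persona-conditioning step described above.
# The function name, prompt wording, and field choices are assumptions,
# not the exact protocol used in the study.

def build_persona_prompt(persona: dict, task: str, translanguaging_def: str) -> list:
    """Assemble chat messages asking a GenAI tool to answer a problem
    scoping task from the perspective of a given persona."""
    system = (
        "Assume the identity of the following undergraduate engineering "
        "student and answer the design task in their voice.\n"
        + "\n".join(f"- {key}: {value}" for key, value in persona.items())
        + "\n\nTranslanguaging, as used here, means: " + translanguaging_def
        + "\nRespond using translanguaging where it feels natural."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Persona drawn from the description of Yokasta in the text.
yokasta = {
    "name": "Yokasta",
    "age": 19,
    "major": "Mechanical Engineering",
    "identity": "Afro-Latina, ancestry from the Dominican Republic",
    "home": "grew up in San Ysidro, CA, bordering Tijuana, Mexico",
    "languages": "Caribbean Spanglish at home; mostly English at university",
}

messages = build_persona_prompt(
    yokasta,
    task="¿Qué factores considerarías al abordar the US-Mexico border design task?",
    translanguaging_def=(
        "the dynamic use of people's semiotic systems (multilingual, "
        "multimodal, and multisensory) to make meaning"
    ),
)
```

The same structure can be reused to vary the persona (or omit it entirely, as in the baseline condition) while holding the task constant.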
In a second scenario, instead of focusing on the identity of the undergraduate student, we wondered about the use of GenAI tools as thought partners during problem scoping activities. We see opportunities for GenAI tools to support students’ thinking process about social frames and systems in technical disciplines. For instance, through GenAI literacy frameworks with ethical guidelines, educators can provide student-centered learning opportunities for learners to critically assess GenAI-generated materials (Kangwa et al., 2025). When supported by such frameworks, GenAI tools enable students to expand technological boundaries by assessing the extent to which GenAI-generated materials consider social factors and broader issues in engineering design and prompting the tools to account for sociocultural issues.
In our experiences teaching engineering design courses, there are many ways to incorporate sociotechnical thinking in engineering through the use of GenAI tools. For instance, we can prompt students to expand their repertoires of design considerations using GenAI tools after they have worked on the task independently and/or challenge them to increase the number of social, human, and environmental factors in their responses to the design task. In both instances, we, as faculty, start from the assumption that the design considerations initially generated by GenAI tools may not represent the lived realities of communities, hence the importance of engaging students in collective conversations about the perils and opportunities of GenAI tools in engineering work. When students fail to question how the social aspects of communities are represented or neglected in GenAI outputs, they risk reinforcing technologies that flatten the linguistic and cultural diversity of the very people and communities for whom and with whom they are being trained to co-create solutions. This is particularly true for the communities along the US–Mexico border, the context of the engineering task that sparked these questions and pedagogical wonderings. In the original study, students in the bilingual group were more likely to contribute social factors when answering the task. These students were inspired by their lived experiences in the border region, which they drew on to make decisions about what mattered in narrowing down the problem. However, when presented with the same task, GenAI tools, as we know them today, showed clear limitations: they could only imitate the realities of people, as they have no lived experiences of their own.
From our thought exercises using GenAI tools to answer the US–Mexico border task, two interesting aspects of the problem scoping responses stand out, particularly related to contextual social awareness (Pérez et al., 2020, 2021) and patterns of language use (Pérez [Jöhnk], 2021; Pérez et al., 2024). First, when we used GenAI tools to solve the problem scoping task (see Table 1 for the responses to the task by three popular GenAI tools), the responses included reasonable factors but were biased towards the perspectives of the dominant society. For instance, in the GenAI-generated responses, most of the terms typically associated with technical and scientific disciplines were in English and the more colloquial portions of the responses in Spanish, with a vague and shallow overemphasis on justice and equity. As in the original study, the students’ responses in the English-only group tended to privilege the language of the dominant society for technical matters (Pérez [Jöhnk], 2021). Unlike the responses of students in the bilingual group, who had experiences living along the US–Mexico border or grew up in immigrant communities, GenAI tools tended to neglect the lived realities of people as factors in the design, ignoring traditions, languages, and cultural values. Similarly to how first-year engineering students dedicated less time to defining the problem compared to experts (Atman et al., 2007), GenAI-generated responses mostly provided solutions rather than factors to consider for assessing the problem space, a pattern that has been linked to lower customer satisfaction and poor design outcomes (Jain & Sobek, 2006). Students in the bilingual group offered factors associated with community and people (Pérez et al., 2024), while the GenAI tools failed to produce such considerations.
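The comparison above, between factor types in the bilingual students’ responses and the GenAI-generated ones, amounts to tallying coded design considerations by category. The sketch below illustrates that analytic move with invented codes and counts; the example factors and the `tally_factors` helper are hypothetical and do not reproduce the study’s actual data or coding scheme.

```python
from collections import Counter

def tally_factors(coded_factors: list[tuple[str, str]]) -> Counter:
    """Count design considerations per analytic category
    (e.g., 'social' vs. 'technical')."""
    return Counter(category for _, category in coded_factors)

# Hypothetical coded responses, invented for illustration only.
genai_response = [
    ("sensor network infrastructure", "technical"),
    ("policy framework compliance", "technical"),
    ("construction materials and cost", "technical"),
    ("generic equity considerations", "social"),
]
bilingual_group = [
    ("family separation impacts", "social"),
    ("local language practices", "social"),
    ("community traditions and values", "social"),
    ("terrain and climate constraints", "technical"),
]

print(tally_factors(genai_response))   # skews technical
print(tally_factors(bilingual_group))  # skews social, as in the original study
```

A tally of this form makes the contrast discussed above auditable: students can code their own responses, the GenAI outputs, and compare the distributions side by side.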
The GenAI tools primarily focused on the technical aspects of the problem, including frameworks, policies, or technology development, with limited attention to people and communities; ChatGPT 5 was the exception. In its responses, ChatGPT 5 provided general ideas but was the only tool that generated factors attentive to the language of people in the community; however, it did so with an emphasis on the economy of the region. There were similarities between Co-Pilot and ChatGPT 5 in the content of the responses, with differences only in how these tools coded their factors. For example, Co-Pilot labeled factors as “Adaptive Design”, emphasizing the design and technical aspects of how engineers change the environment to adapt, while ChatGPT 5 labeled similar factors as “Environmental and Geographical”, highlighting the context in which design takes place. Co-Pilot was the only tool that generated factors related to the existence of a safe and efficient binational transportation system.
In terms of language use, identity, and culture, GenAI tools engage in boundary crossing between languages, but their responses may be perceived as unnatural. For instance, when Gemini—integrated into Google Search—was asked to perform the task from the perspective of Yokasta, it was unable to provide a response and instead offered a list of links with resources and a message that read “An AI overview is not available for this search”. As we read the factors provided by the GenAI tools, we could not imagine a person in the real world who would speak in such a way. There are many plausible explanations for this: people talk differently in the physical world than they write in virtual settings (the primary source of the data behind GenAI-generated materials), and people have a sense of language developed through social and cultural interactions that machines lack and poorly mimic—a clear limitation of GenAI–human augmentation when incorporating contextual factors (Gao et al., 2025). After we gave the GenAI tools the problem scoping task, the AI engaged in language use behaviors that mirrored societal expectations of language and culture, which may not reflect how people engage with one another in their everyday lives within multilingual and multidialectal communities.

6. Discussion

This manuscript begins by asking about the role that identities and subjectivities—of students, others, communities, and technology itself—play in the use of GenAI in engineering and science learning, including the opportunities and limitations for adopting GenAI in STEM education. The cases presented in this manuscript highlight the significant challenges that exist in fostering students’ cognitive, moral, and linguistic resources, which we contend shape their beliefs and subjectivities about fair and appropriate use of GenAI for STEM learning. We suggest these issues have implications for faculty pedagogies and student learning, and we tackle these ideas from a perspective of identity and subjectivity as dynamic and developed in interactions with the self, others, and the environment.
The cases described in this paper illustrate that student and faculty beliefs about “fair and appropriate use” are context-sensitive and informed by personal identities and subjectivities. In Case I, students articulated the ways that socio-academic context shaped their beliefs about the appropriateness of utilizing GenAI tools in STEM learning environments. Case I also demonstrates the ways learning activities can position students to problematize the use of algorithms for making social judgments. Case II demonstrates the ways researchers and faculty can facilitate cultural and critical learning experiences in STEM education for students to reflect on and mitigate biases in ideas and technologies. Finally, Case III demonstrates the ways faculty can imagine language and cultural opportunities for designing solutions in multilingual and multidialectal contexts. All of these cases, while touching on commonly cited concerns about GenAI in STEM education (e.g., academic misconduct, ethical (mis)use), still suggest that there are potentially powerful opportunities for GenAI to foster important learning outcomes.
These cases also underscore the affordances of faculty and students adopting GenAI in STEM learning environments. That is, what students can do with GenAI in an introductory computing course might differ from what they can do with GenAI in a multilingual problem scoping activity. Given GenAI’s capabilities to generate both code and natural-language instructions, it offers scholars and educators a new frontier for collectively leveraging technological tools to expand knowledge and innovation (Prather et al., 2023). Yet, as the examples presented here suggest, such use must remain grounded in context. For instance, in Case I, where students discussed GenAI tools for facial recognition that claimed to determine one’s sexual orientation based on facial features, students acknowledged that the purpose of the technology was unclear. However, the learning activities positioned students to discuss the many assumptions people make about others’ identities when developing these tools or using them. What ensued was a discussion about whether the technology was an unethical use of AI that oversimplifies identity (i.e., sexual orientation), as well as a discussion about the ways engineering designers could develop more ethical technologies. Similarly, in Case III, when students and faculty asked the GenAI tool to answer the problem without any prior background information, the response was narrow in the kind of factors the tool considered, and the technology oversimplified people’s language and cultural background. Such an activity positioned students to discuss the ways GenAI tools can exacerbate systems of social inequality, such as racism, sexism, and xenophobia.
Herein lies a pedagogical conundrum about the opportunities and limitations of thinking about contextualized identity and subjectivity in the use of GenAI in STEM learning environments. Research is emerging about the ways GenAI can support student learning. For example, while research has documented the coping strategies students use to address social anxiety across learning environments, such as avoiding speaking or asking questions (Russell & Topham, 2012), GenAI tools present a concrete advantage for teaching and learning because students do not have to worry about asking poorly formulated questions or fear being judged. GenAI tools can make students feel heard and validated, as the technology performs well at extracting information from questions structured in different ways, moving students to engage further with these technologies. In STEM contexts, students’ use of GenAI tools in ways that are meaningful to them may lead to increased interest in learning, as long as the necessary pedagogical strategies and resources are in place for students to expand their own perspectives through the use of these technologies (Monzon & Hays, 2025).
However, we also argue that blanket prohibitions on the use of GenAI constrain opportunities for faculty to engage students on fair and appropriate use of these technological tools in context. Some instructors may argue that blanket prohibitions are a reasonable pedagogical approach because such a policy is broadly applicable across academic contexts: faculty need not make context-sensitive decisions about fair use of GenAI in their respective courses if GenAI should not be used at all, and there is no confusion for students if all the courses in which they enroll adopt the same policy. Through our work, however, we have indicated the unrealistic nature of blanket prohibitions and argued that such prohibitions are ill-suited for fostering opportunities to engage students’ perspectives and beliefs about fair and appropriate use of emerging technologies, as well as the identities and subjectivities of themselves, others, and AI tools, which can foster sociotechnical and socioscientific learning outcomes.
While ethical considerations have emerged from the activities described in the cases, questions remain about the future use of GenAI tools and their impact on communities, particularly marginalized groups. Used effectively, GenAI tools have the potential to create inclusive environments that prioritize equity, justice, and inclusion among other productive forms of STEM research and teaching (see Price & Grover, 2025). Still, scholars have also posited that more efforts are needed to consider how GenAI could alleviate potential biases by considering multiple perspectives and cultures, as the use of GenAI tools designed through particular lenses is rampant and far-reaching (Yusuf et al., 2024).

7. Conclusions and Future Directions

This paper proposed a subjective intelligence framework with the potential to serve as a powerful lens for imagining and problematizing uses of GenAI technologies in STEM learning environments while examining their implications for ethical and social responsibility. Through illustrative examples from STEM courses, we unpacked the affordances of thinking about identity and subjectivity in fostering innovative use of GenAI in STEM education. We joined other scholars in highlighting the relevance of addressing AI biases, misinformation, and disinformation in STEM education (Zhai & Nehm, 2023). In particular, the framework underlines ideas on cognitive and moral development in data ethics, biases in technology design, and linguistic identity in the use of AI systems.
The most profound impact of this framework lies in its capacity to catalyze a shift toward the integration of sociotechnical and socioscientific issues in GenAI use, positioning faculty and students to think about the impact of ideas and technology on society. Sociotechnical and socioscientific issues have become an increasingly important aspect of the STEM curriculum, particularly in engineering education where students are tasked with developing technologies and solutions for communities (Pérez et al., 2025b). However, scholars have critiqued common approaches to sociotechnical and socioscientific STEM education because they focus on individual decision making or post hoc analyses of existing ideas and technology, rather than asking students to engage critically with the influence of engineering and scientific decisions and emerging technologies on structural issues in our society. In our framework, we posit that asking students and faculty to consider fair and appropriate use of GenAI—and future evolutions of these technologies—and its impact on people, communities, and society represents a unique opportunity for sociotechnical learning experiences. Such opportunities ask students and faculty to engage collaboratively with each other and the broader society to think critically about access to, use of, and impacts of AI systems. Simply removing GenAI from learning contexts undermines sociotechnical and socioscientific integration in engineering and science, as well as the development of engaged citizen engineers and scientists, which is a core goal of STEM higher education.
As we consider the future of GenAI tools, we invite faculty, scholars, and students to think carefully about what has been learned from technological advancements of the past. Put directly, while many advancements in STEM have historically yielded broad benefits to society, certain developments have also carried negative consequences for vulnerable populations. Therefore, we ourselves must critically examine these disciplinary histories, especially today, as a growing body of research captures the potential of biases to persist in and shape the development of GenAI tools and future emerging technologies. As we evolve into a more technologically advanced society, it becomes imperative to encourage dialogue about the critical examination of, and action towards addressing, biases and ethical considerations of GenAI tools in STEM education.
The ways in which STEM faculty and students in higher education reflect on their own identities and the identities of others—both human (e.g., local community) and non-human (e.g., GenAI tools)—in the use of emerging technologies for teaching and learning shape the paradigms reinforced in STEM education. If undergraduate students engage with GenAI when solving problem scoping tasks (Case III), they may come to associate engineering as a discipline primarily with the dominant language. Distorted perceptions of communities may then shape how solutions are developed, if communities are considered at all, suggesting that technical expertise is tied exclusively to the dominant language and relegating the subjective realities of others, thereby limiting how students imagine STEM pathways for the production of knowledge and solutions.
AI technologies are poised to fundamentally transform every aspect of human and other-than-human life, yet their benefits and risks will not be distributed evenly. For historically minoritized groups, this situation may widen existing opportunity gaps. As engineering and science disciplines advance ideas and solutions intended for all, it is crucial to recognize that GenAI technologies—and subsequent evolutions of these systems, such as Agentic AI, which may make its own determinations in the future while providing personalized assistance and automating repetitive tasks in teaching, learning, and the workforce (Pop et al., 2025)—may reproduce biases, perpetuate stereotypes, and deepen societal inequities. Acknowledging these ethical and moral challenges is essential for ensuring a future where these technologies expand opportunities for learning rather than foreclose them.

Recommendations for Future Research and Imagined Futures

Decades of research have charted young people’s and college students’ cognitive and moral development (e.g., King & Kitchener, 2012; Kohlberg & Hersh, 1977; Piaget, 1970; Perry, 1970). More recently, scholars have turned attention to the role of technology, such as video games, podcasts, and social media algorithms, in influencing young people’s personal values and moral development (e.g., Bulmash, 2024; Young, 2015). The advent of GenAI represents a new and unique research opportunity. Whereas other technologies are static (i.e., a video game or podcast must be published before users can interact with it), GenAI is rapidly evolving and is, in many ways, evolving in response to the ways people (e.g., students, communities) utilize it. As a result, students are responding not only to GenAI as it exists today but also to how they project or predict it might exist in the future. Thus, future research should examine how students’ beliefs and values regarding GenAI reflect their projections about its future.
Furthermore, it is quite possible that, as in cases experienced in the authors’ classrooms, students are unaware of past discriminatory practices enacted through science, medicine, and technology. Future research should assess student attitudes and beliefs surrounding ethics and GenAI. This approach can unearth strategies to ensure these topics are incorporated into curricula across STEM disciplines.
Through the example of linguistic identity and GenAI tools in engineering design, we begin to think about the affordances of these emerging technologies for learning and a future dominated by Agentic AI—where students “can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment” (Durante et al., 2024, p. 2). Future research should explore the challenges and possibilities of what we currently know about how people learn to incorporate sociotechnical and socioscientific dimensions in engineering and science (Pérez et al., 2025b), as well as the ways in which we preserve students’ agency in the use of AI technologies and support students’ understanding of STEM concepts and practices through their use.
If we were to create an AI agent, we could train the technology to stay informed about particular communities, for instance on geopolitical issues or local and international news. This agentic future could bring the possibility of dynamic, community-oriented AI models for learning, connected to one another through agentic architectures of neural networks (Windland et al., n.d.; NVIDIA, n.d.). As STEM researchers and practitioners chart the future of AI for learning, future scholarship should explore how Agentic AI tools and architectures learn from and emulate community experiences while pondering the ethical and moral dilemmas of their use. Such a simulation of a person will never be the same as interacting with a real human; it must rest on a stereotype of a person and will therefore fall short. When AI becomes agentic and personified, we may have as many AIs as people on the planet, if not more (grandiose, but a digital twin of humanity), each with its own personality; this could be incredibly powerful for supporting students in thinking about and engaging with different perspectives. The STEM education community would then be better positioned to understand how AI learns and to set up guardrails for its future use. Otherwise, if the history of ideas and technological advances in engineering and science is any guide, GenAI and Agentic AI will most likely offer limited, imperfect, and biased portrayals of communities and, in the best case, distorted pictures of the experiences and values of particular groups, undermining in the process the use of AI tools for engineering design education.
There is identity in people, but we also need to think about what the identity of AI is while considering its moral and cognitive development. If AI can take on an unlimited number of identities, these identities may change its perspectives, the lens through which AI solves problems and creates solutions, resulting in different outcomes of the technology and in different human–AI interactions for learning. This calls us to question whether we should create AI tools with identity and, if so, which identities. Today, GenAI tools have limited views of identity: students can choose a voice, changing the output without necessarily changing the model. But even when AI becomes fully agentic, it will not have a personal idea of who it is; rather, its sense of self will be set up initially by the parameters of a model that serve as the foundation for such an AI persona.
In a future where the AI itself could start probing students by learning from them and asking them to incorporate social factors in their designs, students may be able to push the AI to think in new ways, and the AI may push students to question their design considerations. If learning scientists and STEM scholars come to understand how engineering and science students learn from and with AI helpers, we could imagine and investigate learning environments where students learn together with these technologies, or discuss ideas with them, by framing the AI to think about aspects of particular communities.
Currently, the AI tends to respond with what people want to hear; even when students ask for strong pushback, the technology offers possible rebuttals of students’ positions but, as soon as the students disagree with those ideas, it conforms to what the students think. A future where the AI is agentic may open possibilities for rebuttals coming from the technology itself, where students can learn to engage in STEM discourse and stretch the possibilities of their ideas and solutions through the use of AI technologies, or, if we fail to think deeply about human–AI learning, the technology might simply reinforce what students already think.

Author Contributions

Conceptualization, G.P., T.H., T.P., G.R.M., A.V., P.E. and Y.P.P.; methodology, G.P., T.H. and T.P.; resources, G.P., T.H., T.P. and G.R.M.; data curation, G.P., T.H., T.P. and A.V.; writing—original draft preparation, G.P., T.H., T.P., G.R.M., A.V., P.E. and Y.P.P.; writing—review and editing, G.P., T.H., T.P., G.R.M. and A.V.; visualization, G.P., T.H. and T.P.; supervision, G.P., T.H. and T.P.; project administration, G.P., T.H. and T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Tufts University (protocol code 00002282, approved on 7 February 2022).

Informed Consent Statement

When applicable, informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

GenAI has been used as a data source. The authors thank the students who inspired the ideas in this paper and helped us imagine transformative engineering futures for students.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abdou, E. D. (2017). Toward embracing multiple perspectives in world history curricula: Interrogating representations of intercultural exchanges between ancient civilizations in Quebec textbooks. Theory & Research in Social Education, 45(3), 378–412. [Google Scholar] [CrossRef]
  2. Ahmad, M., Subih, M., Fawaz, M., Alnuqaidan, H., Abuejheisheh, A., Naqshbandi, V., & Alhalaiqa, F. (2024). Awareness, benefits, threats, attitudes, and satisfaction with AI tools among Asian and African higher education staff and students. Journal of Applied Learning and Teaching, 7(1), 1. [Google Scholar] [CrossRef]
  3. AlAfnan, M. A., Dishari, S., Jovic, M., & Lomidze, K. (2023). ChatGPT as an educational tool: Opportunities, challenges, and recommendations for communication, business writing, and composition courses. Journal of Artificial Intelligence and Technology, 3(2), 60–68. [Google Scholar] [CrossRef]
  4. Alasadi, E. A., & Baiz, C. R. (2023). Generative AI in education and research: Opportunities, concerns, and solutions. Journal of Chemical Education, 100(8), 2965–2971. [Google Scholar] [CrossRef]
  5. Alessandro, G., Dimitri, O., Cristina, B., & Anna, M. (2025). The emotional impact of generative AI: Negative emotions and perception of threat. Behaviour & Information Technology, 44(4), 676–693. [Google Scholar] [CrossRef]
  6. Alier, M., García-Peñalvo, F.-J., & Camba, J. D. (2024). Generative artificial intelligence in education: From deceptive to disruptive. International Journal of Interactive Multimedia and Artificial Intelligence, 8(5), 5. [Google Scholar] [CrossRef]
  7. Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical challenges and solutions of generative AI: An interdisciplinary perspective. Informatics, 11(58), 58. [Google Scholar] [CrossRef]
  8. Almeida, D., Shmarko, K., & Lomas, E. (2021). The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: A comparative analysis of US, EU, and UK regulatory frameworks. AI and Ethics, 2(3), 377–387. [Google Scholar] [CrossRef]
  9. Ariza, J. Á., Restrepo, M. B., & Hernández, C. H. (2025). Generative AI in engineering and computing education: A scoping review of empirical studies and educational practices. IEEE Access, 13, 30789–30810. [Google Scholar] [CrossRef]
  10. Arnold, Z., Heston, R., Zwetsloot, R., & Huang, T. (2019). Immigration policy and the U.S. AI sector: A preliminary assessment. Center for Security and Emerging Technology. [Google Scholar] [CrossRef]
  11. Ashcroft, B. (2001). Language and race. Social Identities, 7(3), 311–328. [Google Scholar] [CrossRef]
  12. Atman, C. J., Adams, R. S., Cardella, M. E., Turns, J., Mosborg, S., & Saleem, J. (2007). Engineering design processes: A comparison of students and expert practitioners. Journal of Engineering Education, 96(4), 359–379. [Google Scholar] [CrossRef]
  13. Atman, C. J., Yasuhara, K., Adams, R. S., Barker, T. J., Turns, J., & Rhone, E. (2008). Breadth in problem scoping: A comparison of freshman and senior engineering students. International Journal of Engineering Education, 24(2), 13. [Google Scholar]
  14. Avraamidou, L. (2019). “I am a young immigrant woman doing physics and on top of that I am Muslim”: Identities, intersections, and negotiations. Journal of Research in Science Teaching, 57(3), 311–341. [Google Scholar] [CrossRef]
  15. Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32(4), 1052–1092. [Google Scholar] [CrossRef]
  16. Bandara, C. (2024). Hallucination as disinformation: The role of LLMs in amplifying conspiracy theories and fake news. Journal of Applied Cybersecurity Analytics, Intelligence, and Decision-Making Systems, 14(12), 65–76. Available online: https://sciencespress.com/index.php/JACAIDMS/article/view/14 (accessed on 2 July 2025).
  17. Baptiste, D., Caviness-Ashe, N., Josiah, N., Commodore-Mensah, Y., Arscott, J., Wilson, P. R., & Starks, S. (2022). Henrietta Lacks and America’s dark history of research involving African Americans. Nursing Open, 9(5), 2236–2238. [Google Scholar] [CrossRef]
  18. Barry, I., & Stephenson, E. (2025). The gendered, epistemic injustices of generative AI. Australian Feminist Studies, 49(123), 1–21. [Google Scholar] [CrossRef]
  19. Basu, P., & Mohanty, S. S. (2024). Developing multilingual glossaries for STEM terminology using AI-NLP. In S. S. Mohanty, S. R. Dash, & S. Parida (Eds.), Applying AI-based tools and technologies towards revitalization of indigenous and endangered languages (pp. 115–122). Springer. [Google Scholar] [CrossRef]
  20. Basu, S., Das, N., Sarkar, R., Kundu, M., Nasipuri, M., & Kumar Basu, D. (2010). A novel framework for automatic sorting of postal documents with multi-script address blocks. Pattern Recognition, 43(10), 3507–3521. [Google Scholar] [CrossRef]
  21. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (FAccT ’21) (pp. 610–623). Association for Computing Machinery. [Google Scholar] [CrossRef]
  22. Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review and research agenda. Information, 16(4), 296. [Google Scholar] [CrossRef]
  23. Borah, A. R., Nischith, T. N., & Gupta, S. (2024, January 4–6). Improved learning based on GenAI. 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT) (pp. 1527–1532), Bengaluru, India. [Google Scholar] [CrossRef]
  24. Bowen, H. (Ed.). (2018). Investment in learning: The individual and social value of American higher education. Routledge. [Google Scholar]
  25. Brown, R., Sillence, E., & Branley-Bell, D. (2025). AcademAI: Investigating AI usage, attitudes, and literacy in higher education and research. Journal of Educational Technology Systems, 54(1), 6–33. [Google Scholar] [CrossRef]
  26. Bulmash, B. (2024). Social media use and mistrust in authority: An examination of Kohlberg’s moral development model. Journal of Information, Communication and Ethics in Society, 22(4), 466–477. [Google Scholar] [CrossRef]
  27. Bura, C., & Myakala, P. K. (2024). Advancing transformative education: Generative AI as a catalyst for equity and innovation [Preprint]. arXiv. [Google Scholar] [CrossRef]
  28. Calabrese Barton, A., & Tan, E. (2010). We be burnin’! Agency, identity, and science learning. Journal of the Learning Sciences, 19(2), 187–229. [Google Scholar] [CrossRef]
  29. Caputo, M. (2025, March 6). Scoop: State Dept. to use AI to revoke visas of foreign students who appear “pro-Hamas”. Axios. Available online: https://www.axios.com/2025/03/06/state-department-ai-revoke-foreign-student-visas-hamas (accessed on 20 August 2025).
  30. Carlone, H. B., & Johnson, A. (2007). Understanding the science experiences of successful women of color: Science identity as an analytic lens. Journal of Research in Science Teaching, 44(8), 1187–1218. [Google Scholar] [CrossRef]
  31. Center for Accelerating Operational Efficiency. (2025). Center for accelerating operational efficiency (CAOE) fact sheet. U.S. Department of Homeland Security. Available online: https://www.dhs.gov/science-and-technology/publication/center-accelerating-operational-efficiency-fact-sheet (accessed on 2 July 2025).
  32. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  33. Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 60. [Google Scholar] [CrossRef]
  34. Chinta, S. V., Wang, Z., Yin, Z., Hoang, N., Gonzalez, M., Le Quy, T., & Zhang, W. (2024). FairAIED: Navigating fairness, bias, and ethics in educational AI applications [Preprint]. arXiv. [Google Scholar] [CrossRef]
  35. Choudhury, M. (2023). Generative AI has a language problem. Nature Human Behavior, 7, 1802–1803. [Google Scholar] [CrossRef] [PubMed]
  36. Corbin, T., Tai, J., & Flenady, G. (2025). Understanding the place and value of GenAI feedback: A recognition-based framework. Assessment & Evaluation in Higher Education, 50, 718–731. [Google Scholar] [CrossRef]
  37. Creely, E. (2024). Exploring the role of generative AI in enhancing language learning: Opportunities and challenges. International Journal of Changes in Education, 1(2), 158–167. [Google Scholar] [CrossRef]
  38. Danahy, E. (n.d.). Course AI policies. Available online: https://provost.tufts.edu/celt/online-resources/artificial-intelligence/ai-syllabus-statements/ (accessed on 1 July 2025).
  39. Dewan, U., Hingle, A., McDonald, N., & Johri, A. (2025). Engineering educators’ perspectives on the impact of generative AI in higher education. arXiv. [Google Scholar] [CrossRef]
  40. Duah, J. E., & McGivern, P. (2024). How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. The International Journal of Information and Learning Technology, 41(2), 180–193. [Google Scholar] [CrossRef]
  41. Dumin, L. (2024). 8. AI and writing classrooms: A study of purposeful use and student responses to the technology. In Teaching and generative AI: Pedagogical possibilities and productive tensions (Paper 35). Oklahoma State University. Available online: https://digitalcommons.usu.edu/teachingai/35 (accessed on 15 August 2025).
  42. Durante, Z., Huang, Q., Wake, N., Gong, R., Park, J. S., Sarkar, B., Taori, R., Noda, Y., Terzopoulos, D., Choi, Y., Ikeuchi, K., Vo, H., Fei-Fei, L., & Gao, J. (2024). Agent AI: Surveying the horizons of multimodal interaction. arXiv. [Google Scholar] [CrossRef]
  43. Eccles, J. (2009). Who am I and what am I going to do with my life? Personal and collective identities as motivators of action. Educational Psychologist, 44(2), 78–89. [Google Scholar] [CrossRef]
  44. El Fathi, T., Saad, A., Larhzil, H., Lamri, D., & Al Ibrahmi, E. M. (2025). Integrating generative AI into STEM education: Enhancing conceptual understanding, addressing misconceptions, and assessing student acceptance. Disciplinary and Interdisciplinary Science Education Research, 7(6), 1–21. [Google Scholar] [CrossRef]
  45. Erikson, E. H. (1950). Childhood and society. W. W. Norton. [Google Scholar]
  46. Erikson, E. H. (1968). Identity, youth, and crisis: Youth and crisis. W. W. Norton. [Google Scholar]
  47. Flathmann, C., Schelble, B. G., McNeese, N. J., Knijnenburg, B., Gramopadhye, A. K., & Chalil Madathil, K. (2023). The Purposeful Presentation of AI Teammates: Impacts on Human Acceptance and Perception. International Journal of Human–Computer Interaction, 40(20), 6510–6527. [Google Scholar] [CrossRef]
  48. Gandolfi, A. (2025). GPT-4 in Education: Evaluating Aptness, Reliability, and Loss of Coherence in Solving Calculus Problems and Grading Submissions. International Journal of Artificial Intelligence in Education, 35(1), 367–397. [Google Scholar] [CrossRef]
  49. Gao, Y., Zhai, X., Li, M., Lee, G., & Liu, X. (2025). A multimodal interactive framework for science assessment in the era of generative artificial intelligence. Journal of Research in Science Teaching, 62, 2014–2028. [Google Scholar] [CrossRef]
  50. García, O. (2025). Understanding entrenzados: A commentary. Journal of Research in Science Teaching, 62(1), 379–387. [Google Scholar] [CrossRef]
  51. Getto, G., Kelley, S., & Vance, B. (2025). How to write with genAI: A framework for using generative AI to automate writing tasks in technical communication. Journal of Technical Writing and Communication, 55(3), 1–34. [Google Scholar] [CrossRef]
  52. Gilstrap, C., Bacic, D., & Gilstrap, C. (2024, May 20–24). Understanding the adoption of generative artificial intelligence within communities of practice: A cross-practice, machine learning-based lexical study. Proceedings of the International Convention MIPRO, Opatija, Croatia. [Google Scholar] [CrossRef]
  53. Godwin, A. (2016, June 26–29). The development of a measure of engineering identity. 2016 ASEE Annual Conference & Exposition Proceedings, New Orleans, LA, USA. [Google Scholar] [CrossRef]
  54. Grotevant, H. D. (1987). Toward a process model of identity formation. Journal of Adolescent Research, 2(3), 203–222. [Google Scholar] [CrossRef]
  55. Guettala, M., Bourekkache, S., Kazar, O., & Harous, S. (2024). Generative artificial intelligence in education: Advancing adaptive and personalized learning. Acta Informatica Pragensia, 13(3), 460–489. [Google Scholar] [CrossRef]
  56. Guilbeault, D., Delecourt, S., Hull, T., Desikan, B. S., Chu, M., & Nadler, E. (2024). Online images amplify gender bias. Nature, 626(8001), 1049–1055. [Google Scholar] [CrossRef] [PubMed]
  57. Harris-Watson, A. M., Larson, L. E., Lauharatanahirun, N., DeChurch, L. A., & Contractor, N. S. (2023). Social perception in Human-AI teams: Warmth and competence predict receptivity to AI teammates. Computers in Human Behavior. [Google Scholar] [CrossRef]
  58. Heeg, D. M., & Avraamidou, L. (2023). The use of artificial intelligence in school science: A systematic literature review. Educational Media International, 60(2), 125–150. [Google Scholar] [CrossRef]
  59. Henderson, T. S. (2024a). Eurocentric epistemologies in engineering: Manifestations in first-year student design teams and consequences for student learning. Journal of Engineering Education, 113(2), 360–382. [Google Scholar] [CrossRef]
  60. Henderson, T. S. (2024b). Understanding the relationship between idea contributions and idea enactments in student design teams: A social network analysis approach. Journal of Engineering Education, 113(2), 225–250. [Google Scholar] [CrossRef]
  61. Henkel, O., Hills, L., Roberts, B., & McGrane, J. (2025). Can LLMs grade open response reading comprehension questions? An empirical study using the ROARs dataset. International Journal of Artificial Intelligence in Education, 35(2), 651–676. [Google Scholar] [CrossRef]
  62. Hill, D. B. (2005). Coming to terms: Using technology to know identity. Sexuality and Culture, 9(3), 24–52. [Google Scholar] [CrossRef]
  63. Hill, K. (2024, June 29). Facial recognition led to wrongful arrests. So Detroit is making changes. The New York Times. Available online: https://www.nytimes.com/2024/06/29/technology/detroit-facial-recognition-false-arrests.html (accessed on 1 July 2025).
  64. Hofmann, V., Kalluri, P. R., Jurafsky, D., & King, S. (2024). AI generates covertly racist decisions about people based on their dialect. Nature, 633(8028), 147–154. [Google Scholar] [CrossRef]
  65. Hutcheson, P. A. (2007). Setting the nation’s agenda for higher education: A review of selected national commission reports, 1947–2006. History of Education Quarterly, 47(3), 359–367. [Google Scholar] [CrossRef]
  66. Jain, V. K., & Sobek, D. K. (2006). Linking design process to customer satisfaction through virtual design of experiments. Research in Engineering Design, 17(2), 59–71. [Google Scholar] [CrossRef]
  67. James, T., & Andrews, G. (2024). Levelling the playing field through GenAI: Harnessing artificial intelligence to bridge educational gaps for equity and disadvantaged students. Widening Participation and Lifelong Learning, 26(3), 250–260. [Google Scholar] [CrossRef]
  68. Jančařík, A., & Dušek, O. (2024, October). The problem of AI hallucination and how to solve it. In European conference on e-learning (pp. 122–128). Academic Conferences International Limited. [Google Scholar]
  69. Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38. [Google Scholar] [CrossRef]
  70. Johri, A. (2020). Artificial intelligence and engineering education. Journal of Engineering Education, 109(3), 358–361. [Google Scholar] [CrossRef]
  71. Kangwa, D., Msafiri, M. M., & Fute, A. (2025). Exploring the factors that promote a balance between academic integrity and the effective use of GenAI tools in higher education: A systematic review. Journal of Computer Assisted Learning, 41(5), e70109. [Google Scholar] [CrossRef]
  72. Karolita, D., McIntosh, J., Kanij, T., Grundy, J., & Obie, H. O. (2023). Use of personas in Requirements Engineering: A systematic mapping study. Information and Software Technology, 162, 107264. [Google Scholar] [CrossRef]
  73. Kazemitabaar, M., Hou, X., Henley, A., Ericson, B. J., Weintrop, D., & Grossman, T. (2023). How novices use LLM-based code generators to solve CS1 coding tasks in a self-paced learning environment. arXiv. [Google Scholar] [CrossRef]
  74. Kelly, S., Kaye, S.-A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 1–33. [Google Scholar] [CrossRef]
  75. Kerpelman, J. L., Pittman, J. F., & Lamke, L. K. (1997). Toward a microprocess perspective on adolescent identity development. Journal of Adolescent Research, 12(3), 325–346. [Google Scholar] [CrossRef]
  76. Keyes, O., & Creel, K. (2022). Artificial knowing otherwise. Feminist Philosophy Quarterly, 8(3/4), 1–25. [Google Scholar] [CrossRef]
  77. Keyes, O., Hitzig, Z., & Blell, M. (2021). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1–2), 158–175. [Google Scholar] [CrossRef]
  78. Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380(6651), 1222–1223. [Google Scholar] [CrossRef] [PubMed]
  79. King, P. M. (2009). Principles of development and developmental change underlying theories of cognitive and moral development. Journal of College Student Development, 50(6), 597–620. [Google Scholar] [CrossRef]
  80. King, P. M., & Kitchener, K. S. (2012). The reflective judgment model: Twenty years of research on epistemic cognition. In Personal epistemology (pp. 37–61). Routledge. [Google Scholar]
  81. Klenk, M. (2020). How do technological artefacts embody moral values? Philosophy & Technology, 34(3), 525–544. [Google Scholar] [CrossRef]
  82. Klopfer, E., Reich, J., Abelson, H., & Breazeal, C. (2024). Generative AI and K-12 education: An MIT perspective. In An MIT exploration of generative AI. MIT. [Google Scholar] [CrossRef]
  83. Kohlberg, L., & Hersh, R. H. (1977). Moral development: A review of the theory. Theory Into Practice, 16(2), 53–59. Available online: https://www.jstor.org/stable/1475172 (accessed on 1 July 2025). [CrossRef]
  84. Kozlowski, D., Murray, D. S., Bell, A., Hulsey, W., Larivière, V., Monroe-White, T., & Sugimoto, C. R. (2022). Avoiding bias when inferring race using name-based approaches. PLoS ONE, 17(3), e0264270. [Google Scholar] [CrossRef]
  85. Kshetri, N. (2024). Linguistic challenges in generative artificial intelligence: Implications for low-resource languages in the developing world. Journal of Global Information Technology Management, 27(2), 95–99. [Google Scholar] [CrossRef]
  86. Law, L. (2024). Application of generative artificial intelligence (GenAI) in language teaching and learning: A scoping literature review. Computers and Education Open, 6, 100174. [Google Scholar] [CrossRef]
  87. LeCompte, M. D., & Ludwig, S. A. (2007). I am my identity kit: Using artifact data in research identity. EMIGRA Working Papers, 111. Available online: https://ddd.uab.cat/pub/emigrawp/emigrawp_a2007n111/emigrawp_a2007n111p1.pdf (accessed on 24 July 2025).
  88. Leydens, J., Morgan, T. K., & Lucena, J. (2017, June 25–28). Mechanisms by which indigenous students achieved a sense of belonging and identity in engineering education. 2017 ASEE Annual Conference & Exposition Proceedings, Columbus, OH, USA. [Google Scholar] [CrossRef]
  89. Liu, J. (2007). Intercultural communication in letters of recommendation. Journal of Intercultural Communication, 7(1), 1–9. [Google Scholar] [CrossRef]
  90. Lombardo, P. A. (2024). “Ridding the race of his defective blood”—Eugenics in the journal, 1906–1948. The New England Journal of Medicine, 390(10), 869–873. [Google Scholar] [CrossRef]
  91. Luyckx, K., Schwartz, S. J., Goossens, L., Beyers, W., & Missotten, L. (2011). Processes of personal identity formation and evaluation. In S. Schwartz, K. Luyckx, & V. Vignoles (Eds.), Handbook of identity theory and research. Springer. [Google Scholar] [CrossRef]
  92. Mai, D. T. T., Da, C. V., & Hanh, N. V. (2024). The use of ChatGPT in teaching and learning: A systematic review through SWOT analysis approach. Frontiers in Education, 9, 1328769. [Google Scholar] [CrossRef]
  93. Marcia, J. E. (1980). Identity in adolescence. In J. Adelson (Ed.), Handbook of adolescent psychology (pp. 159–187). Wiley. [Google Scholar]
  94. May, T. A., Fan, Y. K., Stone, G. E., Koskey, K. L. K., Sondergeld, C. J., Folger, T. D., Archer, J. N., Provinzano, K., & Johnson, C. C. (2025). An effectiveness study of generative artificial intelligence tools used to develop multiple-choice test items. Education Sciences, 15(2), 144. [Google Scholar] [CrossRef]
  95. McDermott, A. (2023). English is the go-to language of science, but students often do better when taught in more tongues. Proceedings of the National Academy of Sciences of the United States of America, 120(40), e2315792120. [Google Scholar] [CrossRef] [PubMed]
  96. Mensah, F. M., & Pierre, T. (2025). Troubling the definition of black resilience in STEM-CS education. Journal of Research in Science Teaching, 62(4), 1159–1163. [Google Scholar] [CrossRef]
  97. Meyers, K., Ohland, M., Pawley, A., Stephen, S., & Smith, K. (2012). Factors relating to engineering identity. Global Journal of Engineering Education, 14(1), 119–131. [Google Scholar]
  98. Monzon, N., & Hays, F. A. (2025). Leveraging generative artificial intelligence to improve motivation and retrieval in higher education learners. JMIR Medical Education, 11(1), e59210. [Google Scholar] [CrossRef]
  99. Muldoon, J., & Wu, B. A. (2023). Artificial intelligence in the colonial matrix of power. Philosophy & Technology, 36(4), 80. [Google Scholar] [CrossRef]
  100. Munaye, Y. Y., Admass, W., Belayneh, Y., Molla, A., & Asmare, M. (2025). ChatGPT in education: A systematic review on opportunities, challenges, and future directions. Algorithms, 18(6), 352. [Google Scholar] [CrossRef]
  101. Nalbandian, L. (2022). An eye for an ‘I’: A critical assessment of artificial intelligence tools in migration and asylum management. Comparative Migration Studies, 10(1), 32. [Google Scholar] [CrossRef]
  102. National Center for Science and Engineering Statistics (NCSES). (2023). Diversity and STEM: Women, minorities, and persons with disabilities 2023 (No. special report NSF 23-315). National Science Foundation. Available online: https://nsf-gov-resources.nsf.gov/doc_library/nsf23315-report.pdf?VersionId=OfErRcu.MTEp_KCzcuAfyfIkh0cU_XQO (accessed on 1 June 2025).
  103. National Science Board & National Science Foundation. (2020). Science and engineering indicators 2020: The state of U.S. science and engineering (NSB-2020-1). National Science Board. Available online: https://ncses.nsf.gov/pubs/nsb20201/ (accessed on 20 August 2025).
  104. New American Economy Research Fund. (2018). Power of the purse: How Sub-Saharan Africans contribute to the U.S. economy. Available online: https://research.newamericaneconomy.org/wp-content/uploads/sites/2/2018/01/NAE_African_V6.pdf (accessed on 15 August 2025).
  105. Nordgren, K., & Johansson, M. (2014). Intercultural historical learning: A conceptual framework. Journal of Curriculum Studies, 47(1), 1–25. [Google Scholar] [CrossRef]
  106. Nvidia. (n.d.). Deep learning. Nvidia Developer. Available online: https://developer.nvidia.com/deep-learning (accessed on 1 July 2025).
  107. Nyaaba, M., & Zhai, X. (2024). Generative AI professional development needs for teacher educators. Journal of AI, 8(1), 1–13. [Google Scholar] [CrossRef]
  108. OpenAI. (2024, March 21). Video generation models as world simulators. Available online: https://openai.com/index/video-generation-models-as-world-simulators/ (accessed on 30 June 2025).
  109. Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT. Journal of Interactive Learning Research, 34(2), 213–237. Available online: https://philarchive.org/rec/ORAAII (accessed on 2 July 2025). [CrossRef]
  110. Oring, E. (1994). The arts, artifacts, and artifices of identity. The Journal of American Folklore, 107(424), 211. [Google Scholar] [CrossRef]
  111. Patel, V., Hartocollis, A., & Shwayder, M. (2025). Tufts student returns to Massachusetts after 6 weeks in immigration detention. The New York Times. Available online: https://www.nytimes.com/2025/05/10/us/tufts-rumeysa-ozturk-release.html (accessed on 15 June 2025).
  112. Pérez, G., Danner, P. M., Gilmartin, S. K., Muller, C. B., & Sheppard, S. (2020, June 22–26). Design problems in context: Placing communities and society at the center of the engineering process. 2020 American Society for Engineering Education Annual Conference and Exposition (pp. 1–24), Virtual Conference. Available online: https://peer.asee.org/34428.pdf (accessed on 1 July 2025).
  113. Pérez, G., Gonzalez-Howard, M., & Suárez, E. (2025a). Bienvenidos a la conversación: Examinations of translanguaging across science and engineering education research. Journal of Research in Science Teaching, 62(1), 3–14. [Google Scholar] [CrossRef]
  114. Pérez, G., Henderson, T., & Wendell, K. B. (2025b). Addressing media and information literacy in engineering design education: Learning to design technologies in the era of science denial and misinformation. Journal of Research in Science Teaching, 62(6), 1546–1579. [Google Scholar] [CrossRef]
  115. Pérez, G., Mabour, L. C., Marvez, G. R., & Pino, Y. I. P. (2024, June 23–26). Engineering learning among Black and Latinx/e/a/o students: Considering language and culture to reengineer learning environments. 2024 ASEE Annual Conference & Exposition (pp. 1–14), Portland, OR, USA. Available online: https://peer.asee.org/47286 (accessed on 1 July 2025).
  116. Pérez, G., & Marvez, G. R. (2024). On problem scoping in engineering design: Notes about language practices of multicompetent learners. In Pursuing language and metalinguistics in K–12 classrooms (pp. 170–190). Routledge. [Google Scholar] [CrossRef]
  117. Pérez, G., Nittala, S., Sheppard, S., & Muller, C. B. (2021, July 19–26). Contextual social awareness in design: Engineering education as a catalyst for change. 2021 ASEE Annual Conference and Exposition (pp. 1–27), Virtual Conference. Available online: https://peer.asee.org/36843 (accessed on 1 July 2025).
  118. Pérez, G., & Sheppard, S. (2024). Strengthening the link between Latine communities and engineering: Multicompetent learners’ expansive design perspectives. International Journal of Engineering Education, 40(6), 1539–1551. [Google Scholar]
  119. Pérez [Jöhnk], G. A. (2021). Beyond representation: Uncovering the role of language and cognition for multicompetent students in engineering and science [Doctoral dissertation, Stanford University]. [Google Scholar]
  120. Perry, W. G. (1970). Forms of intellectual and ethical development in the college years; A scheme. Holt, Rinehart and Winston. [Google Scholar]
  121. Piaget, J. (1970). Piaget’s theory. In P. H. Mussen (Ed.), Carmichael’s handbook of child psychology (pp. 703–732). Wiley. [Google Scholar]
  122. Pierre, T., Smith, T. S., & Upadhyay, B. (Forthcoming). Unpacking the Eu[ro]thanization of Blackness in STEM Education. In S. Tolbert, R. Aghasaleh, K. Scantlebury, & B. Upadhyay (Eds.), Phronetic science morally-guided and praxis-oriented science education. Peter Lang International Academic Publisher. [Google Scholar]
  123. Pierson, A. E., Clark, D. B., & Brady, C. E. (2021). Scientific modeling and translanguaging: A multilingual and multimodal approach to support science learning and engagement. Science Education, 105(4), 776–813. [Google Scholar] [CrossRef]
  124. Pop, M. V., Tonț, G., Flonta, F. V., & Flore, M. (2025). Agentic AI in STEM education: Enhancing cognitive flexibility and workforce readiness. Broad Research in Artificial Intelligence and Neuroscience, 16(1), 239–249. [Google Scholar] [CrossRef]
  125. Prather, J., Denny, P., Leinonen, J., Becker, B. A., Albluwi, I., Craig, M., Keuning, H., Kiesler, N., Kohn, T., Luxton-Reilly, A., MacNeil, S., Petersen, A., Pettit, R., Reeves, B. N., & Savelka, J. (2023, July 7–12). The robots are here: Navigating the generative AI revolution in computing education. ITICSE-WGR 2023: Proceedings of the 2023 Working Group Reports of Innovation and Technology in Computer Science Education (pp. 108–159), Turku, Finland. [Google Scholar] [CrossRef]
  126. Price, J. F., & Grover, S. (2025). Generative AI in STEM teaching: Opportunities and tradeoffs. Community for Advancing Discovery Research in Education (CADRE). Education Development Center, Inc. Available online: https://files.eric.ed.gov/fulltext/ED672718.pdf (accessed on 24 July 2025).
  127. Qadir, J. (2023, May 1–4). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. 2023 IEEE Global Engineering Education Conference (EDUCON) (pp. 1–9), Kuwait City, Kuwait. [Google Scholar] [CrossRef]
  128. Qin, Y., Shi, Z., Yu, J., Wang, X., Zhou, E., Li, L., Yin, Z., Liu, X., Sheng, L., Shao, J., Bai, L., Ouyang, W., & Zhang, R. (2024). WorldSimBench: Towards video generation models as world simulators. arXiv. [Google Scholar] [CrossRef]
  129. Qu, Y., & Wang, J. (2025). The impact of AI guilt on students’ use of ChatGPT for academic tasks: Examining disciplinary differences. Journal of Academic Ethics, 23, 2087–2110. [Google Scholar] [CrossRef]
  130. Ramirez, D., Rodriguez, S. L., Lehman, K. J., & Sax, L. J. (2024). “I started seeing myself as a computing person”: Exploring Latina women’s computing identity development in college. JCSCORE, 10(2), 23–43. [Google Scholar] [CrossRef]
  131. Ravšelj, D., Keržič, D., Tomaževič, N., Umek, L., Brezovar, N., Iahad, N. A., Abdulla, A. A., Akopyan, A., Segura, M. W. A., AlHumaid, J., & Allam, M. F. (2025). Higher education students’ perceptions of ChatGPT: A global study of early reactions. PLoS ONE, 20(2), e0315011. [Google Scholar] [CrossRef]
  132. Reddy, C. K., & Shojaee, P. (2024). Towards scientific discovery with Generative AI: Progress, opportunities, and challenges. arXiv. [Google Scholar] [CrossRef]
  133. Reich, J. (2020). Failure to disrupt: Why technology alone can’t transform education. Harvard University Press. [Google Scholar]
  134. Ren, X. Q., & Heacock, H. (2022). Sensitivity of infrared sensor faucet on different skin colours and how it can potentially effect equity in public health. BCIT Environmental Public Health Journal. [Google Scholar] [CrossRef]
  135. Reuters. (2025, April 18). How AI is aiding Trump’s immigration crackdown. The Economic Times. Available online: https://economictimes.indiatimes.com/news/international/uae/from-ports-to-policies-how-dubai-is-rewiring-the-future-of-trade/articleshow/121932633.cms (accessed on 20 June 2025).
  136. Richardson, J. T. E. (2013). Epistemological development in higher education. Educational Research Review, 9, 191–206. [Google Scholar] [CrossRef]
  137. Roberts, G. (2013). Perspectives on language as a source of social markers. Language and Linguistics Compass, 7(12), 619–632. [Google Scholar] [CrossRef]
  138. Robinson, K. A., & Shankar, S. (2025). Motivational trajectories and experiences of minoritized students in science, technology, engineering, and mathematics: A critical quantitative examination of existing data. Journal of Educational Psychology, 117(3), 337–360. [Google Scholar] [CrossRef]
  139. Russell, G., & Topham, P. (2012). The impact of social anxiety on student learning and well-being in higher education. Journal of Mental Health, 21(4), 375–385. [Google Scholar] [CrossRef]
  140. Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2025). Can AI-Generated text be reliably detected? arXiv. [Google Scholar] [CrossRef]
  141. Schachter, E. P. (2005). Context and identity formation. Journal of Adolescent Research, 20(3), 375–395. [Google Scholar] [CrossRef]
  142. Shoval, H. (2025). Artificial intelligence in higher education: Bridging or widening the gap for diverse student populations? Education Sciences, 15(5), 637. [Google Scholar] [CrossRef]
  143. Sidoti, O., & McClain, C. (2025, June 25). 34% of U.S. adults have used ChatGPT, about double the share in 2023. Pew Research Center. Available online: https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/ (accessed on 1 July 2025).
  144. Sravanthi, V. (2024). The role of English in STEM education: Bridging knowledge gaps—An analysis. International Journal of Research Publication and Reviews, 5(10), 5073–5080. [Google Scholar] [CrossRef]
  145. Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614, 214–216. [Google Scholar] [CrossRef]
  146. Tang, J., Deng, C., & Huang, G. B. (2015). Extreme learning machine for multilayer perceptron. IEEE Transactions on Neural Networks and Learning Systems, 27(4), 809–821. [Google Scholar] [CrossRef]
  147. Teng, M. F. (2024). “ChatGPT is the companion, not enemies”: EFL learners’ perceptions and experiences in using ChatGPT for feedback in writing. Computers and Education: Artificial Intelligence, 7, 100270. [Google Scholar] [CrossRef]
  148. The National Task Force on Civic Learning and Democratic Engagement. (2012). A crucible moment: College learning and democracy’s future. Association of American Colleges and Universities. [Google Scholar]
  149. Tobin, M. J. (2022). Fiftieth anniversary of uncovering the Tuskegee Syphilis Study: The story and timeless lessons. American Journal of Respiratory and Critical Care Medicine, 205(10), 1145–1158. [Google Scholar] [CrossRef]
  150. Valeri, F., Nilsson, P., & Cederqvist, A. M. (2025). Exploring students’ experience of ChatGPT in STEM education. Computers and Education: Artificial Intelligence, 8, 100360. [Google Scholar] [CrossRef]
  151. Walsh, J. D. (2025, May 7). Everyone is cheating their way through college. Intelligencer. Available online: https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html (accessed on 1 July 2025).
  152. Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12, 621. [Google Scholar] [CrossRef]
  153. Windland, V., Bozorg, J., & Stryker, C. (n.d.). What is agentic architecture? IBM. Available online: https://www.ibm.com/think/topics/agentic-architecture (accessed on 22 June 2025).
  154. Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. Available online: http://www.jstor.org/stable/20024652 (accessed on 1 July 2025).
  155. Xu, Y. M., Danahy, E. E., & Church, W. (2024, June 23–26). Re-design introductory engineering course for tinkering with generative AI and the shifts in students’ perceptions of using AI for learning. 2024 ASEE Annual Conference & Exposition, Portland, OR, USA. Available online: https://peer.asee.org/re-design-introductory-engineering-course-for-tinkering-with-generative-ai-and-the-shifts-in-students-perceptions-of-using-ai-for-learning (accessed on 10 June 2025).
  156. Yang, A. (2024). Challenges and opportunities for foreign language teachers in the era of artificial intelligence. International Journal of Education and Humanities, 4(1), 39–50. [Google Scholar] [CrossRef]
  157. Yilmaz, R., & Karaoglan Yilmaz, F. G. (2023). Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Computers in Human Behavior: Artificial Humans, 1(2), 100005. [Google Scholar] [CrossRef]
  158. Young, G. (2015). Violent video games and morality: A meta-ethical approach. Ethics and Information Technology, 17(4), 311–321. [Google Scholar] [CrossRef]
  159. Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21. [Google Scholar] [CrossRef]
  160. Zampieri, M., Nakov, P., & Scherrer, Y. (2020). Natural language processing for similar languages, varieties, and dialects: A survey. Natural Language Engineering, 26(6), 595–612. [Google Scholar] [CrossRef]
  161. Zewe, A. (2023, November 9). Explained: Generative AI. MIT News|Massachusetts Institute of Technology. Available online: https://news.mit.edu/2023/explained-generative-ai-1109 (accessed on 2 July 2025).
  162. Zhai, X., & Nehm, R. H. (2023). AI and formative assessment: The train has left the station. Journal of Research in Science Teaching, 60(6), 1390–1398. [Google Scholar] [CrossRef]
  163. Zhang, X., Li, S., Hauer, B., Shi, N., & Kondrak, G. (2023, December 6–10). Don’t trust ChatGPT when your question is not in English: A study of multilingual abilities and types of LLMs. 2023 Conference on Empirical Methods in Natural Language Processing (pp. 7915–7927), Singapore. [Google Scholar] [CrossRef]
  164. Zhu, C., Sun, M., Luo, J., Li, T., & Wang, M. (2023). How to harness the potential of ChatGPT in education? Knowledge Management & E-Learning, 15(2), 133–152. [Google Scholar] [CrossRef]
  165. Zhu, P. (2023). Cultural dimensions as guidelines in handling language problems for effective written communication across cultures. International Journal of Linguistics, Literature and Translation, 6(12), 85–95. [Google Scholar] [CrossRef]
  166. Zlotnikova, I., Hlomani, H., Mokgetse, T., & Bagai, K. (2025). Establishing ethical standards for GenAI in university education: A roadmap for academic integrity and fairness. Journal of Information, Communication and Ethics in Society, 23(2), 188–216. [Google Scholar] [CrossRef]
Figure 1. Slides from a student learning activity about the use of AI to “detect” user sexual identities. Slide A (top left): Students discuss a framing definition for algorithmic bias. Slide B (top right): Students view an example of algorithmic bias in Google Image searches. Slide C (bottom left): Students discuss the use of AI for identifying and labeling individual identity characteristics. Slide D (bottom right): Students examine research comparing algorithmic bias in images and text (Guilbeault et al., 2024).
Table 1. Responses from three popular GenAI tools to the US–Mexico border problem scoping task.
Factors | Gemini | ChatGPT 5 | Co-Pilot
Technology | N/A
  • Smart monitoring systems (drones, cameras, sensors).
  • Efficient checkpoints and verification to reduce bottlenecks.
  • Cybersecurity: Protect identity and migration data.
Security
  • Layered security approach (surveillance towers, drones).
  • Regional adaptation of security based on geography.
  • Biometric technology to automate travel ID verification.
  • U.S.-Mexico cooperation to target illicit activity.
  • Smart infrastructure (sensors, drones, AI surveillance).
  • Checkpoints and patrol routes for quick deployment during emergencies.
  • Cybersecurity: Protect border tech systems from hacking or misuse.
Political
  • Binational cooperation aligning with policy goals.
  • Adaptable infrastructure for shifting policies or migration.
  • Jurisdiction clarity that integrates law enforcement, customs, and immigration.
Governance | N/A
Humanitarian
  • Rapid screening of vulnerable individuals (asylum seekers and migrants).
  • Design ports of entry with areas for mental and health screening.
  • Humanitarian checkpoints with water, shelter, and first aid.
  • Safe passage of vulnerable populations (asylum seekers and migrants).
  • Shelters and medical access at crossing points.
  • Non-lethal security with enforcement that minimizes physical harm.
  • Safe passage for vulnerable populations (asylum seekers and migrants).
  • Avoid militarized esthetics to promote calm and order.
  • Family reunification centers for supporting families separated during migration.
Social
  • Avoid isolating border communities to allow cultural/economic exchange and family connections.
  • Incorporate multilingual signage and communication.
  • Legal pathways for seasonal work.
Cultural | N/A
Geographical
  • Design elements to minimize impact on habitat and migration.
  • All-weather roads to prevent erosion and allow for law enforcement operations.
  • Hydrological considerations for the natural water flow and wildlife.
  • Environmental monitoring and mitigation.
  • Sustainable design to avoid harming fragile ecosystems.
  • Climate resilience to account for flooding, heat, and drought.
  • Wildlife corridors to integrate passages for migratory species.
Environmental
  • Eco-friendly materials to minimize ecological disruption.
  • Wildlife corridors for animal migration and biodiversity preservation.
  • Cultural respect of heritage in design elements to foster community pride and reduce alienation.
Economic
  • X-ray and gamma-ray scanners to reduce wait times for commerce.
  • Automated border control to improve the flow of legal traffic.
  • Investment in border towns that support security and economies.
  • Modular architecture that allows for flexibility and scalability.
N/A
Infrastructure | N/A
Oversight
  • Privacy protections for the use of biometric data and AI surveillance.
  • Ethical AI deployment that does not create/perpetuate injustice.
  • Governance structure that includes clear accountability for the actions of border agents and technological systems.
N/A
Ethical
  • Dignity-Centered design that recognizes that borders affect people, not just politics.
  • Design for children, elderly, and disabled travelers.
  • Balance protection of national security with humanitarian obligations.
Equity
Community Integration
  • Binational collaboration zones for trade, education, and cultural exchange.
  • Economic hubs to support job centers, and legal commerce.
  • Public transit to facilitate movement for workers and residents.
Adaptive Design
  • Scalable architecture that can be expanded or reconfigured based on changing needs.
  • Climate resilience to account for extreme heat, flooding, and climate challenges.
  • Data-driven feedback loops with sensors and analytics to adjust design over time.
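The comparison in Table 1 amounts to checking, for each tool, which problem-scoping factors its response surfaces. A minimal sketch of that tallying step is below; the response strings are abbreviated stand-ins for the tools’ actual outputs (the real responses came from prompting each tool with the border problem scoping task), and the `FACTORS` checklist and keyword matching are simplifying assumptions for illustration only.

```python
# Sketch: tallying which problem-scoping factors each GenAI tool surfaced,
# as in Table 1. Response texts are abbreviated stand-ins, not full outputs.
FACTORS = ["security", "humanitarian", "environmental", "economic", "ethical"]

# Hypothetical abbreviated responses keyed by tool name.
responses = {
    "Gemini": ("Layered security approach with surveillance towers... "
               "humanitarian checkpoints with water and shelter... "
               "x-ray scanners to reduce wait times"),
    "ChatGPT 5": ("Smart infrastructure and AI surveillance for security... "
                  "safe passage of vulnerable populations (humanitarian)... "
                  "sustainable, environmental design for fragile ecosystems... "
                  "dignity-centered ethical design"),
    "Co-Pilot": ("Cybersecurity to protect border tech systems... "
                 "family reunification centers (humanitarian)... "
                 "environmental wildlife corridors... binational economic hubs"),
}

def factor_coverage(responses, factors):
    """Mark each factor as present ('•') or absent ('N/A') for each tool,
    using a naive substring match on the lowercased response text."""
    table = {}
    for tool, text in responses.items():
        low = text.lower()
        table[tool] = {f: ("•" if f in low else "N/A") for f in factors}
    return table

coverage = factor_coverage(responses, FACTORS)
```

A real analysis would of course code the full responses qualitatively rather than by keyword, but the structure of the output mirrors the rows and N/A cells of Table 1.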
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Pérez, G.; Henderson, T.; Pierre, T.; Marvez, G.R.; Vasquez, A.; Eshun, P.; Polanco Pino, Y. Subjective Intelligence: A Framework for Generative AI in STEM Education. Educ. Sci. 2025, 15, 1571. https://doi.org/10.3390/educsci15121571

