1. Introduction
The concept of artificial intelligence (AI) was formally introduced at the Dartmouth Conference in 1956, where researchers began exploring how machines could simulate human intelligence [1,2]. Early applications of AI took the form of expert systems, designed to replicate decision-making in domains such as medicine, law, and accounting [3]. Over time, these systems evolved from rule-based logic to more complex, data-driven models capable of adaptive learning [4]. In recent years, the development of generative AI tools such as ChatGPT (GPT-4, OpenAI, 2025), GitHub Copilot (version available as of 2025), and Meta AI technologies (including Meta AI Assistant and LLaMA models, versions available as of 2025) has accelerated the integration of AI into everyday life, shifting AI from domain-specific tools to widely accessible platforms used in education, communication, and entertainment [5,6,7]. These advances have dramatically expanded both the capabilities of AI systems and the scope of ethical and privacy concerns, particularly for young digital citizens interacting with these technologies in their daily lives [8,9].
Often defined as individuals who have grown up in a digital environment, young digital citizens routinely engage with AI-driven applications that shape their online behaviors and social experiences [10,11]. They regularly interact with AI-based services such as social media platforms, virtual assistants, and educational tools [12]. While these technologies offer benefits, such as personalized experiences and enhanced engagement, they also raise critical privacy concerns. AI systems typically rely on extensive data collection, automated decision-making, and complex, often unclear algorithms, leading to concerns about data ownership, user control, trust, and transparency [13].
The growing dependence on AI-driven systems has heightened concerns about privacy management, algorithmic accountability, and the ethical use of data. Research indicates that young users often lack a clear understanding of how their personal data are processed, shared, and monetized [14]. Moreover, many AI technologies function as “black-box” systems, which limits transparency, making it difficult for users to make informed privacy decisions [15]. Factors such as parental guidance, regulatory frameworks, and digital literacy initiatives play a crucial role in how young digital citizens navigate privacy risks [16]. However, much of the existing research focuses on adult users or general privacy concerns, leaving a significant gap in understanding how youth-specific factors influence privacy management behaviors in AI ecosystems [17]. In addition, the current literature often overlooks the perspectives of varying stakeholders, such as educators, policymakers, and AI developers, whose insights could provide a more comprehensive understanding of the privacy challenges facing young digital citizens. The lack of diverse stakeholder perspectives limits the development of ethical and stakeholder-driven policies and practices that prioritize the privacy and well-being of young users.
To address these gaps, this study employs the Privacy–Ethics Alignment in AI (PEA-AI) model to explore how young digital citizens and key stakeholders, including parents, educators, and AI professionals, perceive and manage privacy in AI-driven environments. The PEA-AI model emerges from an inductive data-driven approach to understanding how stakeholder perspectives shape ethical AI development. In this study, ethical AI development refers to the design and deployment of AI systems that are transparent, privacy-preserving, inclusive, and responsive to the needs of diverse users, particularly youth. It entails integrating participatory governance practices, user-centric design principles, clear consent and data transparency mechanisms, and safeguards that promote fairness and trust.
Developed using grounded theory [18], this model analyzes the complex relationship between privacy constructs, stakeholder roles, and ethical AI frameworks. Grounded theory provides a systematic, comparative analysis of privacy concerns, behaviors, and decision-making patterns across different stakeholder groups, allowing for a structured comparison of their perspectives, rather than identifying causal relationships. A more detailed explanation of the grounded theory approach used in this study is provided in Section 3.2 (Research Approach and Analytical Framework). By leveraging empirical data from young digital citizens, parents and/or educators, and AI professionals, the PEA-AI model provides a negotiation-based framework for privacy governance in AI systems. This approach ensures that variations in privacy expectations, control mechanisms, and transparency demands are examined in alignment with each group’s role in the AI ecosystem.
This research focuses on five key constructs influencing privacy management: Data Ownership and Control (DOC) examines perceptions of data ownership, user autonomy, and AI governance mechanisms [10]; Parental Data Sharing (PDS) investigates the role of parental intervention in shaping youth privacy attitudes and AI literacy [13]; Perceived Risks and Benefits (PRB) evaluates how youth balance AI-related risks (privacy breaches, data misuse) against perceived benefits (personalization, efficiency) [14]; Transparency and Trust (TT) explores the relationship between algorithmic explainability, user trust, and privacy decision-making [15]; and Education and Awareness (EA) assesses the impact of AI literacy, privacy education, and digital awareness on privacy management behaviors [17]. Together, these constructs and the PEA-AI model aim to provide a comprehensive understanding of how diverse stakeholder perspectives can inform ethical privacy governance in AI systems.
This study uses a mixed-methods approach, integrating quantitative survey data with qualitative insights from open-ended questions, interviews, and focus groups. Participants include young digital citizens (aged 16–19), parents and educators, and AI professionals, providing a comprehensive view of privacy challenges across multiple perspectives. By combining empirical evidence with theoretical insights, this research seeks to contribute to the development of AI privacy policies, ethical AI frameworks, and user-centric design principles.
The rest of this paper is structured as follows. Section 2 reviews the relevant literature. Section 3 outlines the methodology. Section 4 presents key findings. Section 5 discusses the implications, policy recommendations, and future research directions. Section 6 concludes the paper by summarizing core insights into AI privacy management for young digital citizens.
3. Methodology
3.1. Research Goal and Questions
The primary objective of this study is to explore how young digital citizens and key stakeholders—parents, educators, and AI professionals—navigate privacy concerns in AI-driven environments. Using grounded theory [18], this research examines emerging themes related to data ownership, trust, transparency, parental influence, and AI literacy, taking an inductive approach to identifying how these factors shape youth privacy behaviors. The study is guided by five key research questions.
RQ1: How do young digital citizens, parents/educators, and AI professionals perceive privacy risks and responsibilities in AI-driven ecosystems?
RQ2: What are the key factors influencing data ownership, user control, and privacy decision-making among different stakeholder groups in AI environments?
RQ3: How do varying levels of AI literacy and digital awareness impact privacy management behaviors and attitudes toward transparency?
RQ4: What role does stakeholder collaboration (youth, parents/educators, and AI professionals) play in shaping effective AI privacy governance frameworks?
RQ5: How can participatory design approaches and adaptive privacy policies improve AI transparency, trust, and ethical AI system development?
These constructs, informed by the research questions, converge to provide actionable insights for ethical AI development. Their definitions are outlined in Table 1, which details their scope and focus within the study.
3.2. Research Approach and Analytical Framework
This study employs a grounded theory approach to explore how young digital citizens, parents and/or educators, and AI professionals perceive and navigate privacy concerns in AI-driven environments. Grounded theory was applied as an inductive analytical method, allowing themes to emerge organically from the data, rather than being constrained by pre-existing theoretical frameworks. Through an analysis of both quantitative and qualitative responses, recurring privacy patterns were identified across stakeholder groups. These patterns were systematically categorized into thirteen generalized privacy themes, including data control importance, perceived data control, comfort with data sharing, parental data sharing, parental data rights, AI privacy concerns, perceived data benefits, data usage transparency, transparency perception, system data trust, privacy protection knowledge, digital privacy education, and AI privacy awareness. These themes formed the foundation for the development of the PEA-AI model.
The study was structured around five core privacy constructs, DOC, PDS, PRB, TT, and EA, which initially guided data collection. However, rather than assuming fixed relationships between these constructs, a thematic analysis was conducted at the item level to examine how specific survey responses reflected variations in privacy concerns, risk perceptions, and governance expectations across different stakeholder groups. The survey responses were analyzed by calculating mean values for each item across young digital citizens, parents and/or educators, and AI professionals, providing a structured way to compare privacy attitudes and identify patterns in stakeholder perspectives. These findings were further contextualized through qualitative insights from open-ended responses, interviews, and focus groups, ensuring that the emerging privacy themes were grounded in real-world stakeholder concerns rather than predefined categories.
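To illustrate the item-level comparison described above, the following sketch computes per-item mean scores for each stakeholder group. It is a minimal example using a hypothetical in-memory table with placeholder group labels and item names (e.g., DOC1, TT1), not the actual study data or analysis scripts.

```python
import pandas as pd

# Hypothetical miniature survey export: one row per respondent, a stakeholder
# group label, and one column per 5-point Likert item (names are placeholders).
responses = pd.DataFrame(
    {
        "group": ["youth", "youth", "parent_educator", "ai_professional"],
        "DOC1": [4, 5, 5, 4],
        "DOC2": [3, 4, 3, 4],
        "TT1": [4, 4, 5, 4],
    }
)

# Mean score for every item within each stakeholder group, mirroring the
# item-level comparison used to surface thematic contrasts between groups.
item_cols = [c for c in responses.columns if c != "group"]
group_item_means = responses.groupby("group")[item_cols].mean().round(2)
print(group_item_means)
```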
Through this iterative, comparative process, emerging themes were synthesized into broader stakeholder-driven privacy categories, allowing for a structured understanding of privacy dynamics. The thematic analysis revealed distinct stakeholder perspectives on data control, parental mediation, AI risks, transparency, and digital literacy, highlighting key differences in how each group conceptualizes privacy responsibilities. Based on these comparative insights, the PEA-AI model was developed to provide a structured framework for the analysis of how privacy governance evolves through stakeholder engagement, disagreements, and negotiated privacy boundaries. Rather than treating privacy as a static concept, this model reflects how trust, transparency, and data control expectations shift based on stakeholder interactions and the broader ethical discourse surrounding AI systems.
The model is introduced in Section 4.4 (The PEA-AI Model: The Negotiation Framework), where it serves as an analytical tool to compare stakeholder-driven privacy dynamics and ethical AI governance strategies. By organizing stakeholder responses into generalized privacy themes, this study offers a comparative framework that highlights key tensions, alignments, and gaps in privacy governance, ensuring that AI policies and design principles reflect diverse perspectives. This approach ensures that privacy governance is understood as an evolving discourse shaped by real-world stakeholder interactions, fostering a more adaptive, inclusive, and user-centered understanding of AI ethics, rather than relying on rigid theoretical assumptions.
3.3. Research Design
The present study received ethics approval from the Vancouver Island University Research Ethics Board (VIU-REB). Approval reference number #103116 covered the behavioral application/amendment forms, consent forms, interview and focus group scripts, and questionnaires. An initial pilot study was conducted with 6 participants, including empirical research specialists from the University of Saskatchewan and Vancouver Island University. The pilot study aimed to evaluate the feasibility and duration of the research approach while refining the study design. Participants provided general feedback on the questionnaire, which guided the modification and restructuring of the final survey. The revised research model was then tested through survey data collection. Participants were recruited through flyers, personal networks, emails, and social networking sites, namely LinkedIn and Reddit. To reach the targeted youth demographic, several Vancouver Island school districts were contacted to assist in distributing the survey to their high-school students. Participation in the study was entirely voluntary and uncompensated. Participants were required to read and accept a consent form before starting the questionnaire, indicating their understanding of the study conditions outlined in the form. Online surveys were conducted through Microsoft Forms, with participants responding based on three designated demographics: AI researchers and developers, parents and teachers, and young digital citizens (aged 16–19).
In addition to the questionnaires, interviews and focus groups were conducted with AI professionals, parents, and educators. A section of the questionnaire invited participants to provide their email addresses if they were interested in participating in interviews and/or focus groups. After contacting consenting participants, 12 interviews and 2 focus groups were conducted: one with 4 AI professionals and another with 5 parents and/or educators. Before the interviews and focus groups, all participants reviewed and accepted a consent form. Sessions were conducted and transcribed using Microsoft Teams, with participants instructed to keep their videos off to ensure anonymity.
Young digital citizens were not included in the interviews or focus groups due to ethical and procedural constraints. Although the Tri-Council Policy Statement (TCPS 2) does not specify a fixed age of consent and allows for flexibility based on a youth’s capacity to understand the research and its risks, the school district’s ethics office requires parental consent for minors when audio or video recordings are involved. As all interviews and focus groups were recorded in accordance with institutional policy, we were unable to include participants under 18 in these formats. To ensure that youth perspectives were still captured meaningfully, we prioritized their inclusion through the anonymous survey component of the study. This approach allowed us to respect ethical standards while still obtaining valuable insights from young digital citizens.
The survey instruments were adapted from constructs validated in prior studies [35,36,37,38,39,40,41,42,43,44,45,46,47,48]. The instruments consisted of 3 indicators for DOC, 2 indicators for PDS, 4 indicators for PRB, 3 indicators for TT, 3 indicators for EA, and 3 open-ended discussion questions. The items (questions) within these constructs are outlined in Table 2. Survey responses were measured on a 5-point Likert scale, with most items used for the quantitative analysis. Notably, to ensure consistency in outcomes, we reversed the scale for items in PRB for AI professionals and swapped items 1 and 2 in PDS for educators/parents to align contextually with the items for other demographics. The qualitative analysis utilized open-ended questions, two indicators from PRB, interview responses, and focus group discussions.
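As a concrete illustration of the harmonization step described above, the sketch below reverse-codes the PRB items for AI professionals and swaps the two PDS items for parents/educators. The column names, group labels, and sample values are hypothetical placeholders rather than the actual instrument identifiers or responses.

```python
import pandas as pd

def harmonize(df: pd.DataFrame) -> pd.DataFrame:
    """Align item scoring across demographics (column/group names are placeholders)."""
    out = df.copy()

    # Reverse-code the 5-point PRB items for AI professionals (1<->5, 2<->4, 3 unchanged).
    prb_items = ["PRB1", "PRB2", "PRB3", "PRB4"]
    ai_rows = out["group"] == "ai_professional"
    out.loc[ai_rows, prb_items] = 6 - out.loc[ai_rows, prb_items]

    # Swap PDS items 1 and 2 for parents/educators so that item numbering matches
    # the wording presented to the other demographics.
    pe_rows = out["group"] == "parent_educator"
    out.loc[pe_rows, ["PDS1", "PDS2"]] = out.loc[pe_rows, ["PDS2", "PDS1"]].to_numpy()

    return out

# Tiny illustrative usage with made-up responses.
sample = pd.DataFrame(
    {
        "group": ["ai_professional", "parent_educator", "youth"],
        "PRB1": [2, 4, 5], "PRB2": [1, 3, 4], "PRB3": [5, 2, 3], "PRB4": [4, 5, 2],
        "PDS1": [3, 2, 4], "PDS2": [1, 5, 3],
    }
)
print(harmonize(sample))
```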
The following naming conventions were used for qualitative responses: survey participants were labeled as (S-YDC #X) for young digital citizens, (S-PE #X) for parents and educators, and (S-AIP #X) for AI professionals. Interview participants were referred to as (I-[Role] #X), specifying their role (e.g., I-Parent #1 or I-Educator #2). Focus group participants were identified by a group identifier and role, such as (FG1-Educator #3).
3.4. Participant Demographics
Out of 482 participants, 461 completed the survey questionnaire: 176 young digital citizens (aged 16–19), 132 parents and/or educators, and 153 AI professionals. After data cleaning, we retained 127 valid responses from educators and/or parents, 146 from AI professionals, and 151 from young digital citizens for analysis. Of the 127 valid responses from educators and/or parents, 54 identified as parents, 46 identified as educators, and 28 identified as both. Among the 146 valid responses from AI professionals, 46 identified as AI developers, 98 as AI researchers, and 2 as both. Twelve interviews were conducted, with 9 interviewees identifying as parents and/or educators and 3 as AI professionals. Two focus groups were conducted, with 4 participants identifying as AI professionals and 5 as parents and/or educators.
Table 3 highlights the demographic characteristics of the participants.
4. Results and Analysis
This study expands upon our previous research by incorporating additional responses from young digital citizens, allowing for a more detailed analysis of their perspectives on privacy in AI systems. This study employs the PEA-AI model as an analytical framework to examine how privacy expectations, transparency concerns, and AI governance strategies vary across stakeholder groups. Derived through a comparative thematic analysis of stakeholder responses, the model offers a structured approach to understanding how young digital citizens, parents and/or educators, and AI professionals negotiate privacy in AI-driven environments. Rather than establishing direct causal relationships, the model identifies key tensions, alignments, and differences in privacy perceptions across the five core constructs: DOC, PDS, PRB, TT, and EA.
This study employs a dual-layered analytical approach to explore stakeholder privacy concerns in the AI ecosystem, combining quantitative and qualitative methods to uncover critical privacy negotiation points and demonstrate how multi-stakeholder discourse shapes privacy governance.
For quantitative analysis, Microsoft Excel (Office 365 version, 2025) was used to structure, clean, and manage the collected survey data, ensuring consistency in the dataset. A descriptive statistical analysis was conducted across four distinct groups: young digital citizens, parents and educators, AI developers and researchers, and a combined dataset consolidating all responses. This analysis allowed for a structured comparison of the responses, revealing trends in data control, transparency concerns, perceived risks, and awareness levels. Mean values were calculated and sentiment levels were categorized accordingly for the key constructs: DOC, PDS, PRB, TT, and EA. By systematically comparing these constructs, we identified how different stakeholder groups perceived and prioritized privacy-related concerns, illustrating the thematic contrasts in their expectations and decision-making processes. This approach ensured that the analysis evaluated the interplay of privacy constructs within and across groups, rather than examining them in isolation.
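To make the mapping from mean values to sentiment levels concrete, a minimal sketch follows. The thresholds and labels are our own illustrative assumptions for a 5-point scale, not the cut-offs used in the study; the example inputs reuse item means reported in Section 4.2 for young digital citizens.

```python
def sentiment_level(mean_score: float) -> str:
    """Map a mean 5-point Likert score to a coarse sentiment label.
    Thresholds are illustrative assumptions, not the study's actual cut-offs."""
    if mean_score >= 4.0:
        return "strong agreement"
    if mean_score >= 3.5:
        return "moderate agreement"
    if mean_score >= 2.5:
        return "neutral / mixed"
    return "disagreement"

# Example using item means reported in Section 4.2 for young digital citizens.
youth_item_means = {
    "Data control importance": 4.08,
    "Comfort with data sharing": 2.83,
    "System data trust": 3.09,
}
for item, mean in youth_item_means.items():
    print(f"{item}: {mean} -> {sentiment_level(mean)}")
```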
To contextualize these findings, qualitative data from open-ended responses, interviews, and focus group discussions were analyzed thematically. This revealed common patterns in participant concerns, perceptions, and recommendations. Together, these methods offer a comprehensive understanding of privacy concerns and values, allowing for a deeper exploration of stakeholder priorities, knowledge gaps, and expectations from AI privacy frameworks.
4.1. Descriptive Statistics
Our quantitative survey used a five-point Likert scale to compare the mean responses across the five key constructs: DOC, PDS, PRB, TT, and EA. The mean scores for each construct varied across the three key demographics—young digital citizens, parents and/or educators, and AI professionals. The results are visually represented in Figure 1 (heatmap) and detailed in Table 4 (comparative analysis). Overall, while adults agree on user control, transparency, and engagement with AI, youth show lower trust and awareness, highlighting the need for targeted interventions to bridge this gap.
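For readers who wish to reproduce a comparison plot of this kind, the sketch below builds a small heatmap from a subset of the mean scores reported in Section 4.2. The plotting library (matplotlib) and the figure styling are implementation choices of ours, not necessarily those used to produce Figure 1.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative subset of the mean scores reported in Section 4.2
# (rows: stakeholder groups, columns: privacy themes).
means = pd.DataFrame(
    {
        "Data control importance": [4.08, 4.46, 4.39],
        "Comfort with data sharing": [2.83, 3.39, 3.79],
        "System data trust": [3.09, 4.07, 4.18],
    },
    index=["Youth", "Parents/educators", "AI professionals"],
)

fig, ax = plt.subplots(figsize=(7, 3))
im = ax.imshow(means.values, cmap="viridis", vmin=1, vmax=5)

ax.set_xticks(range(len(means.columns)))
ax.set_xticklabels(means.columns, rotation=30, ha="right")
ax.set_yticks(range(len(means.index)))
ax.set_yticklabels(means.index)

fig.colorbar(im, ax=ax, label="Mean score (1-5)")
fig.tight_layout()
plt.show()
```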
4.2. Comparative Analysis of AI Privacy Constructs Across Stakeholder Groups
The comparative analysis of privacy constructs across young digital citizens, parents and/or educators, and AI professionals reveals distinct variations in how different stakeholders perceive and engage with AI-driven privacy concerns. By examining the mean scores across thematic categories, key differences emerge in the conceptualization of data control, parental data sharing, perceived risks and benefits, transparency, trust, and education.
4.2.1. Data Ownership and Control
Data Control Importance: Parents/educators rated this the highest (4.46), followed by AI professionals (4.39) and youth (4.08). This suggests that adult stakeholders, especially educators and researchers, strongly advocate for user control over personal data, reinforcing its role in ethical AI development.
Perceived Data Control: AI professionals (3.64) reported feeling more in control over their data compared to parents/educators (3.40) and youth (3.35). The relatively lower score among youth suggests a potential gap in privacy self-efficacy, necessitating better user-centric privacy mechanisms.
Comfort with Data Sharing: AI professionals (3.79) displayed the highest comfort in sharing personal data, followed by parents/educators (3.39), with youth reporting the lowest comfort (2.83). This reflects a generational divide in risk perception, with youth demonstrating greater apprehension toward personal data disclosure.
4.2.2. Parental Data Sharing
Parental Data Sharing Practices: AI professionals reported the lowest support for parental data sharing (1.81), followed by parents/educators (2.46) and youth (2.52). These relatively low scores indicate widespread concern about the appropriateness of parental involvement in youth data decisions.
Parental Data Rights: Parents/educators (3.39) rated parental data rights the highest, while AI professionals (2.90) and youth (2.51) expressed lower confidence in this construct. Notably, AI professionals favored youth consent mechanisms, prioritizing autonomy over parental governance in data-related decisions.
4.2.3. Perceived Risks and Benefits
AI Privacy Concerns: All three groups expressed strong privacy concerns, with AI professionals scoring the highest (4.31), followed by parents/educators (4.25) and youth (4.09). This consensus highlights the universal recognition of the ethical challenges posed by AI data governance.
Perceived Data Benefits: AI professionals (4.53) rated data benefits significantly higher than both youth (3.86) and parents/educators (3.50). These findings suggest that, while AI professionals see tangible advantages in data-driven AI advancements, youth and educators remain more cautious, reflecting a trust gap in AI benefit perception.
4.2.4. Transparency and Trust
Data Usage Transparency: Transparency was considered highly important across all stakeholder groups, with parents/educators scoring the highest (4.34), followed by youth (4.19) and AI professionals (4.17). This reinforces the demand for increased transparency mechanisms in AI governance.
Transparency Perception: Despite valuing transparency, the stakeholders perceived existing AI transparency measures as insufficient. Parents/educators rated transparency perception the lowest (1.96), followed by AI professionals (2.11) and youth (2.41). These results indicate a strong disparity between their expectations and the current implementation of transparency in AI systems.
System Data Trust: AI professionals (4.18), followed by parents/educators (4.07), exhibited relatively higher trust in AI systems, while youth expressed significantly lower trust (3.09). These findings suggest that youth are more skeptical of AI governance practices, reinforcing the necessity of improved explainability measures.
4.2.5. Education and Awareness
Privacy Protection Knowledge: AI professionals reported the highest levels of privacy knowledge (4.04), followed by parents/educators (3.29) and youth (3.04). The significant gap between professionals and youth suggests an urgent need for targeted AI privacy education initiatives.
Digital Privacy Education: AI professionals rated privacy education substantially higher (3.79) compared to youth (2.93) and parents/educators (2.50). These results highlight a potential divide in AI literacy, where non-technical stakeholders may lack the resources to fully understand privacy frameworks.
AI Privacy Awareness: All groups strongly agreed on the importance of AI privacy education, with AI professionals rating it the highest (4.63), followed by parents/educators (4.50) and youth (3.68). The widespread alignment in this area suggests broad recognition of the need for continuous privacy education programs.
4.3. Qualitative Findings
In addition to the quantitative findings, the qualitative data revealed key stakeholder tensions regarding privacy governance in AI-driven environments. A significant divide emerged between young digital citizens and parents/educators, particularly in the realm of parental consent and data-sharing authority [49]. While youth participants expressed frustration over their lack of control and transparency in how their data are handled, many parents and educators viewed youth privacy as something that should be actively managed rather than autonomously controlled. One young respondent voiced their concern, stating, “I am mainly concerned about what data is being taken and how it is used, as I feel we often aren’t informed clearly about what data is being taken and used” (S-YDC #5). Conversely, an educator emphasized the role of awareness rather than outright control, explaining, “Many children and adolescents will use AI without considering their own privacy (similar to how many use social media). A lack of education regarding the risks of sharing personal information on the internet can lead to students potentially misusing AI” (S-PE #40). This tension highlights a fundamental gap between youth demands for autonomy and parental concerns about informed decision-making in AI privacy governance.
Another point of contention emerged between educators and AI professionals regarding the trustworthiness of AI applications. Educators largely expressed skepticism over whether AI systems genuinely safeguard personal data, citing concerns about long-term data retention, algorithmic biases, and the lack of oversight in AI-driven decision-making. One educator articulated this skepticism, stating, “I do not trust that information gathered by AI will be used presently or in the future in an informed manner for the benefit of the individual, but rather fear its exploitation on both an individual and mass level” (S-PE #10). However, AI professionals generally framed privacy risks as technical challenges that could be addressed through improved security measures rather than as inherent flaws in AI systems. An AI researcher highlighted the difficulty in tracing data flows, stating, “Once data goes into an AI system, it’s tough to know where it ends up or who else can see it” (S-AIP #45). This divergence suggests that, while educators advocate for broader regulatory oversight and ethical considerations, AI professionals emphasize governance through internal safeguards and privacy-preserving technologies.
The issue of AI-driven surveillance and profiling further highlights the complex negotiation between young digital citizens and AI professionals. Youth participants expressed deep concern over AI’s ability to track, categorize, and potentially manipulate their behaviors without their explicit consent. One respondent remarked, “I feel uncomfortable knowing AI can recognize my face in public places” (S-YDC #93), while another feared long-term profiling, stating, “AI will guess everything about us. Sensitive topics I research could be recorded forever” (S-YDC #84). AI professionals, however, tended to view these issues through the lens of data governance rather than direct surveillance. A researcher explained, “AI might accidentally recreate sensitive data from its training sets, exposing private information” (S-AIP #85). This suggests that, while youth stakeholders perceive AI surveillance as an immediate privacy violation, AI professionals frame it as a solvable issue through stricter data management practices. These findings collectively highlight the necessity of a multi-stakeholder approach to AI privacy governance, one that not only strengthens technical safeguards but also considers the lived experiences of youth and the ethical concerns raised by educators and parents.
4.4. The PEA-AI Model: The Negotiation Framework
The PEA-AI model, presented in Figure 2, provides an organized framework for understanding how privacy concerns and ethical considerations develop through multi-stakeholder engagement in AI governance. Stakeholders engage with and shape privacy constructs based on their perceptions and experiences. These constructs, in turn, inform the design and governance of ethical AI systems. Feedback loops represent how the outcomes of AI governance recursively influence stakeholder attitudes, behaviors, and future expectations, reinforcing a continuous cycle of negotiation and adaptation. Using survey data and qualitative insights, this model compares and contrasts the privacy attitudes of the three main stakeholder groups: young digital citizens, parents and educators, and AI professionals. Unlike traditional models that impose rigid privacy standards through prescriptive frameworks, the PEA-AI model views the creation of ethical AI as an iterative negotiation process in which stakeholders’ expectations of privacy and governance mechanisms evolve through interaction.
The PEA-AI model introduces a multi-stakeholder privacy negotiation framework, where privacy constructs evolve through stakeholder engagement. This framework addresses four key tensions.
Data Control vs. Trust: Balancing youth autonomy with AI developers’ risk mitigation measures.
Transparency vs. Perception: Addressing the gap between AI’s claimed transparency and user perception.
Parental Rights vs. Youth Autonomy: Negotiating consent mechanisms that respect youth agency while addressing parental concerns.
Privacy Education vs. Awareness Deficit: Strengthening digital literacy to enable informed AI interactions and empower users to navigate privacy challenges effectively.
This model was developed by systematically comparing the responses to survey items concerning five fundamental privacy constructs: DOC, PDS, PRB, TT, and EA. By combining stakeholder responses into broad themes, the model focuses on how privacy perspectives influence governance and AI ethics, rather than establishing fixed causal relationships.
As early adopters of AI, young people demonstrate a nuanced view of personal data protection. While they recognize the importance of data security, they often prioritize convenience over control in their privacy standards. Their responses suggest a willingness to allow AI-driven data collection, as the benefits of personalization, algorithmic recommendations, and social connectivity outweigh their privacy concerns. In contrast, educators and parents view privacy as a precaution, drawing attention to the risks associated with youth data disclosure and the need for regulatory oversight. Nevertheless, their perspectives reveal limited AI literacy, which hampers their ability to effectively guide youth in managing AI privacy settings. AI professionals, who are responsible for designing and implementing privacy safeguards, primarily view privacy through the lens of system performance and risk mitigation. While they acknowledge the challenges in achieving full transparency and user agency in AI-driven settings, their focus remains on compliance with existing regulations.
The PEA-AI model highlights how stakeholder tensions shape privacy governance in the AI ecosystem. Despite all three categories engaging with AI systems, their expectations around data control, privacy, and trust often diverge. A comparative analysis reveals significant differences in privacy perceptions across themes. While all groups aim to control and manage data, their priorities vary: parents and educators prioritize security, young digital citizens value autonomy, and AI experts focus on technological limitations. Transparency and trust emerge as critical issues, with educators highlighting a lack of user-friendly disclosure, AI professionals acknowledging the challenges of comprehensive explainability, and young users demanding clarity and accessibility in AI systems.
By framing privacy as an ongoing negotiation rather than a static regulatory process, the PEA-AI model advances AI development, stakeholder-driven governance, and policy interventions. It emphasizes the importance of aligning AI design with ethical considerations, ensuring that privacy measures reflect real user behavior rather than hypothetical assumptions. The model advocates for the importance of multi-stakeholder involvement in governance, addressing the interests and constraints of various user groups. On the policy front, it guides adaptive regulation measures that balance the technical concerns of AI professionals, the privacy demands of young digital citizens, and the ethical considerations of parents and educators. Rather than prescribing a specific privacy solution, the model provides a comparative approach to understanding privacy conflicts in AI ecosystems, encouraging active participation from young digital citizens, educators, and AI professionals in the creation of ethical AI practices.
The PEA-AI model makes several key contributions to the field of AI privacy governance. First, it introduces a stakeholder-driven approach to AI governance, unlike traditional models that focus solely on individual privacy attitudes. By integrating stakeholder negotiation, the model ensures that governance frameworks reflect the diverse needs and perspectives of all stakeholders, including young digital citizens, parents/educators, and AI professionals. Second, the model has significant policy implications, as it supports data protection frameworks that advocate for dual-consent mechanisms in AI data governance, balancing youth autonomy with parental oversight. Third, the model provides a foundation for AI design frameworks, enabling AI professionals to develop user-centered, privacy-enhancing technologies that prioritize transparency, accessibility, and ethical considerations. As AI evolves, the PEA-AI model stresses the need for privacy governance to adapt to shifting stakeholder expectations, fostering a more inclusive and equitable digital society.
5. Discussion
This section interprets the study’s findings with a sustained focus on policy, design, education, governance, and theoretical implications. It addresses stakeholder-specific challenges and proposes actionable solutions guided by the PEA-AI model.
5.1. Privacy Perception in AI: Bridging Stakeholder Disparities
The findings reveal significant disparities in how privacy is perceived and managed across stakeholder groups. Young digital citizens tend to prioritize autonomy, personalization, and seamless interactions with AI but often lack the knowledge or confidence to make informed privacy decisions. Parents and educators, while concerned about youth exposure to algorithmic systems, frequently lack the technical grounding to guide them effectively. AI professionals, on the other hand, focus primarily on technical efficacy, data security, and compliance, often overlooking the lived experiences and usability challenges of end-users. These disparate perceptions underscore the fragmented nature of AI privacy governance, revealing a pressing need for coordinated efforts that bridge understanding, expectations, and responsibilities.
5.2. Stakeholder Tensions in AI Privacy Management: Diverging Priorities and Overlapping Concerns
Tensions emerge when stakeholders navigate competing priorities. Young digital citizens call for greater agency and transparency, while parents and educators adopt more protective postures, often resorting to restrictive mediation. AI professionals emphasize system integrity and regulatory alignment, framing privacy as a technical problem to be solved. However, these tensions unfold within a broader landscape shaped by corporate control and limited regulatory intervention. While AI professionals, parents, and educators influence implementation-level decisions, they often operate within systems architected by powerful digital corporations that define data collection norms, platform logic, and interface constraints. The lack of robust privacy legislation and minimal enforcement further concentrates decision-making power in the hands of platform owners, constraining both transparency and accountability. As several stakeholders in this study noted, the opacity of AI systems is not merely a design oversight but a structural feature of the current business models. Therefore, framing privacy governance as a stakeholder negotiation must be approached critically: such negotiation is only meaningful if corporate and governmental entities—those who currently determine the boundaries of digital privacy—are also engaged and held accountable. A stakeholder-inclusive governance framework must thus go beyond surface-level participation to address these power asymmetries directly. Without structural checks, efforts toward ethical design and participatory engagement risk being subsumed within the very systems that they seek to reform.
5.3. A Stakeholder-Driven Framework for Ethical AI Governance
The PEA-AI model developed in this study offers a comparative framework for understanding how stakeholder values shape privacy governance in AI ecosystems. Unlike traditional top-down models, the PEA-AI conceptualizes privacy as a dynamic process, negotiated across roles and contexts. It reframes ethical AI development not as a static checklist but as an evolving set of practices grounded in participatory engagement. The model integrates user control, risk–benefit awareness, transparency, and educational needs into a coherent structure, facilitating ethical AI design that adapts to the needs of young digital citizens, caregivers, and system developers alike. To operationalize the PEA-AI model in real-world contexts, we propose three primary avenues for implementation. First, in AI system design, developers should embed participatory design methodologies that involve youth and caregivers in iterative co-design sessions, usability testing, and privacy feature validation. These sessions should focus on surfacing age-appropriate expectations and ensuring that transparency tools are understandable and actionable. Second, in regulation, policymakers should institutionalize stakeholder-informed privacy audits and mandate that youth-facing AI platforms include consent mechanisms and data practices developed with direct youth input. Regulatory frameworks must also integrate youth advisory councils to inform legislative updates and oversight. Third, in education, school systems should integrate AI ethics and privacy modules into digital literacy curricula, co-developed with educators and technologists and supported by experiential learning tools such as simulations and interactive scenarios. These initiatives would not only support informed engagement with AI but also cultivate a generation of privacy-conscious digital citizens.
5.4. Bridging the Gap Between Privacy Awareness and Practical Implementation
While the awareness of AI-related privacy risks is growing, significant gaps remain in translating this into practical, effective privacy management strategies. Many young digital citizens understand the stakes of data sharing but encounter barriers when navigating complex or opaque privacy settings. Educators and parents frequently lack the resources to support youth in interpreting algorithmic behaviors or configuring privacy protections. Meanwhile, developers often prioritize technical feasibility and compliance over accessibility and usability. The PEA-AI model underscores the need for bridge mechanisms that convert privacy knowledge into practical, user-friendly measures. This includes addressing gaps in literacy, designing intuitive privacy controls, and incorporating participatory governance models that adapt to user needs. Effective AI privacy solutions should simplify privacy controls, provide clear explanations of data usage, and offer real-time feedback to users. Multi-stakeholder educational initiatives, such as gamified AI ethics courses, youth-centric design concepts, and interactive privacy dashboards, can empower users to make informed decisions. Additionally, AI developers should conduct usability testing with diverse stakeholders to ensure that privacy measures are accessible and easy to comprehend. Future research should explore longitudinal methods to see if increased awareness leads to sustained privacy protection, ensuring that privacy literacy translates into meaningful action.
5.5. The Role of Transparency in Building Trust and User Autonomy
Transparency remains a critical but under-implemented component of AI privacy governance. While most stakeholders agree on its importance, the mechanisms used to deliver transparency often fall short. Legalistic privacy notices, vague permissions, and generalized statements of compliance do little to build meaningful trust. Youth participants in particular expressed frustration with opaque explanations of data collection and AI logic. Developers and policymakers must move beyond minimal disclosure standards and toward context-sensitive transparency. This includes visual privacy indicators, interactive consent tools, and simplified explanations of AI processes tailored to different user demographics. Adaptive transparency models, which allow users to choose the level of insight they receive about AI data processing, can enhance user agency and decision-making. Future research should explore transparency-enhancing technologies (TETs), such as AI-generated privacy summaries or interactive data flow diagrams, to empower users while maintaining system efficiency. Transparency must be a foundational factor of ethical AI design, ensuring accessibility, user agency, and trust.
5.6. Strengthening AI Privacy Through User-Centric and Stakeholder-Inclusive Policy Interventions
Policy interventions that are user-centric and balance technical advancements with ethical data practices are necessary for effective AI privacy control. While all stakeholder groups recognize the importance of privacy measures, there is often a disconnect between policy development and practical implementation. AI privacy policies need to be co-designed with direct stakeholder involvement, ensuring that interventions are explicit, practical, and adaptable for different users. Regulations should mandate simplified privacy settings to enable young users and their guardians to manage their data effectively without needing technical expertise. Age-appropriate privacy standards, similar to child protection laws, should be expanded to address AI-specific risks, such as data-driven profiling and content manipulation. To further enable users to comprehend, evaluate, and manage privacy concerns, consistent digital literacy education should be introduced into national curricula. Beyond user education, corporate accountability and transparent enforcement mechanisms are essential to guarantee that AI systems adhere to ethical data governance standards. To ensure that privacy issues are detected and addressed before AI systems are deployed, organizations developing and deploying these systems must conduct multi-stakeholder privacy impact studies. Additionally, advisory boards that include young digital citizens, educators, and AI professionals can provide real-time policy suggestions as privacy challenges evolve. Future studies should investigate the effects of user-driven policy design on AI trust and adoption, ensuring that regulatory frameworks adapt to the needs and expectations of diverse stakeholders. Policy interventions that focus on users can promote responsible and transparent AI development while protecting users’ privacy rights by closing the gap between legislative safeguards and their actual use. However, without legal safeguards and accountability mechanisms targeting the structural power of dominant digital platforms, participatory design alone will not be sufficient to ensure equitable and enforceable privacy outcomes.
5.7. Theoretical Implications for AI Privacy Governance
This study contributes to a growing body of research that frames privacy not as a static entitlement but as a relational and situational construct. The PEA-AI model underscores how privacy expectations are shaped by role-based experiences, contextual trust, and varying levels of control. It challenges binary frameworks that position privacy as either granted or withheld, advocating instead for a participatory paradigm in which privacy is actively co-negotiated. This theoretical repositioning invites a broader exploration of how digital agency is constructed, constrained, and expressed within AI-mediated environments. It also reinforces the need for longitudinal research that tracks how these dynamics evolve in response to changes in technology, regulation, and social norms.
5.8. Limitations and Future Work
While this study offers useful insights regarding AI privacy governance from a stakeholder-driven perspective, it has several limitations. First, the perspectives of policymakers and regulatory bodies were not included, potentially overlooking institutional factors that influence AI privacy governance. Similarly, underrepresented youth populations such as those from rural, Indigenous, or lower socio-economic backgrounds were not systematically captured in the sample, which may limit the inclusiveness of the findings. Expanding the sample to include these stakeholder groups in future research would enhance the model’s generalizability and practical relevance. Second, the study may have been subject to regional or demographic bias, as data were drawn from specific geographic and educational contexts. Broader sampling across jurisdictions and cultural backgrounds could reveal important variations in privacy perceptions and regulatory expectations that were not fully captured in this study. Third, the cross-sectional design reflects privacy attitudes and behaviors at a single point in time, limiting the ability to assess how these perceptions evolve in response to ongoing technological change, regulatory developments, or shifts in AI literacy. Longitudinal studies tracking stakeholder perspectives over time would offer a more comprehensive view of privacy adaptation within dynamic AI ecosystems. Fourth, the use of qualitative interviews and self-reported survey responses introduces potential bias. Participants may have limited technical knowledge or respond in ways that they perceive as socially desirable, which may influence the reliability of the findings. To mitigate these limitations, future research should incorporate triangulation strategies, including behavioral data, experimental approaches, and real-world case studies of AI privacy solutions, to validate and enrich the qualitative insights. Additionally, while the PEA-AI model provides a robust structure for the analysis of stakeholder privacy concerns, its applicability across diverse domains has not yet been tested. Applying the model in contexts such as healthcare, education, finance, or smart cities could uncover domain-specific risks and governance needs. Finally, as AI technologies and data governance frameworks continue to evolve, future research should prioritize adaptive privacy models that integrate emerging privacy-enhancing technologies, flexible regulatory mechanisms, and participatory design strategies. Co-developing privacy safeguards with end-users will be essential in ensuring that they remain transparent, inclusive, and aligned with the changing expectations of digital citizens.
6. Conclusions
This study adopted a stakeholder-driven approach to examine privacy governance in AI ecosystems, drawing insights from young digital citizens, parents, educators, and AI professionals. Through grounded theory analysis, the research identified key concerns, including limited digital literacy, regulatory inconsistencies, and a lack of algorithmic transparency, which constrain effective privacy management. In response, we propose the PEA-AI model, which is structured around five core constructs: DOC, PDS, PRB, TT, and EA. The findings show that privacy education enhances user agency and risk awareness, while trust and transparency serve as critical enablers of meaningful engagement with AI systems. The study also highlights ongoing tensions, such as the restrictiveness of parental mediation, which may limit youth autonomy in managing personal data. To advance ethical AI development, privacy controls must be accessible, user-friendly, and shaped through multi-stakeholder participation. The PEA-AI model contributes to ethical AI policymaking by offering a structured, comparative lens to align stakeholder expectations with governance mechanisms. While the model enhances our theoretical understanding of AI privacy, future research should broaden demographic inclusion, refine construct measurement, and employ longitudinal methods to assess evolving privacy attitudes. Embedding participatory design into AI development and policymaking processes will be key to ensuring that privacy frameworks remain adaptive, transparent, and aligned with the expectations of diverse digital citizens.