Article

Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework for Ethical AI

Computer Science Department, Vancouver Island University, Nanaimo, BC V9R 5S5, Canada
*
Author to whom correspondence should be addressed.
Systems 2025, 13(6), 455; https://doi.org/10.3390/systems13060455
Submission received: 21 April 2025 / Revised: 25 May 2025 / Accepted: 5 June 2025 / Published: 9 June 2025

Abstract

The increasing integration of artificial intelligence (AI) in digital ecosystems has reshaped privacy dynamics, particularly for young digital citizens navigating data-driven environments. This study explores evolving privacy concerns across three key stakeholder groups—young digital citizens, parents/educators, and AI professionals—and assesses differences in data ownership, trust, transparency, parental mediation, education, and risk–benefit perceptions. Employing a grounded theory methodology, this research synthesizes insights from key participants through structured surveys, qualitative interviews, and focus groups to identify distinct privacy expectations. Young digital citizens emphasized autonomy and digital agency, while parents and educators prioritized oversight and AI literacy. AI professionals focused on balancing ethical design with system performance. The analysis revealed significant gaps in transparency and digital literacy, underscoring the need for inclusive, stakeholder-driven privacy frameworks. Drawing on comparative thematic analysis, this study introduces the Privacy–Ethics Alignment in AI (PEA-AI) model, which conceptualizes privacy decision-making as a dynamic negotiation among stakeholders. By aligning empirical findings with governance implications, this research provides a scalable foundation for adaptive, youth-centered AI privacy governance.

1. Introduction

The concept of artificial intelligence (AI) was formally introduced at the Dartmouth Conference in 1956, where researchers began exploring how machines could simulate human intelligence [1,2]. Early applications of AI took the form of expert systems, designed to replicate decision-making in domains such as medicine, law, and accounting [3]. Over time, these systems evolved from rule-based logic to more complex, data-driven models capable of adaptive learning [4]. In recent years, the development of generative AI tools such as ChatGPT (GPT-4, OpenAI, 2025), GitHub Copilot (version available as of 2025), and Meta AI technologies (including Meta AI Assistant and LLaMA models, versions available as of 2025) has accelerated the integration of AI into everyday life, shifting AI from domain-specific tools to widely accessible platforms used in education, communication, and entertainment [5,6,7]. These advances have dramatically expanded both the capabilities of AI systems and the scope of ethical and privacy concerns, particularly for young digital citizens interacting with these technologies in their daily lives [8,9].
Often defined as individuals who have grown up in a digital environment, young digital citizens routinely engage with AI-driven applications that shape their online behaviors and social experiences [10,11]. They regularly interact with AI-based services such as social media platforms, virtual assistants, and educational tools [12]. While these technologies offer benefits, such as personalized experiences and enhanced engagement, they also raise critical privacy concerns. AI systems typically rely on extensive data collection, automated decision-making, and complex, often unclear algorithms, leading to concerns about data ownership, user control, trust, and transparency [13].
The growing dependence on AI-driven systems has heightened concerns about privacy management, algorithmic accountability, and the ethical use of data. Research indicates that young users often lack a clear understanding of how their personal data are processed, shared, and monetized [14]. Moreover, many AI technologies function as “black-box” systems, which limits transparency, making it difficult for users to make informed privacy decisions [15]. Factors such as parental guidance, regulatory frameworks, and digital literacy initiatives play a crucial role in how young digital citizens navigate privacy risks [16]. However, much of the existing research focuses on adult users or general privacy concerns, leaving a significant gap in understanding how youth-specific factors influence privacy management behaviors in AI ecosystems [17]. In addition, the current literature often overlooks the perspectives of varying stakeholders, such as educators, policymakers, and AI developers, whose insight could provide a more comprehensive understanding of the privacy challenges facing young digital citizens. The lack of diverse stakeholder perspectives limits the development of ethical and stakeholder-driven policies and practices that prioritize the privacy and well-being of young users.
To address these gaps, this study employs the Privacy–Ethics Alignment in AI (PEA-AI) model to explore how young digital citizens and key stakeholders, including parents, educators, and AI professionals, perceive and manage privacy in AI-driven environments. The PEA-AI model emerges from an inductive data-driven approach to understanding how stakeholder perspectives shape ethical AI development. In this study, ethical AI development refers to the design and deployment of AI systems that are transparent, privacy-preserving, inclusive, and responsive to the needs of diverse users, particularly youth. It entails integrating participatory governance practices, user-centric design principles, clear consent and data transparency mechanisms, and safeguards that promote fairness and trust.
Developed using grounded theory [18], this model analyzes the complex relationship between privacy constructs, stakeholder roles, and ethical AI frameworks. Grounded theory provides a systematic, comparative analysis of privacy concerns, behaviors, and decision-making patterns across different stakeholder groups, allowing for a structured comparison of their perspectives, rather than identifying causal relationships. A more detailed explanation of the grounded theory approach used in this study is provided in the Methodology Section 3.2 (Research Approach and Analytical Framework). By leveraging empirical data from young digital citizens, parents and/or educators, and AI professionals, the PEA-AI model provides a negotiation-based framework for privacy governance in AI systems. This approach ensures that variations in privacy expectations, control mechanisms, and transparency demands are examined in alignment with each group’s role in the AI ecosystem.
This research focuses on five key constructs influencing privacy management: Data Ownership and Control (DOC) examines perceptions of data ownership, user autonomy, and AI governance mechanisms [10]; Parental Data Sharing (PDS) investigates the role of parental intervention in shaping youth privacy attitudes and AI literacy [13]; Perceived Risks and Benefits (PRB) evaluates how youth balance AI-related risks (privacy breaches, data misuse) against perceived benefits (personalization, efficiency) [14]; Transparency and Trust (TT) explores the relationship between algorithmic explainability, user trust, and privacy decision-making [15]; and Education and Awareness (EA) assesses the impact of AI literacy, privacy education, and digital awareness on privacy management behaviors [17]. Together, these constructs and the PEA-AI model aim to provide a comprehensive understanding of how diverse stakeholder perspectives can inform ethical privacy governance in AI systems.
This study uses a mixed-methods approach, incorporating quantitative survey data with qualitative insights from open-ended questions, interviews, and focus groups. Participants include young digital citizens (aged 16–19), parents and educators, and AI professionals, providing a comprehensive view of privacy challenges across multiple perspectives. By combining empirical evidence with theoretical insights, this research seeks to contribute to the development of AI privacy policies, ethical AI frameworks, and user-centric design principles.
The rest of this paper is structured as follows. Section 2 reviews the relevant literature. Section 3 outlines the methodology. Section 4 presents key findings. Section 5 discusses the implications, policy recommendations, and future research directions. Section 6 concludes the paper by summarizing core insights into AI privacy management for young digital citizens.

2. Overview

2.1. Privacy Challenges in AI-Driven Digital Ecosystems

The rapid expansion of AI-driven technologies has led to unprecedented levels of data collection and analysis, fundamentally transforming digital interactions and reshaping privacy expectations [10]. Young digital citizens, who regularly engage with AI-powered platforms, encounter unique privacy risks due to the opaque design of AI systems, lack of user control, and constantly evolving data governance policies [12]. These challenges are further amplified by the socio-technical complexities of AI, where algorithmic decisions impact various aspects of daily life, from social media interactions to personalized educational experiences [16].
While existing research highlights several critical privacy concerns, it often focuses on adults or general user populations, overlooking how youth-specific factors, such as digital literacy, cognitive development, and parental involvement, affect privacy management [13]. Additionally, the current regulatory and technological frameworks tend to be reactive rather than proactive, lacking a systematic comparison of how different stakeholder groups perceive AI transparency, assess risk, and approach data-sharing behaviors [15]. This gap highlights the need for an integrated, multi-stakeholder approach that addresses comparative privacy concerns across diverse user groups.
While education and awareness have been suggested as essential interventions [17], research indicates that young users often struggle to grasp the intricacies of algorithmic processing and data ownership [16]. The growing commodification of personal data further complicates the issue, as AI-driven platforms operate within a framework of behavioral tracking and predictive analytics, often with minimal user consent or understanding [14].
This study addresses a critical gap by utilizing grounded theory [18] to explore how young digital citizens and key stakeholders (parents, educators, and AI professionals) conceptualize privacy, negotiate trade-offs, and navigate AI-driven ecosystems. Unlike previous studies that focus on privacy attitudes in static digital environments, this research employs a comparative analysis framework to systematically examine privacy expectations, governance concerns, and risk perceptions across stakeholder groups. By addressing privacy concerns within dynamic, algorithmically mediated spaces, this study highlights the distinct ways in which different stakeholders interpret transparency, data control, and AI ethics. The findings aim to contribute to practical strategies for AI governance, ethical data practices, and privacy education, ensuring that AI systems align with the privacy expectations of young users.

2.2. Conceptualizing Privacy Through Key Constructs in AI-Driven Environments

Table 1 outlines the definitions of the five key constructs examined in this study: DOC, PDS, PRB, TT, and EA. Privacy in AI systems is influenced by the interplay of user agency, system transparency, and external factors, as articulated in Nissenbaum’s theory of contextual integrity and Westin’s privacy theory [19,20]. Nissenbaum emphasizes that privacy expectations are shaped by norms specific to social contexts, including the roles of stakeholders, the nature of the information, and the principles of transmission. Westin, on the other hand, categorizes individuals by their privacy orientations, ranging from privacy fundamentalists to the unconcerned, highlighting variations in sensitivity to data practices [20]. These frameworks inform our comparative lens by underscoring that stakeholder perceptions of privacy are shaped not only by technical realities but also by normative expectations and individual dispositions. Young digital citizens interact with AI applications that continuously collect, analyze, and process their data, yet they often lack the knowledge or authority to effectively manage their digital footprints [21]. While AI systems offer personalized experiences and enhanced user engagement, they also raise concerns about data ownership, trust, and perceived risks [14,22]. The complexity of AI-related privacy challenges calls for a structured comparative analysis of how different stakeholder groups (youth, parents and/or educators, and AI professionals) conceptualize privacy, navigate trade-offs, and evaluate transparency concerns within AI-driven ecosystems.
A significant challenge lies in the power imbalance surrounding data ownership and management. Young users are expected to interact with AI systems but are rarely afforded meaningful control over their personal information [13]. The opaque nature of AI decision-making and algorithmic profiling exacerbates this issue, limiting users’ ability to opt out or influence how their data are collected and used [10]. Parental mediation further complicates matters, as efforts to protect youth may inadvertently lead to the oversharing of personal information, especially in AI-based family applications [23]. Research suggests that intergenerational privacy expectations often differ, with parents prioritizing security while younger individuals desire more autonomy and flexibility in managing their data [24].
Furthermore, the balance between privacy risks and perceived benefits significantly affects how young people interact with AI. Many young users recognize the advantages of personalized recommendations, social connectivity, and AI-enhanced learning tools but often underestimate the long-term privacy ramifications of sharing their data [25,26]. This aligns with the privacy paradox, where stakeholders express concerns about data exploitation while also participating in high-disclosure behaviors due to immediate perceived benefits [22]. Transparency is crucial in influencing these behaviors, as AI systems that clearly communicate data practices and provide accessible privacy settings are more likely to build trust and encourage responsible engagement [15]. Nevertheless, research suggests that most AI-driven platforms function with limited transparency, making it difficult for young users to evaluate who has access to their data and how they are being used [16].
Improving AI literacy and digital awareness is also essential in enabling young digital citizens to navigate these challenges effectively. Studies show that adolescents who receive structured privacy education and ethical AI training have enhanced data protection practices and risk assessment skills [17,23]. However, the current privacy education initiatives often fall short, as many programs fail to address the algorithmic intricacies and ethical dilemmas associated with AI-driven environments [14]. Interactive learning efforts and peer-led AI literacy programs have shown potential in bridging this gap, empowering young users with practical strategies to manage privacy settings, evaluate data-sharing trade-offs, and critically assess AI-generated content [24].
This research moves beyond the traditional discourse on privacy and AI ethics by investigating how young digital citizens actively negotiate privacy boundaries in practical AI interactions. The findings contribute to the development of youth-centric privacy policies, ethical AI frameworks, and regulatory measures that prioritize autonomy, transparency, and digital resilience.

2.3. Transparency, Trust, and User Autonomy in AI Privacy Through a Grounded Theory Perspective

Transparency and trust are integral to privacy management in AI-driven environments, particularly for young digital citizens who navigate opaque algorithmic systems with limited agency [21]. While AI applications offer personalization and efficiency, their lack of explainability and user control has fueled skepticism about data collection, processing, and sharing practices [15]. Traditional research has treated transparency as a static variable, but a grounded theory approach enables a dynamic understanding of how young users conceptualize and negotiate trust in AI ecosystems [16]. By conducting a comparative analysis of stakeholder responses, this study identifies key discrepancies in trust formation, transparency expectations, and user autonomy between young digital citizens, parents and/or educators, and AI professionals, offering a structured evaluation of their privacy concerns [14].
Research indicates that trust in AI is influenced by contextual factors, with young users demonstrating higher trust in AI-powered tools when transparency aligns with their expectations of data control and ethical safeguards [22,23]. AI-driven applications in education and healthcare, for instance, demand the highest levels of transparency, as young users and stakeholders perceive the privacy risks in these domains as particularly significant [27]. In contrast, social media and entertainment platforms often operate under more lenient transparency expectations, despite their similarly extensive data collection practices [24]. This discrepancy highlights that transparency concerns among youth are not uniform, reinforcing the need for context-specific AI governance models rather than one-size-fits-all regulatory approaches [28].

2.4. Multi-Stakeholder Privacy Negotiation in AI Systems: An Analytical Perspective

Privacy governance within AI-driven environments is often structured as a top-down process, shaped largely by corporate interests and insufficient regulatory oversight. However, this centralized control has led to growing concerns about power asymmetries and the marginalization of user agency, as extensively discussed in critical scholarship on surveillance capitalism [29]. In this study, we adopt a normative perspective that frames privacy governance as an evolving negotiation among multiple stakeholders. In this context, multi-stakeholder privacy negotiation refers to the ongoing and dynamic process through which different groups, such as youth, parents, educators, and AI professionals, assert, reconcile, or contest their values, expectations, and responsibilities in shaping AI privacy practices. This negotiation occurs through both formal mechanisms, such as policy and design, and informal interactions rooted in daily experience. While grounded theory analysis provides a framework for the identification of emerging privacy themes, privacy expectations do not operate in isolation; rather, they are shaped through continuous interactions between young digital citizens, parents, educators, and AI professionals. These stakeholders hold divergent views on privacy autonomy, transparency requirements, and regulatory oversight, leading to negotiated trade-offs that influence the structure of AI governance [30].
Recent research highlights how privacy decision-making is often influenced by multiple contextual factors, including technological literacy, regulatory frameworks, and parental involvement [31,32]. Similarly, prior studies suggest that youth privacy behaviors are often driven by risk–benefit analysis, reflecting the need to assess AI governance through a comparative analytical lens rather than relying on static regulatory models [33]. While existing models explore privacy through theoretical frameworks, they often fail to capture the dynamic interplay between stakeholder priorities in AI-driven ecosystems.
To address these complexities, this study introduces a stakeholder-driven negotiation model, conceptualizing privacy governance as an interactive process shaped by competing priorities, ethical concerns, and governance expectations. Unlike static privacy frameworks, this model conceptualizes privacy tensions across diverse stakeholder groups as ideally shaped through ongoing, context-specific negotiation processes, rather than imposed by fixed policy structures. It emphasizes that, for privacy to be meaningfully protected in AI systems, youth, parents, educators, and AI professionals must be understood as active participants whose roles, expectations, and constraints interact in dynamic ways. By framing privacy governance as a dynamic negotiation process, this model offers a comparative framework to assess how stakeholders engage with AI transparency, data control mechanisms, and risk evaluations. The following section explores key research gaps, emphasizing why a negotiation-based perspective is crucial in developing adaptive, inclusive AI governance models that align with real-world privacy concerns.

2.5. Bridging Research Gaps: Toward a Youth-Centric AI Privacy Framework

Despite the growing awareness of AI-related privacy threats, the current research remains disjointed in its exploration of how young digital citizens participate in privacy decision-making [34]. Many studies focus primarily on adult privacy behaviors or regulation compliance, often overlooking the unique socio-cognitive elements that influence youth privacy perspectives within AI systems [23]. This gap is particularly evident in the absence of a structured comparative framework to evaluate how privacy concerns evolve differently across stakeholder groups, reflecting variations in risk perception, transparency expectations, and control mechanisms [25]. Existing AI governance models frequently overlook the agency of young users, treating them as passive beneficiaries of privacy protection rather than active participants in the conceptualization of AI governance [14]. Additionally, while digital literacy programs address privacy issues, they seldom provide systematic approaches to help young people to navigate the trade-offs between the benefits of personalization and the risks to data security [21]. These gaps highlight the need for a theoretical framework that captures the complexities of youth privacy management in AI-driven environments.
This research addresses these shortcomings by using grounded theory to provide a youth-centric AI privacy framework, offering a dynamic, empirical understanding of how young users perceive, navigate, and respond to privacy challenges in AI contexts [16]. This approach enables the detection of emerging privacy behaviors influenced by AI transparency, parental mediation, and trust dynamics, rather than relying on static privacy models [15]. This study also advances the dialog by advocating for adaptive privacy measures, participatory AI governance, and policy frameworks that integrate youth perspectives into AI design and legislation [22]. Furthermore, it highlights the development of interactive AI literacy initiatives that go beyond basic knowledge to equip young digital citizens with practical resources to safeguard their privacy [24]. By prioritizing youth agency and stakeholder collaborations, this research lays the groundwork for ethical AI regulations and privacy solutions that align with the evolving expectations and opinions of young users.

3. Methodology

3.1. Research Goal and Questions

The primary objective of this study is to explore how young digital citizens and key stakeholders—parents, educators, and AI professionals—navigate privacy concerns in AI-driven environments. Using grounded theory [18], this research examines emerging themes related to data ownership, trust, transparency, parental influence, and AI literacy. This study takes an inductive approach to identifying how these factors shape youth privacy behaviors. The study is guided by five key research questions.
  • RQ1: How do young digital citizens, parents/educators, and AI professionals perceive privacy risks and responsibilities in AI-driven ecosystems?
  • RQ2: What are the key factors influencing data ownership, user control, and privacy decision-making among different stakeholder groups in AI environments?
  • RQ3: How do varying levels of AI literacy and digital awareness impact privacy management behaviors and attitudes toward transparency?
  • RQ4: What role does stakeholder collaboration (youth, parents/educators, and AI professionals) play in shaping effective AI privacy governance frameworks?
  • RQ5: How can participatory design approaches and adaptive privacy policies improve AI transparency, trust, and ethical AI system development?
These constructs, informed by the research questions, converge to provide actionable insights for ethical AI development. Their definitions are outlined in Table 1, which details their scope and focus within the study.

3.2. Research Approach and Analytical Framework

This study employs a grounded theory approach to explore how young digital citizens, parents and/or educators, and AI professionals perceive and navigate privacy concerns in AI-driven environments. Grounded theory was applied as an inductive analytical method, allowing themes to emerge organically from the data, rather than being constrained by pre-existing theoretical frameworks. Through an analysis of both quantitative and qualitative responses, recurring privacy patterns were identified across stakeholder groups. These patterns were systematically categorized into thirteen generalized privacy themes, including data control importance, perceived data control, comfort with data sharing, parental data sharing, parental data rights, AI privacy concerns, perceived data benefits, data usage transparency, transparency perception, system data trust, privacy protection knowledge, digital privacy education, and AI privacy awareness. These themes formed the foundation for the development of the PEA-AI model.
The study was structured around five core privacy constructs, DOC, PDS, PRB, TT, and EA, which initially guided data collection. However, rather than assuming fixed relationships between these constructs, a thematic analysis was conducted at the item level to examine how specific survey responses reflected variations in privacy concerns, risk perceptions, and governance expectations across different stakeholder groups. The survey responses were analyzed by calculating mean values for each item across young digital citizens, parents and/or educators, and AI professionals, providing a structured way to compare privacy attitudes and identify patterns in stakeholder perspectives. These findings were further contextualized through qualitative insights from open-ended responses, interviews, and focus groups, ensuring that the emerging privacy themes were grounded in real-world stakeholder concerns rather than predefined categories.
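To make this item-level comparison concrete, the following Python sketch computes mean responses per survey item for each stakeholder group. It is illustrative only: the column names (group, item, response) and the toy records are hypothetical placeholders rather than the study dataset.

import pandas as pd

# Hypothetical long-format survey records (5-point Likert responses).
responses = pd.DataFrame({
    "group":    ["Youth", "Youth", "Parent/Educator", "AI Professional"],
    "item":     ["DOC1", "DOC1", "DOC1", "DOC1"],
    "response": [4, 5, 5, 4],
})

# Mean response for each item within each stakeholder group,
# arranged with items as rows and stakeholder groups as columns.
item_means = (
    responses.groupby(["item", "group"])["response"]
             .mean()
             .unstack("group")
)
print(item_means.round(2))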
Through this iterative, comparative process, emerging themes were synthesized into broader stakeholder-driven privacy categories, allowing for a structured understanding of privacy dynamics. The thematic analysis revealed distinct stakeholder perspectives on data control, parental mediation, AI risks, transparency, and digital literacy, highlighting key differences in how each group conceptualizes privacy responsibilities. Based on these comparative insights, the PEA-AI model was developed to provide a structured framework for the analysis of how privacy governance evolves through stakeholder engagement, disagreements, and negotiated privacy boundaries. Rather than treating privacy as a static concept, this model reflects how trust, transparency, and data control expectations shift based on stakeholder interactions and the broader ethical discourse surrounding AI systems.
The model is introduced in Section 4.4 (The PEA-AI Model: The Negotiation Framework), where it serves as an analytical tool to compare stakeholder-driven privacy dynamics and ethical AI governance strategies. By organizing stakeholder responses into generalized privacy themes, this study offers a comparative framework that highlights key tensions, alignments, and gaps in privacy governance, ensuring that AI policies and design principles reflect diverse perspectives. This approach ensures that privacy governance is understood as an evolving discourse shaped by real-world stakeholder interactions, fostering a more adaptive, inclusive, and user-centered understanding of AI ethics, rather than relying on rigid theoretical assumptions.

3.3. Research Design

The present study received ethics approval from the Vancouver Island University Research Ethics Board (VIU-REB). The approval reference number #103116 was given for the behavioral application/amendment forms, consent forms, interview and focus group scripts, and questionnaires. An initial pilot study was conducted with 6 participants, including empirical research specialists from the University of Saskatchewan and Vancouver Island University. The pilot study aimed to evaluate the feasibility and duration of the research approach while refining the study design. Participants provided general feedback on the questionnaire, which guided the modification and restructuring of the final survey. The revised research model was then tested by collecting survey data, with participants recruited through flyers, personal networks, emails, and social networking sites, namely LinkedIn and Reddit. To reach the targeted youth demographic, several Vancouver Island school districts were contacted to assist in distributing the survey to their high-school students. Participation in the study was entirely voluntary and uncompensated. Participants were required to read and accept a consent form before starting the questionnaire, indicating their understanding of the study conditions outlined in the form. Online surveys were conducted through Microsoft Forms, with participants responding based on three designated demographics: AI researchers and developers, parents and teachers, and young digital citizens (aged 16–19).
In addition to the questionnaires, interviews and focus groups were conducted with AI professionals, parents, and educators. A section of the questionnaire invited participants to provide their email addresses if they were interested in participating in interviews and/or focus groups. After contacting consenting participants, 12 interviews and 2 focus groups were conducted: one with 4 AI professionals and another with 5 parents and/or educators. Before the interviews and focus groups, all participants reviewed and accepted a consent form. Sessions were conducted and transcribed using Microsoft Teams, with participants instructed to keep their videos off to ensure anonymity.
Young digital citizens were not included in the interviews or focus groups due to ethical and procedural constraints. Although the Tri-Council Policy Statement (TCPS 2) does not specify a fixed age of consent and allows for flexibility based on a youth’s capacity to understand the research and its risks, the school district’s ethics office requires parental consent for minors when audio or video recordings are involved. As all interviews and focus groups were recorded in accordance with institutional policy, we were unable to include participants under 18 in these formats. To ensure that youth perspectives were still captured meaningfully, we prioritized their inclusion through the anonymous survey component of the study. This approach allowed us to respect ethical standards while still obtaining valuable insights from young digital citizens.
The survey instruments were adapted from constructs validated in prior studies [35,36,37,38,39,40,41,42,43,44,45,46,47,48]. The instruments consisted of 3 indicators for DOC, 2 indicators for PDS, 4 indicators for PRB, 3 indicators for TT, 3 indicators for EA, and 3 open-ended discussion questions. The items (questions) within these constructs are outlined in Table 2. Survey responses were measured on a 5-point Likert scale, with most items used for the quantitative analysis. Notably, to ensure consistency in outcomes, we reversed the scale for items in PRB for AI professionals and swapped items 1 and 2 in PDS for educators/parents to align contextually with the items for other demographics. The qualitative analysis utilized open-ended questions, two indicators from PRB, interview responses, and focus group discussions.
The following naming conventions were used for qualitative responses: survey participants were labeled as (S-YDC #X) for young digital citizens, (S-PE #X) for parents and educators, and (S-AIP #X) for AI professionals. Interview participants were referred to as (I-[Role] #X), specifying their role (e.g., I-Parent #1 or I-Educator #2). For focus group participants, we used a group identifier and role, such as (FG1-Educator #3).

3.4. Participant Demographics

Out of 482 participants, 461 completed the survey questionnaire: 176 young digital citizens (aged 16–19), 132 parents and/or educators, and 153 AI professionals. After data cleaning, we retained 127 valid responses from educators and/or parents, 146 from AI professionals, and 151 from young digital citizens for analysis. Of the 127 valid responses from educators and/or parents, 54 identified as parents, 46 identified as educators, and 28 identified as both. Among the 146 valid responses from AI professionals, 46 identified as AI developers, 98 as AI researchers, and 2 as both. Twelve interviews were conducted, with 9 interviewees identifying as parents and/or educators and 3 as AI professionals. Two focus groups were conducted, with 4 participants identifying as AI professionals and 5 as parents and/or educators. Table 3 highlights the demographic characteristics of the participants.

4. Results and Analysis

This study expands upon our previous research by incorporating additional responses from young digital citizens, allowing for a more detailed analysis of their perspectives on privacy in AI systems. This study employs the PEA-AI model as an analytical framework to examine how privacy expectations, transparency concerns, and AI governance strategies vary across stakeholder groups. Derived through a comparative thematic analysis of stakeholder responses, the model offers a structured approach to understanding how young digital citizens, parents and/or educators, and AI professionals negotiate privacy in AI-driven environments. Rather than establishing direct causal relationships, the model identifies key tensions, alignments, and differences in privacy perceptions across the five core constructs: DOC, PDS, PRB, TT, and EA.
This study employs a dual-layered analytical approach to explore stakeholder privacy concerns in the AI ecosystem, combining quantitative and qualitative methods to uncover critical privacy negotiation points and demonstrate how multi-stakeholder discourse shapes privacy governance.
For quantitative analysis, Microsoft Excel (Office 365 version, 2025) was used to structure, clean, and manage the collected survey data, ensuring consistency in the dataset. A descriptive statistical analysis was conducted across four distinct groups: young digital citizens, parents and educators, AI developers and researchers, and a combined dataset consolidating all responses. This analysis allowed for a structured comparison of the responses, revealing trends in data control, transparency concerns, perceived risks, and awareness levels. Mean values were calculated and sentiment levels were categorized accordingly for the key constructs: DOC, PDS, PRB, TT, and EA. By systematically comparing these constructs, we identified how different stakeholder groups perceived and prioritized privacy-related concerns, illustrating the thematic contrasts in their expectations and decision-making processes. This approach ensured that the analysis evaluated the interplay of privacy constructs within and across groups, rather than examining them in isolation.
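A minimal sketch of how construct-level means could be mapped to sentiment levels is given below; the cut-points are assumptions made for illustration, not the thresholds applied in the study, and the example means are placeholders.

def sentiment_level(mean_score):
    # Map a construct mean on the 5-point scale to a coarse sentiment label.
    if mean_score >= 4.0:
        return "high agreement"
    if mean_score >= 3.0:
        return "moderate agreement"
    if mean_score >= 2.0:
        return "low agreement"
    return "disagreement"

# Placeholder construct-level means (not the study's reported values).
construct_means = {"DOC": 4.1, "PDS": 2.6, "PRB": 3.9, "TT": 3.2, "EA": 3.5}
print({construct: sentiment_level(m) for construct, m in construct_means.items()})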
To contextualize these findings, qualitative data from open-ended responses, interviews, and focus group discussions were analyzed thematically. This revealed common patterns in participant concerns, perceptions, and recommendations. Together, these methods offer a comprehensive understanding of privacy concerns and values, allowing for a deeper exploration of stakeholder priorities, knowledge gaps, and expectations from AI privacy frameworks.

4.1. Descriptive Statistics

Our quantitative survey used a five-point Likert scale to compare the mean responses across the five key constructs: DOC, PDS, PRB, TT, and EA. The mean scores for each construct varied across the three key demographics—young digital citizens, parents and/or educators, and AI professionals. The results are visually represented in Figure 1 (heatmap) and detailed in Table 4 (comparative analysis). Overall, while adults agree on user control, transparency, and engagement with AI, youth show lower trust and awareness, highlighting the need for targeted interventions to bridge this gap.
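For readers who wish to reproduce a Figure 1-style visualization, the sketch below plots a heatmap for the three Data Ownership and Control themes using the group means reported in Section 4.2.1; the published figure covers all thirteen themes, so this subset is purely illustrative.

import matplotlib.pyplot as plt
import numpy as np

themes = ["Data control importance", "Perceived data control", "Comfort with data sharing"]
groups = ["Youth", "Parents/Educators", "AI Professionals"]
means = np.array([
    [4.08, 4.46, 4.39],   # Data control importance
    [3.35, 3.40, 3.64],   # Perceived data control
    [2.83, 3.39, 3.79],   # Comfort with data sharing
])

fig, ax = plt.subplots(figsize=(6, 3))
im = ax.imshow(means, cmap="viridis", vmin=1, vmax=5)
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups)
ax.set_yticks(range(len(themes)))
ax.set_yticklabels(themes)
fig.colorbar(im, ax=ax, label="Mean response (5-point Likert scale)")
fig.tight_layout()
plt.show()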

4.2. Comparative Analysis of AI Privacy Constructs Across Stakeholder Groups

The comparative analysis of privacy constructs across young digital citizens, parents and/or educators, and AI professionals reveals distinct variations in how different stakeholders perceive and engage with AI-driven privacy concerns. By examining the mean scores across thematic categories, key differences emerge in the conceptualization of data control, parental data sharing, perceived risks and benefits, transparency, trust, and education.

4.2.1. Data Ownership and Control

Data Control Importance: Parents/educators rated this the highest (4.46), followed by AI professionals (4.39) and youth (4.08). This suggests that adult stakeholders, especially educators and researchers, strongly advocate for user control over personal data, reinforcing its role in ethical AI development.
Perceived Data Control: AI professionals (3.64) reported feeling more in control over their data compared to parents/educators (3.40) and youth (3.35). The relatively lower score among youth suggests a potential gap in privacy self-efficacy, necessitating better user-centric privacy mechanisms.
Comfort with Data Sharing: AI professionals (3.79) displayed the highest comfort in sharing personal data, followed by parents/educators (3.39), with youth reporting the lowest comfort (2.83). This reflects a generational divide in risk perception, with youth demonstrating greater apprehension toward personal data disclosure.

4.2.2. Parental Data Sharing

Parental Data Sharing Practices: AI professionals reported the lowest support for parental data sharing (1.81), followed by parents/educators (2.46) and youth (2.52). These relatively low scores indicate widespread concern about the appropriateness of parental involvement in youth data decisions.
Parental Data Rights: Parents/educators (3.39) rated parental data rights the highest, while AI professionals (2.90) and youth (2.51) expressed lower confidence in this construct. Notably, AI professionals favored youth consent mechanisms, prioritizing autonomy over parental governance in data-related decisions.

4.2.3. Perceived Risks and Benefits

AI Privacy Concerns: All three groups expressed strong privacy concerns, with AI professionals scoring the highest (4.31), followed by parents/educators (4.25) and youth (4.09). This consensus highlights the universal recognition of the ethical challenges posed by AI data governance.
Perceived Data Benefits: AI professionals (4.53) rated data benefits significantly higher than both youth (3.86) and parents/educators (3.50). These findings suggest that, while AI professionals see tangible advantages in data-driven AI advancements, youth and educators remain more cautious, reflecting a trust gap in AI benefit perception.

4.2.4. Transparency and Trust

Data Usage Transparency: Transparency was considered highly important across all stakeholder groups, with parents/educators scoring the highest (4.34), followed by youth (4.19) and AI professionals (4.17). This reinforces the demand for increased transparency mechanisms in AI governance.
Transparency Perception: Despite valuing transparency, stakeholders perceived existing AI transparency measures as insufficient. Parents/educators rated transparency perception the lowest (1.96), followed by AI professionals (2.11) and youth (2.41). These results indicate a strong disparity between their expectations and the current implementation of transparency in AI systems.
System Data Trust: AI professionals (4.18), followed by parents/educators (4.07), exhibited relatively higher trust in AI systems, while youth expressed significantly lower trust (3.09). These findings suggest that youth are more skeptical of AI governance practices, reinforcing the necessity of improved explainability measures.

4.2.5. Education and Awareness

Privacy Protection Knowledge: AI professionals reported the highest levels of privacy knowledge (4.04), followed by parents/educators (3.29) and youth (3.04). The significant gap between professionals and youth suggests an urgent need for targeted AI privacy education initiatives.
Digital Privacy Education: AI professionals rated privacy education substantially higher (3.79) compared to youth (2.93) and parents/educators (2.50). These results highlight a potential divide in AI literacy, where non-technical stakeholders may lack the resources to fully understand privacy frameworks.
AI Privacy Awareness: All groups strongly agreed on the importance of AI privacy education, with AI professionals rating it the highest (4.63), followed by parents/educators (4.50) and youth (3.68). The widespread alignment in this area suggests broad recognition of the need for continuous privacy education programs.

4.3. Qualitative Findings

In addition to the quantitative findings, the qualitative data revealed key stakeholder tensions regarding privacy governance in AI-driven environments. A significant divide emerged between young digital citizens and parents/educators, particularly in the realm of parental consent and data-sharing authority [49]. While youth participants expressed frustration over their lack of control and transparency in how their data are handled, many parents and educators viewed youth privacy as something that should be actively managed rather than autonomously controlled. One young respondent voiced their concern, stating, “I am mainly concerned about what data is being taken and how it is used, as I feel we often aren’t informed clearly about what data is being taken and used” (S-YDC #5). Conversely, an educator emphasized the role of awareness rather than outright control, explaining, “Many children and adolescents will use AI without considering their own privacy (similar to how many use social media). A lack of education regarding the risks of sharing personal information on the internet can lead to students potentially misusing AI” (S-PE #40). This tension highlights a fundamental gap between youth demands for autonomy and parental concerns about informed decision-making in AI privacy governance.
Another point of contention emerged between educators and AI professionals regarding the trustworthiness of AI applications. Educators largely expressed skepticism over whether AI systems genuinely safeguard personal data, citing concerns about long-term data retention, algorithmic biases, and the lack of oversight in AI-driven decision-making. One educator articulated this skepticism, stating, “I do not trust that information gathered by AI will be used presently or in the future in an informed manner for the benefit of the individual, but rather fear its exploitation on both an individual and mass level” (S-PE #10). However, AI professionals generally framed privacy risks as technical challenges that could be addressed through improved security measures rather than as inherent flaws in AI systems. An AI researcher highlighted the difficulty in tracing data flows, stating, “Once data goes into an AI system, it’s tough to know where it ends up or who else can see it” (S-AIP #45). This divergence suggests that, while educators advocate for broader regulatory oversight and ethical considerations, AI professionals emphasize governance through internal safeguards and privacy-preserving technologies.
The issue of AI-driven surveillance and profiling further highlights the complex negotiation between young digital citizens and AI professionals. Youth participants expressed deep concern over AI’s ability to track, categorize, and potentially manipulate their behaviors without their explicit consent. One respondent remarked, “I feel uncomfortable knowing AI can recognize my face in public places” (S-YDC #93), while another feared long-term profiling, stating, “AI will guess everything about us. Sensitive topics I research could be recorded forever” (S-YDC #84). AI professionals, however, tended to view these issues through the lens of data governance rather than direct surveillance. A researcher explained, “AI might accidentally recreate sensitive data from its training sets, exposing private information” (S-AIP #85). This suggests that, while youth stakeholders perceive AI surveillance as an immediate privacy violation, AI professionals frame it as a solvable issue through stricter data management practices. These findings collectively highlight the necessity of a multi-stakeholder approach to AI privacy governance, one that not only strengthens technical safeguards but also considers the lived experiences of youth and the ethical concerns raised by educators and parents.

4.4. The PEA-AI Model: The Negotiation Framework

The PEA-AI model, presented in Figure 2, provides an organized framework for understanding how privacy concerns and ethical considerations develop through multi-stakeholder engagement in AI governance. Stakeholders engage with and shape privacy constructs based on their perceptions and experiences. These constructs, in turn, inform the design and governance of ethical AI systems. Feedback loops represent how the outcomes of AI governance recursively influence stakeholder attitudes, behaviors, and future expectations, reinforcing a continuous cycle of negotiation and adaptation. Using survey data and qualitative insights, this model compares and contrasts the privacy attitudes of the three main stakeholder groups: young digital citizens, parents and educators, and AI professionals. Unlike traditional models that impose rigid privacy standards through prescriptive frameworks, the PEA-AI model views the creation of ethical AI as an iterative negotiation process in which stakeholders’ expectations of privacy and governance mechanisms evolve through interaction.
The PEA-AI model introduces a multi-stakeholder privacy negotiation framework, where privacy constructs evolve through stakeholder engagement. This framework addresses four key tensions.
  • Data Control vs. Trust: Balancing youth autonomy with AI developers’ risk mitigation measures.
  • Transparency vs. Perception: Addressing the gap between AI’s claimed transparency and user perception.
  • Parental Rights vs. Youth Autonomy: Negotiating consent mechanisms that respect youth agency while addressing parental concerns.
  • Privacy Education vs. Awareness Deficit: Strengthening digital literacy to enable informed AI interactions and empower users to navigate privacy challenges effectively.
This model was developed by systematically comparing the responses to survey items concerning five fundamental privacy constructs: DOC, PDS, PRB, TT, and EA. By combining stakeholder responses into broad themes, the model focuses on how privacy perspectives influence governance and AI ethics, rather than establishing fixed causal relationships.
As early adopters of AI, young people demonstrate a nuanced view of personal data protection. While they recognize the importance of data security, they often prioritize convenience over control in their privacy standards. Their responses suggest a willingness to allow AI-driven data collection, as the benefits of personalization, algorithmic recommendations, and social connectivity outweigh their privacy concerns. In contrast, educators and parents view privacy as a precaution, drawing attention to the risks associated with youth data disclosure and the need for regulatory oversight. Nevertheless, their perspectives reveal limited AI literacy, which hampers their ability to effectively guide youth in managing AI privacy settings. AI professionals, who are responsible for designing and implementing privacy safeguards, primarily view privacy through the lens of system performance and risk mitigation. While they acknowledge the challenges in achieving full transparency and user agency in AI-driven settings, their focus remains on compliance with existing regulations.
The PEA-AI model highlights how stakeholder tensions shape privacy governance in the AI ecosystem. Despite all three categories engaging with AI systems, their expectations around data control, privacy, and trust often diverge. A comparative analysis reveals significant differences in privacy perceptions across themes. While all groups aim to control and manage data, their priorities vary: parents and educators prioritize security, young digital citizens value autonomy, and AI experts focus on technological limitations. Transparency and trust emerge as critical issues, with educators highlighting a lack of user-friendly disclosure, AI professionals acknowledging the challenges of comprehensive explainability, and young users demanding clarity and accessibility in AI systems.
By framing privacy as an ongoing negotiation rather than a static regulatory process, the PEA-AI model advances AI development, stakeholder-driven governance, and policy interventions. It emphasizes the importance of aligning AI design with ethical considerations, ensuring that privacy measures reflect real user behavior rather than hypothetical assumptions. The model advocates for the importance of multi-stakeholder involvement in governance, addressing the interests and constraints of various user groups. On the policy front, it guides adaptive regulation measures that balance the technical concerns of AI professionals, the privacy demands of young digital citizens, and the ethical considerations of parents and educators. Rather than prescribing a specific privacy solution, the model provides a comparative approach to understanding privacy conflicts in AI ecosystems, encouraging active participation from young digital citizens, educators, and AI professionals in the creation of ethical AI practices.
The PEA-AI model makes several key contributions to the field of AI privacy governance. First, it introduces a stakeholder-driven approach to AI governance, unlike traditional models that focus solely on individual privacy attitudes. By integrating stakeholder negotiation, the model ensures that governance frameworks reflect the diverse needs and perspectives of all stakeholders, including young digital citizens, parents/educators, and AI professionals. Second, the model has significant policy implications, as it supports data protection frameworks that advocate for dual-consent mechanisms in AI data governance, balancing youth autonomy with parental oversight. Third, the model provides a foundation for AI design frameworks, enabling AI professionals to develop user-centered, privacy-enhancing technologies that prioritize transparency, accessibility, and ethical considerations. As AI evolves, the PEA-AI model stresses the need for privacy governance to adapt to shifting stakeholder expectations, fostering a more inclusive and equitable digital society.

5. Discussion

This section interprets the study’s findings with a sustained focus on policy, design, education, governance, and theoretical implications. It addresses stakeholder-specific challenges and proposes actionable solutions guided by the PEA-AI model.

5.1. Privacy Perception in AI: Bridging Stakeholder Disparities

The findings reveal significant disparities in how privacy is perceived and managed across stakeholder groups. Young digital citizens tend to prioritize autonomy, personalization, and seamless interactions with AI but often lack the knowledge or confidence to make informed privacy decisions. Parents and educators, while concerned about youth exposure to algorithmic systems, frequently lack the technical grounding to guide them effectively. AI professionals, on the other hand, focus primarily on technical efficacy, data security, and compliance, often overlooking the lived experiences and usability challenges of end-users. These disparate perceptions underscore the fragmented nature of AI privacy governance, revealing a pressing need for coordinated efforts that bridge understanding, expectations, and responsibilities.

5.2. Stakeholder Tensions in AI Privacy Management: Diverging Priorities and Overlapping Concerns

Tensions emerge when stakeholders navigate competing priorities. Young digital citizens call for greater agency and transparency, while parents and educators adopt more protective postures, often resorting to restrictive mediation. AI professionals emphasize system integrity and regulatory alignment, framing privacy as a technical problem to be solved. However, these tensions unfold within a broader landscape shaped by corporate control and limited regulatory intervention. While AI professionals, parents, and educators influence implementation-level decisions, they often operate within systems architected by powerful digital corporations that define data collection norms, platform logic, and interface constraints. The lack of robust privacy legislation and minimal enforcement further concentrates decision-making power in the hands of platform owners, constraining both transparency and accountability. As several stakeholders in this study noted, the opacity of AI systems is not merely a design oversight but a structural feature of the current business models. Therefore, framing privacy governance as a stakeholder negotiation must be approached critically: such negotiation is only meaningful if corporate and governmental entities—those who currently determine the boundaries of digital privacy—are also engaged and held accountable. A stakeholder-inclusive governance framework must thus go beyond surface-level participation to address these power asymmetries directly. Without structural checks, efforts toward ethical design and participatory engagement risk being subsumed within the very systems that they seek to reform.

5.3. A Stakeholder-Driven Framework for Ethical AI Governance

The PEA-AI model developed in this study offers a comparative framework for an understanding of how stakeholder values shape privacy governance in AI ecosystems. Unlike traditional top-down models, the PEA-AI conceptualizes privacy as a dynamic process, negotiated across roles and contexts. It reframes ethical AI development not as a static checklist but as an evolving set of practices grounded in participatory engagement. The model integrates user control, risk–benefit awareness, transparency, and educational needs into a coherent structure, facilitating ethical AI design that adapts to the needs of young digital citizens, caregivers, and system developers alike. To operationalize the PEA-AI model in real-world contexts, we propose three primary avenues for implementation. First, in AI system design, developers should embed participatory design methodologies that involve youth and caregivers in iterative co-design sessions, usability testing, and privacy feature validation. These sessions should focus on surfacing age-appropriate expectations and ensuring that transparency tools are understandable and actionable. Second, in regulation, policymakers should institutionalize stakeholder-informed privacy audits and mandate that youth-facing AI platforms include consent mechanisms and data practices developed with direct youth input. Regulatory frameworks must also integrate youth advisory councils to inform legislative updates and oversight. Third, in education, school systems should integrate AI ethics and privacy modules into digital literacy curricula, co-developed with educators and technologists and supported by experiential learning tools such as simulations and interactive scenarios. These initiatives would not only support informed engagement with AI but also cultivate a generation of privacy-conscious digital citizens.

5.4. Bridging the Gap Between Privacy Awareness and Practical Implementation

While awareness of AI-related privacy risks is growing, significant gaps remain in translating that awareness into practical, effective privacy management strategies. Many young digital citizens understand the stakes of data sharing but encounter barriers when navigating complex or opaque privacy settings. Educators and parents frequently lack the resources to support youth in interpreting algorithmic behaviors or configuring privacy protections. Meanwhile, developers often prioritize technical feasibility and compliance over accessibility and usability. The PEA-AI model underscores the need for bridge mechanisms that convert privacy knowledge into practical, user-friendly measures. This includes addressing gaps in literacy, designing intuitive privacy controls, and incorporating participatory governance models that adapt to user needs. Effective AI privacy solutions should simplify privacy controls, provide clear explanations of data usage, and offer real-time feedback to users. Multi-stakeholder educational initiatives, such as gamified AI ethics courses, youth-centric design concepts, and interactive privacy dashboards, can empower users to make informed decisions. Additionally, AI developers should conduct usability testing with diverse stakeholders to ensure that privacy measures are accessible and easy to understand. Future research should use longitudinal methods to examine whether increased awareness leads to sustained privacy-protective behavior, ensuring that privacy literacy translates into meaningful action.

5.5. The Role of Transparency in Building Trust and User Autonomy

Transparency remains a critical but under-implemented component of AI privacy governance. While most stakeholders agree on its importance, the mechanisms used to deliver transparency often fall short. Legalistic privacy notices, vague permissions, and generalized statements of compliance do little to build meaningful trust. Youth participants in particular expressed frustration with opaque explanations of data collection and AI logic. Developers and policymakers must move beyond minimal disclosure standards toward context-sensitive transparency, including visual privacy indicators, interactive consent tools, and simplified explanations of AI processes tailored to different user demographics. Adaptive transparency models, which allow users to choose the level of insight they receive about AI data processing, can enhance user agency and decision-making. Future research should explore transparency-enhancing technologies (TETs), such as AI-generated privacy summaries or interactive data flow diagrams, that empower users while maintaining system efficiency. Transparency must be treated as a foundational element of ethical AI design, ensuring accessibility, user agency, and trust.
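To make the notion of adaptive transparency more concrete, the following minimal Python sketch illustrates one way a platform could map a user-selected disclosure level to progressively more detailed explanations of its data processing. The tiers, field names, and wording are illustrative assumptions introduced for this example only; they are not features of any existing system or of the platforms discussed in this study.

    # Illustrative sketch of an adaptive transparency setting: the user chooses
    # how much detail they receive about data processing. All names are hypothetical.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DataPractice:
        category: str           # e.g., "location"
        purpose: str            # e.g., "content recommendation"
        retention_days: int     # how long the data is kept
        shared_with: List[str]  # third parties, if any

    def privacy_summary(practices: List[DataPractice], level: str = "basic") -> str:
        """Return a transparency notice whose level of detail the user controls."""
        if level == "basic":
            categories = ", ".join(sorted({p.category for p in practices}))
            return f"This service uses your {categories} data to personalize your experience."
        if level == "detailed":
            return "\n".join(
                f"- {p.category}: used for {p.purpose}, kept for {p.retention_days} days"
                for p in practices
            )
        # "full" level: also disclose third-party sharing
        return "\n".join(
            f"- {p.category}: {p.purpose}; retained {p.retention_days} days; "
            f"shared with {', '.join(p.shared_with) or 'no third parties'}"
            for p in practices
        )

    practices = [
        DataPractice("location", "local content recommendation", 30, []),
        DataPractice("viewing history", "ranking the feed", 365, ["advertising partner"]),
    ]
    print(privacy_summary(practices, level="detailed"))

In such a design, a "basic" tier could serve first-time or younger users, while more detailed tiers would support caregivers, educators, and auditors who wish to inspect retention periods and third-party sharing.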

5.6. Strengthening AI Privacy Through User-Centric and Stakeholder-Inclusive Policy Interventions

Effective AI privacy governance requires user-centric policy interventions that balance technical advancement with ethical data practices. While all stakeholder groups recognize the importance of privacy measures, there is often a disconnect between policy development and practical implementation. AI privacy policies should be co-designed with direct stakeholder involvement, ensuring that interventions are explicit, practical, and adaptable for different users. Regulations should mandate simplified privacy settings so that young users and their guardians can manage their data effectively without needing technical expertise. Age-appropriate privacy standards, similar to existing child protection laws, should be expanded to address AI-specific risks such as data-driven profiling and content manipulation. To further enable users to understand, evaluate, and manage privacy concerns, consistent digital literacy education should be introduced into national curricula. Beyond user education, corporate accountability and transparent enforcement mechanisms are essential to ensure that AI systems adhere to ethical data governance standards. Organizations deploying AI systems should conduct multi-stakeholder privacy impact assessments so that privacy issues are detected and addressed before deployment. Additionally, advisory boards that include young digital citizens, educators, and AI professionals can provide real-time policy input as privacy challenges evolve. Future studies should investigate the effects of user-driven policy design on AI trust and adoption, ensuring that regulatory frameworks adapt to the needs and expectations of diverse stakeholders. By closing the gap between legislative safeguards and their practical application, user-centric policy interventions can promote responsible and transparent AI development while protecting users’ privacy rights. However, without legal safeguards and accountability mechanisms targeting the structural power of dominant digital platforms, participatory design alone will not be sufficient to ensure equitable and enforceable privacy outcomes.

5.7. Theoretical Implications for AI Privacy Governance

This study contributes to a growing body of research that frames privacy not as a static entitlement but as a relational and situational construct. The PEA-AI model underscores how privacy expectations are shaped by role-based experiences, contextual trust, and varying levels of control. It challenges binary frameworks that position privacy as either granted or withheld, advocating instead for a participatory paradigm in which privacy is actively co-negotiated. This theoretical repositioning invites a broader exploration of how digital agency is constructed, constrained, and expressed within AI-mediated environments. It also reinforces the need for longitudinal research that tracks how these dynamics evolve in response to changes in technology, regulation, and social norms.

5.8. Limitations and Future Work

While this study offers valuable insights into AI privacy governance from a stakeholder-driven perspective, it has several limitations. First, the perspectives of policymakers and regulatory bodies were not included, potentially overlooking institutional factors that influence AI privacy governance. Similarly, underrepresented youth populations, such as those from rural, Indigenous, or lower socio-economic backgrounds, were not systematically captured in the sample, which may limit the inclusiveness of the findings. Expanding the sample to include these stakeholder groups in future research would enhance the model’s generalizability and practical relevance. Second, the study may have been subject to regional or demographic bias, as data were drawn from specific geographic and educational contexts. Broader sampling across jurisdictions and cultural backgrounds could reveal important variations in privacy perceptions and regulatory expectations that were not fully captured in this study. Third, the cross-sectional design reflects privacy attitudes and behaviors at a single point in time, limiting the ability to assess how these perceptions evolve in response to ongoing technological change, regulatory developments, or shifts in AI literacy. Longitudinal studies tracking stakeholder perspectives over time would offer a more comprehensive view of privacy adaptation within dynamic AI ecosystems. Fourth, the use of qualitative interviews and self-reported survey responses introduces potential bias. Participants may have had limited technical knowledge or responded in socially desirable ways, which can affect the reliability of the findings. To mitigate these limitations, future research should incorporate triangulation strategies, including behavioral data, experimental approaches, and real-world case studies of AI privacy solutions, to validate and enrich the qualitative insights. Additionally, while the PEA-AI model provides a robust structure for the analysis of stakeholder privacy concerns, its applicability across diverse domains has not yet been tested. Applying the model in contexts such as healthcare, education, finance, or smart cities could uncover domain-specific risks and governance needs. Finally, as AI technologies and data governance frameworks continue to evolve, future research should prioritize adaptive privacy models that integrate emerging privacy-enhancing technologies, flexible regulatory mechanisms, and participatory design strategies. Co-developing privacy safeguards with end-users will be essential in ensuring that they remain transparent, inclusive, and aligned with the changing expectations of digital citizens.

6. Conclusions

This study adopted a stakeholder-driven approach to examine privacy governance in AI ecosystems, drawing insights from young digital citizens, parents, educators, and AI professionals. Through grounded theory analysis, the research identified key concerns, including limited digital literacy, regulatory inconsistencies, and a lack of algorithmic transparency, which constrain effective privacy management. In response, we propose the PEA-AI model, which is structured around five core constructs: data ownership and control (DOC), parental data sharing (PDS), perceived risks and benefits (PRB), transparency and trust (TT), and education and awareness (EA). The findings show that privacy education enhances user agency and risk awareness, while trust and transparency serve as critical enablers of meaningful engagement with AI systems. The study also highlights ongoing tensions, such as the restrictiveness of parental mediation, which may limit youth autonomy in managing personal data. To advance ethical AI development, privacy controls must be accessible, user-friendly, and shaped through multi-stakeholder participation. The PEA-AI model contributes to ethical AI policymaking by offering a structured, comparative lens through which stakeholder expectations can be aligned with governance mechanisms. While the model enhances our theoretical understanding of AI privacy, future research should broaden demographic inclusion, refine construct measurement, and employ longitudinal methods to assess evolving privacy attitudes. Embedding participatory design into AI development and policymaking processes will be key to ensuring that privacy frameworks remain adaptive, transparent, and aligned with the expectations of diverse digital citizens.

Author Contributions

Conceptualization, A.K.S.; methodology, A.K.S.; software, A.B.; validation, A.B.; formal analysis, A.B.; investigation, A.B., A.K.S. and M.C.; resources, A.K.S.; data curation, A.B.; writing—original draft preparation, A.B.; writing—review and editing, A.K.S. and M.C.; visualization, A.B.; supervision, A.K.S.; project administration, A.K.S.; funding acquisition, A.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This project has been funded by the Office of the Privacy Commissioner of Canada (OPC) under its Contributions Program (Funding Reference Number: 7777-6-189693); the views expressed herein are those of the authors and do not necessarily reflect those of the OPC.

Data Availability Statement

The raw datasets presented in this article are not readily available due to confidentiality and anonymity agreements with the participants.

Acknowledgments

During the preparation of this manuscript, the authors used OpenAI’s ChatGPT (GPT-4, 2025) for the purpose of grammatical and language refinement to ensure clarity and coherence in the research writing process. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 2006, 27, 12–14. [Google Scholar] [CrossRef]
  2. Nilsson, N.J. The Quest for Artificial Intelligence; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar] [CrossRef]
  3. Jackson, P. Introduction to Expert Systems; Addison-Wesley Pub. Co.: Reading, MA, USA, 1986. [Google Scholar]
  4. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson Series in Artificial Intelligence; Prentice Hall: Upper Saddle River, NJ, USA, 2020. [Google Scholar]
  5. Eysenbach, G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med. Educ. 2023, 9, e46885. [Google Scholar] [CrossRef] [PubMed]
  6. Nah, F.F.-H.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277–304. [Google Scholar] [CrossRef]
  7. Sebastian, G. Privacy and Data Protection in ChatGPT and Other AI Chatbots. Int. J. Secur. Priv. Pervasive Comput. 2023, 15, 1–14. [Google Scholar] [CrossRef]
  8. Sania, U. Exploring the role of artificial intelligence (AI) in shaping youth’s worldviews in education. Ustozlar Uchun 2024, 57, 530–533. [Google Scholar]
  9. Greenwald, E.; Leitner, M.; Wang, N. Learning artificial intelligence: Insights into how youth encounter and build understanding of AI concepts. Proc. AAAI Conf. Artif. Intell. 2021, 35, 15526–15533. [Google Scholar] [CrossRef]
  10. Acquisti, A.; Brandimarte, L.; Loewenstein, G. Privacy and human behavior in the age of information. Science 2015, 347, 509–514. [Google Scholar] [CrossRef]
  11. Shrestha, A.K.; Barthwal, A.; Campbell, M.; Shouli, A.; Syed, S.; Joshi, S.; Vassileva, J. Navigating AI to unpack youth privacy concerns: An in-depth exploration and systematic review. In Proceedings of the 2024 IEEE 15th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Berkeley, CA, USA, 24–26 October 2024; IEEE: Berkeley, CA, USA, 2024. [Google Scholar]
  12. Smith, H.J.; Dinev, T.; Xu, H. Information privacy research: An interdisciplinary review. MIS Q. Manag. Inf. Syst. 2011, 35, 989–1015. [Google Scholar] [CrossRef]
  13. Beldad, A.; De Jong, M.; Steehouder, M. I trust not therefore it must be risky: Determinants of the perceived risks of disclosing personal data for e-government transactions. Comput. Hum. Behav. 2011, 27, 2233–2242. [Google Scholar] [CrossRef]
  14. Taddeo, M.; Floridi, L. How AI can be a force for good. Science 2018, 361, 751–752. [Google Scholar] [CrossRef]
  15. Rader, E.; Cotter, K.; Cho, J. Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018; Volume 2018, pp. 1–13. [Google Scholar] [CrossRef]
  16. Holloway, D.; Green, L. The Internet of toys. Commun. Res. Pract. 2016, 2, 506–519. [Google Scholar] [CrossRef]
  17. Sarangapani, P.M. A cultural view of teachers, pedagogy, and teacher education. In Handbook of Education Systems in South Asia; Sarangapani, P.M., Pappu, R., Eds.; Springer: Singapore, 2021; pp. 1247–1270. [Google Scholar] [CrossRef]
  18. Glaser, B.G.; Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research; Aldine Publishing Company: Chicago, IL, USA, 1967. Available online: http://www.sxf.uevora.pt/wp-content/uploads/2013/03/Glaser_1967.pdf (accessed on 21 January 2025).
  19. Nissenbaum, H. Privacy as contextual integrity. Wash. Law Rev. 2004, 79, 119. [Google Scholar]
  20. Westin, A.F. Social and political dimensions of privacy. J. Soc. Issues 2003, 59, 431–453. [Google Scholar] [CrossRef]
  21. Lee, C.H.; Gobir, N.; Gurn, A.; Soep, E. In the black mirror: Youth investigations into artificial intelligence. ACM Trans. Comput. Educ. 2022, 22, 25. [Google Scholar] [CrossRef]
  22. Bergström, A. Online privacy concerns: A broad approach to understanding the concerns of different groups for different uses. Comput. Hum. Behav. 2015, 53, 419–426. [Google Scholar] [CrossRef]
  23. Stoilova, M.; Nandagiri, R.; Livingstone, S. Children’s understanding of personal data and privacy online—A systematic evidence mapping. Inf. Commun. Soc. 2021, 24, 557–575. [Google Scholar] [CrossRef]
  24. Ho, M.-T.; Mantello, P.; Ghotbi, N.; Nguyen, M.-H.; Nguyen, H.-K.T.; Vuong, Q.-H. Rethinking technological acceptance in the age of emotional AI: Surveying Gen Z (Zoomer) attitudes toward non-conscious data collection. Technol. Soc. 2022, 70, 102011. [Google Scholar] [CrossRef]
  25. Davis, K.; James, C. Tweens’ conceptions of privacy online: Implications for educators. Learn. Media Technol. 2013, 38, 4–25. [Google Scholar] [CrossRef]
  26. Goyeneche, D.; Singaraju, S.; Arango, L. Linked by age: A study on social media privacy concerns among younger and older adults. Ind. Manag. Data Syst. 2024, 124, 640–665. [Google Scholar] [CrossRef]
  27. Gazulla, E.D.; Hirvonen, N.; Sharma, S.; Hartikainen, H.; Jylhä, V.; Iivari, N.; Kinnula, M.; Baizhanova, A. Youth perspectives on technology ethics: Analysis of teens’ ethical reflections on AI in learning activities. Behav. Inf. Technol. 2024, 44, 888–911. [Google Scholar] [CrossRef]
  28. Menon, D.; Shilpa, K. “Hey, Alexa”, “Hey, Siri”, “OK Google”…: Exploring teenagers’ interaction with artificial intelligence (AI)-enabled voice assistants during the COVID-19 pandemic. Int. J. Child-Comput. Interact. 2023, 38, 100622. [Google Scholar] [CrossRef]
  29. Zuboff, S. The age of surveillance capitalism. In Social Theory Re-Wired; Routledge: Oxfordshire, UK, 2023; pp. 203–213. [Google Scholar] [CrossRef]
  30. Kallina, E.; Singh, J. Stakeholder involvement for responsible AI development: A process framework. In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, San Luis Potosi, Mexico, 29–31 October 2024; ACM: New York, NY, USA, 2024; pp. 1–14. [Google Scholar] [CrossRef]
  31. Campbell, M.; Joshi, S.; Barthwal, A.; Shouli, A.; Shrestha, A.K. Applying communication privacy management theory to youth privacy management in AI contexts. In Proceedings of the 2025 IEEE 4th International Conference on AI in Cybersecurity (ICAIC); IEEE: Houston, TX, USA, 2025; pp. 1–10. [Google Scholar] [CrossRef]
  32. Campbell, M.; Barthwal, A.; Shouli, A.; Joshi, S.; Shrestha, A.K. Investigation of the privacy concerns in AI systems for young digital citizens: A comparative stakeholder analysis. In Proceedings of the 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2025; IEEE: Las Vegas, NV, USA, 2025. [Google Scholar]
  33. Shouli, A.; Barthwal, A.; Campbell, M.; Shrestha, A.K. Unpacking youth privacy management in AI systems: A privacy calculus model analysis. IEEE Access 2025, submitted.
  34. Miltgen, C.L.; Peyrat-Guillard, D. Cultural and generational influences on privacy concerns: A qualitative study in seven European countries. Eur. J. Inf. Syst. 2014, 23, 103–125. [Google Scholar] [CrossRef]
  35. Brandtzaeg, P.B.; Pultier, A.; Moen, G.M. Losing Control to Data-Hungry Apps: A Mixed-Methods Approach to Mobile App Privacy. Soc. Sci. Comput. Rev. 2019, 37, 466–488. [Google Scholar] [CrossRef]
  36. Bélanger, F.; Crossler, R.E. Privacy in the digital age: A review of information privacy research in information systems. MIS Q. 2011, 35, 1017. [Google Scholar] [CrossRef]
  37. Xu, H.; Dinev, T.; Smith, H.; Hart, P. Examining the formation of individual’s privacy concerns: Toward an integrative view. In Proceedings of the International Conference on Information Systems (ICIS), Paris, France, 14–17 December 2008; Volume 6. [Google Scholar]
  38. Malhotra, N.K.; Kim, S.S.; Agarwal, J. Internet users’ information privacy concerns (IUIPC): The construct, the scale, and a causal model. Inf. Syst. Res. 2004, 15, 336–355. [Google Scholar] [CrossRef]
  39. Livingstone, S.; Helsper, E.J. Parental mediation of children’s internet use. J. Broadcast. Electron. Media 2008, 52, 581–599. [Google Scholar] [CrossRef]
  40. Koh, C.E.; Prybutok, V.R.; Ryan, S.D.; Wu, Y. A model for mandatory use of software technologies: An integrative approach by applying multiple levels of abstraction of informing science. Informing Sci. Int. J. Emerg. Transdiscipl. 2010, 13, 177–203. [Google Scholar] [CrossRef]
  41. Clarke, R. Internet privacy concerns confirm the case for intervention. Commun. ACM 1999, 42, 60–67. [Google Scholar] [CrossRef]
  42. Dinev, T.; Hart, P. An extended privacy calculus model for e-commerce transactions. Inf. Syst. Res. 2006, 17, 61–80. [Google Scholar] [CrossRef]
  43. Schnackenberg, A.K.; Tomlinson, E.C. Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships. J. Manag. 2016, 42, 1784–1810. [Google Scholar] [CrossRef]
  44. Kehr, F.; Kowatsch, T.; Wentzel, D.; Fleisch, E. Blissfully ignorant: The effects of general privacy concerns, general institutional trust, and affect in the privacy calculus. Inf. Syst. J. 2015, 25, 607–635. [Google Scholar] [CrossRef]
  45. Milne, G.R.; Culnan, M.J. Strategies for reducing online privacy risks: Why consumers read (or don’t read) online privacy notices. J. Interact. Mark. 2004, 18, 15–29. [Google Scholar] [CrossRef]
  46. Pavlou, P.A. State of the information privacy literature: Where are we now and where should we go? MIS Q. Manag. Inf. Syst. 2011, 35, 977–988. [Google Scholar] [CrossRef]
  47. Puhakainen, P.; Siponen, M. Improving employees’ compliance through information systems security training: An action research study. MIS Q. 2010, 34, 757–778. [Google Scholar] [CrossRef]
  48. Buchanan, T.; Paine, C.; Joinson, A.N.; Reips, U.D. Development of measures of online privacy concern and protection for use on the Internet. J. Am. Soc. Inf. Sci. Technol. 2007, 58, 157–165. [Google Scholar] [CrossRef]
  49. Shrestha, A.K.; Joshi, S. Toward ethical AI: A qualitative analysis of stakeholder perspectives. In Proceedings of the 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2025; IEEE: Las Vegas, NV, USA, 2025; pp. 00022–00029. [Google Scholar] [CrossRef]
Figure 1. Heatmap analysis: stakeholder perspectives on AI privacy.
Figure 2. PEA-AI model.
Table 1. Constructs and definitions.
Construct | Definition
Data Ownership and Control | The degree to which young people have control over their personal data and engage in discussions about privacy.
Parental Data Sharing | The degree to which parents exercise their rights to share children’s data and consider the implications of doing so.
Perceived Risks and Benefits | The degree to which individuals perceive risks, ethical concerns, and benefits related to the use of personal data by AI systems.
Transparency and Trust | The degree to which transparency in data usage influences trust in AI systems.
Education and Awareness | The degree to which stakeholders are informed about privacy and ethical issues associated with AI.
Table 2. Constructs and items.
Construct | Items
Data Ownership and Control (DOC) | doc1: Importance of users having control over their personal data. doc2: Frequency of considering user data control in work. doc3: Feasibility/comfortability of implementing data control mechanisms.
Parental Data Sharing (PDS) | pds1: Handling data shared by parents on behalf of children. pds2: Importance of obtaining consent from young users.
Perceived Risks and Benefits (PRB) | prb1: Concern about ethical/privacy implications. prb2: Significance of benefits in justifying data use. Open-Ended Question: Primary risks associated with personal data use. Open-Ended Question: Benefits AI systems provide by using personal data.
Transparency and Trust (TT) | tt1: Importance of transparency about data usage. tt2: Perception of transparency in current AI systems. tt3: Belief that increasing transparency improves user trust.
Education and Awareness (EA) | ea1: Knowledge about privacy issues related to AI systems. ea2: Belief that users receive adequate training on privacy. ea3: Importance of being educated on privacy and ethical issues/adequacy of privacy information.
Table 3. Participants’ demographics.
Respondents’ Characteristics | Survey (%) | Interview (%) | Focus Group (%) | Survey (n) | Interview (n) | Focus Group (n)
Young Digital Citizens | 35.6 | 0.0 | 0.0 | 151 | 0 | 0
Parents | 12.7 | 8.3 | 11.1 | 54 | 1 | 1
Educators | 10.8 | 41.7 | 33.3 | 46 | 5 | 3
Both Parents and Educators | 6.4 | 25.0 | 11.1 | 27 | 3 | 1
AI Developers | 10.8 | 16.7 | 33.3 | 46 | 2 | 3
AI Researchers | 23.1 | 8.3 | 11.1 | 98 | 1 | 1
Both AI Developers and Researchers | 0.5 | 0.0 | 0.0 | 2 | 0 | 0
Table 4. Comparative analysis (mean score).
Theme (Privacy Item) | Youth | Parents/Educators | AI Professionals
Data Control Importance | 4.09 | 4.46 | 4.40
Perceived Data Control | 3.35 | 3.40 | 3.64
Comfort Data Sharing | 2.83 | 3.40 | 3.79
Parental Data Sharing | 2.52 | 2.46 | 1.82
Parental Data Rights | 2.52 | 3.39 | 2.90
AI Privacy Concerns | 4.09 | 4.25 | 4.32
Perceived Data Benefits | 3.87 | 3.50 | 4.53
Data Usage Transparency | 4.19 | 4.35 | 4.17
Transparency Perception | 2.41 | 1.96 | 2.12
System Data Trust | 3.09 | 4.08 | 4.18
Privacy Protection Knowledge | 3.05 | 3.29 | 4.05
Digital Privacy Education | 2.93 | 2.50 | 3.79
AI Privacy Awareness | 3.69 | 4.50 | 4.63
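As a purely illustrative sketch (not the study’s actual analysis pipeline), group-level means of the kind reported in Table 4 could be computed from item-level survey responses and rendered as a heatmap similar to Figure 1 using pandas and matplotlib; the column names and sample values below are hypothetical placeholders rather than the study’s data.

    # Illustrative sketch: compute per-group mean scores (cf. Table 4) and plot a
    # heatmap (cf. Figure 1). Data values and column names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical item-level responses on a 1-5 Likert scale, one row per respondent.
    responses = pd.DataFrame({
        "group": ["Youth", "Youth", "Parents/Educators", "AI Professionals"],
        "Data Control Importance": [4, 5, 4, 5],
        "Transparency Perception": [2, 3, 2, 2],
        "AI Privacy Awareness": [4, 3, 5, 5],
    })

    # Mean score per stakeholder group for each privacy item.
    means = responses.groupby("group").mean(numeric_only=True)

    # Heatmap of group means.
    fig, ax = plt.subplots(figsize=(6, 3))
    im = ax.imshow(means.values, cmap="viridis", vmin=1, vmax=5, aspect="auto")
    ax.set_xticks(range(len(means.columns)))
    ax.set_xticklabels(means.columns, rotation=45, ha="right")
    ax.set_yticks(range(len(means.index)))
    ax.set_yticklabels(means.index)
    fig.colorbar(im, ax=ax, label="Mean score (1-5)")
    fig.tight_layout()
    fig.savefig("stakeholder_heatmap.png")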
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
