Review

Perspectives on Managing AI Ethics in the Digital Age

by Lorenzo Ricciardi Celsi 1,* and Albert Y. Zomaya 2
1 Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti, Sapienza Università di Roma, Via Ariosto 25, 00185 Roma, Italy
2 Centre for Distributed and High-Performance Computing, School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia
* Author to whom correspondence should be addressed.
Information 2025, 16(4), 318; https://doi.org/10.3390/info16040318
Submission received: 19 February 2025 / Revised: 12 April 2025 / Accepted: 16 April 2025 / Published: 17 April 2025
(This article belongs to the Special Issue Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?)

Abstract

The rapid advancement of artificial intelligence (AI) has introduced unprecedented opportunities and challenges, necessitating a robust ethical and regulatory framework to guide its development. This study reviews key ethical concerns such as algorithmic bias, transparency, accountability, and the tension between automation and human oversight. It discusses the concept of algor-ethics—a framework for embedding ethical considerations throughout the AI lifecycle—as an antidote to algocracy, where power is concentrated in those who control data and algorithms. The study also examines AI’s transformative potential in diverse sectors, including healthcare, Insurtech, environmental sustainability, and space exploration, underscoring the need for ethical alignment. Ultimately, it advocates for a global, transdisciplinary approach to AI governance that integrates legal, ethical, and technical perspectives, ensuring AI serves humanity while upholding democratic values and social justice. In the second part of the paper, the authors offer a synoptic view of AI governance across six major jurisdictions—the United States, China, the European Union, Japan, Canada, and Brazil—highlighting their distinct regulatory approaches. While the EU’s AI Act and Japan’s and Canada’s frameworks prioritize fundamental rights and risk-based regulation, the US strategy leans towards fostering innovation through executive directives and sector-specific oversight. In contrast, China’s framework integrates AI governance with state-driven ideological imperatives, enforcing compliance with socialist core values, whereas Brazil’s framework, despite its commitment to fairness and democratic oversight, still lacks the institutional depth of the more mature frameworks mentioned above. Finally, strategic and governance considerations are provided to help chief data/AI officers and AI managers successfully leverage the transformative potential of AI for value creation, also in light of emerging international AI standards.

1. Introduction

The digital era has profoundly transformed society, generating extraordinary opportunities but also complex challenges. In this context, reflecting on the relationship between technology and ethics becomes crucial for building a fair and sustainable future. From the progress of artificial intelligence (AI) to the evolution of the internet, the need for shared principles to guide innovation is an increasingly pressing issue.
Technological progress is a human achievement that, when properly oriented, enhances the dignity of the person and improves living conditions. Technology must contribute to strengthening fraternal communion and promoting peace. However, its evolution can also generate risks, such as the concentration of power in the hands of a few or the degradation of human relationships.
Among the most relevant innovations, AI represents an emblematic example of this ambivalence. Defined by the computer scientist John McCarthy as a discipline aimed at imitating some human cognitive functions, AI has today spread to sectors such as healthcare, industry, and education, showing both enormous benefits and serious risks to human freedom and dignity. The European regulation on AI (the so-called AI Act) highlights the importance of regulatory oversight to ensure inclusion, transparency, and respect for fundamental rights. However, the debate is not limited to the legal dimension: AI raises profound ethical questions related to human responsibility in designing tools that influence the lives of millions of people [1].
The Pontifical Academy for Life has positioned itself as a privileged interlocutor in the dialog on technological ethics. Initiatives like the RenAIssance Foundation, led by Father Paolo Benanti, testify to an approach that integrates the humanities, social sciences, and natural sciences to address the challenges of AI. The “Rome Call for AI Ethics”, presented in 2020, represents one of the most significant contributions in this field. The document establishes fundamental principles such as transparency, inclusion, and responsibility, promoting an integrated vision that combines innovation and social justice [2].
In an increasingly unequal global context, the commitment of the Pontifical Academy for Life aims to ensure that technology is a tool in the service of humanity. The collaboration with companies like Microsoft and IBM demonstrates that transdisciplinary dialog between philosophers, theologians, scientists, and technologists is not only possible but necessary for developing global governance of AI [3].
If AI symbolizes the challenges of the present and the future, the internet represents those of the recent past and those already underway. Paolo Benanti, a theologian and prominent figure in technology ethics [4], has analyzed the evolution of the digital world from its original utopia to the current crisis. Benanti identifies two historical phases: the first, culminating in the Arab Spring of 2010, saw the internet as a space for liberation and global sharing. The second, from 2011 onwards, showed the dark side of the network: fake news, polarization, and manipulation. Events such as the assault on the US Capitol in 2021 highlighted how digital platforms, far from being neutral, reflect the ideologies of their creators and lend themselves to dynamics of control and profit. According to Benanti, the digital world is not just a set of technological tools but a dimension that transforms skills, values, and human relationships. The pandemic accelerated this transition, making it evident that ethical management of platforms is necessary. Europe, with its regulations, has shown that it is possible to intervene to ensure that the digital world serves the common good.
The paper is structured as follows. In Section 2, we explain the motivation behind the review, as well as how the literature was selected, analyzed, and consolidated. In Section 3, we discuss the emergence of algor-ethics as an approach to ethics for algorithms that takes into account the trade-off between technological progress on the one hand and ethical responsibility on the other. In Section 4, we discuss AI regulation across the different jurisdictions of the USA, China, the European Union, Japan, Canada, and Brazil, offering a synoptic view of the respective ways of tackling this emerging issue. In Section 5, we provide strategic and governance considerations that should help chief data/AI officers and AI managers successfully leverage the transformative potential of AI for value creation purposes; we also highlight the emerging international AI standards (i.e., ISO/IEC 22989:2022 [5] and ISO/IEC 42001:2023 [6]). Concluding remarks end the paper.

2. Motivation and Rationale Behind the Literature Review

2.1. Motivation

The motivation for this study arises from the recognition that AI is evolving rapidly within fragmented regulatory landscapes, varied ethical interpretations, and increasing societal impact. While much work has been done on AI ethics from either philosophical or policy perspectives, few contributions have explicitly sought to unify these dimensions with operational frameworks that can be used by practitioners. The study aims to address this gap by proposing a comprehensive, transdisciplinary framework that draws on normative ethics, comparative AI regulation, and emerging technical standards to offer strategic guidance for real-world AI implementation.
In other words, this study’s original contribution lies in articulating a transdisciplinary framework—grounded in the concept of algor-ethics—that integrates philosophical reflection, comparative regulatory analysis, and actionable governance strategies (e.g., ISO/IEC 42001) for AI implementation. This integrative perspective is designed to support decision-makers, including policymakers, AI managers, and chief data/AI officers, in ensuring that AI technologies are developed and deployed in alignment with democratic values, human rights, and social justice.
A key takeaway from this study is that regulatory frameworks alone are insufficient to ensure responsible AI governance. Standardization efforts, particularly ISO/IEC 42001:2023 and ISO/IEC 22989:2022, provide an essential foundation for aligning AI development with ethical principles.
By reaffirming the transdisciplinary contribution proposed in this study—rooted in the concept of algor-ethics and combining normative insight, empirical evidence, and governance mechanisms—we emphasize the practical implications for decision-makers. Chief AI and data officers, AI governance leads, and public regulators can draw from this framework to structure compliance, foster organizational maturity, and ensure that AI systems are deployed in a manner that is not only lawful, but meaningfully ethical and human-centered.
Looking ahead, AI governance must transcend jurisdictional silos and foster a global, transdisciplinary dialog involving policymakers, industry leaders, ethicists, and civil society. A human-centric approach to AI regulation should prioritize the following:
  • Ethical frameworks, such as algor-ethics, to mitigate bias and discrimination;
  • Clear accountability mechanisms ensuring liability for AI-driven decisions;
  • Global cooperation on AI risk assessment, particularly for high-risk systems such as generative AI and autonomous decision-making models;
  • Public awareness initiatives to promote AI literacy and prevent misuse.
The AI revolution is not just technological—it is profoundly ethical and social. The question is not whether AI should be regulated, but how to govern AI in a way that safeguards human dignity, fosters innovation, and ensures long-term sustainability. Without proactive governance, we risk being governed by AI systems rather than governing them. By leveraging robust regulatory frameworks and international standards such as ISO/IEC 42001 and ISO/IEC 22989 and fostering cross-sectoral collaboration, it is possible to shape a future where AI serves as a force for societal good rather than an instrument of unchecked power. Only through co-responsibility among governments, businesses, and civil society can AI truly become a tool for progress, equity, and human flourishing.

2.2. Literature Selection and Scope

This study adopts a structured, transdisciplinary review methodology that combines conceptual analysis with systematic literature selection, guided by PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) principles [7]. Our approach bridges philosophical foundations, regulatory frameworks, and practical governance strategies to develop an integrative perspective on AI ethics.
We conducted comprehensive searches across three major databases—Scopus, IEEE Xplore, and Google Scholar—using targeted keyword combinations. The search focused on publications from January 2020 to March 2025, with exceptions for seminal theoretical works (e.g., Floridi, Ellul, Heidegger, Rifkin) that inform the philosophical underpinnings of our framework.
For the sources ultimately included in the study (more than sixty after screening), the inclusion criteria were as follows:
  • Peer-reviewed articles/conference papers addressing AI ethics, governance, or regulation;
  • Policy documents from authoritative bodies (OECD, EU, NIST, Vatican);
  • Empirical case studies of AI failures (e.g., COMPAS, healthcare triage bias);
  • Relevance to one of the core dimensions: ethics, regulation, strategic governance, or standardization;
  • Contemporary focus, prioritizing the most recent works where possible.
Excluded sources were mainly non-English texts, opinion pieces lacking empirical/theoretical grounding, and redundant or derivative works. The selection process began with an initial screening of more than two hundred records; fifty were removed for irrelevance or duplication, leaving one hundred and fifty records for full-text review, from which the final corpus of sources listed in the reference section at the end of the document was identified.
The retained sources were analyzed and grouped into three macro-thematic domains:
  • Philosophical and ethical foundations (e.g., algor-ethics, techno-humanism);
  • Comparative governance and regulation (e.g., AI Act, US Executive Orders, Chinese AI regulation guidelines);
  • Standards and managerial frameworks (e.g., ISO/IEC 22989:2022, ISO/IEC 42001:2023, DAMA DMBOK).
Each domain was analyzed to extract key arguments, guiding principles, and policy implications. Then, we integrated these insights, with a focus on constructing a coherent transdisciplinary framework for AI governance. In this way, through rigorous selection and synthesis, we carried out the review as systematically as possible. We maintain that the overall aim of the study is to bridge ethical theory and organizational practice through an integrative lens. This approach does the following:
  • Ensures replicability via PRISMA-ScR protocols;
  • Preserves conceptual depth through foundational texts;
  • Bridges theory–practice gaps by linking ethical principles to empirical cases.
The synthesis aims to align abstract ethical discourse with actionable governance tools (e.g., ISO/IEC 42001 compliance), fulfilling the study’s integrative objective.

3. Algor-Ethics: Ethics for Algorithms Between Technological Progress and Ethical Responsibility

The concept of algor-ethics, introduced in the contemporary debate, summarizes the urgency of moral control over algorithms. These, often considered neutral tools, instead have a profound impact on individual and collective decisions. The Rome Call for AI Ethics underscores the importance of continuous verification throughout the life cycle of technologies, from design to use. This requires interdisciplinary skills and an education that trains new generations for the conscious use of technologies. Algor-ethics, therefore, serves as an antidote to algocracy, that is, the risk that decision-making power is concentrated in the hands of those who control data and algorithms. The ethical governance of AI becomes a priority to preserve human dignity and ensure a balance between technological innovation and social justice.
Human beings have now become “techno-humans” who are intrinsically linked to technology. This perspective does not imply either transhumanism or posthumanism but recognizes technology as a necessary extension of human nature. However, to avoid this condition degenerating into dehumanization, innovation must be oriented through questions of meaning and shared values.
The challenge is to create a balance between innovation and ethical responsibility, overcoming the temptation of technology for its own sake. As emphasized in [8], it is not about demonizing the digital world but about confronting its implications with awareness and critical thinking.
Thus, the ethics of technology is not a marginal issue but a necessity to guide the digital evolution toward the common good. Artificial intelligence, the internet, and other innovations of our time are not merely tools but reflections of our collective choices. Only through transdisciplinary dialog and shared governance will it be possible to ensure a future in which technology truly serves humanity.
In this respect, the challenge of co-responsibility between humans and machines arises. Reference [9] begins with a provocative statement: “Religion concerns the salvation of the soul in the heavens, and technology concerns the preservation of data in the cloud”. This parallel highlights how technology, in its pervasiveness, has taken on a role traditionally reserved for religion and other great ideological narratives. In [10], the French sociologist and theologian Jacques Ellul, as early as 1954, emphasized that technique was “the crucial issue of the century”. This transition is rooted in the void left by the crisis of great ideologies and secularization, which has deprived many Western societies of a shared ethical framework.
In this context, technology and science have become the pillars of modern thought. However, although science has revolutionized the way we live, it lacks transcendence, focusing exclusively on immanence. This “dictatorship of technique”, as Jürgen Habermas defines it [11], is particularly evident in artificial intelligence (AI) and biotechnology, fields where ethical boundaries are becoming increasingly blurred.
While technological progress has improved the quality of life, it also carries significant risks. In [12], Guido Tonelli, a physicist and key figure in the discovery of the Higgs boson, warns that entrusting science with the task of solving human problems can generate dangerous illusions. The desire for immortality, fueled by advancements in medicine and biotechnology, is an emblematic example.
The absence of ethical reflection risks transforming the human being into a mere cog in the technological system. As Martin Heidegger noted in [13], modern technology tends to reduce the human being to an object, ignoring its spiritual and transcendent dimensions.
Indeed, converging technologies—nanotechnologies, biotechnologies, information technologies, and cognitive sciences—are redefining the boundaries of the human. The idea of an enhanced being through genetic or cybernetic interventions raises profound questions about our identity. As Jeremy Rifkin observes in [14], biotechnological progress requires a revision of fundamental values: it is not enough to ask what we can do with technology, but what we should do.
In particular, our perception of how AI works is often far from reality. As humans, we tend to rely on what we see and hear, not having access to the deep mechanisms that govern everyday tools like smartphones or voice assistants. Technologies such as ChatGPT have intensified this perception: they are capable of seeming intelligent and even human, increasing the risk that the distinction between machine and human being blurs.
A crucial role in the debate about AI concerns the moral notion of personal action. Humans, endowed with consciousness and free will, can make ethically aware decisions. Machines, on the other hand, act based on external programming, even when their results appear unpredictable. This distinction is essential because attributing moral intentionality to machines would mean ignoring their purely instrumental nature.
Also, as stated by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education of the Catholic Church [15], the personal act finds itself at the point of convergence between proper human contribution and automatic calculation. Ethics, therefore, must confront new challenges, integrating the multiple levels of responsibility and interaction required by advanced technological systems.
Therefore, it is essential to develop a reflection that goes beyond the simple juxtaposition of science and ethics. A transdisciplinary perspective, considering the relationships between systems and the qualities emerging from such interactions, becomes fundamental to addressing the complexity of today’s technologies.
With the proliferation of AI systems and their applications, new forms of action are developing, often difficult to categorize using traditional language. Expressions like “distributed forms of action” or “distributed responsibility” attempt to describe the complexity of these interactions. But what does it really mean to be responsible in a technological context?
An emblematic example is represented by multi-agent systems, where different machines, often equipped with AI, collaborate to achieve a common goal. In these cases, the final outcome results from the interaction between human and artificial components, making it difficult to attribute responsibility to a single actor.
However, one thing remains clear: intentionality, the cornerstone of moral action, is an exclusive prerogative of human beings. While machines can be morally assessed based on their effects, such judgments always derive from human choices, both in terms of design and use. As many scholars have pointed out, talking about the autonomy of machines in moral terms can be misleading. Machines do not think; they calculate, and there is a risk of confusing their operational efficiency with an intelligence comparable to human intelligence.

3.1. The Distinctive Traits of Algor-Ethics

The concept of algor-ethics, introduced by the Pontifical Academy for Life, summarizes the urgency of embedding ethical control within the algorithmic systems that increasingly mediate our societies. Unlike existing models that treat AI ethics as a peripheral concern or frame it in abstract principles with limited traction, algor-ethics proposes a normative-operational framework that integrates moral reasoning throughout the AI lifecycle—from design to deployment, from regulation to organizational governance.
What sets algor-ethics apart from frameworks such as “trustworthy AI” (e.g., the guidelines proposed by the EU’s High-Level Expert Group on AI) or “responsible AI” (e.g., Microsoft’s guidelines) is its explicit linkage between ethical values and their institutionalization within socio-technical systems. While existing models often stop at principle-level recommendations (e.g., fairness, accountability, transparency), algor-ethics introduces three defining innovations.
  • A holistic, humanistic foundation: Rooted in a vision of human dignity, algor-ethics insists on keeping human beings—not just as users but as moral agents—at the center of algorithmic ecosystems. This goes beyond risk mitigation to focus on meaning, purpose, and justice.
  • Co-responsibility and distributed ethics: Algor-ethics moves past individual accountability to frame responsibility as distributed across designers, deployers, regulators, and users. This counters the limitations of blame-based approaches in multi-agent systems.
  • Transdisciplinarity as praxis: Rather than viewing philosophy, engineering, policy, and theology as separate domains, algor-ethics weaves them together. This is not just epistemological—it results in practical governance blueprints, as illustrated by its synergy with standards such as ISO/IEC 22989:2022 and ISO/IEC 42001:2023.
In doing so, algor-ethics offers a third way between techno-optimism and techno-pessimism: it reclaims ethical intentionality in socio-technical design, while maintaining pragmatic sensitivity to organizational realities.
To operationalize algor-ethics, a multi-layered implementation roadmap can be proposed according to the following phases.
  • Design phase: Ethical risk assessment is embedded alongside technical feasibility studies. Design documentation includes ethical assumptions, stakeholder mapping, and trade-off rationales.
  • Development phase: Algorithms are stress-tested for biases, using fairness-aware modeling and explainability constraints. Developers undergo ethics-by-design training based on algor-ethics principles.
  • Deployment phase: Monitoring dashboards include ethical performance metrics (e.g., inclusion rate, transparency score). Human-in-the-loop validation ensures meaningful oversight.
  • Governance phase: A standing ethics board (internal or external) co-defines risk thresholds and escalates non-compliance. This aligns with ISO/IEC 42001's governance clauses but anchors them in a shared ethical framework (see Section 5 for further detail on the above-mentioned ISO/IEC standard).
Table 1 sums up the above-mentioned multi-layered implementation roadmap.
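To make the development- and deployment-phase checks above more concrete, the following minimal Python sketch shows one way a bias stress test and a dashboard-ready fairness metric might be computed. It is an illustrative sketch only: the group labels, the sample data, and the 0.8 threshold are assumptions introduced for this example, not values prescribed by the roadmap, by the AI Act, or by ISO/IEC 42001.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Positive-outcome rate per demographic group.

    `decisions` is an iterable of (group_label, outcome) pairs, where
    outcome is 1 for a favorable decision and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical stress-test output: (demographic group, favorable decision)
audit_sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(audit_sample)
ratio = disparate_impact_ratio(rates)

FAIRNESS_THRESHOLD = 0.8  # illustrative escalation threshold, not a normative value
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < FAIRNESS_THRESHOLD:
    print("Escalate to the ethics board: fairness threshold not met.")
```

In practice, a check of this kind would feed the ethical performance metrics mentioned for the deployment phase and, when the threshold is not met, trigger the escalation path defined in the governance phase.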
Importantly, algor-ethics is not limited to Western liberal democracies. Its emphasis on personhood, subsidiarity, and solidarity makes it adaptable across contexts—whether in Global South healthcare AI deployments or multi-stakeholder climate forecasting platforms.
Algor-ethics is particularly tailored to the following:
  • AI managers and chief AI/data officers: as a framework for aligning ethical principles with technical governance (e.g., in ISO/IEC 42001 implementation);
  • Regulators and policy analysts: to bridge normative principles and compliance instruments;
  • Researchers and ethicists: offering an integrative scaffolding for empirical, conceptual, and normative studies.
In summary, algor-ethics is not merely a philosophical call for responsibility—it is a practical roadmap for ethical AI governance rooted in human dignity, systemic thinking, and cross-sectoral collaboration.

3.2. The Need for Transdisciplinarity

The rapid technological evolution requires an ethics that surpasses traditional disciplinary divisions. It is not enough to simply add scientific results and humanistic reflections; transdisciplinary development is necessary, one that integrates different knowledge into a systemic framework.
According to [16], such an approach unfolds on three levels:
  • Emergence of new qualities: Within a system, interactions between different elements produce characteristics that cannot be deduced from the individual components.
  • Connection between systems: Technological systems cannot be analyzed in isolation but in relation to the social, economic, and cultural contexts in which they operate.
  • Moral discernment: The interaction between human agents and technological artifacts requires a critical judgment that considers the ethical implications of choices.
This systemic approach does not imply the renunciation of individual responsibility but highlights how each technological decision is embedded in a complex network of relationships. For example, responsibility cannot be limited to the end consumer who uses a digital system. Similarly, relying solely on the ethical sensitivity of researchers who design algorithms is insufficient.
In this respect, the paradigm of algor-ethics is proposed in [17] as a specific ethics for the development of algorithms. This approach aims to integrate moral principles into all phases of the technological systems’ life cycle, from design to implementation.
Algor-ethics centers around three fundamental principles:
  • Dignity of the person: Every technological system must respect the rights and dignity of individuals, avoiding discrimination or exclusion.
  • Justice and fairness: Algorithms must be designed to ensure fair outcomes without favoring certain groups at the expense of others.
  • Transparency and responsibility: It is essential that AI systems are understandable and that their creators can be held accountable for their decisions.
The goal is to create a culture of co-responsibility that involves not only researchers and developers but also businesses, legislators, and civil society.
Luciano Floridi, with the concept of the “infosphere” in [18], underscores how the digital era has transformed our way of living. Technology is neither inherently good nor bad. Its value depends on how we use it and the intentions driving its development. This change requires a new humanism capable of integrating technological achievements with an ethical vision.

3.3. Case Studies: Bridging Ethical Reflection with Data-Driven Insights and Regulatory Evidence

The concept of algor-ethics emerges as a response to documented failures of AI systems across sectors. Unlike abstract ethical frameworks, algor-ethics is grounded in operationalizable principles derived from real-world cases. Below, we reconstruct its foundations through empirical evidence, demonstrating how it addresses gaps in existing approaches.
The OECD AI Principles [19], adopted in 2019 and expanded in subsequent updates, serve as a foundational reference for trustworthy AI governance. These principles emphasize human-centered values, transparency, robustness, accountability, and inclusivity. The accompanying OECD AI Incidents Tracker [20] catalogs over 200 real-world failures of AI systems across sectors—ranging from biased hiring algorithms to autonomous vehicle malfunctions—providing a taxonomy of risk categories and governance gaps. This resource informed the section on distributed responsibility and the need for transdisciplinary oversight.
Developed by the US National Institute of Standards and Technology, the AI Risk Management Framework (RMF) [21] offers a structured approach to identifying, evaluating, and mitigating AI risks across technical, social, and organizational dimensions. The RMF’s core components—Map, Measure, Manage, Govern—were used to inform the discussion on ISO/IEC 42001:2023 and the DAMA governance framework (see Section 5 for further detail in this respect). Particularly relevant are NIST’s risk profiles for large language models and high-stakes use cases, which directly align with the governance challenges addressed in this study.
The Stanford AI Index Report [22] produced by Stanford’s Human-Centered Artificial Intelligence (HAI) Institute offers quantitative indicators on AI capabilities, public perception, regulation, and harm. The 2024 edition notes the following:
  • Global AI investment reached USD 189 billion in 2023, with significant concentration in the US and China;
  • Instances of documented algorithmic harm have doubled since 2020;
  • The adoption of AI in healthcare, insurance, and finance continues to outpace regulatory readiness.
These insights are used to support the urgency of regulatory convergence and real-world implementation of standards like ISO/IEC 42001 (see Section 5 for further detail in this respect).
Moreover, to demonstrate the societal risks posed by AI systems, we can reference some key empirical studies, such as the following.
  • COMPAS risk scoring tool: Investigative work by ProPublica [23] revealed racial bias in the COMPAS algorithm used to predict criminal recidivism, misclassifying Black defendants at disproportionately higher rates. This case has become emblematic of opaque, high-impact AI in the justice system.
  • Healthcare triage bias: Obermeyer et al. [24] found that commercial algorithms used for patient prioritization in the US healthcare system underestimated the needs of Black patients by using cost as a proxy for health—an ethical and statistical failure that illustrates the importance of representative data and fairness-aware design.
  • AI in legal automation: McCradden et al. [25] raise concerns about deploying machine learning models in clinical and legal domains without clear accountability frameworks or model explainability, pointing to both epistemic and liability gaps.
We also report instances where empirical harms led to concrete regulatory responses:
  • Facial recognition bans: After documented high false-positive rates for non-white individuals, multiple U.S. cities (e.g., San Francisco, Portland, Boston) enacted moratoria or bans on government use of facial recognition technology. These actions underscore how local governance can intervene to prevent algorithmic discrimination when national policy lags.
  • AI and the COVID-19 pandemic: The early months of the pandemic saw widespread reliance on AI-driven diagnostics, resource forecasting, and misinformation moderation—often without adequate validation. Models predicting ICU needs, for instance, were deployed before peer review, and content moderation algorithms failed to flag false information about vaccines. These cases reveal the dangers of premature deployment and the lack of agile regulatory oversight under emergency conditions.
All in all, algor-ethics is therefore not merely theoretical but born from systemic deficiencies observed in the following:
  • Criminal justice: the COMPAS recidivism algorithm’s racial bias [23] revealed how “neutral” tools perpetuate discrimination when fairness audits are absent—a core focus of algor-ethics’ justice principle (see Section 3.2).
  • Healthcare: the study of biased triage algorithms in [24] showed how training data skewed by cost (not clinical need) violated dignity and transparency. Algor-ethics mandates representative data validation at the design stage.
  • Employment: Amazon’s gender-biased hiring tool [26] exemplified the accountability gap—no party was liable for harm. Algor-ethics assigns clear ownership via its dynamic, lifecycle-based approach.
These cases underscore why static ethics guidelines fail: they lack mechanisms to prevent harm. Algor-ethics embeds checks at each phase (design, deployment, monitoring), as later demonstrated in Section 5.
As anticipated in Section 3.2, algor-ethics translates empirical lessons into three actionable principles:
1. Dignity of the person
  • Empirical anchor: Facebook’s emotional contagion experiment [27] manipulated users without consent, violating autonomy.
  • Algor-ethical response: Human oversight protocols (e.g., clinician review for medical AI) and consent workflows for data use are needed.
2. Justice and fairness
  • Empirical anchor: Racial bias in mortgage-approval algorithms [28] showed how historical data entrenches inequity.
  • Algor-ethical response: Mandating bias red-teaming (e.g., NIST’s AI RMF) and equity impact assessments pre-deployment.
3. Transparency and accountability
  • Empirical anchor: Clearview AI’s opaque facial recognition use [29] highlighted risks of undocumented systems.
  • Algor-ethical response: Enforcement of audit trails (ISO/IEC 42001) and explainability thresholds (e.g., EU AI Act’s Article 13).
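To give the transparency and accountability principle a more tangible form, the short Python sketch below logs each AI decision to an append-only audit trail. This is a minimal illustration under assumed record fields (model version, inputs, output, human reviewer); it is not an implementation mandated by ISO/IEC 42001 or by Article 13 of the EU AI Act.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(model_version: str, inputs: dict, output: str,
                 reviewer: Optional[str] = None,
                 logfile: str = "ai_audit_trail.jsonl") -> None:
    """Append one AI decision to an audit trail stored as JSON Lines.

    Recording the model version, inputs, output, and any human reviewer
    supports post hoc accountability and explainability reviews.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record a triage recommendation and its human reviewer
log_decision("triage-model-0.3", {"age": 54, "symptom_code": "R07"},
             "priority: high", reviewer="clinician_42")
```

A trail of this kind can later be queried during post-incident reviews or conformity assessments, linking each outcome back to the system version and the humans involved.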
In more detail, to demonstrate further the practical relevance of the proposed algor-ethical framework, we apply it to two critical and ethically sensitive challenges in contemporary AI systems. These case studies highlight how ethical principles can be operationalized across the design, deployment, and monitoring phases of AI lifecycle management.

3.3.1. Case Study 1: Diagnostic Bias in AI Medical Tools

Problem Statement: Empirical evidence indicates that certain AI-driven diagnostic tools exhibit significant accuracy disparities across skin tones, often underperforming on darker skin [25]. This raises profound ethical concerns related to justice, equity, and human dignity in healthcare.
Algor-Ethical Interventions:
  • Design Phase: Ensure representativeness in training datasets by incorporating demographically diverse patient images and clinical data. This step directly addresses the principle of justice by mitigating structural bias in data.
  • Deployment Phase: Implement human-in-the-loop mechanisms, such as clinician override capabilities, to uphold dignity by preserving clinical agency in high-stakes decisions.
  • Monitoring Phase: Conduct disaggregated performance evaluations based on demographic variables to promote transparency and enable continual fairness auditing in real-world settings.
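As a hedged illustration of the monitoring-phase intervention described above, the following Python sketch computes disaggregated sensitivity (true-positive rate) per demographic subgroup from a batch of evaluation records. The subgroup labels and sample data are hypothetical and serve only to show the shape of such an evaluation.

```python
from typing import Dict, Iterable, Tuple

def sensitivity_by_group(records: Iterable[Tuple[str, int, int]]) -> Dict[str, float]:
    """Per-group sensitivity (true-positive rate) of a diagnostic model.

    Each record is (group, ground_truth, prediction), with 1 meaning the
    condition is present / predicted present.
    """
    tp: Dict[str, int] = {}
    fn: Dict[str, int] = {}
    for group, truth, pred in records:
        if truth == 1:
            tp[group] = tp.get(group, 0) + (1 if pred == 1 else 0)
            fn[group] = fn.get(group, 0) + (1 if pred == 0 else 0)
    return {g: tp[g] / (tp[g] + fn[g]) for g in tp if tp[g] + fn[g] > 0}

# Hypothetical monitoring batch: (skin-tone group, ground truth, model prediction)
monitoring_batch = [("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 0),
                    ("darker", 1, 1), ("darker", 1, 0), ("darker", 1, 0)]

for group, tpr in sensitivity_by_group(monitoring_batch).items():
    # Large gaps between subgroups should trigger a fairness review
    print(f"{group}: sensitivity = {tpr:.2f}")
```

Large gaps between subgroup sensitivities would then be flagged for the continual fairness auditing described above.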

3.3.2. Case Study 2: Ethical Trade-Offs in AV Decision-Making

Problem Statement: Ethical tensions arise in autonomous vehicle control algorithms that may systematically favor passenger safety over that of pedestrians, potentially encoding morally controversial trade-offs [30].
Algor-Ethical Interventions:
  • Design Phase: Incorporate mechanisms for public deliberation and stakeholder consultation to inform the ethical weighting of outcomes, aligning system objectives with societal conceptions of justice.
  • Deployment Phase: Equip AVs with forensic tools such as black-box recorders to support accountability through post-incident auditing and responsibility attribution.
  • Monitoring Phase: Introduce continuous dynamic risk recalibration based on evolving empirical data to adaptively safeguard dignity in unforeseen or novel driving contexts.

3.4. Literature Review of the Application Scenarios Where Algor-Ethics Will Be Relevant

AI is one of the most influential and promising technological innovations of our era. From medicine to peace, labor to environmental protection, AI is transforming every vital sector of humanity. However, alongside these extraordinary opportunities, significant ethical, social, and political dilemmas also arise that we cannot ignore. Responsible use of AI requires profound reflection on its application and impact on human life. It is not enough to rely on technology to improve our quality of life; we must accompany it with critical and conscious reflection on the ethical and social implications it entails.
Across the application subdomains—e.g., healthcare, environmental applications, and autonomous systems—several key ethical themes recur. Issues of fairness, accountability, transparency, and safety surface in all application areas, albeit manifested in different ways. For example, bias is a concern in clinical AI diagnoses and in climate models (fairness across populations and regions) alike. Transparency and explainability are needed both for patients trusting an AI recommendation and for the public trusting a self-driving car’s decisions. In all cases, there is a tension between the promise of AI innovation and the protection of human values. Yet each domain also brings unique focal points: healthcare ethics stresses patient autonomy and informed consent, environmental ethics introduces non-human stakeholders and sustainability, and autonomous systems ethics grapples with machine agency and control.
A notable trend in recent AI ethics research is the move from high-level principles toward domain-specific guidelines and interdisciplinary engagement. Early AI ethics discussions often produced broad principles or boards of ethics, but contemporary work is increasingly granular. In healthcare, this is seen in efforts to tailor ethical guidelines to medical AI and involve healthcare professionals in identifying real-world ethical pitfalls [31]. In climate and environmental AI, scientists are beginning to integrate ethics into their research process, as Acquaviva et al. demonstrate in [32] by providing concrete best-practice suggestions (like ensuring diverse data sources) for climate modelers.
Another trend is the inclusion of justice and equity considerations: both the healthcare and climate ethics literature call out global inequalities (whether it is access to medical AI in low-income regions or unequal data representation in climate models) as areas that need attention [32,33]. This reflects a broader shift in AI ethics toward not just preventing harm, but proactively promoting fairness and social good.
When comparing methodologies, the body of literature spans normative analyses, empirical studies, and technical frameworks. Systematic reviews and mapping studies (e.g., [34,35]) map the landscape of ethical issues and often highlight gaps in current knowledge. Conceptual papers (like [36]) argue for new ways of thinking, expanding theoretical frameworks to include overlooked factors. On the other hand, technical ethical frameworks (such as the model for autonomous vehicle decision-making in [37]) attempt to translate ethical theory into design requirements or algorithms. Empirical studies—though still relatively few in number—have started to appear, especially in healthcare ethics, where researchers survey clinicians or patients about AI (as summarized in [35]). These different methodologies complement each other: high-level ethical theory sets ideals, while empirical work reveals on-the-ground realities and technical research tries to operationalize solutions. However, this diversity also reveals a gap: there is still little consensus on how to effectively integrate ethical principles into AI development processes across the board. For instance, multiple frameworks exist for fair or explainable AI, but choosing and enforcing them remains challenging.
Indeed, several unresolved questions and gaps persist in the field of AI ethics. One major gap is between principle and practice—it is one thing to enumerate ethical principles or even enact regulations, and quite another to ensure AI systems behave ethically in practice. This is evident in healthcare, where the authors of [35] note that current regulations might not reflect users’ actual ethical concerns. Another gap lies in the contextual and cultural adaptation of AI ethics. As studies of autonomous vehicles suggest, ethical preferences can vary by culture [37], which raises the question: should AI ethics standards be universal or context-specific? Similarly, the absence of voices from the Global South in much AI ethics research (whether in healthcare or climate applications) points to an imbalance that needs addressing if AI is to be ethical for everyone. From a technical standpoint, there are open questions about how to measure and balance competing ethical objectives. For example, how should developers trade off an autonomous car’s protection of its passengers against that of pedestrians, or an AI model’s accuracy against its energy consumption? Methods for systematically handling such trade-offs are still in development.
In general, the expanding literature on AI ethics in specialized domains enriches our understanding of both the common threads and the domain-specific nuances of ethical AI. There is clear progress—for instance, greater awareness of AI’s environmental impact and more stakeholder-informed approaches—indicating that the field is moving toward a more holistic and inclusive ethical paradigm. At the same time, bridging the gap between ethical theory and real-world AI deployment remains a pressing challenge. Ongoing research trends suggest that collaboration between ethicists, domain experts, and engineers will be crucial. Such collaboration can help translate lofty principles into practical guidelines, ensure those guidelines are informed by diverse perspectives, and ultimately embed ethical considerations throughout the AI lifecycle [35,36]. The next few years will be critical for testing how these ideas can be operationalized, whether through improved ethical governance frameworks, international regulations, or design methodologies that make ethical AI not just an aspiration but a standard practice. Each subfield—be it healthcare, environmental sustainability, or autonomous systems—will contribute lessons toward the overarching goal of aligning AI with ethical and societal values on a global scale.

3.4.1. AI and Peace

One of the sectors where AI could have a significant impact is diplomacy and peace. As emphasized by Pope Francis in his message for the 2024 World Day of Peace [37], the use of new technologies in the military and diplomatic context imposes crucial ethical reflections. AI, while having the potential to prevent conflicts and foster diplomacy, could also be used for destructive purposes, such as in the case of lethal autonomous weapons. These systems, lacking moral judgment, risk distancing human decision-makers from the moral responsibility of their actions. The Pope urges a global approach that regulates the use of these technologies, ensuring that they serve peace and not violence.
The ethical challenge, therefore, is twofold: on the one hand, we must value AI’s potential to resolve conflicts and facilitate diplomacy; on the other, we must prevent technology from being used for destructive ends. Global peace depends on our ability to govern AI responsibly and in line with universal moral principles.
In more detail, autonomous systems—ranging from self-driving cars and service robots to autonomous weapons—raise distinctive ethical questions. A prominent case is autonomous vehicles (AVs), which bring classical moral dilemmas to real-world engineering. AVs must sometimes make split-second decisions in life-and-death situations (e.g., brake or swerve scenarios), leading to the famous “trolley problem” analog for cars. Rhim et al. in [30] describe the “AV moral dilemma” as the question of how an AV should respond in an unavoidable accident scenario, a topic that has sparked intense debate among engineers, ethicists, and the public. Their work goes further to propose an integrative ethical decision-making framework for AVs, noting that ethical choices may depend on factors like cultural background and individual morality. For example, their framework suggests an AV user’s cultural context could influence whether they favor fixed rules or context-dependent (relativist) judgments in crash scenarios. This points to moral pluralism as a challenge: there may not be a one-size-fits-all ethical rule for autonomous decision-making, complicating the programming of “universal” ethical AI behaviors. Other researchers likewise highlight that the ethics of AVs is not only about extreme dilemmas, but also about the everyday risk distribution and responsibility. Who is accountable if an autonomous car harms someone—the manufacturer, the driver/passenger, the algorithm’s designer? Ensuring transparency in AV decision logic and maintaining meaningful human control when necessary are active areas of discussion in the recent literature. Early empirical studies (e.g., surveys of public opinion) indicate mixed trust in AV ethics; a significant portion of people remain uncomfortable with machines making moral choices on the road, especially without clear accountability mechanisms.
Beyond civilian robots, autonomous weapon systems bring additional urgency to ethical discourse. The prospect of AI-driven lethal systems that can select and engage targets without human intervention raises fundamental moral issues about the value of human life and the delegation of deadly authority to algorithms. International ethics scholars and policymakers have been grappling with whether such systems can ever meet humanitarian and legal standards. For instance, ongoing debates in the United Nations consider the necessity of preserving “meaningful human control” over weapons, reflecting a widespread ethical view that decisions of life and death should not be left solely to machines. While detailed academic studies on autonomous weapons ethics exist, a common thread is concern over accountability (who is responsible for unlawful harm by an autonomous drone?) and the potential for lowered inhibition to initiate conflict if machines do the fighting. These concerns underline a gap between rapid technological advances in autonomy and the slower development of ethical and legal frameworks to govern their use. As of 2024, no global consensus has been reached on regulations for autonomous weapons, leaving an unresolved ethical gray zone in how AI is applied in warfare.

3.4.2. AI in Medicine

AI also has the potential to revolutionize healthcare, improving diagnostics, optimizing treatments, and making healthcare services more efficient. However, using AI in medicine raises a series of delicate ethical issues related to privacy, data security, discrimination, and professional responsibility [31,38].
  • Data privacy. One of the most critical aspects is the management of health data [39]. AI requires enormous amounts of information to function, and these data must be managed with the highest security. Any privacy violations or improper use of data could undermine patients’ trust in digital technologies. An emblematic example is that of the American TikToker Kodye Elyse, who discovered that images of her children shared online had been used for other purposes without her consent. This incident highlights the paradox of our digital age: the data we share is no longer fully under our control.
  • Discrimination and prejudices. AI could perpetuate or even amplify existing biases in data. If the datasets used to train algorithms are not representative of the diversity of the population, there is a risk that diagnoses may be inaccurate or discriminatory. Discrimination in treatments, especially against minorities, is a real risk that must be addressed through careful and inclusive technology design.
  • The role of human judgment. AI can improve efficiency but cannot replace empathy and human judgment. The physician must maintain a central role in patient care, using AI as support but never as a substitute for interpersonal relationships. In other words, AI should never replace compassion; instead, it must be a tool that enriches the physician’s ability to care for the patient.
  • Responsibility in the event of errors. Increasing automation in medicine also raises questions about responsibility in the case of errors. If a medical error is caused by an AI system, who is responsible? Clear legislation must be developed to address this issue and establish who should be held accountable in case of harm—whether it is the AI, the system’s designers, or the physicians who use it.
  • Informed consent. Last but not least, there is the challenge of obtaining meaningful consent for AI’s use in diagnosis or treatment recommendations.
Multiple reviews have cataloged these issues. For instance, Murphy et al. in [33] found that most literature focuses on ethics of AI in clinical care (e.g., diagnostic algorithms, robotic caregivers) and repeatedly flags privacy, trust, accountability, and bias as central concerns. Notably, a gap can be observed regarding global health—ethical implications of AI in low- and middle-income countries were largely absent, underscoring an equity concern in the current scholarship. Farhud et al. in [38] similarly underscore that before fully embracing AI in healthcare, practitioners must consider all these ethical dimensions and apply medical ethics principles to guide AI deployment.
Beyond identifying issues, recent works also evaluate how stakeholders perceive them. Li et al. in [35] conducted a systematic review of empirical studies on medical AI ethics and revealed a disconnection between high-level AI ethics guidelines and front-line practice. While policymakers and ethicists have developed abstract principles and regulations (e.g., WHO and EU guidelines) to address AI in medicine, these often do not align with the day-to-day ethical concerns of clinicians, patients, and developers. For example, regulations might stress algorithmic transparency or fairness in general, but practitioners worry about very concrete issues like AI errors in diagnosis or loss of human touch in care. The review by Li et al. [35] suggests involving multidisciplinary stakeholders—ethicists together with developers, clinicians, and patients—to bridge this gap. This trend reflects a growing recognition that effective ethical governance in healthcare AI requires both top-down principles and bottom-up insights from real-world use.

3.4.3. AI and the Social Question

The introduction of AI in key sectors also raises social questions [8]. While AI can potentially improve the lives of many, its pervasiveness could also exacerbate social inequalities. Workers in highly automated sectors, such as manufacturing or services, are at risk of unemployment unless adequate training and professional retraining programs are implemented. Similarly, inequalities in access to technology could create new forms of exclusion.
The importance of technology working for the common good has been repeatedly emphasized, so that it is not used to manipulate opinions or to consolidate power in the hands of a few. Indeed, AI could become a tool for political manipulation, as seen in the well-known Cambridge Analytica scandal, where personal data were used to influence elections. It is essential that global policies promote the ethical use of AI, protecting citizens’ rights and preventing risks of abuse.

3.4.4. AI for the Environment

AI can also play a fundamental role in protecting the environment. Optimizing energy consumption, managing natural resources, and reducing waste are all areas where AI could significantly contribute to sustainability. Intelligent systems are already being used to monitor pollution, analyze environmental data, and improve the management of water and energy resources. AI can also help protect biodiversity, collecting and analyzing massive amounts of data to identify corrective actions in real-time [40].
Traditionally, AI ethics debates have been anthropocentric, centered on human-centric issues such as bias, privacy, and safety, with little attention to environmental impact. However, environmental ethics in AI is rapidly emerging as a vital subtopic. Researchers argue that AI’s environmental footprint—from energy-hungry model training to electronic waste—has moral significance and must be factored into ethical assessments. In her recent perspective study [36], van Wynsberghe introduced the concept of “Sustainable AI”, calling for a holistic approach that considers not only how AI can advance sustainability goals but also how to make AI development itself sustainable. The author proposes Sustainable AI as a movement with two branches: AI for sustainability (using AI to help the environment) and sustainability of AI (reducing AI’s ecological impact, such as carbon emissions from computation). This marks a strategic broadening of AI ethics to include ecological integrity alongside social justice. Importantly, van Wynsberghe highlights tensions between innovation and resource use, urging that AI’s design should align with intergenerational environmental stewardship, not just immediate performance gains.
Concrete ethical issues at the intersection of AI and the environment have been documented also in [32], which illustrates how apparently technical choices in climate AI models can carry ethical consequences. For example, training a climate prediction model on globally available data without correction will make it most accurate for regions with abundant data (often affluent nations), inadvertently biasing benefits toward those regions. This data inequity means communities most vulnerable to climate change (often in the Global South) might receive less accurate forecasts or insights, exacerbating global disparities. The authors argue that climate science and AI cannot be divorced from questions of justice: equitable access to data, inclusion of indigenous and local knowledge, and transparency in model limitations are ethical imperatives in climate AI. In general, adopting AI-based environmental technologies must be accompanied by policies that protect the cultural and biological specifics of different regions, avoiding standardization that could threaten local biodiversity. More broadly, scholars are now applying environmental ethics frameworks to AI, such as environmental justice. This approach examines AI’s benefits and burdens, asking who reaps the benefits of AI innovations and who bears the environmental costs (e.g., communities near data centers or mining sites for hardware). By extending the scope of AI ethics to include ecosystems and non-human stakeholders, this line of research is pushing the field toward a more inclusive, planet-conscious ethics. Still, as this is a nascent area, many questions remain open—for instance, how to operationalize “green AI” metrics in AI development pipelines, or how to balance performance and energy efficiency in model training when ethical guidelines demand both accuracy and sustainability.

3.4.5. AI and Space Exploration

Space exploration is another sector in which AI is opening new frontiers [41]. Missions to the Moon, Mars, and beyond require highly automated management, and AI is crucial for the success of these missions. Rovers on Mars, for example, are equipped with intelligent systems that allow them to operate autonomously without the need for constant human intervention. Future space missions will depend even more on AI for navigation, data analysis, and long-term operation management.

3.4.6. AI and Insurtech

AI is significantly impacting the insurance technology (Insurtech) sector, which is undergoing rapid digital transformation.
According to the Italian Insurtech Association (IIA) [42], digital insurance policies accounted for 23% of the global market in 2020 and are projected to reach 80% by 2030, with an annual growth rate of 22%.
Digital technology investments in the insurance sector in Italy, in particular, were EUR 800 M in 2020, expected to rise to EUR 980 M by 2024.
By 2026, 40% of insurance policies in Italy are expected to involve AI-driven processes.
Despite the growth, challenges persist, including limited investments, insufficient skills, lack of consumer education, and an underdeveloped Insurtech ecosystem.
The main opportunities for AI in Insurtech can be summarized as follows:
  • Process optimization: AI is revolutionizing claims assessment and settlement processes, reducing costs by up to 60% and speeding up resolution times by a factor of 10 [43,44,45].
  • Service evolution: Integrated ecosystems combining insurance with additional services are emerging, enhancing customer education and service value.
  • Social impact: AI enables partnerships with public entities to prevent cybercrime and mitigate natural disaster risks [46,47], promoting community stability.

4. AI Regulation Across Different Jurisdictions: USA, China, European Union, Japan, Canada, and Brazil

We now discuss in detail the different regulatory frameworks across the jurisdictions of the USA, China, the EU, Japan, Canada and Brazil and then carry out a comparative analysis aimed at highlighting the differences among them.

4.1. USA

In the context of emerging and transformative technologies, President Biden took a significant step in late 2023 by signing an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” [48]. This executive order, unlike the Quantum Computing Cybersecurity Preparedness Act [49], serves more as a foundational framework for potential future actions than as a precise roadmap with specific deadlines.
For instance, the executive order clearly outlines the government’s interest in overseeing and assessing AI models that surpass specific thresholds of complexity and computational capacity, particularly in response to safety concerns. Yet, it does not immediately enact specific measures; instead, it mandates further investigation and deliberation. This move followed mounting public pressure for AI regulation, even though regulating a technology capable of significant harm before any harm has materialized remains evidently challenging and controversial.
Nevertheless, the executive order contains crucial provisions, such as strengthening cybersecurity measures for advanced AI models, ensuring the public identification of AI-generated content, and monitoring materials that could be used in the development of various types of biological weapons.

4.2. China

China has introduced new guidelines for generative AI services, aiming to limit their public use while promoting industrial development. These regulations [50], which took effect in August 2023, primarily affect organizations providing generative AI services to the public rather than those developing the technology for non-mass-market purposes.
Notably, the rules require compliance with the “core values of socialism” and prohibit any attempts to undermine state power or the socialist system. This creates major interoperability issues between SuperAIs developed in China and those created in Western countries, where AI models are based on different ideological and ethical principles. This led, for instance, to the recent ban of DeepSeek AI by the Italian Data Protection Authority [51].
China has been striving to bolster its generative AI capabilities and challenge the US’s dominance in the field. The Chinese government has even discouraged its tech giants from accessing AI tools like ChatGPT, citing concerns about “uncensored replies”. Instead, Chinese companies such as Alibaba and Baidu are developing their own generative AI tools. However, when Baidu unveiled its chatbot Ernie, the response from investors was lukewarm.
China’s generative AI regulations emphasize the importance of intellectual property rights related to training data and prohibit unfair competition. All training data must come from government-approved sources, and service providers must allow individuals to request reviews or corrections of information used for AI models. Additionally, the Chinese government plans to support generative AI development through infrastructure and public training initiatives.
More specifically, service providers must perform security assessments before deploying their services or when making significant updates. They have the flexibility to conduct these assessments internally or use third-party evaluators. Self-assessments require signatures from a minimum of three key individuals: the legal representative, the security assessment lead, and the legality assessment lead.
When evaluating the safety of the data corpora, a detailed review is conducted, involving the manual inspection of at least 4000 randomly selected training data items. To assess the safety of generated content, a random sample of at least 1000 test questions is examined, with an acceptance rate of 90% or higher. The same criteria and sampling size apply to inspections of keywords and classification models. Additionally, a specific set of questions is used to check adherence to the core values of socialism.
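To make the assessment procedure described above more tangible, the following Python sketch mimics its sampling and threshold logic (at least 4000 training items drawn for manual review, at least 1000 generated answers, and a 90% acceptance rate). The corpus, review outcomes, and function names are hypothetical placeholders rather than any official tooling.

```python
import random

# Illustrative sketch of the self-assessment sampling logic described above:
# inspect >= 4000 randomly selected training items and >= 1000 generated answers,
# and require an acceptance rate of at least 90% on the generated sample.
# The corpus, the test answers, and the review functions are placeholders.

MIN_TRAINING_SAMPLE = 4000
MIN_TEST_QUESTIONS = 1000
ACCEPTANCE_THRESHOLD = 0.90

def sample_for_manual_review(corpus: list[str], k: int = MIN_TRAINING_SAMPLE) -> list[str]:
    """Randomly select training items for manual inspection."""
    return random.sample(corpus, min(k, len(corpus)))

def acceptance_rate(test_answers: list[bool]) -> float:
    """Share of generated answers judged acceptable by reviewers."""
    return sum(test_answers) / len(test_answers)

def passes_content_assessment(test_answers: list[bool]) -> bool:
    return (len(test_answers) >= MIN_TEST_QUESTIONS
            and acceptance_rate(test_answers) >= ACCEPTANCE_THRESHOLD)

if __name__ == "__main__":
    # Toy data: 10,000 training documents and 1000 reviewed answers, 930 judged acceptable.
    corpus = [f"document {i}" for i in range(10_000)]
    reviewed_items = sample_for_manual_review(corpus)
    reviewed_answers = [True] * 930 + [False] * 70
    print("Items drawn for manual review:", len(reviewed_items))        # 4000
    print("Acceptance rate:", acceptance_rate(reviewed_answers))        # 0.93
    print("Passes threshold:", passes_content_assessment(reviewed_answers))  # True
```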

4.3. EU

The European Union has passed a framework to regulate AI within the European single market (AI Act, [52]). This regulation, initially discussed in 2019, was conceived at a time when AI presented a different landscape in terms of potential, risks, and opportunities. However, the emergence of advanced AI models—such as ChatGPT in November 2022—has drastically changed the field.
We are now in the era of large AI models or SuperAI, in which systems are expected to surpass human performance in many tasks within just a few years. There is a risk that Europe is grappling with regulations tailored for an older generation of AI while struggling to manage the latest technological advancements.
The European AI Act must address several immediate challenges that have existed for years. One such issue is the watermarking of AI-generated content, ensuring that citizens can recognize when they are interacting with AI-generated materials, whether they be videos, audio, photos, or text. This is crucial, especially considering the 2023 Argentine presidential election campaign, where AI was heavily used in disinformation campaigns to generate highly credible fake news.
Another key goal of the AI Act is to ban AI applications that contradict European ethical values and fundamental rights. However, when it comes to SuperAIs, a fundamental question arises: How do we regulate systems that scientists do not yet fully understand and that might pose systemic risks that have not yet materialized, without stifling industrial innovation? Striking this balance is essential for the future of European AI technologies.
The UK and the US have chosen a distinct approach towards SuperAIs. Instead of rushing into immediate regulation, they are focusing on risk analysis through specialized centers. The UK swiftly set up the “AI Safety Institute” within its intelligence community. Similarly, in the US, following the executive order on AI signed by President Biden in October 2023, the responsibility for studying and standardizing these critical technologies has been given to the National Institute of Standards and Technology (NIST). This federal agency, working closely with the US intelligence community, is leading the preliminary study essential for future regulation.
At this early stage, both the UK and the US have embraced a pro-innovation, national security-sensitive approach. This strategy aims to foster a self-regulated SuperAI industry, driven by two primary motives:
  • Economic—The SuperAI market is expected to be one of the most lucrative in technology, offering prosperity to countries that capitalize on these opportunities.
  • Geopolitical—Leading the development of this transformative technology is crucial for global dominance in the 21st century.
NIST and the AI Safety Institute will collaborate with private entities developing SuperAIs, using uniform and transparent scientific methods based on data extracted from these models. This collaboration aims to gain the widest possible understanding of risks, impacts, and the development of mitigation actions. Once the landscape becomes clearer, appropriate regulations will be introduced.
A major step toward the approval of the European AI Act came in late December 2023, when Commissioner Breton announced “stringent requirements” for SuperAIs in Europe. However, if not carefully designed, these requirements could stifle European innovation and increase reliance on non-European technologies, weakening European digital sovereignty and strategic autonomy.
If excessive regulation is imposed, European SuperAI companies may find themselves at a competitive disadvantage compared to non-European firms. Furthermore, the uncertainty over whether these regulations will effectively mitigate yet-unknown risks could drive AI developers to relocate to more innovation-friendly countries outside Europe, exacerbating Europe’s over-reliance on foreign technology.
A promising way forward could be the establishment of a European AI Safety Center that collaborates with European AI industries, as well as the UK and US AI institutes, to build scientifically grounded requirements. This could serve as a solid starting point for the work of the new European Commission after the 2024 elections.

Approval and Implementation Timeline of the AI Act

On 13 March 2024, the European Parliament approved the European Artificial Intelligence Regulation (AI Act) with 523 votes in favor, establishing the world’s first binding AI-specific legislation. The regulation aims to protect fundamental rights, democracy, and environmental sustainability while fostering innovation responsibly. We now outline the timeline for its implementation, the categories of risk and obligations for high-risk AI systems, and the expected challenges and opportunities for the Insurtech sector.
The approval of the AI Act marks a turning point for AI governance in the European Union. The regulation will become fully applicable by 2027, but its implementation will occur in phases:
  • Six months after entry into force: Ban on specific AI applications.
  • Nine months after entry into force: Introduction of voluntary codes of conduct.
  • Twelve months after entry into force: Application of general-purpose AI rules and governance provisions.
  • Thirty-six months after entry into force: Obligations for high-risk systems.
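As a rough aid for planning, the milestones above can be turned into indicative calendar dates. The short sketch below assumes the Act’s entry into force on 1 August 2024 and simply adds the month offsets listed above; the authoritative deadlines remain those stated in the Regulation itself.

```python
from datetime import date

# Sketch: deriving indicative milestone dates for the AI Act from its
# entry-into-force date (1 August 2024). The month offsets mirror the list above;
# the authoritative deadlines are those stated in the Regulation itself.

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    year = d.year + (d.month - 1 + months) // 12
    month = (d.month - 1 + months) % 12 + 1
    return date(year, month, min(d.day, 28))  # clamp the day to avoid invalid dates

MILESTONES = {
    "Prohibitions on specific AI practices": 6,
    "Codes of conduct in place": 9,
    "General-purpose AI rules and governance": 12,
    "Obligations for high-risk systems": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: ~{add_months(ENTRY_INTO_FORCE, months)}")
```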
The AI Act classifies AI applications based on risk, prohibiting or regulating systems deemed harmful to fundamental rights. Key provisions include the following:
  • Prohibited applications:
    • Biometric categorization based on sensitive data.
    • Facial image retrieval from the internet or CCTV for facial recognition databases.
    • Emotion detection in workplaces and schools.
    • Social scoring and predictive policing based solely on individual profiles.
    • Manipulation of human behavior or exploitation of vulnerabilities.
  • Restricted uses: Real-time biometric identification by law enforcement is only allowed under specific time and geographical limitations with prior judicial or administrative authorization.
  • High-risk systems obligations. These systems must do the following:
    • Undergo risk assessments.
    • Maintain usage logs.
    • Ensure transparency, accuracy, and human oversight.
    • Allow individuals to file complaints and receive explanations for AI-driven decisions.
  • Transparency for general-purpose AI systems: These must comply with EU copyright law, publish detailed summaries of training data, and clearly label deepfakes.
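The tiered logic summarized above can be illustrated with a deliberately simplified classifier. The categories and example use cases below are a condensed, non-authoritative reading of the list, not the Regulation’s legal definitions, and mapping a real system would of course require legal analysis.

```python
from enum import Enum

# Simplified illustration of the AI Act's tiered logic described above.
# The mapping below is a condensed, non-authoritative reading of the list,
# not the Regulation's legal definitions.

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk (risk assessment, logging, human oversight)"
    TRANSPARENCY = "transparency obligations (e.g., label AI-generated content)"
    MINIMAL = "minimal obligations"

PROHIBITED_USES = {
    "social scoring",
    "emotion detection in workplaces or schools",
    "untargeted facial image scraping for recognition databases",
}
HIGH_RISK_USES = {
    "credit scoring",
    "recruitment screening",
    "critical infrastructure control",
}
TRANSPARENCY_USES = {"general-purpose chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(classify("recruitment screening").value)   # high-risk (...)
print(classify("social scoring").value)          # prohibited
```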

4.4. Japan

Japan has taken a distinct “soft-law” regulatory path characterized by voluntary guidelines, self-regulation, and public–private dialog. The country’s Social Principles of Human-Centric AI, published by the Japanese Cabinet Office [53], emphasize transparency, accountability, and inclusiveness, while promoting innovation. Japan’s governance framework encourages companies to align with these values without imposing binding legislative constraints, thereby fostering ethical behavior through consensus-building rather than top-down enforcement. Moreover, the Japanese government actively collaborates with industry and academia through initiatives such as the AI Strategy Council and METI’s partnership programs. This cooperative approach is designed to ensure that AI development aligns with societal needs and the common good.
Japan approaches AI governance within its broader “Society 5.0” vision—an integration of digital innovation with societal benefit. In addition to the above-mentioned AI governance guidelines, the Japanese government collaborates with the OECD and ISO bodies.
All in all, the key characteristics of Japan’s regulatory framework are the following:
  • Voluntary compliance, reinforced by trust in public institutions;
  • Focus on human-centric design, resilience, and quality assurance;
  • Active support for AI ethics by design in healthcare and eldercare.
Japan’s emphasis on social cohesion and ethical co-design aligns with algor-ethics, though enforcement mechanisms remain voluntary.

4.5. Canada

Canada has taken a leading role in developing practical, government-wide tools for AI risk governance. Its Directive on Automated Decision-Making mandates the use of an Algorithmic Impact Assessment (AIA) for any federal system involving automated decision-making [54]. The AIA assesses systems based on dimensions such as the severity of potential impacts, transparency, data quality, and human oversight. Depending on the level of risk (low, moderate, high, very high), specific requirements are enforced—ranging from public disclosures to third-party audits. Canada’s AIA model has gained international attention for its actionable risk-tiered design, offering a blueprint for other democracies seeking to operationalize principles such as accountability and fairness. In general, Canada has adopted a procedural governance model, and it was among the first countries to require AIAs for government use of AI.
The key characteristics of Canada’s regulatory framework are the following:
  • Risk-tiered obligations (impact level from 1 to 4);
  • Emphasis on explainability, auditability, and public engagement;
  • Regulatory sandboxing to test innovation under supervision.
Canada’s structured risk assessments, transparency-by-design approach, and inclusion of stakeholders reflect the operational ethos of algor-ethics and are proving robust in public-sector use.
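A simplified sketch of such a risk-tiered assessment is given below: a questionnaire score is mapped to an impact level from 1 to 4, and each level carries progressively stronger obligations. The questions, weights, thresholds, and obligations shown are illustrative assumptions, not the official AIA questionnaire.

```python
# Simplified sketch of a risk-tiered impact assessment in the spirit of Canada's AIA:
# a questionnaire score is mapped to an impact level (1-4), and each level carries
# increasing obligations. Questions, weights, and thresholds here are illustrative only.

OBLIGATIONS = {
    1: ["plain-language notice"],
    2: ["plain-language notice", "peer review of the model"],
    3: ["public disclosure", "human-in-the-loop for negative decisions", "audit logging"],
    4: ["public disclosure", "third-party audit", "human final decision", "audit logging"],
}

def impact_level(score: int) -> int:
    """Map a raw questionnaire score to impact level 1 (lowest) to 4 (very high)."""
    if score < 20:
        return 1
    if score < 40:
        return 2
    if score < 60:
        return 3
    return 4

def required_measures(answers: dict[str, int]) -> tuple[int, list[str]]:
    score = sum(answers.values())
    level = impact_level(score)
    return level, OBLIGATIONS[level]

# Hypothetical assessment of an automated benefits-eligibility system.
example = {"affects_rights": 25, "reversibility": 15, "vulnerable_groups": 10, "data_quality": 5}
level, measures = required_measures(example)
print(f"Impact level {level}: {measures}")
```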

4.6. Brazil

Brazil has emerged as a regional leader in AI governance, aligning its national AI strategy with existing data protection law—the Lei Geral de Proteção de Dados (LGPD). The Brazilian AI strategy emphasizes ethical principles such as privacy, human rights, and environmental sustainability. In 2021, Brazil published a Legal Framework for Artificial Intelligence [55], currently under legislative review. The proposed law introduces key principles like transparency, auditability, and non-discrimination, while preserving flexibility for innovation. Brazil’s approach reflects an effort to harmonize AI regulation with democratic values and socio-economic development, particularly in Latin America, where regulatory capacity varies widely across countries.
Despite differences in institutional structures and political priorities, the Japanese, Canadian, and Brazilian regulatory models share common features:
  • Risk-based approaches to assessing and managing harms;
  • Emphasis on transparency, explainability, and human oversight;
  • Engagement with multi-stakeholder ecosystems, including civil society, academia, and the private sector;
  • Integration with broader data governance and digital rights frameworks.
These examples complement the global regulatory efforts of the USA, the EU, and China by showcasing alternative pathways toward responsible AI governance that balance innovation with societal trust. As such, they reinforce the paper’s call for a globally interoperable and ethically grounded approach to AI regulation.
In general, Brazil has taken initial steps toward formal AI regulation, framing AI governance around democratic values, non-discrimination, and transparency.
The key characteristics of Brazil’s regulatory framework are the following:
  • Rights-based foundation grounded in Constitutional principles;
  • Multi-stakeholder consultation on AI ethics guidelines;
  • Regional leadership in Latin America for inclusive governance.
Based on the above and by comparison with the previous subsections, we can conclude that Brazil’s commitment to fairness and democratic oversight echoes algor-ethical values but lacks the institutional depth of more mature frameworks.

4.7. Comparative Analysis of the AI Regulatory Frameworks Across the USA, China, the EU, Japan, Canada, and Brazil

A comparative analysis of the worldwide landscape of regulatory frameworks reveals the following insights:
  • Different regulatory approaches: The EU adopts a horizontal and legally binding regulatory framework that promotes uniformity and legal certainty but may encounter challenges in adapting to the rapid pace of technological innovation. In contrast, the USA follows a sector-specific, innovation-driven model that fosters flexibility and market responsiveness but risks regulatory fragmentation and uneven oversight. China’s vertically integrated approach emphasizes centralized control and strategic deployment in key sectors, enabling targeted regulation while potentially producing gaps in coverage. Among complementary models, Japan pursues a soft-law strategy that leverages voluntary guidelines and public–private collaboration to cultivate ethical norms through consensus-building. Brazil aligns its AI strategy with existing data protection legislation, seeking to harmonize innovation with democratic accountability.
  • Different scope of coverage: The EU’s comprehensive framework aims to ensure robust protection of fundamental rights across all AI applications. The USA approach, with its fragmented jurisdictional boundaries, may leave significant regulatory gaps, particularly in non-healthcare or non-financial domains. China prioritizes strategic and high-impact AI applications, which allows for rapid rulemaking but may limit uniform protections across sectors. In Canada, the mandatory use of an Algorithmic Impact Assessment (AIA) ensures that federal AI systems are evaluated according to their scope and potential societal harm, exemplifying a scalable governance model with practical enforceability.
  • Different risk classification: The EU employs a hierarchical, risk-based taxonomy that mandates strict obligations for high-risk AI systems, including those used in critical infrastructure, education, and law enforcement. The USA lacks a unified risk classification framework, resulting in inconsistencies across sectors. Japan resembles the USA’s approach to risk classification. China, by contrast, aligns its risk prioritization with national security and industrial policy objectives, focusing regulatory scrutiny on applications deemed socially or politically sensitive. Canada’s AIA provides a concrete example of operationalizing risk-based assessment through a tiered system, linking risk levels with oversight mechanisms such as third-party audits and public disclosure. Brazil’s regulatory framework resembles Canada’s with respect to risk classification.
  • Different enforcement mechanisms: The EU relies on centralized supervisory authorities empowered to issue sanctions, thereby ensuring regulatory coherence but also raising concerns about over-regulation. The USA employs a decentralized enforcement model, with responsibilities distributed among sectoral agencies, which can hinder consistent compliance. China’s enforcement is primarily directive-based, enabling rapid implementation but potentially limiting transparency and external oversight. In Japan, enforcement is largely non-coercive, relying on voluntary adherence and industry engagement. Brazil’s emerging legal framework emphasizes principles such as transparency and non-discrimination, while retaining regulatory flexibility to accommodate technological evolution. In Canada, enforcement is currently limited to the public sector.
This analysis underscores the diverse strategies in AI regulation, reflecting each region’s priorities and challenges in balancing innovation with ethical considerations. Also, we report in Table 2 the pros and cons of each regulatory framework.
The comparison illustrates that no single jurisdiction fully realizes the algor-ethical framework. However, the EU, Japan and Canada offer promising models, especially in terms of human dignity, distributed responsibility, and lifecycle governance. A transnational synthesis—fostering interoperability between rights-based and innovation-driven regimes—will be essential to prevent AI fragmentation and ethics-washing.

5. Strategic and Governance Considerations for AI Applications

In this section, we discuss how organizations should define a robust strategy and governance framework in order to maximize the benefits of AI and digital transformation, in compliance with the above-mentioned regulation.
By balancing technological innovation with the protection of fundamental rights, the AI Act sets a precedent for the ethical and sustainable use of AI across industries, ensuring a future where AI contributes positively to society and the economy. Within its boundaries, implementing effective governance will be critical for managers to harness AI’s transformative potential responsibly. In particular, chief data/AI officers and AI managers should rely on four tools:
  • The DAMA framework [58];
  • AI strategy;
  • AI governance;
  • Compliance with emerging international standards, namely ISO/IEC 22989:2022 [5] and ISO/IEC 42001:2023 [6].

5.1. The DAMA Framework

Following the Data Management Body of Knowledge (DAMA DMBoK) [58] is essential for the following:
  • Structuring integrated data management.
  • Mapping organizational stakeholders involved in AI projects.
  • Assessing the maturity of data and AI applications.

5.2. AI Strategy Essentials

With the DAMA framework in place, it is important to define the correct AI strategy, which should set the boundaries within which the design, prototyping, and production of any AI system are carried out. An AI strategy should do the following:
  • Define clear impact areas and organizational guidelines.
  • Set measurable objectives with implementation roadmaps.
  • Ensure frequent updates (quarterly recommended).
  • Focus on no more than 8–10 primary objectives with corresponding KPIs.
  • Align AI strategy with business use cases.
These can be considered the essential elements of crafting a winning AI strategy. A successful AI strategy is not about chasing the latest technology trends; it is about aligning AI initiatives with business goals, setting clear objectives, and measuring impact, from the original AI vision, through a phased implementation roadmap, to meaningful KPIs that ultimately allow the return on investment to be calculated. These are the practical tools the chief data/AI officer can use to transform AI into a powerful driver of business success [6].
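As an illustration of how such KPIs and ROI figures might be tracked in practice, the brief sketch below links a hypothetical AI initiative to its investment, expected annual benefit, and agreed KPI targets; the initiative, figures, and KPI names are invented for the example.

```python
from dataclasses import dataclass

# Minimal sketch of linking an AI initiative to measurable KPIs and an ROI figure,
# as advocated above. The initiative, KPI names, and monetary figures are hypothetical.

@dataclass
class AIInitiative:
    name: str
    investment_eur: float
    annual_benefit_eur: float   # e.g., cost savings from automated claims triage
    kpis: dict[str, float]      # target values agreed with the business owner

    def roi(self) -> float:
        """Simple one-year return on investment."""
        return (self.annual_benefit_eur - self.investment_eur) / self.investment_eur

claims_triage = AIInitiative(
    name="Automated claims triage",
    investment_eur=400_000,
    annual_benefit_eur=650_000,
    kpis={"avg_handling_time_days": 2.0, "straight_through_processing_rate": 0.35},
)
print(f"{claims_triage.name}: ROI = {claims_triage.roi():.0%}")  # roughly 62%
```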

5.3. AI Governance

Finally, the strategy outlined above can be fulfilled, within the guidelines offered by the DAMA framework, only through effective AI governance across the whole organization. Effective governance requires the following:
  • Comprehensive data documentation and use case mapping.
  • Clear understanding of data flows and sources.
  • Streamlined data and AI flows to enhance business agility.
All in all, AI governance is the system of rules, processes, and oversight that ensures artificial intelligence is developed and used responsibly, safely, and in alignment with human and societal values. It defines how leaders control the risks and unlock the value of AI—across governments, companies, and society. The core pillars of AI governance are therefore as follows:
  • Transparency. In this respect, AI governance should control the risk arising from how the AI system makes decisions, ensuring that those decisions are explainable and auditable.
  • Accountability. In this respect, AI governance must define clear rules in terms of who is responsible if something goes wrong.
  • Fairness. In this respect, AI governance should define and enforce clear guardrails so that the AI system treats all people and groups equitably.
  • Safety and security. In this respect, AI governance should oversee processes so that the AI system proves robust against misuse, error, and adversarial attacks.
  • Privacy. In this respect, AI governance should oversee processes so that the AI system respects data protection and personal freedoms.
  • Alignment with human values. In this respect, AI governance should oversee processes so that the AI system serves the values and mission of the company, as well as society’s goals, without drifting toward unintended, harmful outcomes.
In conclusion, AI governance must be regarded as the chief data/AI officer’s playbook for making sure AI works for us, and not against us [59].

5.4. ISO/IEC 42001:2023

ISO/IEC 42001:2023 establishes a framework for organizations to manage AI in a responsible and ethical manner. This international standard provides requirements for setting up, implementing, maintaining, and improving an AI Management System. Its main objective is to promote trustworthiness, transparency, and the ethical use of AI, while enabling innovation and growth in AI technologies.
The standard is applicable to any organization, regardless of its size or sector, that develops, provides, or uses AI-based products or services. It addresses critical aspects such as the following:
  • Risk management;
  • Security;
  • Fairness;
  • Data quality;
  • Accountability.
By promoting a risk-based approach, the framework ensures that organizations can effectively mitigate AI-related risks and maximize its benefits. It is also aligned with the AI Act from the European Union, providing guidelines for ensuring compliance with emerging regulations on AI ethics and governance.
Let us now break down the key elements—risk management, security, fairness, data quality, and accountability—as outlined in ISO/IEC 42001:2023, which governs the responsible and ethical use of AI systems. These principles within ISO/IEC 42001:2023 are designed to provide a holistic framework for managing the risks, ethical considerations, and operational challenges associated with AI systems. Implementing these principles helps organizations ensure that AI is used in a way that is secure, fair, transparent, accountable, and based on high-quality data, ultimately fostering public trust and ensuring that AI technologies benefit society as a whole.

5.4.1. Risk Management

In ISO/IEC 42001:2023, risk management refers to identifying, assessing, and mitigating risks associated with AI systems. This involves the following:
  • Risk identification: Understanding and cataloging all potential risks AI may introduce, such as safety concerns, biases in algorithms, unintended consequences of decisions made by AI, and data privacy issues.
  • Risk assessment: Evaluating the severity and likelihood of these risks. This includes determining which risks could have a high impact on stakeholders (e.g., customers, employees, society).
  • Risk mitigation: Implementing strategies to minimize or eliminate identified risks. This might involve adjusting AI system designs, ensuring robust validation of models, conducting ongoing monitoring, and setting up mechanisms for user feedback and system updates.
The goal is to ensure that AI systems operate safely and responsibly, minimizing harm to individuals, organizations, and society. In the literature, the first risk management framework for AI systems based on the recently proposed regulatory frameworks for AI is introduced in [60]. Later on, in [61] an extension of the integrated AI risk management framework proposed in [60] is discussed, combining it with existing frameworks for measuring value creation from harnessing AI potential. Additionally, Ref. [62] contributes to the applied field of AI by implementing the proposed risk framework across nine industry-relevant use cases: by employing the proposed risk-aware actual value metric, stakeholders are empowered to make informed decisions that prioritize safety and maximize the potential benefits of AI initiatives. Even more recently, Novelli et al. [63] have proposed the application of the relevant risk categories to specific AI scenarios using a risk assessment model that integrates the AI Act with the risk approach arising from the Intergovernmental Panel on Climate Change and the related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between risk determinants, individual drivers of determinants, and multiple risk types. The effectiveness of the proposed risk management approach is illustrated using large language models as an example of generative AI.
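A common way to operationalize the identification, assessment, and mitigation steps is a simple risk register that scores each risk by likelihood and severity. The sketch below is one such illustrative register; the risk entries, scoring scale, and thresholds are assumptions chosen for the example rather than values prescribed by the standard.

```python
from dataclasses import dataclass

# Sketch of a simple AI risk register implementing the identify/assess/mitigate loop:
# each risk gets a likelihood and severity score (1-5), their product is the risk level,
# and high-level risks must carry a mitigation before deployment. Entries are illustrative.

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (critical)
    mitigation: str | None = None

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    @property
    def level(self) -> str:
        return "high" if self.score >= 15 else "medium" if self.score >= 8 else "low"

register = [
    Risk("Training data under-represents older applicants", 4, 4,
         "Re-sample data and add fairness tests to the release checklist"),
    Risk("Model drift degrades accuracy after deployment", 3, 3,
         "Monthly monitoring with automatic retraining trigger"),
    Risk("Prompt injection exposes internal documents", 3, 5, None),
]

for r in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "" if r.mitigation or r.level != "high" else "  <-- mitigation required"
    print(f"[{r.level:>6}] {r.score:>2}  {r.description}{flag}")
```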

5.4.2. Security

Security in the context of ISO/IEC 42001 addresses protecting AI systems and their data from various threats, such as cyberattacks, unauthorized access, or data manipulation. It includes the following:
  • System protection: Ensuring the AI system’s architecture is designed to be resilient to cyberattacks or breaches.
  • Data protection: Safeguarding the confidentiality, integrity, and availability of data used by the AI. This includes encryption, secure data storage, and proper access controls.
  • Vulnerability management: Identifying weaknesses in AI systems and addressing them proactively, such as patching vulnerabilities, using intrusion detection systems, and ensuring continuous monitoring for security threats.
Security is crucial to prevent the exploitation of AI systems and ensure that sensitive data are protected from unauthorized access or malicious use.
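One concrete data-protection control implied above is verifying the integrity of an approved training dataset before it is used. The short sketch below does this with a SHA-256 digest; the file name and approved digest are placeholders.

```python
import hashlib
from pathlib import Path

# Sketch of a basic data-integrity control: store the SHA-256 digest of an approved
# training dataset and refuse to train if the file on disk no longer matches it.
# File names and the stored digest are placeholders.

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: Path, expected_digest: str) -> bool:
    """Return True only if the dataset is byte-for-byte identical to the approved version."""
    return sha256_of(path) == expected_digest

# Example usage (hypothetical file and digest):
# if not verify_dataset(Path("claims_training_v3.csv"), APPROVED_DIGEST):
#     raise RuntimeError("Training data has been modified; aborting pipeline.")
```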

5.4.3. Fairness

Fairness aims to ensure that AI systems do not discriminate or create biases in their decisions. This principle involves the following:
  • Bias identification: Regularly testing AI models for bias that could result in unfair treatment of individuals or groups, especially in sensitive areas like hiring, credit scoring, and law enforcement.
  • Bias mitigation: Implementing measures to eliminate or reduce biases in training data, model development, and decision-making processes. This may include using diverse datasets, applying fairness-enhancing algorithms, or introducing transparency in how decisions are made by AI systems.
  • Equal treatment: Ensuring that AI models treat individuals or groups equitably, regardless of their characteristics such as race, gender, age, or socio-economic background.
Fairness ensures that AI benefits all stakeholders and does not perpetuate or amplify inequalities in society.
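A routine bias-identification test of this kind is the disparate-impact (adverse-impact) ratio, which compares favourable-outcome rates between a protected group and a reference group and is often flagged when it falls below 0.8 (the “four-fifths” rule of thumb). The sketch below uses synthetic approval data purely for illustration.

```python
# Sketch of a routine bias check: the disparate-impact ratio compares the rate of
# favourable outcomes between a protected group and a reference group; a common
# rule of thumb flags ratios below 0.8 ("four-fifths rule"). The data are synthetic.

def selection_rate(outcomes: list[int]) -> float:
    """Share of favourable outcomes (1 = approved, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    return selection_rate(protected) / selection_rate(reference)

# Synthetic loan-approval outcomes for two applicant groups.
group_a = [1] * 42 + [0] * 58    # 42% approval
group_b = [1] * 60 + [0] * 40    # 60% approval

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")          # 0.70
print("Potential bias flagged:", ratio < 0.8)          # True
```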

5.4.4. Data Quality

Data quality in AI is essential to ensure that AI systems function properly and make accurate, reliable decisions. This involves the following:
  • Data accuracy: Ensuring that the data used by AI models is correct, precise, and free from errors. This is crucial to avoid flawed predictions or harmful outcomes.
  • Data completeness: Ensuring that the data used is comprehensive and covers all relevant factors necessary for making informed decisions.
  • Data relevance: The data must be appropriate and aligned with the goals of the AI system, ensuring that only the necessary data are collected and used.
  • Data timeliness: Using up-to-date data to prevent the AI system from making outdated or irrelevant decisions.
  • Data integrity: Ensuring that data are not corrupted or manipulated in ways that would affect the reliability of AI decisions.
High-quality data are the foundation of trustworthy AI systems, and poor data quality can lead to inaccurate models and unfair outcomes.
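Such dimensions lend themselves to automated checks. The sketch below runs elementary completeness, timeliness, and validity checks over a toy dataset; the records, fields, and thresholds are illustrative.

```python
from datetime import date, timedelta

# Sketch of lightweight data-quality checks covering three of the dimensions above:
# completeness (missing values), timeliness (record age), and validity (value ranges).
# The records, fields, and thresholds are illustrative.

records = [
    {"policy_id": "P-001", "age": 34,   "premium": 420.0, "updated": date(2025, 1, 10)},
    {"policy_id": "P-002", "age": None, "premium": 310.0, "updated": date(2023, 6, 2)},
    {"policy_id": "P-003", "age": 57,   "premium": -15.0, "updated": date(2025, 2, 1)},
]

def completeness(field: str) -> float:
    return sum(r[field] is not None for r in records) / len(records)

def stale_records(max_age_days: int, today: date = date(2025, 3, 1)) -> list[str]:
    cutoff = today - timedelta(days=max_age_days)
    return [r["policy_id"] for r in records if r["updated"] < cutoff]

def invalid_premiums() -> list[str]:
    return [r["policy_id"] for r in records if r["premium"] is not None and r["premium"] <= 0]

print("Completeness of 'age':", completeness("age"))       # ~0.67
print("Stale records (>365 days):", stale_records(365))    # ['P-002']
print("Invalid premiums:", invalid_premiums())              # ['P-003']
```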

5.4.5. Accountability

Accountability refers to ensuring that there is a clear line of responsibility for decisions made by AI systems. This encompasses the following:
  • Clear roles and responsibilities: Defining who is responsible for the development, deployment, and monitoring of AI systems within an organization. This includes assigning responsibility for addressing any harm caused by AI systems.
  • Auditability: Ensuring that AI systems and their decision-making processes are transparent and can be audited. This includes maintaining logs of decisions, actions taken, and data used in decision-making to facilitate accountability.
  • Redress mechanisms: Providing avenues for individuals or groups negatively affected by AI systems to seek compensation, corrections, or improvements. This ensures that people have a way to address grievances or harms resulting from AI decisions.
  • Transparency: Organizations must be transparent about how their AI systems work, how decisions are made, and how data are used. This transparency enables stakeholders to understand the AI’s reasoning and hold the system accountable for its actions.
Accountability is essential for building trust in AI systems and ensuring that organizations take responsibility for the impacts of their AI-driven decisions.
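As an illustration of the auditability and redress points above, the sketch below appends structured decision records, including the accountable owner, model version, explanation, and appeal channel, to a simple log file; all field names and values are hypothetical.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Sketch of an append-only decision log supporting auditability and redress:
# every automated decision records who/what decided, on which inputs and model
# version, and the explanation given to the affected person. Fields are illustrative.

@dataclass
class DecisionRecord:
    timestamp: str
    model_version: str
    responsible_owner: str        # accountable role, not just the system
    input_summary: dict
    decision: str
    explanation: str
    appeal_channel: str           # where the affected person can seek redress

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="claims-triage-2.3.1",
    responsible_owner="Head of Claims Operations",
    input_summary={"claim_type": "vehicle", "estimated_amount_eur": 2400},
    decision="routed to manual review",
    explanation="Damage estimate exceeds automatic settlement threshold",
    appeal_channel="claims-appeals@example.com",
))
```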

5.5. ISO/IEC 22989:2022

ISO/IEC 22989:2022 complements this by providing a comprehensive taxonomy and foundational framework for AI concepts. Published earlier in 2022, this standard defines the terminology, principles, and concepts relevant to AI systems. It focuses on the following:
  • Establishing a shared understanding of AI principles;
  • Clarifying how to classify AI systems based on their characteristics and applications;
  • Supporting organizations in designing, implementing, and managing AI in line with ethical, technical, and operational best practices.
Together, these two standards create a solid foundation for organizations to develop AI responsibly. ISO/IEC 22989 provides the conceptual groundwork, while ISO/IEC 42001 translates these principles into a structured management approach. Both aim to ensure AI technologies are safe, fair, transparent, and aligned with societal values.

5.6. Discussion

The ISO/IEC standards are increasingly being adopted to provide structured guidance for AI governance. For example, ISO/IEC 38507 [64] outlines principles for corporate IT governance, including oversight mechanisms applicable to AI systems. While normative in scope, these standards are seeing practical uptake in multinational firms.
A notable example is Siemens, which has implemented ISO/IEC 38507 in its internal AI governance protocols to align product development with organizational risk policies. Their adoption process involved cross-departmental audits and formal review boards to assess AI deployment compliance.
Similarly, a 2022 empirical study conducted by the German Federal Ministry for Economic Affairs [65] examined ISO/IEC 22989 and 23894 in pilot implementations across six companies in the manufacturing and financial sectors. The study reports positive feedback on the clarity of risk classification and stakeholder mapping, though participants noted challenges in applying abstract standards to rapidly evolving AI tools.
Moreover, qualitative data collected through interviews with AI ethics officers and IT governance leads reveal that while ISO/IEC standards offer a shared vocabulary, organizations often customize their application depending on local regulations, market needs, and internal maturity levels [59].
These findings suggest that the value of ISO/IEC standards lies less in their prescriptive power and more in their role as reference models adaptable to varying organizational contexts.

6. Conclusions

The governance of artificial intelligence (AI) stands at a crossroads, requiring a delicate balance between fostering innovation and ensuring ethical accountability. As AI continues to shape critical sectors such as healthcare, finance, environmental sustainability, and Insurtech, its ethical, social, and legal implications must be carefully managed. This study has provided a synoptic view of AI governance across the United States, China, the European Union, Japan, Canada, and Brazil, illustrating how regulatory approaches reflect differing political, economic, and ethical priorities.
The EU’s AI Act adopts a risk-based framework, imposing stringent oversight on high-risk AI applications while promoting transparency and accountability. The United States, through executive orders and sector-specific initiatives, emphasizes self-regulation, innovation, and national security concerns. Meanwhile, China integrates AI regulation with state ideology, ensuring compliance with socialist principles and exerting centralized control over AI-driven applications. These variations underscore the need for global regulatory harmonization, ensuring that AI systems are both interoperable and aligned with universal ethical principles.
A key takeaway from this study is that regulatory frameworks alone are insufficient to ensure responsible AI governance. Standardization efforts, particularly ISO/IEC 42001:2023 and ISO/IEC 22989:2022, provide an essential foundation for aligning AI development with ethical principles:
The “ISO/IEC 42001: AI Management Systems” standard establishes a comprehensive framework for AI governance, defining policies for risk management, security, fairness, transparency, and accountability. By integrating structured AI risk assessment methodologies, it ensures that organizations develop AI in compliance with regulatory obligations, mitigating potential harms while maximizing technological benefits.
A complementary standard, “ISO/IEC 22989: AI Concepts and Taxonomy”, offers a shared vocabulary and classification framework for AI systems. By standardizing definitions, capabilities, and risk categorizations, it enhances interoperability across jurisdictions and facilitates coherent governance frameworks.
Together, these standards establish best practices for AI ethics, security, and regulatory compliance, ensuring that AI-driven decision-making remains transparent, accountable, and aligned with societal values.
Looking ahead, AI governance must overcome jurisdictional boundaries and encourage a global and transdisciplinary dialog that involves industry leaders, ethicists, and policymakers, as well as civil society as a whole. It is important that a human-centric approach to AI regulation prioritizes the following:
  • Ethical “algor-ethics” frameworks to mitigate bias and discrimination.
  • Clear accountability mechanisms ensuring liability for AI-driven decisions.
  • Global cooperation on AI risk assessment, particularly for high-risk systems such as generative AI and autonomous decision-making models.
  • Public awareness initiatives to promote AI literacy and prevent misuse.
The AI revolution is not just technological—it is profoundly ethical and social. The question is not whether AI should be regulated, but how to govern AI in a way that safeguards human dignity, fosters innovation, and ensures long-term sustainability. Without proactive governance, we risk being governed by AI systems rather than governing them.
By leveraging robust regulatory frameworks and international standards such as ISO/IEC 42001 and ISO/IEC 22989 and fostering cross-sectoral collaboration, we can shape a future where AI serves as a force for societal good rather than an instrument of unchecked power. Only through co-responsibility among governments, businesses, and civil society can AI truly become a tool for progress, equity, and human flourishing.

Author Contributions

Conceptualization, L.R.C. and A.Y.Z.; methodology, L.R.C.; formal analysis, L.R.C.; investigation, L.R.C.; resources, L.R.C.; data curation, L.R.C.; writing—original draft preparation, L.R.C.; writing—review and editing, L.R.C. and A.Y.Z.; visualization, L.R.C.; supervision, A.Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Müller, V.C. Ethics of Artificial Intelligence and Robotics. In Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2020; pp. 1–70. [Google Scholar]
  2. Rome Call for AI Ethics, February 28th, 2020. Available online: https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pont-acd_life_doc_20202228_rome-call-for-ai-ethics_en.pdf (accessed on 23 January 2025).
  3. Green, B.P. The Vatican and Artificial Intelligence: An Interview with Bishop Paul Tighe. J. Moral Theol. 2022, 11, 212–231. [Google Scholar]
  4. Benanti, P. Il Crollo di Babele. Che Fare Dopo la Fine del Sogno di Internet? San Paolo Edizioni: Milano, Italy, 2024. [Google Scholar]
  5. ISO/IEC 22989:2022; AI Concepts and Terminology. ISO: Geneva, Switzerland, 2022. Available online: https://www.iso.org/standard/74296.html (accessed on 18 February 2025).
  6. ISO/IEC 42001:2023; AI Management Systems. ISO: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/81230.html (accessed on 18 February 2025).
  7. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
  8. Montomoli, J.; Bitondo, M.M.; Cascella, M.; Rezoagli, E.; Romeo, L.; Bellini, V.; Semeraro, F.; Gamberini, E.; Frontoni, E.; Agnoletti, V.; et al. Algor-ethics: Charting the ethical path for AI in critical care. J. Clin. Monit. Comput. 2024, 38, 931–939. [Google Scholar] [CrossRef] [PubMed]
  9. Valerio, C. La Tecnologia È Religione; Giulio Einaudi Editore: Torino, Italy, 2023; ISBN 978-88-06-25186-4. [Google Scholar]
  10. Ellul, J. La Technique ou L’ENJEU du Siècle; Wikimedia Foundation, Inc.: San Francisco, CA, USA, 1954. [Google Scholar]
  11. Habermas, J. Technology and Science as Ideology; Columbia University Press: New York, NY, USA, 1968. [Google Scholar]
  12. Tonelli, G. Materia. La Magnifica Illusione; Feltrinelli: Gargnano, Italy, 2023. [Google Scholar]
  13. Heidegger, M. The Question Concerning Technology and Other Essays; Garland Publishing, Inc.: New York, NY, USA, 1977. [Google Scholar]
  14. Rifkin, J. The Biotech Century; Tarcher: New York, NY, USA, 1998. [Google Scholar]
  15. Dicastery for the Doctrine of the Faith, Dicastery for Culture and Education. Antiqua et Nova. Note on the Relationship between Artificial Intelligence and Human Intelligence. Available online: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html (accessed on 28 January 2025).
  16. Casalone, C. Una ricerca etica condivisa nell’era digitale. La Civiltà Cattol. 2020, 2, 30–43. [Google Scholar]
  17. Benanti, P. The urgency of an algorethics. Discov. Artif. Intell. 2023, 3, 11. [Google Scholar] [CrossRef]
  18. Floridi, L. The Fourth Revolution: How the Infosphere is Reshaping Human Reality; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  19. OECD Artificial Intelligence Policy Observatory. Available online: https://oecd.ai/en/ (accessed on 21 March 2025).
  20. OECD AI Incidents Tracker. Available online: https://oecd.ai/en/incidents (accessed on 21 March 2025).
  21. NIST AI Risk Management Framework. Available online: https://www.nist.gov/itl/ai-risk-management-framework (accessed on 21 March 2025).
  22. Stanford HAI—AI Index. Available online: https://hai.stanford.edu/ai-index (accessed on 21 March 2025).
  23. Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine Bias—There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks; Benton Institute for Broadband & Society: Wilmette, IL, USA, 2016. [Google Scholar]
  24. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
  25. McCradden, M.D.; Stephenson, E.A.; Anderson, J.A. Clinical research underlies ethical integration of healthcare artificial intelligence. Nat. Med. 2020, 26, 1325–1326. [Google Scholar] [CrossRef]
  26. Dastin, J. Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women. Reuters. 2018. Available online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (accessed on 18 February 2025).
  27. Kramer, A.D.I.; Guillory, J.E.; Hancock, J.T. Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. USA 2014, 111, 8788–8790. [Google Scholar] [CrossRef]
  28. Bartlett, R.; Morse, A.; Stanton, R.; Wallace, N. Consumer-lending discrimination in the FinTech Era. J. Financ. Econ. 2022, 143, 30–56, ISSN 0304-405X. [Google Scholar] [CrossRef]
  29. Hill, D.; O’Connor, C.D.; Slane, A. Police use of facial recognition technology: The potential for engaging the public through co-constructed policy-making. Int. J. Police Sci. Manag. 2022, 24, 325–335. [Google Scholar] [CrossRef]
  30. Krishnapriya, K.S.; Vítor, A.; Kushal, V.; Michael, K.; Kevin, B. Issues Related to Face Recognition Accuracy Varying Based on Race and Skin Tone. IEEE Trans. Technol. Soc. 2020, 1, 8–20. [Google Scholar] [CrossRef]
  31. Rhim, J.; Lee, J.-H.; Chen, M.; Lim, A. A Deeper Look at Autonomous Vehicle Ethics: An Integrative Ethical Decision-Making Framework to Explain Moral Pluralism. Front. Robot. AI 2021, 8, 2021. [Google Scholar]
  32. Tang, L.; Li, J.; Fantus, S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit. Health 2023, 9, 20552076231186064. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  33. Acquaviva, V.; Barnes, E.A.; Gagne, D.J.; McKinley, G.A.; Thais, S. Ethics in climate AI: From theory to practice. PLoS Clim. 2024, 3, e0000465. [Google Scholar] [CrossRef]
  34. Murphy, K.; Di Ruggiero, E.; Upshur, R.; Willison, D.J.; Malhotra, N.; Cai, J.C.; Malhotra, N.; Lui, V.; Gibson, J. Artificial intelligence for good health: A scoping review of the ethics literature. BMC Med. Ethics 2021, 22, 14. [Google Scholar] [CrossRef]
  35. Morley, J.; Elhalal, A.; Garcia, F.; Kinsey, L.; Mokander, J.; Floridi, L. Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds Mach. 2021, 31, 239–256. [Google Scholar] [CrossRef]
  36. Li, F.; Ruijs, N.; Lu, Y. Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare. AI 2023, 4, 28–53. [Google Scholar] [CrossRef]
  37. van Wynsberghe, A. Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
  38. Message by Pope Francis for the 2024 World Day of Peace. Available online: https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html (accessed on 18 February 2025).
  39. Farhud, D.D.; Zokaei, S. Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iran. J. Public Health 2021, 50, i–v. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  40. Capraro, V.; Lentsch, A.; Acemoglu, D.; Akgun, S.; Akhmedova, A.; Bilancini, E.; Bonnefon, J.F.; Brañas-Garza, P.; Butera, L.; Douglas, K.M.; et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus. 2024, 3, 191. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  41. Ricciardi Celsi, L.; Valli, A. Applied Control and Artificial Intelligence for Energy Management: An Overview of Trends in EV Charging, Cyber-Physical Security and Predictive Maintenance. Energies 2023, 16, 4678. [Google Scholar] [CrossRef]
  42. Justin, G.; Christopher, W.; James, M. Precision Medicine for Long and Safe Permanence of Humans in Space; Chapter 16—Current AI Technology in Space; Chayakrit, K., Ed.; Academic Press: Cambridge, MA, USA, 2025; pp. 239–250. ISBN 9780443222597. [Google Scholar] [CrossRef]
  43. Ranucci Brandimarte, S.; Di Francesco, G. Insurtech or Out, Egea, 2023. Available online: https://insurtechitaly.com/insurtech-or-out/ (accessed on 18 February 2025).
  44. Andreozzi, A.; Ricciardi Celsi, L.; Martini, A. Enabling the Digitalization of Claim Management in the Insurance Value Chain Through AI-Based Prototypes: The ELIS Innovation Hub Approach; Digitalization Cases Vol. 2. Management for Professionals; Urbach, N., Roglinger, M., Kautz, K., Alias, R.A., Saunders, C., Wiener, M., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar]
  45. Maiano, L.; Montuschi, A.; Caserio, M.; Ferri, E.; Kieffer, F.; Germanò, C.; Baiocco, L.; Celsi, L.R.; Amerini, I.; Anagnostopoulos, A. A deep-learning–based antifraud system for car-insurance claims. Expert Syst. Appl. 2023, 231, 120644. [Google Scholar] [CrossRef]
  46. Atanasious, M.M.H.; Becchetti, V.; Giuseppi, A.; Pietrabissa, A.; Arconzo, V.; Gorga, G.; Gutierrez, G.; Omar, A.; Pietrini, M.; Rangisetty, M.A.; et al. An Insurtech Platform to Support Claim Management Through the Automatic Detection and Estimation of Car Damage from Pictures. Electronics 2024, 13, 4333. [Google Scholar] [CrossRef]
  47. Salentinig, A.; Iannelli, G.C.; Gamba, P. Data-, Feature-and Decision-Level Fusion for Classification; Elsevier: Amsterdam, The Netherlands, 2023. [Google Scholar]
  48. Executive Order 14110 of October 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Available online: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (accessed on 18 February 2025).
  49. Quantum Computing Cybersecurity Preparedness Act, 12/21/2022. Available online: https://www.congress.gov/bill/117th-congress/house-bill/7535 (accessed on 18 February 2025).
  50. Interim Administrative Measures for Generative Artificial Intelligence Services, August 15th, 2023. Available online: https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed on 18 February 2025).
  51. A Deep-See on DeepSeek: How Italy’s Ban Might Shape AI Oversight, January 31st, 2025. Available online: https://www.forbes.com/sites/nizangpackin/2025/01/31/a-deep-see-on-deepseek-how-italys-ban-might-shape-ai-oversight/ (accessed on 18 February 2025).
  52. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 18 February 2025).
  53. Social Principles of Human-Centric AI, Government of Japan. Available online: https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf (accessed on 21 March 2025).
  54. Algorithmic Impact Assessment, Government of Canada. Available online: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (accessed on 21 March 2025).
  55. Regulatory Framework for Artificial Intelligence Passes in Brazil’s Senate. Available online: https://www.mattosfilho.com.br/en/unico/framework-artificial-intelligence-senate/#:~:text=2%2C338%2F2023%20to%20establish%20a,of%20safe%20and%20reliable%20systems (accessed on 21 March 2025).
  56. Statista. Available online: https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide (accessed on 21 March 2025).
  57. Madieka, T.; Ilnicki, R. AI Investment: EU and Global Indicators; European Parliamentary Research Service: Brussels, Belgium, 2024; Available online: https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/760392/EPRS_ATA(2024)760392_EN.pdf (accessed on 21 March 2025).
  58. Data Management Body of Knowledge. Available online: https://www.dama.org/cpages/body-of-knowledge (accessed on 18 February 2025).
  59. Jarrod, A. The Chief AI Officer’s Handbook: Master AI Leadership with Strategies to Innovate, Overcome Challenges, and Drive Business Growth; Packt Publishing: Birmingham, UK, 2025. [Google Scholar]
  60. De Mauro, A.; Pacifico, M. Data-Driven Transformation. Maximise Business Value with Data Analytics the FT Guide; FT Publishing: New York, NY, USA, 2024. [Google Scholar]
  61. Giudici, P.; Centurelli, M.; Turchetta, S. Artificial Intelligence risk measurement. Expert Syst. Appl. 2024, 235, 121220. [Google Scholar] [CrossRef]
  62. Ricciardi Celsi, L. The Dilemma of Rapid AI Advancements: Striking a Balance between Innovation and Regulation by Pursuing Risk-Aware Value Creation. Information 2023, 14, 645. [Google Scholar] [CrossRef]
  63. Novelli, C.; Casolari, F.; Rotolo, A.; Taddeo, M.; Floridi, L. Taking AI risks seriously: A new assessment model for the AI Act. AI Soc. 2024, 39, 2493–2497. [Google Scholar] [CrossRef]
  64. ISO/IEC 38507:2022; Information Technology—Governance of IT. ISO: Geneva, Switzerland, 2022. Available online: https://www.iso.org/standard/56641.html (accessed on 21 March 2025).
  65. German Standardization Roadmap on Artificial Intelligence. Available online: https://www.dke.de/resource/blob/2008048/99bc6d952073ca88f52c0ae4a8c351a8/nr-ki-english---download-data.pdf (accessed on 21 March 2025).
Table 1. Four-phase model for operationalizing algor-ethics. For each phase, the table reports the ethical anchoring, the corresponding technical instrument according to the description above, and the related governance mechanism.
Phase | Ethical Anchoring | Technical Instrument | Governance Mechanism
Design | Fairness, inclusion | Bias audits, stakeholder mapping | Ethics-by-design checklist
Development | Accountability, explainability | Fairness-aware modeling and explainability constraints | Internal red-teaming
Deployment | Transparency, safety | Ethical KPIs (e.g., transparency score) | Human-in-the-loop review
Governance | Co-responsibility | Risk scoring aligned with ISO/IEC 42001 | Independent ethics board
Table 2. Synoptic view of how AI regulation is being addressed across the different jurisdictions of the USA, China, the EU, Japan, Canada, and Brazil.
Jurisdiction | USA | China | EU | Japan | Canada | Brazil
AI market size (as of 2024) | USD 146.09 billion (19.33% CAGR) [56,57] | USD 2.54 billion (31.7% CAGR) [56,57] | USD 66.4 billion (33.2% CAGR) [56,57] | USD 10.15 billion (26.30% CAGR) [56,57] | USD 6.5 billion (33.9% CAGR) [56,57] | USD 4.42 billion (26.24% CAGR) [56,57]
Current regulation | [48] | [50] | [52] | [53] | [54] | [55]
Guidelines for GenAI | XX
Cybersecurity measures for advanced AI | XXXX X
Limitations to competition | X
Adherence to core values of socialism | X
Risk-based framework | X XX
Regulatory approach | Sector-specific regulations with a decentralized approach, leading to potential fragmentation | Vertical approach with discrete laws targeting specific AI issues, such as recommendation algorithms and deep synthesis tools | Horizontal framework with the AI Act, applying flexible standards across various AI applications | Keeps AI rules light and practical to boost innovation, trusting companies to “do the right thing” while stepping in only for critical risks (e.g., medical AI), and lying in the middle ground between the EU’s strict laws and the US’s hands-off approach | Operational risk management, public-sector first and human-rights based | Hybrid model, combining right-based principles (inspired by EU AI Act) with developing-economy flexibility; anchored in data protection, requiring AI systems to comply with privacy rules
Scope of coverage | Varies by sector, with some areas lacking specific AI regulations, leading to potential gaps | Focused on specific applications, with rapid implementation but potential for uneven coverage | Comprehensive, covering all AI systems with a focus on fundamental rights and ethical principles | Focuses on human-centric applications in healthcare, mobility and manufacturing, excluding military AI from public policy discussions | Focused on automated decision systems used by federal agencies in core applications (pensions, immigration, tax administration, law enforcement) | High-risk sectors (healthcare diagnostics, credit scoring, public services such as facial recognition in policing), excluding research AI and military applications
Risk classification | Lacks a unified risk classification, leading to inconsistencies across sectors | Emphasizes control over AI development to safeguard against losing control, with a focus on specific high-risk applications | Risk-based classification system, imposing stricter requirements on high-risk AI systems | Voluntary risk assessment | Four-tier classification, distinguishing among minimal (chatbots), moderate (resume screening), high (criminal risk assessment), and very high risk (healthcare diagnostics) | Three-tier classification, distinguishing among minimal (chatbots), medium (HR tools), and high risk (medical AI)
Enforcement mechanisms | Enforcement varies by sector, with potential challenges in ensuring consistent compliance | Centralized directives allowing rapid implementation, but with potential constraints on public transparency and external oversight | Centralized enforcement with significant penalties for non-compliance, ensuring adherence to regulations | METI-guided industry standards | Mandatory compliance for the public sector, voluntary compliance (guidelines only for the private sector) | The primary enforcer is the national Data Protection Authority with fines up to 2% of revenue and mandatory incident reporting
| Pros | Encourages innovation through a flexible, sector-specific approach; allows industries to develop tailored guidelines | Enables swift policy implementation; focuses on specific high-risk applications; aligns with national strategic goals | Establishes clear guidelines protecting fundamental rights; promotes ethical AI development; provides a unified framework across member states | Agile, preserves business autonomy | Practical implementation tool (AIA) freely available; focuses on demonstrable harms rather than on theoretical risks; strong transparency requirements (public AI registries) | Strong fundamental rights protection (mirroring EU standards); pragmatic tiered-risk approach for a developing-economy context; proven track record in enforcement |
| Cons | Results in fragmented regulations; potential gaps in oversight; inconsistencies across sectors | Lacks comprehensive coverage; potential for uneven enforcement; concerns about transparency and external oversight | May struggle to keep pace with rapid AI advancements; potential for over-regulation hindering innovation | Weak enforcement; relies on corporate goodwill | Limited to federal government, no private-sector mandate yet; no financial penalties reduce compliance urgency | Compliance confusion; limited resources for enforcement outside major cities; no GenAI rules |
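As a purely illustrative complement to the risk-classification row of Table 2, the short sketch below encodes the Canadian and Brazilian tier labels reported in the table as simple lookup tables; the classify helper, any use case not named in the table, and the fallback label are assumptions introduced here and are not part of either regulatory framework.

```python
# Tier labels follow the risk-classification row of Table 2;
# the mapping itself is an illustrative simplification, not legal guidance.
CANADA_TIERS = {
    "chatbot": "minimal",
    "resume screening": "moderate",
    "criminal risk assessment": "high",
    "healthcare diagnostics": "very high",
}

BRAZIL_TIERS = {
    "chatbot": "minimal",
    "HR tool": "medium",
    "medical AI": "high",
}

def classify(use_case: str, tiers: dict) -> str:
    """Return the risk tier reported for a use case, or flag it for case-by-case assessment."""
    return tiers.get(use_case, "unclassified: requires case-by-case assessment")

if __name__ == "__main__":
    print("Canada:", classify("resume screening", CANADA_TIERS))   # moderate
    print("Brazil:", classify("medical AI", BRAZIL_TIERS))         # high
    print("Brazil:", classify("autonomous vehicle", BRAZIL_TIERS)) # unclassified
```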
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
