Perspectives on Managing AI Ethics in the Digital Age
Abstract
1. Introduction
2. Motivation and Rationale Behind the Literature Review
2.1. Motivation
- Ethical frameworks, such as algor-ethics, to mitigate bias and discrimination;
- Clear accountability mechanisms ensuring liability for AI-driven decisions;
- Global cooperation on AI risk assessment, particularly for high-risk systems such as generative AI and autonomous decision-making models;
- Public awareness initiatives to promote AI literacy and prevent misuse.
2.2. Literature Selection and Scope
- Peer-reviewed articles/conference papers addressing AI ethics, governance, or regulation;
- Policy documents from authoritative bodies (OECD, EU, NIST, Vatican);
- Empirical case studies of AI failures (e.g., COMPAS, healthcare triage bias);
- Relevance to one of the core dimensions: ethics, regulation, strategic governance, or standardization;
- Contemporary focus, prioritizing the most recent works where possible.
- Philosophical and ethical foundations (e.g., algor-ethics, techno-humanism);
- Comparative governance and regulation (e.g., AI Act, US Executive Orders, Chinese AI regulation guidelines);
- Standards and managerial frameworks (e.g., ISO/IEC 22989:2022, ISO/IEC 42001:2023, DAMA DMBOK).
- Ensures replicability via PRISMA-ScR protocols;
- Preserves conceptual depth through foundational texts;
- Bridges theory–practice gaps by linking ethical principles to empirical cases.
3. Algor-Ethics: Ethics for Algorithms Between Technological Progress and Ethical Responsibility
3.1. The Distinctive Traits of Algor-Ethics
- A holistic, humanistic foundation: Rooted in a vision of human dignity, algor-ethics insists on keeping human beings—not just as users but as moral agents—at the center of algorithmic ecosystems. This goes beyond risk mitigation to focus on meaning, purpose, and justice.
- Co-responsibility and distributed ethics: Algor-ethics moves past individual accountability to frame responsibility as distributed across designers, deployers, regulators, and users. This counters the limitations of blame-based approaches in multi-agent systems.
- Transdisciplinarity as praxis: Rather than viewing philosophy, engineering, policy, and theology as separate domains, algor-ethics weaves them together. This is not just epistemological—it results in practical governance blueprints, as illustrated by its synergy with standards such as ISO/IEC 22989:2022 and ISO/IEC 42001:2023.
- Design phase: Ethical risk assessment is embedded alongside technical feasibility studies. Design documentation includes ethical assumptions, stakeholder mapping, and trade-off rationales.
- Development phase: Algorithms are stress-tested for biases, using fairness-aware modeling and explainability constraints. Developers undergo ethics-by-design training based on algor-ethics principles.
- Deployment phase: Monitoring dashboards include ethical performance metrics (e.g., inclusion rate, transparency score). Human-in-the-loop validation ensures meaningful oversight.
- Governance phase: A standing ethics board (internal or external) co-defines risk thresholds and escalates non-compliance. This aligns with ISO/IEC 42001's governance clauses but anchors them in a shared ethical framework (see Section 5 for further detail on the above-mentioned ISO/IEC standard). A minimal code sketch of how such thresholds and escalation might be wired together follows this list.
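The following Python sketch, offered purely for illustration, shows how deployment-phase ethical KPIs and governance-phase escalation could be combined in practice. The metric names (inclusion_rate, transparency_score) and the threshold values are assumptions introduced here, not requirements drawn from ISO/IEC 42001.

```python
from dataclasses import dataclass

# Hypothetical thresholds; in practice these would be co-defined by the ethics board.
THRESHOLDS = {"inclusion_rate": 0.90, "transparency_score": 0.75}

@dataclass
class EthicalKpiReport:
    inclusion_rate: float      # e.g., share of demographic groups meeting parity targets
    transparency_score: float  # e.g., fraction of decisions shipped with usable explanations

def breached_kpis(report: EthicalKpiReport) -> list:
    """Return the KPIs below threshold, to be escalated per the governance phase."""
    return [kpi for kpi, minimum in THRESHOLDS.items()
            if getattr(report, kpi) < minimum]

report = EthicalKpiReport(inclusion_rate=0.93, transparency_score=0.68)
print(breached_kpis(report))  # ['transparency_score'] -> escalate to the ethics board
```

In a real deployment, the thresholds would be revisited periodically by the ethics board as the system and its operating context evolve.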
- AI managers and chief AI/data officers: as a framework for aligning ethical principles with technical governance (e.g., in ISO/IEC 42001 implementation);
- Regulators and policy analysts: to bridge normative principles and compliance instruments;
- Researchers and ethicists: offering an integrative scaffolding for empirical, conceptual, and normative studies.
3.2. The Need for Transdisciplinarity
- Emergence of new qualities: Within a system, interactions between different elements produce characteristics that cannot be deduced from the individual components.
- Connection between systems: Technological systems cannot be analyzed in isolation but in relation to the social, economic, and cultural contexts in which they operate.
- Moral discernment: The interaction between human agents and technological artifacts requires a critical judgment that considers the ethical implications of choices.
- Dignity of the person: Every technological system must respect the rights and dignity of individuals, avoiding discrimination or exclusion.
- Justice and fairness: Algorithms must be designed to ensure fair outcomes without favoring certain groups at the expense of others.
- Transparency and responsibility: It is essential that AI systems are understandable and that their creators can be held accountable for their decisions.
3.3. Case Studies: Bridging Ethical Reflection with Data-Driven Insights and Regulatory Evidence
- Global AI investment reached USD 189 billion in 2023, with significant concentration in the US and China;
- Instances of documented algorithmic harm have doubled since 2020;
- The adoption of AI in healthcare, insurance, and finance continues to outpace regulatory readiness.
- COMPAS risk scoring tool: Investigative work by ProPublica [23] revealed racial bias in the COMPAS algorithm used to predict criminal recidivism, misclassifying Black defendants at disproportionately higher rates. This case has become emblematic of opaque, high-impact AI in the justice system.
- Healthcare triage bias: Obermeyer et al. [24] found that commercial algorithms used for patient prioritization in the US healthcare system underestimated the needs of Black patients by using cost as a proxy for health—an ethical and statistical failure that illustrates the importance of representative data and fairness-aware design.
- AI in legal automation: McCradden et al. [25] raise concerns about deploying machine learning models in clinical and legal domains without clear accountability frameworks or model explainability, pointing to both epistemic and liability gaps.
- Facial recognition bans: After documented high false-positive rates for non-white individuals, multiple U.S. cities (e.g., San Francisco, Portland, Boston) enacted moratoria or bans on government use of facial recognition technology. These actions underscore how local governance can intervene to prevent algorithmic discrimination when national policy lags.
- AI and the COVID-19 pandemic: The early months of the pandemic saw widespread reliance on AI-driven diagnostics, resource forecasting, and misinformation moderation—often without adequate validation. Models predicting ICU needs, for instance, were deployed before peer review, and content moderation algorithms failed to flag false information about vaccines. These cases reveal the dangers of premature deployment and the lack of agile regulatory oversight under emergency conditions.
- Criminal justice: the COMPAS recidivism algorithm’s racial bias [23] revealed how “neutral” tools perpetuate discrimination when fairness audits are absent—a core focus of algor-ethics’ justice principle (see Section 3.2).
- Healthcare: the study of biased triage algorithms in [24] showed how training data skewed by cost (not clinical need) violated dignity and transparency. Algor-ethics mandates representative data validation at design stage.
- Employment: Amazon’s gender-biased hiring tool [26] exemplified the accountability gap—no party was liable for harm. Algor-ethics assigns clear ownership via its lifecycle, dynamic approach.
1. Dignity of the person
   - Empirical anchor: Facebook's emotional contagion experiment [27] manipulated users without consent, violating autonomy.
   - Algor-ethical response: Human oversight protocols (e.g., clinician review for medical AI) and consent workflows for data use are needed.
2. Justice and fairness
   - Empirical anchor: Racial bias in mortgage-approval algorithms [28] showed how historical data entrenches inequity.
   - Algor-ethical response: Mandating bias red-teaming (e.g., NIST's AI RMF) and equity impact assessments pre-deployment.
3. Transparency and accountability
   - Empirical anchor: Clearview AI's opaque facial recognition use [29] highlighted risks of undocumented systems.
   - Algor-ethical response: Enforcement of audit trails (ISO/IEC 42001) and explainability thresholds (e.g., EU AI Act's Article 13).
3.3.1. Case Study 1: Diagnostic Bias in AI Medical Tools
- Design Phase: Ensure representativeness in training datasets by incorporating demographically diverse patient images and clinical data. This step directly addresses the principle of justice by mitigating structural bias in data.
- Deployment Phase: Implement human-in-the-loop mechanisms, such as clinician override capabilities, to uphold dignity by preserving clinical agency in high-stakes decisions.
- Monitoring Phase: Conduct disaggregated performance evaluations based on demographic variables to promote transparency and enable continual fairness auditing in real-world settings (a minimal disaggregated check is sketched after this list).
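As a purely illustrative sketch of the monitoring phase, the following Python snippet computes per-group sensitivity (true-positive rate) for a binary diagnostic model. The group labels and records are synthetic placeholders for real demographic variables, not data from the cited studies.

```python
from collections import defaultdict

def disaggregated_sensitivity(records):
    """Per-group sensitivity (true-positive rate) for a binary diagnostic model.

    `records` is an iterable of (group, y_true, y_pred) tuples with binary labels.
    """
    tp, pos = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

records = [("A", 1, 1), ("A", 1, 0), ("B", 1, 1), ("B", 1, 1), ("B", 0, 1)]
print(disaggregated_sensitivity(records))  # {'A': 0.5, 'B': 1.0} -> the gap flags a fairness issue
```

A persistent sensitivity gap between groups would trigger the fairness-auditing escalation described above.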
3.3.2. Case Study 2: Ethical Trade-Offs in AV Decision-Making
- Design Phase: Incorporate mechanisms for public deliberation and stakeholder consultation to inform the ethical weighting of outcomes, aligning system objectives with societal conceptions of justice.
- Deployment Phase: Equip AVs with forensic tools such as black-box recorders to support accountability through post-incident auditing and responsibility attribution (a toy recorder is sketched after this list).
- Monitoring Phase: Introduce continuous dynamic risk recalibration based on evolving empirical data to adaptively safeguard dignity in unforeseen or novel driving contexts.
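The black-box recorder mentioned above can be approximated, for illustration only, by a fixed-capacity event log. The event fields and capacity below are assumptions, not a reference to any production AV stack.

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class DrivingEvent:
    timestamp: float
    sensor_summary: dict   # e.g., object counts per sensor (illustrative)
    planned_action: str

class BlackBoxRecorder:
    """Fixed-size event recorder supporting post-incident audits (illustrative only)."""
    def __init__(self, capacity: int = 10_000):
        self._events = deque(maxlen=capacity)  # oldest events are evicted automatically

    def record(self, event: DrivingEvent) -> None:
        self._events.append(event)

    def export_window(self, since: float) -> list:
        """Return events at or after `since`, for responsibility attribution."""
        return [e for e in self._events if e.timestamp >= since]

recorder = BlackBoxRecorder()
recorder.record(DrivingEvent(time.time(), {"lidar_objects": 3}, "brake"))
print(len(recorder.export_window(0.0)))  # 1
```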
3.4. Literature Review of the Application Scenarios Where Algor-Ethics Will Be Relevant
3.4.1. AI and Peace
3.4.2. AI in Medicine
- Data privacy. One of the most critical aspects is the management of health data [39]. AI requires enormous amounts of information to function, and these data must be managed with the highest security. Any privacy violation or improper use of data could undermine patients' trust in digital technologies. An emblematic example is that of the American TikToker Kodye Elyse, who discovered that images of her children shared online were being used, without her consent, for purposes she had never intended. This incident highlights the paradox of our digital age: the data we share are no longer fully under our control.
- Discrimination and prejudices. AI could perpetuate or even amplify existing biases in data. If the datasets used to train algorithms are not representative of the diversity of the population, there is a risk that diagnoses may be inaccurate or discriminatory. Discrimination in treatments, especially against minorities, is a real risk that must be addressed through careful and inclusive technology design.
- The role of human judgment. AI can improve efficiency but cannot replace empathy and human judgment. The physician must maintain a central role in patient care, using AI as support but never as a substitute for interpersonal relationships. In other words, AI should never replace compassion; instead, it must be a tool that enriches the physician’s ability to care for the patient.
- Responsibility in the event of errors. Increasing automation in medicine also raises questions about responsibility in the case of errors. If a medical error is caused by an AI system, who is responsible? Clear legislation must be developed to address this issue and establish who should be held accountable in case of harm—whether it is the AI, the system’s designers, or the physicians who use it.
- Informed consent. Last but not least, navigating how to obtain meaningful consent for AI’s use in diagnoses or treatment recommendations.
3.4.3. AI and the Social Question
3.4.4. AI for the Environment
3.4.5. AI and Space Exploration
3.4.6. AI and Insurtech
- Service evolution: Integrated ecosystems combining insurance with additional services are emerging, enhancing customer education and service value.
4. AI Regulation Across Different Jurisdictions: USA, China, European Union, Japan, Canada, and Brazil
4.1. USA
4.2. China
4.3. EU
- Economic—The SuperAI market is expected to be one of the most lucrative in technology, offering prosperity to countries that capitalize on these opportunities.
- Geopolitical—Leading the development of this transformative technology is crucial for global dominance in the 21st century.
Approval and Implementation Timeline of the AI Act
- Six months after entry into force: Ban on specific AI applications.
- Nine months after entry into force: Introduction of voluntary codes of conduct.
- Twelve months after entry into force: Application of general AI rules and governance.
- Thirty-six months after entry into force: Obligations for high-risk systems.
- Prohibited applications:
- Biometric categorization based on sensitive data.
- Facial image retrieval from the internet or CCTV for facial recognition databases.
- Emotion detection in workplaces and schools.
- Social scoring and predictive policing based solely on individual profiles.
- Manipulation of human behavior or exploitation of vulnerabilities.
- Restricted uses: Real-time biometric identification by law enforcement is only allowed under specific time and geographical limitations with prior judicial or administrative authorization.
- High-risk systems obligations. These systems must do the following:
- Undergo risk assessments.
- Maintain usage logs.
- Ensure transparency, accuracy, and human oversight.
- Allow individuals to file complaints and receive explanations for AI-driven decisions.
- Transparency for general-purpose AI systems: These must comply with EU copyright law, publish detailed summaries of training data, and clearly label deepfakes.
4.4. Japan
- Voluntary compliance, reinforced by trust in public institutions;
- Focus on human-centric design, resilience, and quality assurance;
- Active support for AI ethics by design in healthcare and eldercare.
4.5. Canada
- Risk-tiered obligations (impact level from 1 to 4; a toy tier-mapping sketch follows this list);
- Emphasis on explainability, auditability, and public engagement;
- Regulatory sandboxing to test innovation under supervision.
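As a toy illustration of risk tiering, the Python sketch below maps an assessment score to an impact level from 1 to 4. The real AIA derives its level from a weighted questionnaire; the percentage bands used here are assumptions for demonstration only.

```python
def impact_level(score: float, max_score: float) -> int:
    """Map an assessment score to an impact level from 1 (low) to 4 (very high).

    The percentage bands below are illustrative assumptions, not the AIA's actual scheme.
    """
    pct = score / max_score
    if pct < 0.25:
        return 1
    if pct < 0.50:
        return 2
    if pct < 0.75:
        return 3
    return 4

print(impact_level(130, 200))  # 3 under these assumed bands -> stricter oversight applies
```

Higher tiers would then attract the oversight mechanisms mentioned in Section 4.7, such as third-party audits and public disclosure.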
4.6. Brazil
- Risk-based approaches to assessing and managing harms;
- Emphasis on transparency, explainability, and human oversight;
- Engagement with multi-stakeholder ecosystems, including civil society, academia, and the private sector;
- Integration with broader data governance and digital rights frameworks.
- Rights-based foundation grounded in Constitutional principles;
- Multi-stakeholder consultation on AI ethics guidelines;
- Regional leadership in Latin America for inclusive governance.
4.7. Comparative Analysis of the AI Regulatory Frameworks Across the USA, China, the EU, Japan, Canada, and Brazil
- Different regulatory approaches: The EU adopts a horizontal and legally binding regulatory framework that promotes uniformity and legal certainty but may encounter challenges in adapting to the rapid pace of technological innovation. In contrast, the USA follows a sector-specific, innovation-driven model that fosters flexibility and market responsiveness but risks regulatory fragmentation and uneven oversight. China’s vertically integrated approach emphasizes centralized control and strategic deployment in key sectors, enabling targeted regulation while potentially producing gaps in coverage. Among complementary models, Japan pursues a soft-law strategy that leverages voluntary guidelines and public–private collaboration to cultivate ethical norms through consensus-building. Brazil aligns its AI strategy with existing data protection legislation, seeking to harmonize innovation with democratic accountability.
- Different scope of coverage: The EU’s comprehensive framework aims to ensure robust protection of fundamental rights across all AI applications. The USA approach, with its fragmented jurisdictional boundaries, may leave significant regulatory gaps, particularly in non-healthcare or non-financial domains. China prioritizes strategic and high-impact AI applications, which allows for rapid rulemaking but may limit uniform protections across sectors. In Canada, the mandatory use of an Algorithmic Impact Assessment (AIA) ensures that federal AI systems are evaluated according to their scope and potential societal harm, exemplifying a scalable governance model with practical enforceability.
- Different risk classification: The EU employs a hierarchical, risk-based taxonomy that mandates strict obligations for high-risk AI systems, including those used in critical infrastructure, education, and law enforcement. The USA lacks a unified risk classification framework, resulting in inconsistencies across sectors. Japan's approach to risk classification resembles that of the USA, relying largely on voluntary assessment. China, by contrast, aligns its risk prioritization with national security and industrial policy objectives, focusing regulatory scrutiny on applications deemed socially or politically sensitive. Canada's AIA provides a concrete example of operationalizing risk-based assessment through a tiered system, linking risk levels with oversight mechanisms such as third-party audits and public disclosure. Brazil's regulatory framework resembles Canada's with respect to risk classification.
- Different enforcement mechanisms: The EU relies on centralized supervisory authorities empowered to issue sanctions, thereby ensuring regulatory coherence but also raising concerns about over-regulation. The USA employs a decentralized enforcement model, with responsibilities distributed among sectoral agencies, which can hinder consistent compliance. China's enforcement is primarily directive-based, enabling rapid implementation but potentially limiting transparency and external oversight. In Japan, enforcement is largely non-coercive, relying on voluntary adherence and industry engagement. Brazil's emerging legal framework emphasizes principles such as transparency and non-discrimination, while retaining regulatory flexibility to accommodate technological evolution. In Canada, enforcement is currently limited to the public sector.
5. Strategic and Governance Considerations for AI Applications
- The DAMA framework [57];
- AI strategy;
- AI governance.
5.1. The DAMA Framework
- Structuring integrated data management.
- Mapping organizational stakeholders involved in AI projects.
- Assessing the maturity of data and AI applications.
5.2. AI Strategy Essentials
- Define clear impact areas and organizational guidelines.
- Set measurable objectives with implementation roadmaps.
- Ensure frequent updates (quarterly recommended).
- Focus on no more than 8–10 primary objectives with corresponding KPIs.
- Align AI strategy with business use cases.
5.3. AI Governance
- Comprehensive data documentation and use case mapping.
- Clear understanding of data flows and sources.
- Streamlined data and AI flows to enhance business agility.
- Transparency. In this respect, AI governance should ensure that the way the AI system makes decisions is explainable and auditable, so that the associated risk remains under control.
- Accountability. In this respect, AI governance must define clear rules in terms of who is responsible if something goes wrong.
- Fairness. In this respect, AI governance should define and enforce clear guardrails so that the AI system treats all people and groups equitably.
- Safety and security. In this respect, AI governance should oversee processes so that the AI system proves robust against misuse, error, and adversarial attacks.
- Privacy. In this respect, AI governance should oversee processes so that the AI system respects data protection and personal freedoms.
- Alignment with human values. In this respect, AI governance should oversee processes so that the AI system serves the values and mission of the company, as well as society’s goals, without drifting toward unintended, harmful outcomes.
5.4. ISO/IEC 42001:2023
- Risk management;
- Security;
- Fairness;
- Data quality;
- Accountability.
5.4.1. Risk Management
- Risk identification: Understanding and cataloging all potential risks AI may introduce, such as safety concerns, biases in algorithms, unintended consequences of decisions made by AI, and data privacy issues.
- Risk assessment: Evaluating the severity and likelihood of these risks. This includes determining which risks could have a high impact on stakeholders (e.g., customers, employees, society).
- Risk mitigation: Implementing strategies to minimize or eliminate identified risks. This might involve adjusting AI system designs, ensuring robust validation of models, conducting ongoing monitoring, and setting up mechanisms for user feedback and system updates. A toy risk-matrix prioritization is sketched after this list.
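The three steps above can be illustrated with a classic likelihood-by-severity risk matrix. The risk names, scales, and the mitigation threshold in this Python sketch are assumptions for illustration, not values prescribed by ISO/IEC 42001.

```python
from enum import IntEnum

class Likelihood(IntEnum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3

class Severity(IntEnum):
    MINOR = 1
    MODERATE = 2
    MAJOR = 3

def risk_score(likelihood: Likelihood, severity: Severity) -> int:
    # Classic likelihood-by-severity product; the scale and threshold are illustrative.
    return int(likelihood) * int(severity)

# Toy risk register covering the identification step.
register = {
    "training-data bias": (Likelihood.LIKELY, Severity.MAJOR),
    "privacy leakage": (Likelihood.POSSIBLE, Severity.MAJOR),
    "model drift": (Likelihood.POSSIBLE, Severity.MODERATE),
}

# Assessment and mitigation ordering: highest scores are addressed first.
for name, (lik, sev) in sorted(register.items(), key=lambda kv: -risk_score(*kv[1])):
    flag = "mitigate first" if risk_score(lik, sev) >= 6 else "monitor"
    print(f"{name}: {risk_score(lik, sev)} ({flag})")
```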
5.4.2. Security
- System protection: Ensuring the AI system’s architecture is designed to be resilient to cyberattacks or breaches.
- Data protection: Safeguarding the confidentiality, integrity, and availability of data used by the AI. This includes encryption, secure data storage, and proper access controls (a toy access-control check is sketched after this list).
- Vulnerability management: Identifying weaknesses in AI systems and addressing them proactively, such as patching vulnerabilities, using intrusion detection systems, and ensuring continuous monitoring for security threats.
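As a minimal illustration of access control for AI training data, the sketch below implements a deny-by-default role check. The role names and permitted actions are assumptions for demonstration purposes.

```python
# Minimal role-based access control for AI training data; roles/actions are illustrative.
PERMISSIONS = {
    "data_scientist": {"read"},
    "ml_engineer": {"read", "write"},
    "auditor": {"read"},
}

def can_access(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in PERMISSIONS.get(role, set())

assert can_access("ml_engineer", "write")
assert not can_access("auditor", "write")  # auditors may inspect but not modify
assert not can_access("intern", "read")    # unknown role -> denied
```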
5.4.3. Fairness
- Bias identification: Regularly testing AI models for bias that could result in unfair treatment of individuals or groups, especially in sensitive areas like hiring, credit scoring, and law enforcement (a minimal selection-rate check is sketched after this list).
- Bias mitigation: Implementing measures to eliminate or reduce biases in training data, model development, and decision-making processes. This may include using diverse datasets, applying fairness-enhancing algorithms, or introducing transparency in how decisions are made by AI systems.
- Equal treatment: Ensuring that AI models treat individuals or groups equitably, regardless of their characteristics such as race, gender, age, or socio-economic background.
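A minimal bias-identification check can compare selection rates across groups, as in the following sketch. The data are synthetic, and the ~0.8 warning threshold echoes the well-known four-fifths rule rather than any requirement of ISO/IEC 42001.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rate for binary decisions (1 = selected). Data are synthetic."""
    total, selected = Counter(), Counter()
    for group, decision in outcomes:
        total[group] += 1
        selected[group] += decision
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Min/max selection-rate ratio; values below ~0.8 are a common warning signal."""
    return min(rates.values()) / max(rates.values())

outcomes = [("A", 1), ("A", 0), ("A", 1), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.667, 'B': 0.333}
print(disparate_impact_ratio(rates))  # 0.5 -> investigate for bias before deployment
```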
5.4.4. Data Quality
- Data accuracy: Ensuring that the data used by AI models is correct, precise, and free from errors. This is crucial to avoid flawed predictions or harmful outcomes.
- Data completeness: Ensuring that the data used is comprehensive and covers all relevant factors necessary for making informed decisions.
- Data relevance: The data must be appropriate and aligned with the goals of the AI system, ensuring that only the necessary data are collected and used.
- Data timeliness: Using up-to-date data to prevent the AI system from making outdated or irrelevant decisions (toy completeness and timeliness checks are sketched after this list).
- Data integrity: Ensuring that data are not corrupted or manipulated in ways that would affect the reliability of AI decisions.
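The completeness and timeliness dimensions lend themselves to simple automated checks, sketched below. The field names (patient_id, age, updated_at) and the 30-day staleness threshold are illustrative assumptions, not part of the standard.

```python
from datetime import datetime, timedelta

def quality_report(rows, required_fields, max_age_days=30):
    """Toy completeness and timeliness checks; fields and thresholds are assumptions."""
    now = datetime.now()
    issues = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            issues.append((i, f"incomplete: missing {missing}"))
        updated = row.get("updated_at")
        if updated is not None and now - updated > timedelta(days=max_age_days):
            issues.append((i, f"stale: older than {max_age_days} days"))
    return issues

rows = [
    {"patient_id": "p1", "age": 54, "updated_at": datetime.now()},
    {"patient_id": "p2", "age": None, "updated_at": datetime.now() - timedelta(days=90)},
]
print(quality_report(rows, ["patient_id", "age"]))  # flags row 1 as incomplete and stale
```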
5.4.5. Accountability
- Clear roles and responsibilities: Defining who is responsible for the development, deployment, and monitoring of AI systems within an organization. This includes assigning responsibility for addressing any harm caused by AI systems.
- Auditability: Ensuring that AI systems and their decision-making processes are transparent and can be audited. This includes maintaining logs of decisions, actions taken, and data used in decision-making to facilitate accountability (a hash-chained log is sketched after this list).
- Redress mechanisms: Providing avenues for individuals or groups negatively affected by AI systems to seek compensation, corrections, or improvements. This ensures that people have a way to address grievances or harms resulting from AI decisions.
- Transparency: Organizations must be transparent about how their AI systems work, how decisions are made, and how data are used. This transparency enables stakeholders to understand the AI’s reasoning and hold the system accountable for its actions.
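Auditability, in particular, can be supported by an append-only, hash-chained decision log, as in the following sketch: chaining each entry to the previous one makes after-the-fact tampering detectable. The record fields are illustrative assumptions, not a prescribed schema.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of AI decisions (illustrative auditability aid)."""
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def append(self, decision: dict) -> None:
        record = {"ts": time.time(), "decision": decision, "prev": self._last_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain to detect tampering with any past entry."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(
                {"ts": e["ts"], "decision": e["decision"], "prev": e["prev"]},
                sort_keys=True,
            ).encode()
            if e["prev"] != prev or hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.append({"model": "credit-v1", "input_id": "c-42", "outcome": "declined"})
print(log.verify())  # True; editing any past entry would make this False
```

Such a log also supports the redress mechanisms above, since affected individuals can be shown a verifiable record of the decision that concerned them.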
5.5. ISO/IEC 22989:2022
- Establishing a shared understanding of AI principles;
- Clarifying how to classify AI systems based on their characteristics and applications;
- Supporting organizations in designing, implementing, and managing AI in line with ethical, technical, and operational best practices.
5.6. Discussion
6. Conclusions
- Ethical “algor-ethics” frameworks to mitigate bias and discrimination.
- Clear accountability mechanisms ensuring liability for AI-driven decisions.
- Global cooperation on AI risk assessment, particularly for high-risk systems such as generative AI and autonomous decision-making models.
- Public awareness initiatives to promote AI literacy and prevent misuse.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Müller, V.C. Ethics of Artificial Intelligence and Robotics. In Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2020; pp. 1–70. [Google Scholar]
- Rome Call for AI Ethics, 28 February 2020. Available online: https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pont-acd_life_doc_20202228_rome-call-for-ai-ethics_en.pdf (accessed on 23 January 2025).
- Green, B.P. The Vatican and Artificial Intelligence: An Interview with Bishop Paul Tighe. J. Moral Theol. 2022, 11, 212–231. [Google Scholar]
- Benanti, P. Il Crollo di Babele. Che Fare Dopo la Fine del Sogno di Internet? San Paolo Edizioni: Milano, Italy, 2024. [Google Scholar]
- ISO/IEC 22989:2022; AI Concepts and Terminology. ISO: Geneva, Switzerland, 2022. Available online: https://www.iso.org/standard/74296.html (accessed on 18 February 2025).
- ISO/IEC 42001:2023; AI Management Systems. ISO: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/81230.html (accessed on 18 February 2025).
- Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
- Montomoli, J.; Bitondo, M.M.; Cascella, M.; Rezoagli, E.; Romeo, L.; Bellini, V.; Semeraro, F.; Gamberini, E.; Frontoni, E.; Agnoletti, V.; et al. Algor-ethics: Charting the ethical path for AI in critical care. J. Clin. Monit. Comput. 2024, 38, 931–939. [Google Scholar] [CrossRef] [PubMed]
- Valerio, C. La Tecnologia È Religione; Giulio Einaudi Editore: Torino, Italy, 2023; ISBN 978-88-06-25186-4. [Google Scholar]
- Ellul, J. La Technique ou l’Enjeu du Siècle; Armand Colin: Paris, France, 1954. [Google Scholar]
- Habermas, J. Technology and Science as Ideology; Columbia University Press: New York, NY, USA, 1968. [Google Scholar]
- Tonelli, G. Materia. La Magnifica Illusione; Feltrinelli: Gargnano, Italy, 2023. [Google Scholar]
- Heidegger, M. The Question Concerning Technology and Other Essays; Garland Publishing, Inc.: New York, NY, USA, 1977. [Google Scholar]
- Rifkin, J. The Biotech Century; Tarcher: New York, NY, USA, 1998. [Google Scholar]
- Dicastery for the Doctrine of the Faith, Dicastery for Culture and Education. Antiqua et Nova. Note on the Relationship between Artificial Intelligence and Human Intelligence. Available online: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html (accessed on 28 January 2025).
- Casalone, C. Una ricerca etica condivisa nell’era digitale. La Civiltà Cattol. 2020, 2, 30–43. [Google Scholar]
- Benanti, P. The urgency of an algorethics. Discov. Artif. Intell. 2023, 3, 11. [Google Scholar] [CrossRef]
- Floridi, L. The Fourth Revolution: How the Infosphere is Reshaping Human Reality; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
- OECD Artificial Intelligence Policy Observatory. Available online: https://oecd.ai/en/ (accessed on 21 March 2025).
- OECD AI Incidents Tracker. Available online: https://oecd.ai/en/incidents (accessed on 21 March 2025).
- NIST AI Risk Management Framework. Available online: https://www.nist.gov/itl/ai-risk-management-framework (accessed on 21 March 2025).
- Stanford HAI—AI Index. Available online: https://hai.stanford.edu/ai-index (accessed on 21 March 2025).
- Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine Bias—There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks; Benton Institute for Broadband & Society: Wilmette, IL, USA, 2016. [Google Scholar]
- Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
- McCradden, M.D.; Stephenson, E.A.; Anderson, J.A. Clinical research underlies ethical integration of healthcare artificial intelligence. Nat. Med. 2020, 26, 1325–1326. [Google Scholar] [CrossRef]
- Dastin, J. Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women. Reuters. 2018. Available online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (accessed on 18 February 2025).
- Kramer, A.D.I.; Guillory, J.E.; Hancock, J.T. Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. USA 2014, 111, 8788–8790. [Google Scholar] [CrossRef]
- Bartlett, R.; Morse, A.; Stanton, R.; Wallace, N. Consumer-lending discrimination in the FinTech Era. J. Financ. Econ. 2022, 143, 30–56, ISSN 0304-405X. [Google Scholar] [CrossRef]
- Hill, D.; O’Connor, C.D.; Slane, A. Police use of facial recognition technology: The potential for engaging the public through co-constructed policy-making. Int. J. Police Sci. Manag. 2022, 24, 325–335. [Google Scholar] [CrossRef]
- Krishnapriya, K.S.; Albiero, V.; Vangara, K.; King, M.C.; Bowyer, K.W. Issues Related to Face Recognition Accuracy Varying Based on Race and Skin Tone. IEEE Trans. Technol. Soc. 2020, 1, 8–20. [Google Scholar] [CrossRef]
- Rhim, J.; Lee, J.-H.; Chen, M.; Lim, A. A Deeper Look at Autonomous Vehicle Ethics: An Integrative Ethical Decision-Making Framework to Explain Moral Pluralism. Front. Robot. AI 2021, 8, 2021. [Google Scholar]
- Tang, L.; Li, J.; Fantus, S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit. Health 2023, 9, 20552076231186064. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Acquaviva, V.; Barnes, E.A.; Gagne, D.J.; McKinley, G.A.; Thais, S. Ethics in climate AI: From theory to practice. PLoS Clim. 2024, 3, e0000465. [Google Scholar] [CrossRef]
- Murphy, K.; Di Ruggiero, E.; Upshur, R.; Willison, D.J.; Malhotra, N.; Cai, J.C.; Malhotra, N.; Lui, V.; Gibson, J. Artificial intelligence for good health: A scoping review of the ethics literature. BMC Med. Ethics 2021, 22, 14. [Google Scholar] [CrossRef]
- Morley, J.; Elhalal, A.; Garcia, F.; Kinsey, L.; Mokander, J.; Floridi, L. Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds Mach. 2021, 31, 239–256. [Google Scholar] [CrossRef]
- Li, F.; Ruijs, N.; Lu, Y. Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare. AI 2023, 4, 28–53. [Google Scholar] [CrossRef]
- van Wynsberghe, A. Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
- Message by Pope Francis for the 2024 World Day of Peace. Available online: https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html (accessed on 18 February 2025).
- Farhud, D.D.; Zokaei, S. Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iran. J. Public Health 2021, 50, i–v. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Capraro, V.; Lentsch, A.; Acemoglu, D.; Akgun, S.; Akhmedova, A.; Bilancini, E.; Bonnefon, J.F.; Brañas-Garza, P.; Butera, L.; Douglas, K.M.; et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus. 2024, 3, 191. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Ricciardi Celsi, L.; Valli, A. Applied Control and Artificial Intelligence for Energy Management: An Overview of Trends in EV Charging, Cyber-Physical Security and Predictive Maintenance. Energies 2023, 16, 4678. [Google Scholar] [CrossRef]
- Justin, G.; Christopher, W.; James, M. Precision Medicine for Long and Safe Permanence of Humans in Space; Chapter 16—Current AI Technology in Space; Chayakrit, K., Ed.; Academic Press: Cambridge, MA, USA, 2025; pp. 239–250. ISBN 9780443222597. [Google Scholar] [CrossRef]
- Ranucci Brandimarte, S.; Di Francesco, G. Insurtech or Out, Egea, 2023. Available online: https://insurtechitaly.com/insurtech-or-out/ (accessed on 18 February 2025).
- Andreozzi, A.; Ricciardi Celsi, L.; Martini, A. Enabling the Digitalization of Claim Management in the Insurance Value Chain Through AI-Based Prototypes: The ELIS Innovation Hub Approach; Digitalization Cases Vol. 2. Management for Professionals; Urbach, N., Roglinger, M., Kautz, K., Alias, R.A., Saunders, C., Wiener, M., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar]
- Maiano, L.; Montuschi, A.; Caserio, M.; Ferri, E.; Kieffer, F.; Germanò, C.; Baiocco, L.; Celsi, L.R.; Amerini, I.; Anagnostopoulos, A. A deep-learning–based antifraud system for car-insurance claims. Expert Syst. Appl. 2023, 231, 120644. [Google Scholar] [CrossRef]
- Atanasious, M.M.H.; Becchetti, V.; Giuseppi, A.; Pietrabissa, A.; Arconzo, V.; Gorga, G.; Gutierrez, G.; Omar, A.; Pietrini, M.; Rangisetty, M.A.; et al. An Insurtech Platform to Support Claim Management Through the Automatic Detection and Estimation of Car Damage from Pictures. Electronics 2024, 13, 4333. [Google Scholar] [CrossRef]
- Salentinig, A.; Iannelli, G.C.; Gamba, P. Data-, Feature- and Decision-Level Fusion for Classification; Elsevier: Amsterdam, The Netherlands, 2023. [Google Scholar]
- Executive Order 14110 of October 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Available online: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (accessed on 18 February 2025).
- Quantum Computing Cybersecurity Preparedness Act, 21 December 2022. Available online: https://www.congress.gov/bill/117th-congress/house-bill/7535 (accessed on 18 February 2025).
- Interim Administrative Measures for Generative Artificial Intelligence Services, 15 August 2023. Available online: https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed on 18 February 2025).
- A Deep-See on DeepSeek: How Italy’s Ban Might Shape AI Oversight, 31 January 2025. Available online: https://www.forbes.com/sites/nizangpackin/2025/01/31/a-deep-see-on-deepseek-how-italys-ban-might-shape-ai-oversight/ (accessed on 18 February 2025).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 18 February 2025).
- Social Principles of Human-Centric AI, Government of Japan. Available online: https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf (accessed on 21 March 2025).
- Algorithmic Impact Assessment, Government of Canada. Available online: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (accessed on 21 March 2025).
- Regulatory Framework for Artificial Intelligence Passes in Brazil’s Senate. Available online: https://www.mattosfilho.com.br/en/unico/framework-artificial-intelligence-senate/#:~:text=2%2C338%2F2023%20to%20establish%20a,of%20safe%20and%20reliable%20systems (accessed on 21 March 2025).
- Statista. Available online: https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide (accessed on 21 March 2025).
- Madieka, T.; Ilnicki, R. AI Investment: EU and Global Indicators; European Parliamentary Research Service: Brussels, Belgium, 2024; Available online: https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/760392/EPRS_ATA(2024)760392_EN.pdf (accessed on 21 March 2025).
- Data Management Body of Knowledge. Available online: https://www.dama.org/cpages/body-of-knowledge (accessed on 18 February 2025).
- Anderson, J. The Chief AI Officer’s Handbook: Master AI Leadership with Strategies to Innovate, Overcome Challenges, and Drive Business Growth; Packt Publishing: Birmingham, UK, 2025. [Google Scholar]
- De Mauro, A.; Pacifico, M. Data-Driven Transformation. Maximise Business Value with Data Analytics the FT Guide; FT Publishing: New York, NY, USA, 2024. [Google Scholar]
- Giudici, P.; Centurelli, M.; Turchetta, S. Artificial Intelligence risk measurement. Expert Syst. Appl. 2024, 235, 121220. [Google Scholar] [CrossRef]
- Ricciardi Celsi, L. The Dilemma of Rapid AI Advancements: Striking a Balance between Innovation and Regulation by Pursuing Risk-Aware Value Creation. Information 2023, 14, 645. [Google Scholar] [CrossRef]
- Novelli, C.; Casolari, F.; Rotolo, A.; Taddeo, M.; Floridi, L. Taking AI risks seriously: A new assessment model for the AI Act. AI Soc. 2024, 39, 2493–2497. [Google Scholar] [CrossRef]
- ISO/IEC 38507:2022; Information Technology—Governance of IT. ISO: Geneva, Switzerland, 2022. Available online: https://www.iso.org/standard/56641.html (accessed on 21 March 2025).
- German Standardization Roadmap on Artificial Intelligence. Available online: https://www.dke.de/resource/blob/2008048/99bc6d952073ca88f52c0ae4a8c351a8/nr-ki-english---download-data.pdf (accessed on 21 March 2025).
Phase | Ethical Anchoring | Technical Instrument | Governance Mechanism |
---|---|---|---|
Design | Fairness, inclusion | Bias audits, stakeholder mapping | Ethics-by-design checklist |
Development | Accountability, explainability | Fairness-aware modeling and explainability constraints | Internal red-teaming |
Deployment | Transparency, safety | Ethical KPIs (e.g., transparency score) | Human-in-the-loop review |
Governance | Co-responsibility | Risk scoring aligned with ISO/IEC 42001 | Independent ethics board |
Jurisdiction | USA | China | EU | Japan | Canada | Brazil |
---|---|---|---|---|---|---|
AI market size (as of 2024) | USD 146.09 billion (19.33% CAGR) [56,57] | USD 2.54 billion (31.7% CAGR) [56,57] | USD 66.4 billion (33.2% CAGR) [56,57] | USD 10.15 billion (26.30% CAGR) [56,57] | USD 6.5 billion (33.9% CAGR) [56,57] | USD 4.42 billion (26.24% CAGR) [56,57] |
Current regulation | [48] | [50] | [52] | [53] | [54] | [55] |
Guidelines for GenAI | X | X | | | | |
Cybersecurity measures for advanced AI | X | X | X | X | X | |
Limitations to competition | X | | | | | |
Adherence to core values of socialism | | X | | | | |
Risk-based framework | | | X | | X | X |
Regulatory approach | Sector-specific regulations with a decentralized approach, leading to potential fragmentation | Vertical approach with discrete laws targeting specific AI issues, such as recommendation algorithms and deep synthesis tools | Horizontal framework with the AI Act, applying flexible standards across various AI applications | Keeps AI rules light and practical to boost innovation, trusting companies to “do the right thing” while stepping in only for critical risks (e.g., medical AI), occupying a middle ground between the EU’s strict laws and the USA’s hands-off approach | Operational risk management; public-sector-first and human-rights-based | Hybrid model, combining rights-based principles (inspired by EU AI Act) with developing-economy flexibility; anchored in data protection, requiring AI systems to comply with privacy rules |
Scope of coverage | Varies by sector, with some areas lacking specific AI regulations, leading to potential gaps | Focused on specific applications, with rapid implementation but potential for uneven coverage | Comprehensive, covering all AI systems with a focus on fundamental rights and ethical principles | Focuses on human-centric applications in healthcare, mobility and manufacturing, excluding military AI from public policy discussions | Focused on automated decision systems used by federal agencies in core applications (pensions, immigration, tax administration, law enforcement) | High-risk sectors (healthcare diagnostics, credit scoring, public services such as facial recognition in policing), excluding research AI and military applications |
Risk classification | Lacks a unified risk classification, leading to inconsistencies across sectors | Emphasizes control over AI development to safeguard against losing control, with a focus on specific high-risk applications | Risk-based classification system, imposing stricter requirements on high-risk AI systems | Voluntary risk assessment | Four-tier classification, distinguishing among minimal (chatbots), moderate (resume screening), high (criminal risk assessment), and very high risk (healthcare diagnostics) | Three-tier classification, distinguishing among minimal (chatbots), medium (HR tools), and high risk (medical AI) |
Enforcement mechanisms | Enforcement varies by sector, with potential challenges in ensuring consistent compliance | Centralized directives allowing rapid implementation, but with potential constraints on public transparency and external oversight | Centralized enforcement with significant penalties for non-compliance, ensuring adherence to regulations | METI-guided industry standards | Mandatory compliance for the public sector, voluntary compliance (guidelines only for the private sector) | The primary enforcer is the national Data Protection Authority with fines up to 2% of revenue and mandatory incident reporting |
Pros | Flexibility and market responsiveness | Rapid rulemaking targeting strategic, high-impact applications | Uniformity, legal certainty, and robust protection of fundamental rights | Consensus-building and innovation-friendly flexibility | Practical enforceability via the mandatory AIA, offering a scalable governance model | Harmonizes innovation with democratic accountability; regional leadership in inclusive governance |
Cons | Regulatory fragmentation and uneven oversight | Potential gaps in coverage, with limited transparency and external oversight | May struggle to adapt to the rapid pace of technological innovation; over-regulation concerns | Largely non-coercive enforcement, relying on voluntary adherence | Enforcement currently limited to the public sector | Emerging framework, still under legislative development |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).