Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI
Abstract
1. Introduction
1.1. The Promise and Risks of AI in Healthcare
1.2. Instances of AI Misuse in Healthcare and Their Repercussions
- Fabrication of Medical Records: While synthetic data generation aids AI training, the introduction of falsified medical records into clinical workflows, or their use in fraudulent claims, can mislead clinicians and distort research outcomes [27].
- Misinformation in Medical Advice: AI-generated medical guidance can be erroneous, as seen in studies where chatbots provided incorrect cancer treatment recommendations with fabricated sources, endangering patient safety [28].
- Algorithmic Bias Amplification: AI models trained on non-representative datasets have perpetuated disparities; dermatological AI tools, for example, have misdiagnosed conditions in patients with darker skin tones who were underrepresented in the training data [29].
- Deepfake Medical Content: AI-generated videos and images have been exploited to spread false health information, such as vaccine misinformation, fueling public distrust and hesitancy [30].
1.3. Defining Trustworthy AI
- Transparency ensures AI decisions are comprehensible.
- Accountability requires developers and users to be responsible for outcomes.
- Fairness addresses bias in AI algorithms, ensuring equitable healthcare access.
- Patient autonomy ensures AI empowers patients rather than limiting their decision-making.
1.4. AI as an Ecosystem
1.5. Towards a Regulatory Genome
- Continuous monitoring and evaluation of AI performance.
- Stakeholder engagement to ensure diverse perspectives are included.
- Frameworks to address emerging risks such as adversarial attacks and AI misuse [21].
- Transparency: The AI model’s decision-making process is explainable through heatmaps and confidence scores, helping clinicians validate its outputs.
- Bias Mitigation: The model can undergo algorithmic fairness audits, ensuring it does not disproportionately misdiagnose patients from underrepresented ethnic groups (a minimal audit sketch follows this list).
- Compliance: The AI system should adhere to the EU AI Act’s high-risk category standards and follow data protection regulations such as GDPR, with federated learning techniques ensuring privacy-preserving AI training.
- Continuous Monitoring: The system can undergo quarterly post-market surveillance audits, assessing real-world performance across diverse populations.
- Stakeholder Involvement: Regulators, clinicians, and local community representatives contribute to policy adjustments, ensuring culturally sensitive AI deployment.
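To make the fairness-audit step concrete, the snippet below is a minimal sketch, assuming a fitted scikit-learn-style binary classifier and NumPy arrays; `model`, `X`, `y`, and `group` are hypothetical names, and sensitivity is just one of several metrics such an audit could compare.

```python
# Minimal fairness-audit sketch (illustrative, not a normative protocol).
# Assumes: `model` is a fitted scikit-learn-style binary classifier,
# `X` is a feature matrix, `y` true labels, `group` a demographic attribute
# (all NumPy arrays; all names hypothetical).
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

def fairness_audit(model, X, y, group):
    """Report per-subgroup sensitivity and AUC, plus the worst-case
    sensitivity gap, a simple disparity indicator."""
    results = {}
    for g in np.unique(group):
        mask = group == g
        preds = model.predict(X[mask])
        scores = model.predict_proba(X[mask])[:, 1]
        results[g] = {
            "sensitivity": recall_score(y[mask], preds),
            "auc": roc_auc_score(y[mask], scores),
        }
    sens = [r["sensitivity"] for r in results.values()]
    return results, max(sens) - min(sens)  # gap = worst-case disparity
```

In practice, the reported gap would be checked against a pre-registered disparity threshold before deployment and re-checked at each quarterly post-market audit.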
1.6. Translating the AI Ecosystem Perspective into Actionable Policy and Governance Strategies
- Impact Assessment Metrics: Developing standardized AI impact indicators aligned with specific SDGs, such as healthcare accessibility (SDG 3), reduced inequalities (SDG 10), and responsible innovation (SDG 9). A study by Vinuesa et al. (2020) provides a comprehensive review of how AI can act as an enabler across various SDG targets, highlighting the importance of creating metrics to assess AI’s contributions to these goals [24].
- AI Governance Audits: Requiring AI systems in healthcare to undergo periodic audits to evaluate their contributions to sustainability, ethical compliance, and fairness [42].
- Regulatory Incentives: Encouraging AI developers and healthcare institutions to adopt SDG-aligned AI practices through regulatory incentives, such as funding opportunities and compliance certifications [43].
- Stakeholder-driven Evaluation: Engaging multidisciplinary experts, including healthcare professionals, ethicists, and policymakers, in qualitative and quantitative reviews of AI’s contributions to sustainable healthcare ecosystems. A systematic literature review by Birkstedt et al. (2023) emphasizes the importance of involving diverse stakeholders in AI governance to ensure comprehensive evaluation and accountability [44].
1.7. Key Contributions of This Study
- A Unified AI Governance Framework: Unlike existing approaches that focus separately on AI ethics or regulation, this study integrates transparency, fairness, accountability, and sustainability into a cohesive governance structure.
- Quantifiable AI Trustworthiness Metrics: We introduce measurable indicators—such as explainability scores, bias reduction rates, compliance metrics, and sustainability benchmarks—to evaluate AI systems in healthcare.
- Comparative Analysis of AI Categories: We assess symbolic AI, ML, and hybrid AI approaches, identifying their strengths, risks, and suitability for clinical applications.
- Regulatory Genome Concept for AI Oversight: We introduce an adaptive regulatory framework that aligns with global AI governance trends and SDGs, ensuring ethical AI deployment.
- Bias Mitigation and Algorithmic Fairness Strategies: We provide a roadmap for diverse data curation, algorithmic fairness audits, and continuous monitoring to reduce disparities in AI-driven healthcare.
- Interdisciplinary Policy Recommendations: We offer actionable guidelines for policymakers, healthcare institutions, and AI developers, ensuring compliance with emerging regulations such as the EU AIA and the U.S. Food and Drug Administration’s (FDA) Software as a Medical Device (SaMD) framework.
2. The Evolving Ethical Landscape of AI in Healthcare
2.1. Privacy, Security, and Data Sovereignty
Data Sovereignty, Regional Regulations, and Cross-Border Implications
2.2. Bias and Fairness in AI Algorithms
2.2.1. Examining Bias in Real-World AI Healthcare Applications
2.2.2. Mitigating Bias Through Diverse Datasets and Algorithmic Audits
2.3. Regulatory and Legal Considerations
2.3.1. Limitations of Current Regulatory Frameworks
2.3.2. Adapting to the Rapid Evolution of AI Technologies
Aspect | Key Challenges | Current Frameworks | Limitations | Recommendations | Reference(s) |
---|---|---|---|---|---|
FDA Approval Processes | Evaluating AI devices pre-market with static metrics | FDA Guidance on Software as a Medical Device and AI tools | Often rely on single-site testing, insufficient generalizability, limited post-market surveillance | Implement iterative approvals, continuous performance monitoring, and multi-site validation | He et al., 2019 [23]; Wu et al., 2021 [64] |
EU Artificial Intelligence Act (AIA) | Establishing a risk-based, human-centric approach | EU AIA Proposal | Lacks adaptation to evolving AI models and overlooks environmental and underuse aspects | Develop adaptive standards, include environmental and sustainability considerations, and broaden the scope beyond a human-centric focus | Pagallo et al., 2022 [65]; Montag & Finck, 2024 [74] |
Ethical and Governance Frameworks | Ensuring fairness, accountability, and transparency | AI4People’s ethical guidelines, EU and national ethics boards | Many guidelines remain aspirational, not fully integrated into enforceable legal structures | Integrate ethical principles into legal mandates, adopt periodic audits, and enforce transparency and bias reporting | Floridi et al., 2018 [71]; Reddy et al., 2020 [72] |
Sustainability and Equity in Regulation | Addressing underutilization of AI for environmental or equitable health outcomes | Limited or absent in current regulations | Insufficient emphasis on sustainability, non-financial disclosures, and equitable data use | Require eco-impact assessments, enforce duties of care, promote equitable access and representativeness in datasets | Palkova, 2021 [73]; Pagallo et al., 2022 [65]; Hacker, 2024 [75] |
Continuous Adaptation to Rapid Evolution | Accounting for model updates, emergent behaviors of LLMs, and generative AI | Static approval models, limited version control | Regulatory gaps for dynamic updates, limited real-time oversight of algorithmic changes | Implement ongoing validation, create rapid response mechanisms, encourage multi-stakeholder regulatory development | He et al., 2019 [23]; Reddy et al., 2020 [72]; Palkova, 2021 [73] |
3. From Principles to Practice: Defining and Evaluating Trustworthy AI
- Ease of Integration (horizontal axis)—This axis captures the balance between internal attributes that ease AI adoption and external factors that create barriers, such as regulatory gaps and privacy risks. AI systems that emphasize fairness, explainability, and ethical governance tend to integrate more smoothly, whereas those facing compliance challenges or regulatory misalignment may struggle with adoption.
- Impact on Outcomes (vertical axis)—This axis distinguishes internal organizational efforts (bottom) from external regulatory, ethical, and societal constraints (top). Impact on outcomes is higher when AI systems align with compliance and security needs, and lower when gaps in transparency and explainability demand additional internal effort before adoption.
- Top-left (Fair, Accountable, and Bias-Free AI): Highlights fairness, accountability, and equity in AI design, ensuring responsible decision-making and reducing biases in healthcare AI models.
- Top-right (Secure, Compliant, and Privacy-Focused AI): Represents the importance of regulatory adherence and cybersecurity measures, ensuring compliance but also presenting challenges if excessive constraints hinder innovation.
- Bottom-left (Transparent, Reliable, and Explainable AI): Focuses on AI systems that prioritize transparency and interpretability, enabling healthcare professionals to understand and trust AI-generated decisions.
- Bottom-right (Sustainable, Scalable, and Trustworthy AI): Emphasizes long-term viability, ethical scalability, and the alignment of AI deployment with sustainability and equity goals.
3.1. Transparency and Explainability: Moving Beyond the Black Box
3.1.1. Proposing Measurable Metrics for Transparency in AI Systems
- Feature Contribution Analysis: Techniques such as SHapley Additive exPlanations (SHAP) assign an importance value to each input feature, indicating its contribution to the model’s output [85]. For instance, in a clinical decision support system predicting patient outcomes, SHAP values can highlight which clinical parameters (e.g., blood pressure, cholesterol levels) most significantly influenced a specific prediction. This allows healthcare professionals to understand the model’s reasoning and assess its alignment with clinical knowledge [11] (a code sketch follows this list).
- Local Interpretable Model-agnostic Explanations (LIME): LIME approximates the AI model locally with an interpretable model to explain individual predictions [86]. In practice, if an AI system recommends a particular treatment plan, LIME can provide a simplified explanation of that specific recommendation, enabling clinicians to evaluate its validity before implementation [11].
- Attention Mechanisms in Neural Networks: In models such as attention-based Bidirectional Long Short-Term Memory (BiLSTM) networks, attention weights can serve as proxies for feature importance, offering insight into which aspects of the input data the model focuses on during prediction. For example, in patient monitoring systems, attention mechanisms can help identify the critical time periods or vital signs that the model deems most relevant, thereby providing transparency in its decision-making process [87].
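The following is a minimal sketch of how the first two techniques are typically invoked with the SHAP and LIME libraries [85,86], assuming tabular clinical data in a pandas DataFrame and a fitted tree-based classifier; `model`, `X_train`, and the class labels are illustrative placeholders, not a prescribed pipeline.

```python
# Hedged sketch: feature-level explanations for a tabular clinical model.
# Assumes: `model` is a fitted tree-based classifier (e.g., random forest)
# and `X_train` a pandas DataFrame of clinical features (names illustrative).
import shap
from lime.lime_tabular import LimeTabularExplainer

# SHAP: per-feature contributions for every prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train)   # one value per feature per case
shap.summary_plot(shap_values, X_train)        # global view of feature influence

# LIME: local surrogate explanation for a single patient's prediction.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["low risk", "high risk"],     # illustrative labels
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_train.values[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # top features driving this one recommendation
```

A clinician-facing system would surface these attributions alongside the prediction (e.g., as a ranked list or heatmap) so the recommendation can be validated against clinical knowledge before acting on it.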
3.1.2. Advancements in Explainability Techniques
3.1.3. Building Trust Through Understanding and Shared Accountability
3.2. Accountability in AI Decision-Making
4. Future Directions: A Roadmap for Trustworthy AI in Healthcare
4.1. Cross-Disciplinary Collaboration for AI Governance
4.2. Integrating AI with Ethical Frameworks: Towards “Ethical by Design” AI
4.3. Sustainable and Ethical AI Development
- Optimizing algorithms for energy efficiency, reducing computational complexity while maintaining performance.
- Leveraging renewable energy to power AI infrastructure, such as cloud providers committed to carbon neutrality.
- Implementing hardware recycling strategies to reduce e-waste from outdated computing resources [116].
- Carbontracker: This tool monitors hardware power consumption and local energy carbon intensity during the training of deep learning models, providing accurate measurements and predictions of the operational carbon footprint [117] (a usage sketch follows this list).
- Energy Usage Reports: This approach emphasizes environmental awareness as part of algorithmic accountability, proposing the inclusion of energy usage reports in standard algorithmic practices to promote responsible computing in ML [118].
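As an illustration of the first tool, the snippet below follows Carbontracker’s documented epoch-based usage [117]; `train_one_epoch` and `max_epochs` are placeholders standing in for a real training loop.

```python
# Sketch of operational-footprint tracking with Carbontracker [117].
# `train_one_epoch` and `max_epochs` are hypothetical placeholders.
from carbontracker.tracker import CarbonTracker

max_epochs = 10
tracker = CarbonTracker(epochs=max_epochs)  # predicts total footprint after epoch 1

for epoch in range(max_epochs):
    tracker.epoch_start()
    train_one_epoch()   # placeholder: one pass over the training data
    tracker.epoch_end()

tracker.stop()  # logs measured energy (kWh) and CO2-eq emissions
```

The resulting energy and emissions figures can feed directly into the energy usage reports described above, making environmental cost a routine part of model documentation.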
- Policymakers should establish incentives and regulations that promote responsible innovation.
- Healthcare institutions should adopt procurement policies favoring energy-efficient AI solutions and transparent supply chains.
- AI developers should integrate sustainability metrics into model evaluation frameworks, ensuring that AI systems align with both ethical and environmental standards.
4.4. Establishing Quantifiable AI Governance Metrics
4.5. Policy Recommendations for AI Governance Implementation
- Mandatory Transparency Reporting: AI developers must publish detailed model documentation, including interpretability techniques, demographic performance disparities, and post-market monitoring results. For example, the UK NHS AI Lab mandates model transparency reports for all approved AI-driven clinical tools [68].
- Bias Mitigation Mandates: Regulators should require AI developers to conduct fairness audits and actively mitigate algorithmic biases before deployment. For example, the FDA’s SaMD regulatory framework includes demographic fairness testing in clinical AI validation [66].
- Real-Time AI Performance Audits: An AI Governance Task Force should oversee post-market AI performance, ensuring continuous validation and bias correction. For example, the EU AIA proposes AI sandboxes for continuous evaluation and refinement [74].
- Sustainability Compliance Incentives: AI developers adhering to energy efficiency standards should receive tax incentives or expedited regulatory approvals. For example, initiatives supporting carbon-neutral AI can subsidize organizations that adopt sustainable AI technologies, reducing their carbon footprint. Studies have shown that AI-based solutions can significantly reduce energy consumption and carbon emissions across sectors [116,117,118].
5. Conclusions
- Regulatory Adaptation—Policymakers should develop flexible, adaptive regulations that evolve alongside AI technologies, ensuring continuous monitoring, transparency reporting, and risk assessment frameworks.
- Interdisciplinary Cooperation—Collaboration among technologists, clinicians, ethicists, and policymakers is essential to establish standardized guidelines that balance innovation with ethical safeguards.
- Global Policy Alignment—International cooperation should aim to harmonize AI regulations across jurisdictions, addressing disparities in privacy laws, data sovereignty, and compliance standards.
- Research Priorities—Future studies should focus on empirical evaluations of AI fairness, real-world implementation of AI transparency frameworks, and the development of sustainability metrics for AI-driven healthcare.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Yu, K.H.; Beam, A.L.; Kohane, I.S. Artificial intelligence in healthcare. Nat. Biomed. Eng. 2018, 2, 719–731. [Google Scholar] [CrossRef]
- Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nat. Med. 2022, 28, 31–38. [Google Scholar] [CrossRef]
- Goktas, P.; Grzybowski, A. Assessing the impact of ChatGPT in dermatology: A comprehensive rapid review. J. Clin. Med. 2024, 13, 5909. [Google Scholar] [CrossRef] [PubMed]
- Kang, D.; Wu, H.; Yuan, L.; Shi, Y.; Jin, K.; Grzybowski, A. A beginner’s guide to artificial intelligence for ophthalmologists. Ophthalmol. Ther. 2024, 13, 1841–1855. [Google Scholar] [CrossRef]
- Grzybowski, A.; Singhanetr, P.; Nanegrungsunk, O.; Ruamviboonsuk, P. Artificial intelligence for diabetic retinopathy screening using color retinal photographs: From development to deployment. Ophthalmol. Ther. 2023, 12, 1419–1437. [Google Scholar] [CrossRef]
- Li, H.; Cao, J.; Grzybowski, A.; Jin, K.; Lou, L.; Ye, J. Diagnosing systemic disorders with AI algorithms based on ocular images. Healthcare 2023, 11, 1739. [Google Scholar] [CrossRef] [PubMed]
- Goktas, P.; Damadoglu, E. Future of allergy and immunology: Is AI the key in the digital era? Ann. Allergy Asthma Immunol. 2024. [Google Scholar] [CrossRef] [PubMed]
- Goktas, P.; Gulseren, D.; Tobin, A.M. Large Language and Vision Assistant in dermatology: A game changer or just hype? Clin. Exp. Dermatol. 2024, 49, 783–792. [Google Scholar] [CrossRef] [PubMed]
- Farah, L.; Murris, J.M.; Borget, I.; Guilloux, A.; Martelli, N.M.; Katsahian, S.I. Assessment of performance, interpretability, and explainability in artificial intelligence–based health technologies: What healthcare stakeholders need to know. Mayo Clin. Proc. Digit. Health 2023, 1, 120–138. [Google Scholar] [CrossRef]
- Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I.; Precise4Q Consortium. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [Google Scholar] [CrossRef]
- Chaddad, A.; Peng, J.; Xu, J.; Bouridane, A. Survey of Explainable AI techniques in healthcare. Sensors 2023, 23, 634. [Google Scholar] [CrossRef] [PubMed]
- Saeed, S.A.; Masters, R.M. Disparities in health care and the digital divide. Curr. Psychiatry Rep. 2021, 23, 61. [Google Scholar] [CrossRef] [PubMed]
- Elendu, C.; Amaechi, D.C.; Elendu, T.C.; Jingwa, K.A.; Okoye, O.K.; Okah, M.J.; Ladele, J.A.; Farah, A.H.; Alimi, H.A. Ethical implications of AI and robotics in healthcare: A review. Medicine 2023, 102, e36671. [Google Scholar] [CrossRef]
- Babic, B.; Gerke, S.; Evgeniou, T.; Cohen, I.G. Beware explanations from AI in health care. Science 2021, 373, 284–286. [Google Scholar] [CrossRef] [PubMed]
- Grzybowski, A.; Jin, K.; Wu, H. Challenges of artificial intelligence in medicine and dermatology. Clin. Dermatol. 2024, 42, 210–215. [Google Scholar] [CrossRef] [PubMed]
- Goktas, P. Ethics, transparency, and explainability in generative ai decision-making systems: A comprehensive bibliometric study. J. Decis. Syst. 2024, 1–29. [Google Scholar] [CrossRef]
- Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef]
- Tonekaboni, S.; Joshi, S.; McCradden, M.D.; Goldenberg, A. What clinicians want: Contextualizing explainable machine learning for clinical end use. In Machine Learning for Healthcare Conference; PMLR: Cambridge MA, USA, 2019; pp. 359–380. [Google Scholar]
- Cabitza, F.; Campagner, A.; Sconfienza, L.M. As if sand were stone. New concepts and metrics to probe the ground on which to build trustable AI. BMC Med. Inform. Decis. Mak. 2020, 20, 219. [Google Scholar] [CrossRef]
- Durán, J.; Jongsma, K.R. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 2021, 47, 329–335. [Google Scholar] [CrossRef]
- Zhou, Q.; Zuley, M.; Guo, Y.; Yang, L.; Nair, B.; Vargo, A.; Ghannam, S.; Arefan, D.; Wu, S. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nat. Commun. 2021, 12, 7281. [Google Scholar] [CrossRef] [PubMed]
- Blanckaert, J. The “black box” of artificial intelligence in ophthalmology. Ophthalmol. Times Eur. 2024, 20, 13. [Google Scholar]
- He, J.; Baxter, S.L.; Xu, J.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019, 25, 30–36. [Google Scholar] [CrossRef] [PubMed]
- Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Fuso Nerini, F. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef] [PubMed]
- Zaidan, E.; Ibrahim, I.A. AI governance in a complex and rapidly changing regulatory landscape: A global perspective. Humanit. Soc. Sci. Commun. 2024, 11, 1121. [Google Scholar] [CrossRef]
- Hazarika, I. Artificial intelligence: Opportunities and implications for the health workforce. Int. Health 2020, 12, 241–245. [Google Scholar] [CrossRef]
- Bhattacharyya, M.; Miller, V.M.; Bhattacharyya, D.; Miller, L.E. High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus 2023, 15, e39238. [Google Scholar] [CrossRef] [PubMed]
- Anibal, J.T.; Huth, H.B.; Gunkel, J.; Gregurick, S.K.; Wood, B.J. Simulated misuse of large language models and clinical credit systems. NPJ Digit. Med. 2024, 7, 317. [Google Scholar] [CrossRef]
- Collins, B.X.; Bélisle-Pipon, J.-C.; Evans, B.J.; Ferryman, K.; Jiang, X.; Nebeker, C.; Novak, L.; Roberts, K.; Were, M.; Yin, Z.; et al. Addressing ethical issues in healthcare artificial intelligence using a lifecycle-informed process. JAMIA Open 2024, 7, ooae108. [Google Scholar] [CrossRef] [PubMed]
- Westerlund, M. The emergence of deepfake technology: A review. Technol. Innov. Manag. Rev. 2019, 9, 39–52. [Google Scholar] [CrossRef]
- Moulaei, K.; Yadegari, A.; Baharestani, M.; Farzanbakhsh, S.; Sabet, B.; Afrash, M.R. Generative artificial intelligence in healthcare: A scoping review on benefits, challenges and applications. Int. J. Med. Inform. 2024, 188, 105474. [Google Scholar] [CrossRef] [PubMed]
- Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
- Bates, D.W.; Levine, D.; Syrowatka, A.; Kuznetsova, M.; Craig, K.J.; Rui, A.; Jackson, G.P.; Rhee, K. The potential of artificial intelligence to improve patient safety: A scoping review. NPJ Digit. Med. 2021, 4, 54. [Google Scholar] [CrossRef] [PubMed]
- Layode, O.; Naiho, H.N.N.; Adeleke, G.S.; Udeh, E.O.; Labake, T.T. The role of cybersecurity in facilitating sustainable healthcare solutions: Overcoming challenges to protect sensitive data. Int. Med. Sci. Res. J. 2024, 4, 668–693. [Google Scholar] [CrossRef]
- Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 2019, 1, 501–507. [Google Scholar] [CrossRef]
- Liebrenz, M.; Schleifer, R.; Buadze, A.; Bhugra, D.; Smith, A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit. Health 2023, 5, e105–e106. [Google Scholar] [CrossRef] [PubMed]
- Ong, J.C.; Chang, S.Y.; William, W.; Butte, A.J.; Shah, N.H.; Chew, L.S.; Liu, N.; Doshi-Velez, F.; Lu, W.; Savulescu, J.; et al. Ethical and regulatory challenges of large language models in medicine. Lancet Digit. Health 2024, 6, e428–e432. [Google Scholar] [CrossRef] [PubMed]
- Schmidt, J.; Schutte, N.M.; Buttigieg, S.; Novillo-Ortiz, D.; Sutherland, E.; Anderson, M.; de Witte, B.; Peolsson, M.; Unim, B.; Pavlova, M.; et al. Mapping the regulatory landscape for artificial intelligence in health within the European Union. NPJ Digit. Med. 2024, 7, 229. [Google Scholar] [CrossRef]
- Grzybowski, A.; Brona, P. Approval and certification of ophthalmic AI devices in the European Union. Ophthalmol. Ther. 2023, 12, 633–638. [Google Scholar] [CrossRef]
- Shi, Z.; Du, X.; Li, J.; Hou, R.; Sun, J.; Marohabutr, T. Factors influencing digital health literacy among older adults: A scoping review. Front. Public Health 2024, 12, 1447747. [Google Scholar] [CrossRef] [PubMed]
- Dong, Q.; Liu, T.; Liu, R.; Yang, H.; Liu, C. Effectiveness of digital health literacy interventions in older adults: Single-arm meta-analysis. J. Med. Internet Res. 2023, 25, e48166. [Google Scholar] [CrossRef]
- Mökander, J.; Floridi, L. Operationalising AI governance through ethics-based auditing: An industry case study. AI Ethics 2023, 3, 451–468. [Google Scholar] [CrossRef] [PubMed]
- WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance; World Health Organization: Geneva, Switzerland, 2021; Available online: https://iris.who.int/bitstream/handle/10665/341996/9789240029200-eng.pdf (accessed on 5 February 2025).
- Birkstedt, T.; Minkkinen, M.; Tandon, A.; Mäntymäki, M. AI governance: Themes, knowledge gaps and future agendas. Internet Res. 2023, 33, 133–167. [Google Scholar] [CrossRef]
- Finlayson, S.G.; Bowers, J.D.; Ito, J.; Zittrain, J.L.; Beam, A.L.; Kohane, I.S. Adversarial attacks on medical machine learning. Science 2019, 363, 1287–1289. [Google Scholar] [CrossRef]
- Vishwakarma, L.P.; Singh, R.K.; Mishra, R.; Kumari, A. Application of artificial intelligence for resilient and sustainable healthcare system: Systematic literature review and future research directions. Int. J. Prod. Res. 2025, 63, 822–844. [Google Scholar] [CrossRef]
- Perni, S.; Lehmann, L.S.; Bitterman, D.S. Patients should be informed when AI systems are used in clinical trials. Nat. Med. 2023, 29, 1890–1891. [Google Scholar] [CrossRef]
- Imrie, F.; Davis, R.; van der Schaar, M. Multiple stakeholders drive diverse interpretability requirements for machine learning in healthcare. Nat. Mach. Intell. 2023, 5, 824–829. [Google Scholar] [CrossRef]
- Murdoch, B. Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Med. Ethics 2021, 22, 122. [Google Scholar] [CrossRef] [PubMed]
- Anthem Medical Data Breach. Wikipedia. 2015. Available online: https://en.wikipedia.org/wiki/Anthem_medical_data_breach (accessed on 5 February 2025).
- Health Service Executive Ransomware Attack. Wikipedia. 2021. Available online: https://en.wikipedia.org/wiki/Health_Service_Executive_ransomware_attack (accessed on 5 February 2025).
- Cohen, I.G.; Mello, M.M. HIPAA and protecting health information in the 21st century. JAMA 2018, 320, 231–232. [Google Scholar] [CrossRef]
- Ness, R.B. Influence of the HIPAA privacy rule on health research. JAMA 2007, 298, 2164–2170. [Google Scholar] [CrossRef]
- Yigzaw, K.Y.; Olabarriaga, S.D.; Michalas, A.; Marco-Ruiz, L.; Hillen, C.; Verginadis, Y.; De Oliveira, M.T.; Krefting, D.; Penzel, T.; Bowden, J.; et al. Health data security and privacy: Challenges and solutions for the future. In Roadmap to Successful Digital Health Ecosystems; Elsevier: Amsterdam, The Netherlands, 2022; pp. 335–362. [Google Scholar] [CrossRef]
- Putzier, M.; Khakzad, T.; Dreischarf, M.; Thun, S.; Trautwein, F.; Taheri, N. Implementation of cloud computing in the German healthcare system. NPJ Digit. Med. 2024, 7, 12. [Google Scholar] [CrossRef]
- Parikh, R.B.; Teeple, S.; Navathe, A.S. Addressing bias in artificial intelligence in health care. JAMA 2019, 322, 2377–2378. [Google Scholar] [CrossRef]
- Panch, T.; Mattie, H.; Atun, R. Artificial intelligence and algorithmic bias: Implications for health systems. J. Glob. Health 2019, 9, 020318. [Google Scholar] [CrossRef]
- Moore, C.M. The challenges of health inequities and AI. Intell.-Based Med. 2022, 6, 100067. [Google Scholar] [CrossRef]
- Byrne, M.D. Reducing bias in healthcare artificial intelligence. J. PeriAnesthesia Nurs. 2021, 36, 313–316. [Google Scholar] [CrossRef] [PubMed]
- Ricci Lara, M.A.; Echeveste, R.; Ferrante, E. Addressing fairness in artificial intelligence for medical imaging. Nat. Commun. 2022, 13, 4581. [Google Scholar] [CrossRef]
- Goktas, P.; Kucukkaya, A.; Karacay, P. Leveraging the efficiency and transparency of artificial intelligence-driven visual chatbot through smart prompt learning concept. Ski. Res. Technol. 2023, 29, e13417. [Google Scholar] [CrossRef] [PubMed]
- Seyyed-Kalantari, L.; Zhang, H.; McDermott, M.B.; Chen, I.Y.; Ghassemi, M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat. Med. 2021, 27, 2176–2182. [Google Scholar] [CrossRef] [PubMed]
- Daneshjou, R.; Smith, M.P.; Sun, M.D.; Rotemberg, V.; Zou, J. Lack of transparency and potential bias in artificial intelligence data sets and algorithms: A scoping review. JAMA Dermatol. 2021, 157, 1362–1369. [Google Scholar] [CrossRef]
- Wu, E.; Wu, K.; Daneshjou, R.; Ouyang, D.; Ho, D.E.; Zou, J. How medical AI devices are evaluated: Limitations and recommendations from an analysis of FDA approvals. Nat. Med. 2021, 27, 582–584. [Google Scholar] [CrossRef] [PubMed]
- Pagallo, U.; Sciolla, J.C.; Durante, M. The environmental challenges of AI in EU law: Lessons learned from the Artificial Intelligence Act (AIA) with its drawbacks. Transform. Gov. People Process Policy 2022, 16, 359–376. [Google Scholar] [CrossRef]
- U.S. Food and Drug Administration (FDA). Artificial Intelligence and Machine Learning in Software as a Medical Device (SaMD)—Regulatory Framework. 2025. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device (accessed on 5 February 2025).
- Mökander, J.; Axente, M.; Casolari, F.; Floridi, L. Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds Mach. 2022, 32, 241–268. [Google Scholar] [CrossRef] [PubMed]
- NHS AI Lab. Setting Up an ‘Implementation and Governance Framework’ for Artificial Intelligence (AI) Pilot Studies Taking Place in an NHS Trust, UK. 2023. Available online: https://www.digitalregulations.innovation.nhs.uk/case-studies/setting-up-an-implementation-and-governance-framework-for-artificial-intelligence-ai-pilot-studies-taking-place-in-an-nhs-trust/ (accessed on 5 February 2025).
- Mennella, C.; Maniscalco, U.; De Pietro, G.; Esposito, M. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 2024, 10, e26297. [Google Scholar] [CrossRef]
- Malik, S.; Surbhi, A. Artificial intelligence in mental health landscape: A qualitative analysis of ethics and law. In AIP Conference Proceedings; AIP Publishing: Melville, NY, USA, 2024; Volume 3220, No. 1. [Google Scholar]
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
- Reddy, S.; Allan, S.; Coghlan, S.; Cooper, P. A governance model for the application of AI in health care. J. Am. Med. Inform. Assoc. 2020, 27, 491–497. [Google Scholar] [CrossRef] [PubMed]
- Palkova, K. Ethical guidelines for artificial intelligence in healthcare from the sustainable development perspective. Eur. J. Sustain. Dev. 2021, 10, 90. [Google Scholar] [CrossRef]
- Montag, C.; Finck, M. Successful implementation of the EU AI Act requires interdisciplinary efforts. Nat. Mach. Intell. 2024, 6, 1415–1417. [Google Scholar] [CrossRef]
- Hacker, P. Sustainable AI regulation. Common Mark. Law Rev. 2024, 61, 345–386. [Google Scholar] [CrossRef]
- Bottrighi, A.; Grosso, F.; Ghiglione, M.; Maconi, A.; Nera, S.; Piovesan, L.; Raina, E.; Roveta, A.; Terenziani, P. Symbolic AI approach to medical training. J. Med. Syst. 2025, 49, 2. [Google Scholar] [CrossRef]
- Ennab, M.; Mcheick, H. Enhancing interpretability and accuracy of AI models in healthcare: A comprehensive review on challenges and future directions. Front. Robot. AI 2024, 11, 1444763. [Google Scholar] [CrossRef] [PubMed]
- Yagin, F.H.; Colak, C.; Algarni, A.; Gormez, Y.; Guldogan, E.; Ardigò, L.P. Hybrid explainable artificial intelligence models for targeted metabolomics analysis of diabetic retinopathy. Diagnostics 2024, 14, 1364. [Google Scholar] [CrossRef] [PubMed]
- Kim, S.Y.; Kim, D.H.; Kim, M.J.; Ko, H.J.; Jeong, O.R. XAI-based clinical decision support systems: A systematic review. Appl. Sci. 2024, 14, 6638. [Google Scholar] [CrossRef]
- Goktas, P.; Carbajo, R.S. PPSW–SHAP: Towards interpretable cell classification using tree-based SHAP image decomposition and restoration for high-throughput bright-field imaging. Cells 2023, 12, 1384. [Google Scholar] [CrossRef] [PubMed]
- Goktas, P.; Carbajo, R.S. Unleashing the power of high-throughput bright-field imaging for enhanced mesenchymal cell separation: A novel supervised clustering approach in vitro augmentation of healthy and stressful conditions. In European Conference on Biomedical Optics; Optica Publishing Group: Washington, DC, USA, 2023; p. 126290D. [Google Scholar] [CrossRef]
- Pillai, V. Enhancing transparency and understanding in AI decision-making processes. Iconic Res. Eng. J. 2024, 8, 168–172. [Google Scholar]
- Rai, A. Explainable AI: From black box to glass box. J. Acad. Mark. Sci. 2020, 48, 137–141. [Google Scholar] [CrossRef]
- Avacharmal, R. Explainable AI: Bridging the gap between machine learning models and human understanding. J. Inform. Educ. Res. 2024, 4. [Google Scholar] [CrossRef]
- Lundberg, S. A unified approach to interpreting model predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar]
- Zhao, X.; Huang, W.; Huang, X.; Robu, V.; Flynn, D. Baylime: Bayesian local interpretable model-agnostic explanations. In Uncertainty in Artificial Intelligence; PMLR: Cambridge MA, USA, 2021; pp. 887–896. [Google Scholar]
- Shaik, T.; Tao, X.; Xie, H.; Li, L.; Velasquez, J.D.; Higgins, N. QXAI: Explainable AI framework for quantitative analysis in patient monitoring systems. arXiv 2023, arXiv:2309.10293. [Google Scholar]
- Sadeghi, Z.; Alizadehsani, R.; Cifci, M.A.; Kausar, S.; Rehman, R.; Mahanta, P.; Bora, P.K.; Almasri, A.; Alkhawaldeh, R.S.; Hussain, S.; et al. A review of Explainable Artificial Intelligence in healthcare. Comput. Electr. Eng. 2024, 118, 109370. [Google Scholar] [CrossRef]
- Kim, B.; Park, J.; Suh, J. Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information. Decis. Support Syst. 2020, 134, 113302. [Google Scholar] [CrossRef]
- Srinivasu, P.N.; Sandhya, N.; Jhaveri, R.H.; Raut, R. From blackbox to explainable AI in healthcare: Existing tools and case studies. Mob. Inf. Syst. 2022, 2022, 8167821. [Google Scholar] [CrossRef]
- Yagiz, M.A.; Mohajer Ansari, P.; Pesé, M.D.; Goktas, P. Transforming in-vehicle network intrusion detection: VAE-based knowledge distillation meets explainable AI. In Proceedings of the Sixth Workshop on CPS&IoT Security and Privacy, Salt Lake City, UT, USA, 14–18 October 2024; pp. 93–103. [Google Scholar] [CrossRef]
- Sankar, B.S.; Gilliland, D.; Rincon, J.; Hermjakob, H.; Yan, Y.; Adam, I.; Lemaster, G.; Wang, D.; Watson, K.; Bui, A.; et al. Building an ethical and trustworthy biomedical AI ecosystem for the translational and clinical integration of foundation models. Bioengineering 2024, 11, 984. [Google Scholar] [CrossRef] [PubMed]
- O’Sullivan, S.; Nevejans, N.; Allen, C.; Blyth, A.; Leonard, S.; Pagallo, U.; Holzinger, K.; Holzinger, A.; Sajid, M.I.; Ashrafian, H. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2019, 15, e1968. [Google Scholar] [CrossRef]
- Shneiderman, B. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. (TiiS) 2020, 10, 26. [Google Scholar] [CrossRef]
- Morley, J.; Machado, C.C.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The ethics of AI in health care: A mapping review. Soc. Sci. Med. 2020, 260, 113172. [Google Scholar] [CrossRef] [PubMed]
- Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
- Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design; Wiley: Hoboken, NJ, USA, 2022; pp. 535–545. [Google Scholar] [CrossRef]
- Pedersen, C.L.; Ritter, T. Digital authenticity: Towards a research agenda for the AI-driven fifth phase of digitalization in business-to-business marketing. Ind. Mark. Manag. 2024, 123, 162–172. [Google Scholar] [CrossRef]
- Sun, T.Q.; Medaglia, R. Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Gov. Inf. Q. 2019, 36, 368–383. [Google Scholar] [CrossRef]
- Zuiderwijk, A.; Chen, Y.C.; Salem, F. Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Gov. Inf. Q. 2021, 38, 101577. [Google Scholar] [CrossRef]
- ISO/IEC 42001:2023; Information Technology—Artificial Intelligence—Management System. ISO: New York, NY, USA, 2023. Available online: https://www.iso.org/standard/81230.html (accessed on 5 February 2025).
- IEEE 2801-2022; IEEE Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence. IEEE: Piscataway, NJ, USA, 2022. Available online: https://standards.ieee.org/ieee/2801/7459/ (accessed on 5 February 2025).
- Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
- Thomas, C.; Ostmann, F. Enabling AI Governance and Innovation Through Standards. 2024. Available online: https://www.unesco.org/en/articles/enabling-ai-governance-and-innovation-through-standards (accessed on 5 February 2025).
- Ibrahim, H.; Liu, X.; Rivera, S.C.; Moher, D.; Chan, A.W.; Sydes, M.R.; Calvert, M.J.; Denniston, A.K. Reporting guidelines for clinical trials of artificial intelligence interventions: The SPIRIT-AI and CONSORT-AI guidelines. Trials 2021, 22, 11. [Google Scholar] [CrossRef]
- Lekadir, K.; Feragen, A.; Fofanah, A.J.; Frangi, A.F.; Buyx, A.; Emelie, A.; Lara, A.; Porras, A.R.; Chan, A.W.; Navarro, A. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. arXiv 2023, arXiv:2309.12325. [Google Scholar] [CrossRef]
- Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif. Intell. Med. 2024, 151, 102861. [Google Scholar] [CrossRef] [PubMed]
- Peters, D.; Vold, K.; Robinson, D.; Calvo, R.A. Responsible AI—Two frameworks for ethical design practice. IEEE Trans. Technol. Soc. 2020, 1, 34–47. [Google Scholar] [CrossRef]
- Economou-Zavlanos, N.J.; Bessias, S.; Cary Jr, M.P.; Bedoya, A.D.; Goldstein, B.A.; Jelovsek, J.E.; O’Brien, C.L.; Walden, N.; Elmore, M.; Parrish, A.B.; et al. Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare. J. Am. Med. Inform. Assoc. 2024, 31, 705–713. [Google Scholar] [CrossRef] [PubMed]
- Goktas, P.; Grzybowski, A. Balancing the promises and challenges of artificial intelligence. Ophthalmol. Times Eur. 2024, 30–31. Available online: https://europe.ophthalmologytimes.com/view/balancing-the-promises-and-challenges-of-artificial-intelligence-ethics-best-practices-medicine-ophthalmology (accessed on 5 February 2025).
- Ashok, M.; Madan, R.; Joha, A.; Sivarajah, U. Ethical framework for Artificial Intelligence and digital technologies. Int. J. Inf. Manag. 2022, 62, 102433. [Google Scholar] [CrossRef]
- Schlicht, L.; Räker, M. A context-specific analysis of ethical principles relevant for AI-assisted decision-making in health care. AI Ethics 2024, 4, 1251–1263. [Google Scholar] [CrossRef]
- van Wynsberghe, A. Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
- Nishant, R.; Kennedy, M.; Corbett, J. Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int. J. Inf. Manag. 2020, 53, 102104. [Google Scholar] [CrossRef]
- Lenzen, M.; Malik, A.; Li, M.; Fry, J.; Weisz, H.; Pichler, P.P.; Chaves, L.S.; Capon, A.; Pencheon, D. The environmental footprint of health care: A global assessment. Lancet Planet. Health 2020, 4, e271–e279. [Google Scholar] [CrossRef] [PubMed]
- Bolón-Canedo, V.; Morán-Fernández, L.; Cancela, B.; Alonso-Betanzos, A. A review of green artificial intelligence: Towards a more sustainable future. Neurocomputing 2024, 599, 128096. [Google Scholar] [CrossRef]
- Anthony, L.F.W.; Kanding, B.; Selvan, R. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv 2020, arXiv:2007.03051. [Google Scholar] [CrossRef]
- Lottick, K.; Susai, S.; Friedler, S.A.; Wilson, J.P. Energy usage reports: Environmental awareness as part of algorithmic accountability. arXiv 2019, arXiv:1911.08354. [Google Scholar] [CrossRef]
- Vimbi, V.; Shaffi, N.; Mahmud, M. Interpreting artificial intelligence models: A systematic review on the application of LIME and SHAP in Alzheimer’s disease detection. Brain Inform. 2024, 11, 10. [Google Scholar] [CrossRef]
- Daneshjou, R.; Vodrahalli, K.; Novoa, R.A.; Jenkins, M.; Liang, W.; Rotemberg, V.; Ko, J.; Swetter, S.M.; Bailey, E.E.; Gevaert, O.; et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci. Adv. 2022, 8, eabq6147. [Google Scholar] [CrossRef] [PubMed]
- Adler-Milstein, J.; Aggarwal, N.; Ahmed, M.; Castner, J.; Evans, B.J.; Gonzalez, A.A.; James, C.A.; Lin, S.; Mandl, K.D.; Matheny, M.E.; et al. Meeting the moment: Addressing barriers and facilitating clinical adoption of artificial intelligence in medical diagnosis. NAM Perspect. 2022, 2022, 10-31478. [Google Scholar] [CrossRef] [PubMed]
- Dewasiri, N.J.; Rathnasiri, M.S.H.; Karunarathna, K.S.S.N. Artificial intelligence-driven technologies for environmental sustainability in the healthcare industry. In Transforming Healthcare Sector Through Artificial Intelligence and Environmental Sustainability; Springer Nature: Singapore, 2025; pp. 67–87. [Google Scholar]
- Richie, C. Environmentally sustainable development and use of artificial intelligence in health care. Bioethics 2022, 36, 547–555. [Google Scholar] [CrossRef] [PubMed]
Aspect | Description | Example/Implications | Reference(s) |
---|---|---|---|
Sensitive Data Handling | AI systems depend on vast datasets, creating risks of breaches, re-identification, and privacy violations | | Perni et al., 2023 [47] |
Private Entities | Private organizations managing health data may prioritize competing goals over patient privacy | | Mittelstadt, 2019 [35]; Murdoch, 2021 [49] |
Data Sovereignty | Governance of patient data varies across regions, creating challenges for multinational applications of AI in healthcare | | Cohen & Mello, 2018 [52]; Mittelstadt, 2019 [35]; Murdoch, 2021 [49]; Imrie et al., 2023 [48]; Putzier et al., 2024 [55] |
Cross-Border Challenges | Fragmented global regulations complicate AI training and deployment on diverse datasets | | Mittelstadt, 2019 [35]; Imrie et al., 2023 [48]; Putzier et al., 2024 [55] |
Regulatory Gaps | Existing frameworks such as HIPAA are outdated for addressing AI’s demands in healthcare, particularly for big data applications | | Ness, 2007 [53]; Murdoch, 2021 [49] |
Technological Risks | Cloud computing and digital health solutions offer scalability but introduce potential vulnerabilities to cyberattacks | | Imrie et al., 2023 [48]; Putzier et al., 2024 [55] |
Aspect | Description | Examples/Implications | Reference(s) |
---|---|---|---|
Accountability vs. Liability | Accountability refers to the obligation to explain and justify actions; liability involves legal responsibility and potential financial reparations | Healthcare providers, AI developers, and institutions must understand their roles. If an autonomous surgical robot makes errors, who is answerable, and who bears legal blame? | O’Sullivan et al., 2019 [93] |
Culpability | Pertains to moral wrongdoing, challenging to assign to non-human agents | AI systems lack moral agency, complicating culpability. The emphasis shifts to human oversight, ensuring “doctor-in-the-loop” models of surgical AI | O’Sullivan et al., 2019 [93] |
Multi-Level Governance | Accountability structures span team, organizational, industry, and regulatory levels | At the team level: rigorous documentation and audits. At the organizational level: safety culture and continuous improvement. At industry/regulatory level: standards, certifications, and external oversight | Shneiderman, 2020 [94]; Morley et al., 2020 [95] |
Embedding Ethical Principles | Aligning AI design and deployment with ethical norms to ensure fairness, transparency, and responsibility | Formalized guidelines and professional codes help clarify stakeholder duties, reduce disparities, and maintain public trust | Shneiderman, 2020 [94]; Morley et al., 2020 [95] |
Stakeholder Engagement | Inclusive engagement of clinicians, patients, developers, policymakers, and regulators | Regular input from diverse stakeholders ensures that accountability frameworks reflect real-world clinical needs and societal values, maintaining trust and legitimacy | Shneiderman, 2020 [94]; Morley et al., 2020 [95] |
Priority Area | Key Challenges | Proposed Actions | Primary Stakeholders | Intended Impact |
---|---|---|---|---|
Cross-Disciplinary Collaboration | Fragmented expertise, misaligned incentives, and limited communication across AI developers, clinicians, policymakers, and ethicists | Establish multidisciplinary working groups, encourage joint training programs, and foster international consortia for standards-setting and knowledge exchange | AI developers, healthcare practitioners, policymakers, ethicists, industry consortia, professional associations | More cohesive governance frameworks, improved policy relevance, enhanced credibility and trust in AI solutions |
Ethical by Design Integration | Retrofitted ethics checks, delayed detection of biases, and insufficient accountability mechanisms embedded in technology | Embed ethical principles (fairness, accountability, privacy) at the inception of model development, implement continuous impact assessments, adopt formalized “ethical-by-design” guidelines and iterative audits | AI developers, ethicists, regulators, legal experts, clinical oversight boards | Earlier identification of risks, stronger patient protections, higher adoption rates due to enhanced trust and transparency |
Sustainable and Equitable AI | High energy consumption, environmental degradation, unequal access to advanced AI tools, and risk of exacerbating health disparities | Optimize algorithms for energy efficiency, source renewable computing power, enforce procurement policies prioritizing sustainable vendors, integrate fairness checks, and align with global sustainability goals | Healthcare institutions, AI developers, environmental bodies, public health officials, sustainability experts | Reduced ecological footprint, long-term resource stewardship, equitable access to advanced healthcare tools, alignment with Sustainable Development Goals |
Adaptive Oversight and Regulation | Static, fragmented regulatory models that cannot keep pace with rapidly evolving AI technologies | Develop adaptive regulatory frameworks, promote iterative certification models, support real-time performance monitoring, encourage flexible policy experimentation | Policymakers, regulatory agencies, professional standards organizations, industry partners | Responsive oversight that evolves with emerging technologies, ensuring continuous compliance, safety, and patient well-being |
Public Engagement and Transparency | Low patient and public trust due to opaque decision-making, fear of unchecked innovation, and limited patient input | Provide accessible explanations of AI decisions, facilitate patient and community forums, public reporting of model performance and bias audits, and integrate user feedback into ongoing improvements | Patient advocacy groups, healthcare institutions, policymakers, AI developers, civil society organizations | Heightened public trust, improved patient satisfaction, and greater societal acceptance of AI-driven healthcare |
Dimension | Key Metrics | Examples/Implications | Reference(s) |
---|---|---|---|
Transparency | Explainability Score, % of AI decisions with interpretable outputs | Feature attribution techniques such as SHAP or LIME are used to justify model predictions in clinical decision-making | Vimbi et al., 2024 [119] |
Fairness | Bias reduction rate, model performance disparity across demographics | AI models benchmarked across diverse skin tones to ensure equitable diagnosis accuracy | Daneshjou et al., 2022 [120] |
Accountability | Compliance rate with AI ethics guidelines, human-in-the-loop ratio | AI-assisted radiology systems requiring clinician oversight before issuing diagnostic reports | Adler-Milstein et al., 2022 [121] |
Safety | Error rate reduction, model robustness under adversarial testing | Stress-testing AI-driven clinical decision tools to minimize false positives and negatives | Finlayson et al., 2019 [45] |
Sustainability | Carbon footprint of AI training, green AI adoption rate | Evaluating computational costs and implementing energy-efficient AI models in hospitals | Dewasiri et al., 2025 [122]; Richie, 2022 [123] |
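The dimensions in the table above can be combined into a single auditable scorecard. The following is an illustrative sketch only; the weights, metric definitions, and example values are hypothetical and would need to be set, and periodically revisited, by governance stakeholders.

```python
# Illustrative governance scorecard built from the table above.
# Weights and example values are hypothetical, not normative.
GOVERNANCE_WEIGHTS = {
    "transparency": 0.25,    # e.g., share of decisions with interpretable outputs
    "fairness": 0.25,        # e.g., 1 - worst-case subgroup performance gap
    "accountability": 0.20,  # e.g., human-in-the-loop ratio
    "safety": 0.20,          # e.g., robustness under adversarial testing
    "sustainability": 0.10,  # e.g., green AI adoption rate
}

def governance_score(metrics):
    """Weighted aggregate of per-dimension metrics, each scaled to [0, 1]."""
    return sum(GOVERNANCE_WEIGHTS[k] * metrics[k] for k in GOVERNANCE_WEIGHTS)

example = {"transparency": 0.9, "fairness": 0.85, "accountability": 0.95,
           "safety": 0.8, "sustainability": 0.6}
print(f"Composite governance score: {governance_score(example):.2f}")
```

Reporting such a composite alongside its per-dimension components would let regulators and institutions track trustworthiness over time without collapsing distinct obligations into a single opaque number.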