1. Introduction
In recent years, artificial intelligence (AI) has developed rapidly, and it now affects people’s lives in many fields, including healthcare [
1], intelligent transportation [
2], and education [
3]. For instance, genetic algorithms are used to predict outcomes in critically ill patients in healthcare [
4]; sensing algorithms are applied for self-driving vehicles in smart transportation [
5]; and natural language processing (NLP) is combined with machine learning (ML) to facilitate online learning in education [
6]. As one of the core technologies of AI, ML has brought the development of AI to an advanced level [
7,
8]. ML employs algorithmic methods that enable machines to solve problems without explicit computer programming [
9]. Deep learning (DL) is a subset of ML based on multi-layered artificial neural networks, which can be further utilized to solve complex problems using unstructured data, much like the human brain [
10,
11]. Healthcare is one of the most promising application domains for ML and DL [
12]. AI techniques and their applications can help to detect cancer faster and earlier than before, make more accurate medical diagnoses, care for and monitor the elderly using robots, etc. [
12,
13,
14]. ML techniques can process massive amounts of data and make increasingly accurate assessments and predictions [
15,
16]. Although ML has advanced the development of AI, it also brings up ethical issues, especially in the healthcare domain [
17,
18,
19]. Ethics has been identified as a priority for developing and deploying AI across sectors [
20,
21]. Ethical decision making by AI systems refers to the computational process of evaluating and selecting alternatives in a manner compliant with social, ethical, and legal requirements [
22]. The resulting ethical issues affect the further development and acceptance of AI, especially in healthcare, where technology must comply with the law, regulations, and privacy principles to ensure the maintenance of the common good [
13,
17,
18,
19].
The use of sophisticated ML algorithms employing DL and other complex techniques leads to black-box models, which may have low transparency and explainability [
23]. Black-box models make it difficult even for their developers to explain how an AI system makes decisions [
24]. Meanwhile, users are confronted with decisions without an explanation for these decisions [
25]. The “black-box” nature of ML often clashes with legislation in high-stakes domains, where stakeholders can experience severe consequences if a bad decision is made. Particularly in the healthcare domain, where lives are at stake, the actual adoption of AI in everyday practice is limited by numerous factors, including accuracy, explainability, transparency, and compatibility [
26]. This makes it important to promote the explainability of AI algorithms. Explainability is essential to responsible AI and can build trust in and engagement with AI [
27]. Algorithmic transparency and explainability have been requested by several societal bodies, such as the government [
26], the media [
28,
29], and the legal community [
30]. The research community has embraced this notion over the last few years, and numerous efforts have been made to design explainable AI systems (e.g., [
31,
32]). Nevertheless, aside from explainability, multiple ethical concerns still exist when using AI-enabled solutions in the healthcare field, and they are gradually becoming the dominant factors influencing the adoption of AI.
Policymakers and related professionals have been looking for approaches to cope with the ethical risks associated with AI development. Examples of rules and regulations are the “Ethics Guidelines for Trustworthy AI” from the European Commission [
33], “Report on the Future of Artificial Intelligence” from the US [
34], and the “Beijing AI Principles” from the Chinese government [
35]. Among these governing bodies, the European Union (EU) has been acknowledged as a leader in establishing a framework for ethical regulations and rules for AI [
36]. Unlike the other two sets of guidelines, the fundamental principle of the EU guidelines is to promote a “human-centered” approach that respects European values and regulations [
33]. The ethical challenges addressed by the EU framework are globally relevant. As they are based on a fundamental-rights approach, the relevance and importance of these guidelines can be considered universal. The authority and obligations underlying these guidelines form the framework for most of the United Nations’ (UN) Sustainable Development Goals (SDGs). This also affects the development strategies in low- and middle-income countries outside the EU [
37]. However, these guidelines apply to all industrial sectors; none of them specifically and directly addresses the ethical and legal aspects of AI in healthcare.
In addition to the ethical regulations and policies mentioned above, many academic publications discuss general ethical issues related to AI. Examples are “The global landscape of AI ethics guidelines” by Jobin’s group, which presents an overview of existing ethical guidelines and strategies [
38]; “The Ethics of AI Ethics: An Evaluation of Guidelines” by Hagendorff, which analyzes 22 ethical guidelines for AI and provides recommendations for overcoming their relative ineffectiveness [
39]; and “Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications” by Ryan and Carsten Stahl, which provides a detailed explanation of 11 normative implications of current AI ethical guidelines directed at AI developers and organizational users [
40]. Although these three documents present very useful discussions of ethical AI issues in a general domain, none of them specifically address the ethics of AI in healthcare.
Academic publications discussing the ethical issues concerning AI in healthcare do exist, such as “The ethics of AI in health care: A mapping review” by Morley’s research group [
41], “Ethical and legal challenges of artificial intelligence-driven healthcare” by Gerke’s group [
42], and “A governance model for the application of AI in healthcare” by Reddy’s group [
43]. Morley’s group focused on mapping the ethical issues based on epistemic, normative, and overarching perspectives [
41]. Gerke’s group explored ethical issues from the perspective of legal challenges, but did not present a systematic review of how AI can influence them in healthcare applications [
42]. Reddy’s group addressed the introduction and implementation of a proposed governance model in healthcare. However, their specification of the ethical issues focused only on a general governance model addressing the essential elements of safety and the responsible use of AI.
In short, governmental policy and academic research have often addressed the ethics of AI in a general sense, but have devoted much less attention to the specific field of healthcare. Some publications do elaborate on AI ethical issues in healthcare, but they fail to give a systematic overview of the ethical issues identified in the application of AI techniques in healthcare.
In this literature review, we aimed to provide an overview of the currently identified ethical concerns related to AI in healthcare. Specifically, the following questions were answered: (1) what are the ethical issues related to AI in healthcare, and (2) what are the ethical strategies related to AI in healthcare? In this way, we aimed to help development teams working on AI for healthcare take the necessary actions to proactively manage ethical issues related to AI in their design processes.
This paper is organized as follows:
Section 2 presents the methodology applied in the systematic review;
Section 3 presents the review results;
Section 4 discusses the results and provides the conclusions of this paper.
3. Results
In the phase of preliminary study collection, the search string retrieved a total of 303 documents, of which 5 were from the ACM Digital Library, 131 from PubMed, 90 from Nature-SCI, 73 from IEEE Xplore, and 4 from the AI Ethics Guidelines Global Inventory (
Figure 1). The selected documents fulfilled the inclusion criteria of being written in English and published between January 2010 and September 2020. Two reviewers worked in parallel to obtain the results of this phase. As can be seen in
Figure 1, out of the 303 documents obtained from the database search, the titles and abstracts of 300 unique documents were selected after removing duplicates. Of these 300 documents, 122 full documents were screened on the basis of the inclusion criteria regarding ethical issues and guidelines in the healthcare domain. This eventually resulted in 45 documents that were subjected to a thorough full-text analysis, after 76 documents had been excluded based on the inclusion and exclusion criteria.
We observed the following distribution of the databases used: 30 from PubMed, 7 from Nature-SCI, 4 from the AI Ethics Guidelines Global Inventory, 3 from IEEE Xplore, and 1 from the ACM Digital Library. The works’ scores in the quality assessment stage are displayed in
Table 4 in decreasing order.
Table 5 lists the ethical issues that we identified based on the 45 selected documents. As highlighted in the Methods section, we added one codeword because it was addressed in EGTAI: the term “control” was added as a related codeword, signifying the main ethical issue of freedom and autonomy, for the thematic code mapping process [
33]. EGTAI emphasizes that “control” is one of the fundamental human rights to be recognized when applying AI solutions [
33]. After the synthesis of the data, we established 11 main ethical issues and 19 sub-issues as the initial results (
Table 5). The ethical sub-issues were derived from the code mapping of the related codewords. We were not able to apply all the codes from Jobin’s work, as the selected documents could not be matched to all of them. After the thematic code mapping, as part of the data synthesis process, we added “conflicts” as the 12th main ethical issue. “Conflict” is not mentioned in the work of Jobin et al. (2019). This decision was made on the basis of full-text screening and the detailed interpretation of the selected documents. Conflict has two components: one relates to the conflicting goals of government policy and users, and the other to conflicts in decision-making between doctors and patients. In addition, we renamed the issue of “non-maleficence” in Jobin’s work as “patient safety and cyber security” because, according to the selected documents, the ethical issue of non-maleficence addresses patient safety and cyber security.
An overview and specification of the results obtained in different stages of the selection process are presented in
Table 5. After the full-text thematic analysis, the ethical issues related to AI that were identified in each document were synthesized into 12 overarching main ethical issues. These were justice and fairness, freedom and autonomy, privacy, transparency, patient safety and cyber security, trust, beneficence, responsibility, solidarity, sustainability, dignity, and conflicts.
Table 5 also shows the results of the thematic analysis of each main ethical issue related to AI and its related sub-issues in the selected documents. There were 19 ethical sub-issues included among the main issues. Details regarding the specification of the main ethical issues and related sub-issues are elaborated further in this section.
In the following, all the identified main ethical issues and sub-issues, as well as the related coping strategies, are discussed in detail.
3.1. Main Ethical Issue 1: Justice and Fairness
As can be seen in
Table 5, justice and fairness was found to be the most prevalent ethical issue (in 24 out of 45 sources). Justice and fairness were mainly expressed in reference to bias, fairness, discrimination, and equality. This theme is related to the fair distribution of medical goods and services, without discrimination among individuals.
3.1.1. Bias
As can be observed in
Table 5, bias was the most frequently discussed ethical issue related to justice and fairness (N = 22). In healthcare, bias is mainly introduced by self-learning algorithms during the learning process [
54,
65]. It can come from “algorithmic bias” due to a systematic error or an unintended tendency of AI algorithms to prefer one outcome over another [
49,
56], such as self-fulfilling [
49], overfitting [
54], or black-box problems [
49,
58]. In addition, the training data used as inputs, especially inappropriate and poorly representative training data for AI models, represent another factor contributing to the issue of bias [
43,
49]. Such input biases can arise when the input data used for training the model do not represent the full spectrum of the target population or when the system has incomplete data [
43,
49].
In the selected documents, sampling bias [
43,
65,
78] and gender bias [
56,
59] were notable issues related to input data. Sampling bias is a well-known and influential issue, exemplified in the Framingham Heart Study, which included people from Framingham, a small, racially homogeneous town in Massachusetts. Sampling bias can result in the over-treatment or under-treatment of certain ethnic groups [
65]. Gender bias [
56,
59] arises, for example, when AI algorithms only learn from predominantly male data, which may cause an AI-powered clinical decision support tool not to capture fully the complexities of a disease in women [
59]. For example, 80% of cases of spontaneous coronary artery dissection (SCAD), a condition in which a blood vessel of the heart tears, occur in women. Still, women are underrepresented in clinical trials investigating SCAD [
59].
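To make the mechanism behind such input bias concrete, the following toy sketch shows how a decision rule fitted on data dominated by one group can systematically fail the underrepresented group. All numbers, the marker, and the threshold rule are invented purely for illustration; no real clinical data or model are implied.

```python
from statistics import mean

def fit_threshold(cases):
    """Midpoint between the mean marker value of positives and negatives."""
    pos = [m for m, label in cases if label]
    neg = [m for m, label in cases if not label]
    return (mean(pos) + mean(neg)) / 2

def sensitivity(cases, t):
    """Fraction of true positives whose marker exceeds the threshold."""
    pos = [m for m, label in cases if label]
    return sum(m > t for m in pos) / len(pos)

# Training set: nine patients from group A, a single positive from group B,
# whose positive cases present with lower marker values.
train = [(8.0, True), (8.2, True), (7.9, True), (8.1, True),
         (2.0, False), (2.1, False), (1.9, False), (2.2, False), (2.0, False),
         (5.0, True)]  # the lone group-B patient

threshold = fit_threshold(train)

group_a_test = [(8.0, True), (7.8, True), (2.0, False)]
group_b_test = [(4.8, True), (4.5, True), (4.2, True), (2.0, False)]

print(sensitivity(group_a_test, threshold))  # 1.0: every group-A positive caught
print(sensitivity(group_b_test, threshold))  # about 0.33: most group-B positives missed
```

Even though the rule is "correct" on the data it saw, its sensitivity collapses for the group that was barely sampled, which is exactly the over-/under-treatment risk described above.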
3.1.2. Fairness
Fairness was the second-most frequently discussed ethical sub-issue within main ethical issue 1. Ten selected documents addressed this issue (
Table 5). It is one of the foundations of justice and fairness, and, along with bias, it has been a topical issue for centuries [
78]. The complexity and opaqueness of most ML algorithms affect the fairness of AI systems. They lead to an unexplainable decision-making process and output, which makes it difficult to know if the AI system is fair or not [
70,
85].
In particular, two factors within the area of fairness could be distinguished in the literature: gender and ethnicity [
78,
85]. One of the earliest cases involving both factors dates back to the 1970s, when algorithms were used by St George’s Hospital Medical School in the United Kingdom to discriminate based on gender and race in making initial screening decisions for medical school applicants [
78]. Some organizations do not hold data on gender and ethnicity for legal, institutional, or commercial reasons, which can cause a lack of fairness. Moreover, without these data, the risk of indirect discrimination also increases [
85].
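One reason fairness is hard to verify, as noted above, is that it must be measured on a group level. A minimal, hypothetical check of one common group-fairness criterion ("equal opportunity": equal true-positive rates across groups) can be sketched as follows; the records of (group, actual condition, model prediction) are invented for illustration only.

```python
def tpr(records, group):
    """True-positive rate: share of actual positives that the model flags."""
    flagged = [pred for g, actual, pred in records
               if g == group and actual == 1]
    return sum(flagged) / len(flagged)

records = [
    # (group, actual condition, model prediction) -- invented data
    ("male",   1, 1), ("male",   1, 1), ("male",   1, 1), ("male",   1, 0),
    ("male",   0, 0), ("male",   0, 0),
    ("female", 1, 1), ("female", 1, 0), ("female", 1, 0), ("female", 1, 0),
    ("female", 0, 0), ("female", 0, 0),
]

gap = tpr(records, "male") - tpr(records, "female")
print(tpr(records, "male"))    # 0.75
print(tpr(records, "female"))  # 0.25
print(gap)                     # 0.5: the condition is detected far less often in women
```

Note that this audit is only possible when group membership is recorded, which is precisely what the organizations mentioned above cannot or do not do.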
3.1.3. Discrimination and Equality
Similarly to the issue of fairness, issues related to discrimination and equality were discussed in 10 documents (
Table 5). Due to the data-driven nature of AI technology, discrimination mainly stems from the selection of training datasets [
51]. An AI solution could lack equality if the training data are not representative or if the target is not appropriately selected [
50]. For instance, France Assos Santé has emphasized that “one of the dangers often identified with the computerization of health data concerns the practice of risk selection by insurance companies” [
48].
Additionally, aggregated data may be used to make decisions about larger populations or to create groups that did not previously exist, which could lead to discrimination, profiling, or surveillance [
50]. Some AI models deployed in domains outside of healthcare have shown racial biases/discrimination [
43,
48,
49,
56,
65,
75], such as overestimating criminal recidivism among members of certain racial groups [
43]. This ethical issue also appears in the healthcare domain. In the United States, a healthcare allocation algorithm that is widely used to determine patients’ healthcare needs, and thus the services they receive, was shown to exhibit significant racial bias: African American patients were significantly “sicker” than the white patients who obtained the same risk scores and, therefore, the same services through the algorithm [
56,
58].
3.1.4. Strategies for Main Ethical Issue 1
Regarding the issue of justice and fairness, the selected documents call for the following strategies, emphasizing AI algorithmic and data-related perspectives.
Algorithmic perspective
Data perspective
Guarantee that the input data collection and analysis are conducted in a mindful, objective, and diverse manner [
59,
78].
Encourage the cooperation of stakeholders, including ethicists, social scientists, and regulatory scholars, in the development of AI systems [
72].
3.2. Main Ethical Issue 2: Freedom and Autonomy
The second-most frequently addressed issue was freedom and autonomy. As can be seen in
Table 5, there were 22 documents related to this issue. In the selected literature, this main ethical issue included the sub-issues of control, respecting human autonomy, and informed consent.
3.2.1. Control
Table 5 shows that 12 documents emphasized this issue. Based on these sources, we understand that the issue of control concerns the management of the data involved in the AI system [
48,
54,
66,
75,
80,
83]. It reflects the ability of individuals to control their data that are used by different stakeholders [
75], as well as the ability of end-users to control their own data and thus secure their privacy directly [
83]. Control also relates to the decisions or recommendations regarding the AI systems used in clinical diagnosis [
37,
54,
71]. Control of decision-making can only be safeguarded by developing AI and applying human intelligence in line with human values and needs [
37]. Control in AI systems is related to moral responsibility and influences accountability for patient harm [
71]. For example, clinicians do not have direct control over the decisions and recommendations that the AI system makes [
71]. Furthermore, clinicians lack an understanding of how AI systems translate the input data into output decisions because of the opaque nature of AI systems [
71]. Additionally, a lack of robust control over AI systems’ recommendations will harm patients’ trust in the clinical care they receive [
71]. The issue of informed consent also influences the primary control of medical information in healthcare [
79].
3.2.2. Respecting Human Autonomy
As shown in
Table 5, nine studies were identified discussing this issue. In the context of healthcare, the sub-issue of respecting human autonomy refers to the respect for patient autonomy [
37,
51]. It could also be called respecting patient choice, recognizing the individual’s ability for self-determination and the right to make choices based on their values and beliefs [
51]. The opacity of ML-based decisions can potentially threaten patients’ autonomy by impairing the authority of physicians and the shared decision-making between doctors and patients [
37,
70]. For instance, algorithms applied in healthcare enforce the paternalistic model by prescribing values on ranked treatment options, which ignore the patient’s preferences and harm their autonomy [
37]. The use of highly autonomous decision-making systems can raise the issue of autonomy by manipulating patients to do things they should not do or have not considered thoroughly [
68].
3.2.3. Informed Consent
We identified nine documents discussing the sub-issue of informed consent (
Table 5). Informed consent usually refers to general consent to treatment, a specific procedure, or participation in research. The nine sources reviewed here discuss how informed consent relates to the data involved in AI systems [
48,
50,
70,
75,
83]. First, consent in the field of health research empowers the people involved to confirm (or reject) the sharing of their data and, at the same time, to understand with whom their data will be shared and under what data sharing plan [
48]. Secondly, obtaining consent from a subject to share their information with a third party is obligatory and non-negotiable [
83]. Thirdly, asking for consent is necessary due to the opaqueness of ML systems, as sensitive information will be stored in the system. Next, anticipated AI approaches may require different forms of consent, such as blanket consent [
50] and tiered consent [
31], applied at multiple levels, and mechanisms must be created to deal with patients who do not wish to be included in such exercises [
50]. Lastly, the sources also suggest that obtaining consent is critical and obligatory for both competent and incompetent users, ensuring that they are respected and their privacy is maintained when exposed to monitoring technologies [
81].
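The tiered-consent idea above can be sketched in software as a per-patient record of granted use categories that every data access must be checked against, and that remains revocable at any time. The tier names and this small API are assumptions made for illustration, not any standard.

```python
TIERS = {"treatment", "internal_research", "third_party_sharing"}

class ConsentRecord:
    """Per-patient record of which data-use tiers have been consented to."""

    def __init__(self, patient_id, granted):
        unknown = set(granted) - TIERS
        if unknown:
            raise ValueError(f"unknown consent tiers: {unknown}")
        self.patient_id = patient_id
        self.granted = set(granted)

    def permits(self, use):
        return use in self.granted

    def withdraw(self, use):
        # Consent must remain revocable at any time, e.g., on a patient's
        # request or on observed signs of discomfort.
        self.granted.discard(use)

record = ConsentRecord("patient-001", {"treatment", "internal_research"})
print(record.permits("treatment"))            # True
print(record.permits("third_party_sharing"))  # False: sharing needs explicit consent
record.withdraw("internal_research")
print(record.permits("internal_research"))    # False after withdrawal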
3.2.4. Strategies for Main Ethical Issue 2
To cope with the issue of freedom and autonomy, various strategies were found in the selected documents.
To cope with the sub-issue of control, the publications proposed the following strategies:
Ensure that human beings stay in control and that they are the final decision-makers [
48]; and
Develop regulations and codes of conduct to maintain the right of human users to control their data, specifically regarding the control of the different versions, as well as the usage and disclosure, of their data [
66,
80], and promote an understanding of the full spectrum of AI to enable clinicians to control the technology [
43].
To cope with the issue of respecting human autonomy, the literature calls for the following strategies:
Reach a universal understanding of the medical issues that patient–clinician relationships need to deal with [
51];
Discuss patient autonomy in the context of trust, on which the concept of shared decision-making and the legal responsibilities within the system are based [
51];
Respect the physicians’ judgments within the modern healthcare system [
88]; and
Ensure transparent AI so that patients comprehend that the intelligent system does not dominate human judgment [
68].
To cope with the issues related to informed consent, the literature recommends the following strategies:
Communicate with vulnerable groups carefully before consent is obtained [
53];
Apply advance directives to understand the expressed wishes of senior citizens and respect their autonomy in the progression of their disease [
83];
Comprehend vulnerable groups’ decisions about intelligent assistive technology before their cognitive impairment worsens [
83];
Use behavioral observation to detect patients’ behaviors indicating the withdrawal of obtained consent due to discomfort and disease progression [
83]; and
Customize informed consent to ensure that each patient understands the purpose and the risks of using AI solutions, such as care bots [
53].
3.3. Main Ethical Issue 3: Privacy
Table 5 shows that 20 documents were identified related to this issue. Within these documents, this issue was addressed in terms of data privacy and confidentiality.
3.3.1. Data Privacy
Twenty documents addressed data privacy, which made it the most frequently addressed sub-issue of the main issue of privacy (
Table 5). Data privacy refers to the control over the individual’s health information [
50,
55]. For example, a care robot caring for a vulnerable patient may collect information about that person 24/7, and the collected data might be transferred to the hospital for medical purposes. This can violate the patient’s privacy rights and self-control, for instance their right to reject treatment provided by care robots or related stakeholders [
50]. Data privacy issues can be caused by three factors: data usage [
55,
62,
87,
89], data ownership [
53], and custodianship [
57]. In particular, data usage issues were caused by:
The use of sensitive or personal health information without the patient being aware [
50,
53,
62,
68,
85]; and
The misuse of sensitive or personal health information for financial benefits [
55,
57,
62].
Some sources mentioned that privacy also affects users’ trust [
62] and autonomy [
62,
77] when using AI-powered solutions in healthcare. A few sources also addressed the influence of privacy on data ownership [
55], discrimination [
50], stigmatization [
50], dignity [
62], and well-being [
62].
3.3.2. Confidentiality
Nine documents addressed confidentiality, the second sub-issue of main ethical issue 3 (
Table 5). Confidentiality refers to the responsibility of anyone entrusted with sensitive data to maintain its privacy [
50,
55]. Similarly to data privacy, confidentiality also relates to the scope, appropriate storage, access, and dissemination of sensitive data [
53]. Based on the reviewed sources, security risks have the greatest effect on confidentiality when applying AI systems in healthcare [
53,
74,
83,
89]. During the process of sharing and transmitting patients’ data to third parties, the data can be subjected to security attacks or privacy violations [
53,
74,
83,
89]. For instance, most studies on mobile disorder detection systems use mobile devices to acquire useful signals, transmit them to external servers for analysis, and visualize and communicate the results to users [
74]. Afterwards, the collected users’ data are stored in the database and thus face the risk of being hacked [
53,
55].
3.3.3. Strategies for Main Ethical Issue 3
Regarding the issue of privacy, the selected documents call for the following strategies:
Establish strict rules about data acquisition, data flow management, anonymization, and security [
89];
Store and transfer data securely within regulatory requirements when designing and implementing AI solutions in healthcare [
55];
Ensure that data transmission occurs with patients’ consent and ethical approval [
55,
83];
Restrict identifiable health data during data sharing and protection [
48,
89];
Balance privacy and personal data sharing, especially in regard to the use of new technologies to enable the automated collection and analysis of health data [
89];
Make sure that data analysis within healthcare follows the code of ethics, laws, and regulations [
87]; and
Incorporate legal rules regarding access to public databases into criminal law [
48].
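One of the strategies above, restricting identifiable health data during sharing, is often implemented via pseudonymization. The following minimal sketch replaces the direct identifier with a keyed hash, so records remain linkable by key holders but cannot be reversed without the secret key. The key and field names are placeholders; a real deployment needs proper key management and full de-identification of the remaining payload.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-key"  # assumption: held by the data custodian

def pseudonymize(patient_id: str) -> str:
    """Keyed hash of a patient identifier: stable, but irreversible without the key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "NL-12345", "glucose_mmol_l": 6.1}
shared = {"pseudonym": pseudonymize(record["patient_id"]),
          "glucose_mmol_l": record["glucose_mmol_l"]}

print("patient_id" in shared)                           # False: no direct identifier leaves
print(shared["pseudonym"] == pseudonymize("NL-12345"))  # True: linkage remains possible
print(len(shared["pseudonym"]))                         # 64 hex characters (SHA-256)
```

A keyed hash (HMAC) is used rather than a plain hash so that an attacker who sees the shared data cannot re-identify patients by hashing candidate identifiers themselves.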
3.4. Main Ethical Issue 4: Transparency
3.4.1. Transparency
Transparency was the most emphasized sub-issue grouped under the main ethical issue of transparency, and 15 of the selected documents discussed this sub-issue (
Table 5). Transparency refers to the possibility of understanding an AI system’s decision-making process [
56]. The black-box nature of most AI algorithms causes a lack of transparency regarding the inner reasoning of specific AI techniques [
55,
56,
58,
90], especially deep learning. In particular, the black-box’s fundamental steps of analysis are opaque, as is the decision-making process [
56,
89]. These issues are exacerbated when algorithms are trained on biased data or exclude certain demographic characteristics [
56]. For instance, an AI algorithm used in the United States of America to predict accused persons’ future recidivism rates showed that the risk scores for an African American with minor crimes were higher than those for a white American who had committed multiple crimes [
65]. Public trust, patient trust, and the adoption of AI in healthcare will ultimately depend on transparency [
52,
59,
79,
89]. Additionally, the processing of sensitive data such as individual medical records raises the issue of transparency [
83].
3.4.2. Explainability
Next to transparency, explainability is another sub-issue related to main ethical issue 4, and five documents were found to address this sub-issue (
Table 5). Similarly to transparency, explainability is associated with the black-box nature of ML and AI algorithms, which leads to difficulty in explaining and interpreting the relationship between input data and outcomes [
58,
85]. Explainability can create or increase the trust of users in AI systems [
84]. The better the AI system’s explanation, the higher the level of trust in the application of AI in the medical field [
84,
85]. Without explainability, medical professionals will encounter difficulty in ensuring the system’s credibility and inspiring trust in a decision that they cannot even explain to anyone, be it a patient or another medical professional [
85].
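To illustrate one common route to the kind of explanation discussed above, the toy sketch below uses an inherently interpretable linear risk score: each feature's contribution (weight times value) is reported next to the overall score, so a clinician can see what drove the prediction. The weights, features, and scaling are invented for illustration and carry no clinical meaning.

```python
# Hypothetical linear model weights (assumptions for illustration only).
WEIGHTS = {"age_decades": 0.30, "systolic_bp_x10": 0.25, "smoker": 0.80}

def predict_with_explanation(features):
    """Return the risk score together with each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

patient = {"age_decades": 6.7, "systolic_bp_x10": 14.2, "smoker": 1.0}
score, why = predict_with_explanation(patient)

# Report the drivers of the decision, largest first.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"risk score: {score:.2f}")
```

For black-box models, post hoc techniques aim to produce a comparable per-feature attribution; the point of the sketch is only the form of the output a clinician would need, namely a score accompanied by its reasons.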
3.4.3. Strategies for Main Ethical Issue 4
To deal with the issues of transparency and explainability, the sources propose the following strategies:
Enable AI solutions to be transparent and explainable to patients in terms of the AI algorithms and the decisions regarding their treatments [
55,
68];
Elaborate the data collected from patients for medical use, such as digital phenotyping [
79];
Embed transparency in data analysis after collecting the data [
79];
Ensure that the AI system is clear and transparent regarding its data analysis approach [
79];
Develop AI-enabled solutions in partnership with different stakeholders to achieve the ideal level of transparency [
59];
Consider the various needs, demands, and concerns that emerge during collaboration with stakeholders in the healthcare system [
59]; and
Ensure that the legislation addressing the need for the AI system and its decision-making contains guarantees regarding transparency and explainability to stakeholders [
89].
3.5. Main Ethical Issue 5: Patient Safety and Cyber Security
3.5.1. Patient Safety
In the main ethical issue of patient safety and cyber security, patient safety was the most frequently discussed sub-issue, being addressed in nine documents (
Table 5). Patient safety is an essential element in healthcare and is a central subject of debate every time AI is introduced to a healthcare setting [
49]. This sub-issue relates to the unnecessary or potential harm caused by AI tools or unsafe AI in healthcare [
49]. Integrating AI into healthcare can provide multiple benefits, such as improving patient safety and the quality of care [
49,
89], improving access to healthcare, providing local real-time advice to patients or clinicians, and identifying medical emergencies such as sepsis [
49]. On the other hand, AI-enabled clinical support tools can also make mistakes, and the AI algorithms can provide unsafe advice and decisions, which could cause harm to patients [
49,
71]. Of course, in traditional healthcare, the healthcare provider can also harm patients by not obeying patient safety protocols, standards, or procedures. When AI is widely introduced in a healthcare system, it is difficult to define who is responsible for the harm caused by AI errors. This could be the computer programmers who developed the AI solutions, the clinicians using the techniques during the diagnosis process, or the regulator making the relevant policy for the AI solutions [
49].
3.5.2. Cyber Security
In addition to the patient safety issue, six sources also discussed the cyber security of AI (
Table 5), which is mainly related to the prudence, safety, risk, and technical robustness/safety of the cyber environment [
52,
80]. It is also linked to the capability of taking precautions to avoid undesired results and mitigate existential risks [
52]. The selected documents also reported that the mental healthcare field requires thoughtful consideration regarding the data security of the devices that come into contact with individual health information, the approaches related to data generation, and the possibility of hacking and unauthorized surveillance [
68]. When advanced tools and techniques are used to extract large amounts of heterogeneous data provided by citizens, this may lead to security attacks or privacy invasions [
74]. On the other hand, these advanced tools and techniques help support data collection, storage, and transmission, providing intelligent planning ideas, building models, and data management methods [
74].
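The concerns above center on tampering with and unauthorized access to health data during collection, storage, and transmission. As a minimal illustrative sketch (not drawn from the reviewed sources; the record fields and key are hypothetical), a keyed hash can at least detect tampering with a transmitted health record:

```python
import hashlib
import hmac
import json

# Hypothetical key; in practice it must be generated and stored securely.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def sign_record(record: dict) -> str:
    """Compute an HMAC-SHA256 tag over a canonical JSON encoding."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, tag: str) -> bool:
    """Constant-time check that the record was not modified in transit."""
    return hmac.compare_digest(sign_record(record), tag)

record = {"patient_id": "anon-001", "heart_rate": 72}
tag = sign_record(record)
print(verify_record(record, tag))   # True: record is intact
record["heart_rate"] = 172          # simulated tampering
print(verify_record(record, tag))   # False: tampering is detected
```

Integrity checking of this kind addresses only one facet of cyber security; it does not by itself prevent the hacking or unauthorized surveillance discussed above.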
3.5.3. Strategies for Main Ethical Issue 5
To deal with patient safety and cyber security issues, the following strategies have been proposed:
Develop AI systems in a regulated manner together with clinicians and computer scientists [
49,
72];
Vet and review AI tools through legally selected regulatory committees before using them [
66];
Update regulations, codes of conduct, and standards continuously [
66];
Cooperate with stakeholders involved in the AI development process to help the project team establish a responsible ethics model and ensure patient safety and the rights and interests of users [
49,
72];
Foresee undesirable results and avoid adverse consequences of AI techniques by taking proper action to ensure cyber security [
52];
Ensure that the AI system is robust enough to protect the user’s data from being destroyed by the operational or system-interacting agents [
52]; and
Develop explicit standards or policies of data management with security and privacy, and implement them to preserve data confidentiality and identification in healthcare [
68,
74].
3.6. Main Ethical Issue 6: Trust
3.6.1. Trust
Twelve documents discussed the issue of trust (
Table 5). Trust is a relational and normative concept, implying some uncertainty or risk that tasks delegated to human agents will not go as planned [
53]. Trust is central to the therapeutic relationship between human care providers and patients, and it is equally crucial to the interaction between patients and artificially intelligent care providers. Understanding trust in this interaction is particularly significant in the healthcare domain [
62,
80,
89]. The level of trust in the interaction between humans and AI systems in healthcare depends on various aspects of data [
43,
48,
58,
62,
80], including data usage [
48], data-driven technology [
58], data confidentiality [
80], and breaches of patient data [
43]. The dual-use aspects of technology can threaten trust in the system and the related professions: the technology can be used in multiple ways, and there is a risk that the collected data will be used for other purposes [
62]. Bias is another factor related to trust in AI systems used in healthcare [
53,
58]. AI could generate biased and overfitted results that clinicians do not identify, decreasing users' trust in and acceptance of AI systems in healthcare [
54]. Similarly, automatic recommendations or decisions provided by AI systems with low precision and a lack of explainability and transparency can threaten patient trust [
43,
84].
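One of the trust-eroding problems above, overfitted results that clinicians do not identify, can at least be surfaced with a routine check before deployment. The following is an illustrative sketch only (synthetic data, not from the reviewed sources): a toy nearest-neighbour "model" is evaluated on pure-noise labels, where the gap between training and held-out accuracy exposes memorization rather than learning:

```python
import random

random.seed(0)

# Labels are random and independent of the features, so a model can only
# "succeed" on this data by memorizing it.
def make_data(n):
    return [([random.random() for _ in range(5)], random.randint(0, 1))
            for _ in range(n)]

train, test = make_data(200), make_data(200)

def predict(x, memory):
    """A 1-nearest-neighbour 'model' that memorizes its training set."""
    nearest = min(memory, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return nearest[1]

def accuracy(data, memory):
    return sum(predict(x, memory) == y for x, y in data) / len(data)

train_acc = accuracy(train, train)  # 1.0: every point is its own neighbour
test_acc = accuracy(test, train)    # near 0.5: chance level on noise
print(f"train={train_acc:.2f}  test={test_acc:.2f}  gap={train_acc - test_acc:.2f}")
```

A large train/test gap of this kind is exactly the kind of signal that, if routinely reported to clinicians, could prevent undetected overfitting from undermining trust.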
3.6.2. Strategies for Main Ethical Issue 6
To cope with the issue of trust, the strategies presented in the literature are as follows:
Inform the patients when and how their data are shared, as part of the research protocols and sharing conditions (de-identification, registration, access control, etc.) [
48,
58];
Improve data privacy and confidentiality to prevent the reidentification of anonymized data with spatial data points to ensure patients’ trust in health services [
43,
80]; and
Educate healthcare personnel on the basics of AI, including techniques and solutions, to establish trust in AI healthcare providers [
43,
84].
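The de-identification mentioned in the strategies above can be sketched minimally as salted-hash pseudonymization; the field names and salt handling here are hypothetical, not taken from the reviewed sources:

```python
import hashlib
import secrets

# A per-study salt; in practice it must be stored separately from the data.
STUDY_SALT = secrets.token_bytes(16)

DIRECT_IDENTIFIERS = {"patient_id", "name", "address"}  # hypothetical field names

def pseudonymize(patient_id: str) -> str:
    """Map a direct identifier to a stable, non-reversible pseudonym."""
    digest = hashlib.sha256(STUDY_SALT + patient_id.encode("utf-8"))
    return digest.hexdigest()[:16]

def de_identify(record: dict) -> dict:
    """Drop direct identifiers, keeping a pseudonym for longitudinal linkage."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudonym"] = pseudonymize(record["patient_id"])
    return cleaned

record = {"patient_id": "12345", "name": "A. Patient", "diagnosis": "J45"}
out = de_identify(record)
print("name" in out)                              # False: identifier removed
print(out["pseudonym"] == pseudonymize("12345"))  # True: stable linkage
```

Note that salted hashing alone does not prevent reidentification through quasi-identifiers such as spatial data points, which is precisely the risk the strategies above warn about; access control and registration remain necessary.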
3.7. Main Ethical Issue 7: Beneficence
3.7.1. Beneficence
Eleven documents discussed the issue of beneficence (
Table 5). Beneficence refers to acting in the best interests of others [
57,
88]. It also expresses the desire to promote welfare [
88]. In healthcare, beneficence refers to the act of a healthcare professional who provides benefits or “promotes/does good” [
56,
57,
89] for care-recipients [
43,
51,
53,
88] through promoting their health and well-being [
53,
55,
57,
59], lowering the risks, and preventing health problems and illnesses. It is also related to the balance of the benefits of interventions against risks and costs [
53,
57]. The moral acceptability of deception is also connected to the ethical issue of beneficence. It has been pointed out that, although ostensibly wrong, deception can be justified under certain circumstances to promote a patient's physical and mental health. For example, a study based on interviews with people with dementia showed that they usually find lying acceptable if it is in their best interests [
83].
3.7.2. Strategies for Main Ethical Issue 7
To deal with the issue of beneficence, researchers have called for strategies that improve communication and enhance beneficence in the design process.
3.8. Main Ethical Issue 8: Responsibility
3.8.1. Responsibility
Table 5 shows that nine documents addressed this ethical issue. In the selected documents, responsibility refers to being answerable for the decisions made by AI systems when they are applied in the healthcare domain [
55,
56,
66,
71]. It raises the question of who has responsibility for the errors or incorrect performance of AI systems [
43,
56,
66]. Responsibility is often referred to as responsible AI or ML [
66,
79]. In addition to robustness and interpretability, responsible ML is the central factor related to the adoption of ML in healthcare [
85]. For AI systems used in healthcare, it is also necessary to have accountability for patient harm [
66,
71]. When AI systems are involved in the decision-making process, it is unclear to what extent human clinicians will be held accountable for patient harm [
71]. It could be that clinicians do not have direct control of the decisions made by AI systems, or that the AI systems are not transparent [
71]. Therefore, it is difficult, or even impossible, for the clinician to understand how the system makes the output decision on the basis of the input data [
71]. Additionally, the AI developer, who is bound by the "do no harm" principle, is accountable for harm caused by the decision-making of AI systems [
66].
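Part of the accountability gap described above is that decisions made with AI involvement often leave no record of who, or what, actually decided. One minimal, hypothetical sketch (all field names are assumptions, not drawn from the reviewed sources) is an append-only audit trail that records the model's suggestion alongside the clinician's final decision, so that responsibility can later be traced:

```python
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass(frozen=True)
class DecisionRecord:
    case_id: str
    model_version: str
    model_suggestion: str
    clinician_id: str
    final_decision: str
    overridden: bool
    timestamp: float = field(default_factory=time.time)

audit_log: list = []  # in practice: append-only, tamper-evident storage

def record_decision(rec: DecisionRecord) -> None:
    """Append the decision as a JSON line so it can be reviewed later."""
    audit_log.append(json.dumps(asdict(rec), sort_keys=True))

rec = DecisionRecord(
    case_id="case-42", model_version="triage-v1.3",
    model_suggestion="discharge", clinician_id="dr-7",
    final_decision="admit", overridden=True,
)
record_decision(rec)
print(json.loads(audit_log[-1])["overridden"])  # True
```

Such a trail does not resolve who is morally responsible, but it makes visible whether the clinician followed, overrode, or lacked control over the AI's output, which is a precondition for any accountability assessment.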
3.8.2. Strategies for Main Ethical Issue 8
To deal with the issue of responsibility, the literature proposes the following strategies:
Define clear guidelines when making decisions about ethics and legal liability based on AI outputs [
43];
Recognize and document shared responsibility among stakeholders before developing AI solutions [
55];
Require both doctors and AI developers to follow the “do no harm” standard [
66]; and
Involve AI developers and engineers specializing in system safety in moral accountability assessments to prevent patient harm [
71].
3.9. Main Ethical Issue 9: Solidarity
3.9.1. Solidarity
Eight documents highlighted the issue of solidarity (
Table 5). Solidarity is consistently highlighted in relation to justice and equality when AI-powered solutions are applied in healthcare [
48,
52,
53,
56,
58]. Within the scope of justice and equality, allocation algorithms widely used in healthcare can result in discrimination against particular groups [
48] and races [
52], as well as inequality in the allocation of resources [
52], geography, and social economy [
58]. For example, insurers adjust the composition of their insurance classes according to customers' risk levels and ultimately stratify their customers [
48]. African American patients and disabled people may not be treated equally to nondisabled Caucasian people [
52,
56,
58].
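The allocation inequalities described above can be made measurable. As an illustrative sketch only (synthetic data, not from the reviewed sources), a simple demographic-parity check compares the rate at which an allocation algorithm approves each group:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical allocation outcomes for two groups:
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(f"{parity_gap(decisions):.2f}")  # 0.30: group A is approved far more often
```

Demographic parity is only one of several fairness criteria, and a small gap does not rule out discrimination; the point is that auditing allocation outcomes per group makes the inequalities discussed above visible.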
In addition, physicians might prioritize their patients according to criteria other than medical urgency [
56]. Solidarity is also related to the inequality of care distributed in society and the burdens of the caregivers, which influence societal concerns in relation to public health [
53]. The nature of algorithmic prediction can easily translate into algorithmic profiling, categorizing new subgroups within existing populations without their knowledge and thereby threatening patients' autonomy and individuality within societies. Solidarity is also presented as an effective approach to reinforcing the community-based nature of healthcare and the significance of pursuing the common good within this context [
56]. Patient–doctor relations are essential to the issue of solidarity [
52,
62,
83]. The establishment of a good rapport with patients improves treatment results [
62,
83].
3.9.2. Strategies for Main Ethical Issue 9
To cope with the issue of solidarity, a solidarity-based model needs to be established when applying AI solutions in society [
48]. Technologies such as AI and big data should be deployed beneficially, without exacerbating socioeconomic or cultural divisions, while preserving the solidarity-based model of social protection [
48].
Within the scope of equality and justice, the following strategies are proposed in the reviewed sources:
Include the goal of improving the health of disabled people when designing with AI [
52];
Improve internet speed and pursue the digital transformation of the healthcare system to cope with the inequalities caused by the digital divide [
48];
Consider interpersonal justice in the design of care bots to decrease inequality in the distribution of care in society and burdens on caregivers [
53];
Allow patients to select among available resources to respect their autonomy to cope with inequality in the allocation of resources [
56]; and
Establish adequate communication, mutual trust, and empathy in the patient–doctor relationship [
83].
3.10. Main Ethical Issue 10: Sustainability
3.10.1. Sustainability
The issue of sustainability was discussed in association with the issue of solidarity and was addressed in seven documents (
Table 5). The selected documents show that the UN's Sustainable Development Goals (SDGs) affect development strategies in low- and middle-income countries and frame AI as an explicitly global and transnational effort to promote trustworthy development, deployment, and application. Against this background, sustainability is a key factor in the trustworthy establishment of AI worldwide [
37].
Five concerns related to the sustainability of developing, deploying, and implementing AI solutions in healthcare were discussed in the literature. Specifically, we identified these concerns as conflicting goals, unequal contexts, risk and uncertainty, opportunity cost, and democratic deficits [
37]. Sustainability in employability was discussed in relation to the use of computer algorithms and ML to support decision-making in occupational health. ML applied in this context can support sustained employability through the design of improved decision-support tools; in particular, ML algorithms trained on suitable data can help maintain employability by predicting appropriate interventions. However, group profiling and discrimination can occur when ML decision-support tools are applied in occupational health, which might lead to social and potentially economic inequality [
57]. AI should contribute to the sustainable development of society and benefit individuals’ health and well-being [
59].
Additionally, beneficence is significant in maintaining sustainability, promoting well-being, and preserving dignity [
55]. Moreover, health and well-being, as well as inequalities between urban and rural health services, influence sustainability [
37,
59]. Digital technologies are valuable in advancing universal health coverage and the SDGs. However, the digital divide affects sustainability and hinders the achievement of the SDGs, especially given the lack of communication infrastructure in low- and middle-income countries [
73,
75]. The sources also addressed the financial sustainability of the healthcare system [
83] and the significance of establishing a data ecosystem in the digital health domain [
75].
3.10.2. Strategies for Main Ethical Issue 10
To ensure the sustainability of trustworthy AI technologies, the sources call for the following strategies:
Support the establishment of trustworthy global AI by:
Addressing threats in implementing trustworthy AI in the related framework [
37];
Understanding the translation of ethical norms into practice to secure the trustworthy governance of AI both locally and globally [
37];
Enabling cross-country development of shared, well-expressed rules so that they are context-independent [
37];
Making high-income countries responsible for providing financial support to make up for the potential losses of countries involved in global endeavors [
37];
Translating shared general norms into specific regulations [
37];
Cooperating with the World Health Organization and UN bodies to lead shared efforts toward the development of a trustworthy global AI system [
37];
Developing tools to cope with inequality and the digital divide in low- and middle-income countries [
73]; and
Embedding a systematic approach to establish digital care passes, linking rapid detection with digital symptom checkers, contact tracing, epidemiological intelligence, and long-term clinical follow-up [
73].
Develop sustainable ML decision-making tools within occupational healthcare, and ensure that "education" is carried out in regard to these tools, by:
Providing the most effective data and training these tools with the best-known algorithms [
57]; and
Validating the tools and discussing their ethical impact [
57].
Engage health research ethics committees in assessing ML decision-making tools regarding the aspects of potential analytical risk [
57], function creep [
57], discrimination issues [
57], privacy issues [
57], and data custody and ownership [
57].
3.11. Main Ethical Issue 11: Dignity
3.11.1. Dignity
The next most frequently discussed issue was dignity, as can be seen in
Table 5. We identified seven documents addressing the issue of dignity. Dignity is considered a fundamental right [
51,
62,
80] and relates to the respect for human rights and freedoms [
66]. In the context of care ethics, this topic is ultimately about human dignity in the relationship between the caregiver and the care recipient [
53]. The computer scientist Weizenbaum expressed the concern that everything in society was approaching a state corresponding to the computational metaphor, producing a mechanical expectation regarding decision-making that would eventually result in affronts to human dignity [
62]. For instance, a paternalistic AI model ignores patients' preferences, harming not only their autonomy but also their dignity [
70].
3.11.2. Strategies for Main Ethical Issue 11
To deal with the issue of dignity, the reviewed sources recommend the following strategies:
Design and operate AI systems to be compatible with human dignity, rights, and freedom [
53,
54];
Focus on dignity and privacy to respect human rights when establishing the ethics of AI-enabled solutions [
53]; and
Enact patient acts to support the respect for human dignity, life, and integrity [
80].
3.12. Main Ethical Issue 12: Conflicts
3.12.1. Conflicts
Related to the issue of sustainability is that of conflict. We again identified seven documents that discussed this issue (
Table 5). Conflicts are unavoidable when AI solutions are implemented in the healthcare domain. Conflicting goals are often encountered in this context, as discussed in relation to the issue of sustainability: political objectives of frameworks such as the SDGs determine governance and development strategies in health sectors, which influences local healthcare priorities and may lead to conflicting goals that oppose AI technologies promoting ethical safety [
37]. Conflicts in decision-making occur between patients or surrogates and medical staff in interpreting results, as well as between AI models and physicians [
51]. When using ML decision-making tools in an occupational context, stakeholders often have conflicts in terms of their priorities and the differing interests or perspectives of employers, employees, regulations, insurance companies, etc. [
57]. Automation bias is an influential factor in conflicting human decisions [
64,
66]. Potential conflicts emerge between the ability of humans to act autonomously and the nature of complex machines, which are sometimes opaque yet presumed infallible [
54]. The literature also discusses the conflict between data security and general liability problems [
83].
3.12.2. Strategies for Main Ethical Issue 12
To cope with the issue of conflicts, the literature calls for the following strategies:
Share decision-making with patients or their surrogates to guarantee human oversight and assign responsibility to physicians, patients, and surrogates [
51]; and
Seek human perspectives, for instance through multidisciplinary team meetings, to deal with conflicts between the decisions or predictions of AI models and those of physicians [
51].
4. Discussion and Conclusions
In this systematic literature review, we aimed to provide an overview of the ethical issues and strategies related to the use of AI solutions in healthcare. These strategies can help developers, policymakers, healthcare institutions, and other healthcare ecosystem stakeholders to take necessary actions to proactively manage the ethical issues associated with AI in the related design processes.
As our point of departure for categorizing the issues identified in this review, we combined the ethical concerns addressed in The Global Landscape of AI Ethics Guidelines by Jobin's group (2019) and in EGTAI, developed by the European Commission. We applied a thematic code mapping process based on an abductive approach, combining inductive and deductive methods, and adjusted the list of ethical issues associated with AI by adding and renaming issues based on Jobin's work and cross-referencing them with EGTAI.
Eventually, based on the 45 selected documents, we identified 12 overarching main ethical issues and 19 sub-issues (
Table 5). Comparing these 19 ethical sub-issues with Jobin’s guidelines, we added “control” as a sub-issue of the main issue of freedom and autonomy. This is because EGTAI claimed that “control” is a fundamental human right related to the application of AI solutions and is essential to human autonomy [
36]. We also added the issue of "conflict" to the list of identified ethical issues of AI presented in Jobin's work. The issue of "sustainability" in Jobin's work and EGTAI was related to environmental aspects and was associated with environmental well-being and protection during the deployment of AI in the general domain [
36,
38]. In our literature review, sustainability was mainly related to the sustainable development of users’ health and well-being, as well as their employability [
37,
55,
59]. The issue of “solidarity” discussed in Jobin’s work was linked to AI’s implications for the labor market [
38]. However, in our study, this issue was related to justice and equality [
48,
52,
53,
56,
58], especially in regard to the sharing of burdens and benefits to prevent inequalities and discrimination [
52,
53], which is similar to the approach taken in EGTAI [
36]. We found that solidarity was also related to the relationship between patients and physicians and the social support available in community groups with the aim of improving shared AI usage and development in healthcare [
52,
62,
83]. Additionally, academic sources and legal documents were excluded from Jobin's work. In contrast, our literature review includes both academic and gray literature on ethical issues and AI guidelines, adding the academic perspective that Jobin's work excluded. Our results also suggested that ML algorithms in AI systems still raise many ethical concerns, such as issues related to bias, fairness, transparency, and explainability, and we identified different strategies to cope with these issues. Examples of such strategies are purifying the algorithms of AI-based decision-support tools, managing fairness constraints in ML, and making the AI decision-making process clear to users. In addition to the strategies proposed in this review, other recent studies also deal with ML algorithmic issues; for instance, they address censorship, specifically tackling the issue of fairness in relation to censorship [
91,
92]. Nevertheless, the effectiveness of the strategies proposed in this review is far from satisfactory. Therefore, we call for cooperation among AI developers, healthcare professionals, and policymakers in developing and applying algorithmic interventions that emphasize fairness in regard to censorship [
91,
92].
This systematic literature review covered the period from 2010 to 2020, and therefore excluded AI solutions in healthcare developed over the past three years, during the COVID pandemic [
93]. Nevertheless, the worldwide spread of COVID and the various forms of lockdown have accelerated the implementation of digital healthcare tools; in many cases, their role has shifted from an interesting potential opportunity to an immediate necessity. For instance, humanoid robots incorporating cognitive technologies to create artificial minds have been developed to care for vulnerable patients [
50,
94]. In this context, it is even more urgent to understand the ethical issues related to AI and the associated strategies.
In general, the work of Jobin and EGTAI support a broad understanding of the application of AI and the related ethical issues. In contrast, our literature review focused on ethical issues related to the applications of AI in healthcare. Due to our focus on healthcare, our results are in principle not generalizable to other domains. This is because the definition of each ethical issue might be different when people conduct similar reviews in other fields or databases. Thus, future work is required to explore the ethical issues related to the use of AI in other domains. Additionally, the knowledge on the ethical issues and strategies summarized in this paper can be extended to provide tools for guiding designers or stakeholders when developing AI solutions within the healthcare domain.