Communication

Ethical Considerations for Artificial Intelligence Applications for HIV

1 ElevateU, Irvine, CA 92617, USA
2 Department of Informatics, University of California, Irvine, CA 92617, USA
3 Department of Emergency Medicine, University of California, Irvine, CA 92697, USA
* Author to whom correspondence should be addressed.
AI 2024, 5(2), 594-601; https://doi.org/10.3390/ai5020031
Submission received: 18 February 2024 / Revised: 25 April 2024 / Accepted: 30 April 2024 / Published: 7 May 2024
(This article belongs to the Special Issue Standards and Ethics in AI)

Abstract
Human Immunodeficiency Virus (HIV) is a stigmatizing disease, and African Americans and Latinos are disproportionately represented among people living with HIV (PLWH). Researchers increasingly use artificial intelligence (AI) to analyze large amounts of data, such as social media data and electronic health records (EHR), for HIV-related tasks ranging from prevention and surveillance to treatment and counseling. This paper explores the ethical considerations surrounding the use of AI for HIV with a focus on acceptability, trust, fairness, and transparency. To improve acceptability of and trust in AI systems for HIV, informed consent and a Federated Learning (FL) approach are suggested. With regard to unfairness, stakeholders should be wary of AI systems for HIV further stigmatizing, or even being used as grounds to criminalize, PLWH. To prevent criminalization in particular, the application of differential privacy to HIV data generated by data linkage should be studied. Participatory design is crucial to making AI systems for HIV more transparent and inclusive. To this end, the formation of a data ethics committee and the construction of relevant frameworks and principles may need to be implemented concurrently. Lastly, we pose the question of whether transparency beyond a certain threshold may overwhelm patients and thereby unexpectedly trigger negative consequences.

1. Background

Although HIV cases are declining overall, HIV continues to disproportionately affect certain populations. For instance, among transgender people in the U.S. in 2021, Black/African Americans accounted for 45% of new HIV diagnoses and Hispanics/Latinos for 31% [1]. This disparity was also present in new HIV diagnoses resulting from heterosexual contact in 2021: Black/African Americans comprised 58% of such cases and Hispanics/Latinos made up 20%.
Artificial intelligence (AI) is increasingly used by health researchers to analyze large amounts of data, including social media data and electronic health records (EHR). In regard to HIV, AI has been used to predict HIV outbreak locations or clusters, develop prevention or treatment interventions, optimize long-term maintenance dosing of medication, encourage medication uptake, and enhance counseling outcomes [2,3,4,5].
There are a number of ethical concerns around the use of AI for public health surveillance. Mello and Wang presented several ethical considerations for the use of AI in digital epidemiology, including respect for privacy and autonomy, minimizing the risk of error, and accountability, and offered policy and process recommendations based on those considerations [6]. However, there are limited perspectives, guidelines, and frameworks on the ethical considerations of AI tailored specifically to HIV. This paper aims to present these ethical considerations for researchers and public health practitioners making use of AI applications for HIV, with a focus on patient acceptability and trust, fairness, and transparency.

2. Ethical Considerations

2.1. Ethical Considerations Regarding Acceptability and Trust

HIV patients and primary care physicians often hold differing perspectives on the utility of AI-based tools for HIV. In an interview study, primary care physicians showed a greater willingness to utilize HIV risk prediction tools alongside patient medical history and their own clinical judgment. However, some Men who have Sex with Men (MSM) expressed fear, anxiety, and mistrust towards these AI-based tools [7].
The question of acceptability and trust becomes increasingly apparent within the realm of AI-powered chatbots—online conversational agents engineered to replicate real-time conversations with humans. Physicians generally acknowledge the usefulness of chatbots as digital aids for tasks like scheduling appointments and delivering medical information. However, some express reservations regarding patients self-diagnosing or relying on information they may not fully comprehend [8]. Given the prevalence of HIV misinformation on online platforms, including social media, chatbots trained on such data sources might disseminate misleading medical information to at-risk populations and those with limited internet literacy [9].
These concerns are supported by several studies in which HIV patients shared their perspectives on chatbots used for HIV-related interventions. Patients generally welcomed the idea of a health chatbot, although they hesitated over privacy, confidentiality, cyber-security, and further stigmatization [5,10]. In another qualitative study, participants cited concerns about information accuracy, along with skepticism about the ability of AI-powered chatbots to provide emotional support and resolve convoluted problems, as the main drivers of their lower performance expectations [5]. Collectively, these factors reduce patient acceptability of and trust in AI-powered chatbots, raising ethical questions about promoting chatbots that HIV patients do not fully trust or accept. With the growing use of large language models like ChatGPT, addressing these concerns prior to implementation becomes increasingly crucial.
Recent developments have enabled chatbots to more effectively combat the misinformation that erodes stakeholder trust in and acceptance of health and medical AI systems. Xiao et al. created Jennifer, an AI chatbot serving as a credible and easy-to-access information portal on COVID-19 [11]. This chatbot was designed to be more trustworthy than previous chatbots: 150 COVID-19 experts, including scientists and health professionals, were invited into the participatory development process and were interviewed to better understand the challenges in that process and opportunities for future improvement. An online experiment also examined how effectively Jennifer assisted information seekers in locating COVID-19 information and gaining their trust. General-purpose chatbots such as ChatGPT, often used to obtain basic medical information or to receive emotional and mental support regarding a health condition, have also become more robust against misinformation in their newer versions. GPT-3.5 and GPT-4.0, for instance, were tested against the text of the World Health Organization's 11 "myths and misconceptions" about vaccinations [12]. Multiple aspects of the ChatGPT responses, including correctness, clarity, and exhaustiveness, were assessed. Raters judged that the responses provided accurate and comprehensive information on common vaccination myths and misconceptions in an easy-to-comprehend, conversational manner, without including misinformation or harmful information. However, the study did raise concerns about the risk of exposing users to misleading responses, especially non-experts consulting ChatGPT without the support of expert medical advice. HIV researchers and experts may want to reference these recent developments when developing HIV-customized medical and health chatbots.
Informed consent stands as a pivotal factor in enhancing acceptability of and trust in AI systems for HIV. While most studies on acceptability and trust focus on patient perspectives, it is essential to broaden this focus to non-patient stakeholders, as their views and attitudes significantly influence the overall trustworthiness of AI systems. It is worth considering, however, that informed consent is not a one-dimensional concept. Molldrem et al., for instance, conducted semi-structured interviews to collect different perspectives on informed consent from critical stakeholders involved in molecular HIV surveillance [13]. They presented five approaches to consent, each differing in how far it departs from the status quo and in how it speculates about potential future consent practices in HIV surveillance and prevention. This study suggested not only that informed consent manifests in various forms but also that the acceptability of particular consent practices differs across stakeholders; the sole consensus among stakeholders was the acknowledgment that informed consent itself is necessary. This implies that customizing informed consent practices to each stakeholder is more likely to enhance acceptability and trust towards AI applications for HIV. The 6Ws (Who, How, What, When, Where, and Why), ingrained in journalism and now in many other fields, should be utilized as a conceptual framework for informed consent in AI systems for HIV. The framework is summarized in Table 1.
Another approach for improving acceptability and trust that has been gaining traction recently is Federated Learning (FL): decentralizing AI systems so that multiple data owners collaboratively train an AI model without requiring access to one another's raw data [14]. FL reduces the risk of privacy breaches, protects sensitive data through its local distribution system, and prevents unauthorized access owing to the collaborative nature of the model training [15]. This privacy-preserving character enables AI systems to earn additional acceptance and trust from stakeholders. However, FL is not without challenges: the distributed nature of the system may come at the cost of decreased overall performance, hinder the development of a robust global model, and leave the system more vulnerable to malicious attacks [15]. Nguyen et al. applied FL to an AI system for HIV, proposing an FL framework for predicting the risk of sexually transmissible infections (STIs) and HIV [16]. The system encompassed multiple clinics and key stakeholders and ensured that only the AI models, never personal information, were shared across clinics. Demographic and behavioral data were used to predict the risk of HIV and STIs, and the adaptable aggregation feature of the FL system minimized the amount of such data that needed to be shared across healthcare entities, acting as a safety measure for protecting patient information throughout model training. Aside from this study, applications of FL to HIV-related AI systems are limited. It is desirable for future studies to identify the different types of privacy applications in which FL can be used by observing, interviewing, and surveying various stakeholders.
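To make the mechanics concrete, the following is a minimal sketch of federated averaging, the canonical FL aggregation scheme: each site trains on its own records, and only model parameters are shared and averaged. The clinic data, feature dimension, and logistic model are hypothetical placeholders, not the framework of Nguyen et al. [16].

```python
# A minimal federated-averaging (FedAvg) sketch in NumPy, illustrating the idea
# described above: sites train locally and share only model parameters, never
# raw patient records. Clinics, features, and the logistic model are
# hypothetical placeholders.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One clinic's local training; the raw data (X, y) never leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # logistic predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: size-weighted mean of client models (only parameters move)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy setup: three clinics with synthetic demographic/behavioral features.
rng = np.random.default_rng(0)
clinics = [(rng.normal(size=(n, 4)), rng.integers(0, 2, size=n)) for n in (120, 80, 200)]

global_w = np.zeros(4)
for _ in range(10):                            # communication rounds
    local_models = [local_update(global_w, X, y) for X, y in clinics]
    global_w = federated_average(local_models, [len(y) for _, y in clinics])
print("Global model after federated training:", global_w)
```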

2.2. Ethical Considerations Regarding Fairness

Fairness, accountability, and transparency (FAT) is a framework widely used for assessing the ethical and bias implications of algorithms or systems [17]. Recent studies have documented gender [18] and race [19] bias in AI programs, which may alienate communities and call into question the role of AI in being part of the solution. This is especially true for AI applications for HIV. In one study, researchers developed an HIV prediction model to identify potential candidates most likely to benefit from pre-exposure prophylaxis (PrEP) for HIV in a large healthcare system [20]. While this model correctly identified 38.6% of future cases of HIV, its sensitivity on the validation set was 46.4% for men and 0% for women, underscoring the importance of intersectionality for fair outcomes in health care. Several other studies pointed to similar fairness issues: HIV risk prediction tools based on the Centers for Disease Control and Prevention criteria for PrEP use have been shown to underestimate HIV risk among Black Men who have Sex with Men (MSM) [21,22].
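As an illustration of how such disparities can be surfaced, the following is a minimal sketch of a subgroup audit that computes sensitivity (the true positive rate) separately for each group. The labels, risk scores, threshold, and group attribute are hypothetical toy values, not data from the studies cited above.

```python
# A minimal sketch of a per-group sensitivity audit. It illustrates how a
# model can look acceptable overall while failing one subgroup entirely.
import numpy as np

def sensitivity(y_true, y_pred):
    """True positive rate: the share of actual positives the model flags."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")                    # undefined with no positives
    return float((y_pred[positives] == 1).mean())

# Toy validation set: outcome labels, model risk scores, and a group attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.9, 0.2, 0.7, 0.1, 0.3, 0.8, 0.4, 0.05])
group = np.array(["M", "M", "M", "F", "F", "M", "F", "F"])

y_pred = (scores >= 0.5).astype(int)           # fixed decision threshold
for g in np.unique(group):
    mask = group == g
    print(g, "sensitivity:", sensitivity(y_true[mask], y_pred[mask]))
# On this toy data, sensitivity is 1.0 for "M" and 0.0 for "F", mirroring the
# kind of gap reported above.
```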
Endeavors to develop AI applications that address this unfairness or cater to marginalized HIV patient populations are underway at various institutions. For example, an ongoing project aims to develop and validate an algorithm tailored to identify women who could benefit from PrEP in a county with a high HIV incidence among women, using EHR data from public health clinics in Florida [23]. In another initiative, multiple groups collaborated to create the LaPHIE HIV care system, a data-sharing platform for HIV data. This platform not only seeks to promote fairness among HIV populations with varying data availability but also aims to enhance retention in care [22,23].
Addressing unfairness becomes trickier when it conflicts with other ethical values, such as privacy. When building and releasing AI applications for HIV identification and prediction, sensitive information that may lead to re-identification and privacy breaches, including race, ethnicity, gender, HIV status, and substance abuse, is often omitted from the published data [24]. Unfortunately, omitting these dimensions may reduce the predictive performance of the AI application for HIV patients, further amplifying existing health disparities. It can also make it harder to evaluate the fairness of the AI application and of the data used to build it [25]. When a trade-off among seemingly conflicting ethical values is inevitable, it is critical to consider which value to prioritize.
In addition to certain populations within the HIV patient community being treated unfairly, HIV patients as a whole are already exposed to unfairness in medical and public health domains due to stigmatization [26]. Developers and designers of AI for HIV should learn from previous cases in which AI systems risked inadvertently stigmatizing populations with certain diseases or conditions. The use of AI to identify COVID-19 hotspots, for example, showed how such systems can impose stigmatization on particular populations based on the erroneous view that a specific variant of COVID-19 (e.g., Omicron) could be easily confined to those populations [27]. AI systems for HIV need to ensure that such stigmatization is not exacerbated by the very systems that aim to benefit people living with HIV (PLWH). However, previous studies illustrate cases where the data used in AI applications for HIV, or the applications themselves, backfired and aggravated this stigmatization, even opening the door to criminalization [28,29]. This concern has been most prominent in molecular HIV surveillance, because genomic sequence data could be used in criminal proceedings to accuse an individual of HIV transmission in states where such transmission is criminalized [28,29]. Traditional anonymization and de-identification techniques cannot completely guarantee the prevention of privacy and confidentiality breaches, and hence the protection of PLWH from criminalization. Here, differential privacy, a concept first proposed in 2006, may be a valuable area of further exploration for researchers of AI for HIV seeking to anonymize and de-identify big data created via record linkages [30]. Differential privacy bounds the additional privacy risk an individual incurs by participating in a database, and it can often provide highly accurate information about the database while maintaining strong privacy guarantees. Studies that have already applied differential privacy to electronic health records (EHR) and to public health surveillance data generated through record linkage should be referenced [31,32].
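For intuition, the following is a minimal sketch of the Laplace mechanism, the basic construction behind differential privacy [30]: noise with scale equal to the query's sensitivity divided by the privacy budget epsilon is added to a statistic before release. The count query and epsilon values are illustrative assumptions, not a prescription for HIV surveillance data.

```python
# A minimal sketch of the Laplace mechanism underlying differential privacy:
# noise calibrated to sensitivity/epsilon is added to a count before release,
# bounding what any released statistic reveals about a single individual.
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one person is added or
    removed, so its sensitivity is 1 and the noise scale is 1/epsilon.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
true_count = 137                   # e.g., a cell count from a linked-records table
for eps in (0.1, 1.0, 10.0):       # smaller epsilon: stronger privacy, more noise
    print(f"epsilon={eps}: released count = {laplace_count(true_count, eps, rng=rng):.1f}")
```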

2.3. Ethical Considerations Regarding Transparency

Transparency, which is often used as a synonym for explainability, interpretability, openness, accessibility, and visibility, is difficult to define in simple terms [33]. Nevertheless, in the context of this paper, transparency can refer to clear communication about how AI applications for HIV are developed and what decision-making processes are involved.
Inclusive and participatory approaches in the development and deployment of AI applications for HIV are indispensable for enhanced transparency. Participatory methods in qualitative research, including focus groups and co-design sessions, may help improve transparency, as demonstrated by a study that proposed a framework for co-designing a digital health platform for HIV surveillance and care [34]. In 2022, Davis and colleagues explored the power of digital health to strengthen health systems in low- and middle-income countries while taking into account potential threats to human rights. Various stakeholders, from people living with HIV to HIV activists and human rights lawyers, were invited to participate in the study design, digital ethnography, focus group discussions, key informant interviews, and data analysis [14]. This participatory process contributed to a more transparent study whose design and results were fully understood by those most impacted by it.
The participatory design of AI for HIV beyond digital health interventions requires additional considerations. Devising an AI system usually comprises three phases: design, development, and deployment [35]. Should the participatory approach be adapted to each of these phases? When developing an AI system that pinpoints predicted hotspots of HIV outbreaks, for instance, how do we determine the ground truths (e.g., which case numbers would serve as an appropriate threshold)? Which stakeholders will be involved in setting those ground truths? Which software will be used to build the AI-powered predictive model? How will the different inputs of stakeholders be incorporated into the model training process? A case study by Winter and Carusi may provide some reference for answering these questions [36]. The article examines the intricate collaboration among stakeholders, including data scientists and clinicians, in developing an AI technology for the early diagnosis of pulmonary hypertension, a rare respiratory disease. It elucidates how the knowledge and views of the different participants are integrated and inscribed into the technology, making the AI system not just technical but also social, and thereby more transparent and less unfair.
Committees specialized in HIV data ethics, encompassing relevant stakeholders from diverse backgrounds, should be formed to facilitate this participatory co-creation of AI systems for HIV. This is crucial because the ethics surrounding data directly shape the ethical dimensions of the AI system that uses those data. A study proposing an advisory committee customized to data ethics in sub-Saharan Africa can serve as a model [37]. It invited a wide range of stakeholders, from data scientists and bioethicists to legal experts and researchers with ample knowledge of COVID-19 and HIV. The framework was intended to address ethical concerns specific to big health data projects, since a generic ethics review board may not have the full capacity to address them, data ethics being an emergent discipline in Africa. Once a similar HIV data ethics committee is established, it should construct distinct governance frameworks customized to each type of AI application for HIV. For instance, for AI-driven chatbots designed to promote testing among PLWH and offer mental health counseling, the ten ethical principles that the World Economic Forum (WEF) has proposed for AI-powered chatbots in healthcare can serve as a comprehensive framework for assessing ethical integrity [38,39].
A caveat to consider is the potential for transparency to overwhelm some stakeholders, depending on the amount and type of information being made transparent. Nolan investigated this possibility by extending the concept of therapeutic privilege to AI applications in medical settings [40]. Therapeutic privilege is the discretion of a clinician to limit the transparency of a medical decision on the grounds that providing the information may seriously harm the patient's physical or mental health. In the context of AI systems for HIV, should therapeutic privilege ever be used and, if so, when and under what circumstances? Are there types or amounts of information, such as the inner workings of an AI algorithm for HIV cluster detection, that may overwhelm patients? Is there a risk that overwhelmed patients will be dissuaded from participating in research on AI systems for HIV, or from HIV treatment altogether?

3. Conclusions

Advances in technologies such as AI present opportunities for novel applications in HIV research. To this end, ethical considerations merit discussion to ensure that AI maximizes patient trust, acceptability, and transparency while minimizing unfairness. This paper explores the ethical considerations surrounding the use of AI for HIV with a focus on acceptability, trust, fairness, and transparency, offering recommendations and critical questions for each focus area. The recommendations include a framework for informed consent, the use of Federated Learning (FL) to reinforce privacy protection, the application of differential privacy to HIV data, and the inclusive and participatory design of AI accompanied by the formation of a data ethics committee and the construction of relevant frameworks and principles. These considerations and suggestions will inform relevant stakeholders, including PLWH, clinicians, policymakers, and researchers, in areas such as intervention design and policymaking.

Author Contributions

Conceptualization, R.G. and S.D.Y.; methodology, R.G., S.K., and S.D.Y.; writing, R.G., S.K., and S.D.Y.; response to reviewer comments and major revisions, S.K.; funding acquisition, R.G. and S.D.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by grants from the National Institute on Drug Abuse (NIDA), National Institute for Minority Health and Health Disparities (NIMHD), and National Center for Complementary and Integrative Health (NCCIH).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used, as this is a communication manuscript.

Conflicts of Interest

SDY is an advisor and consultant to digital health companies, including ones working on Artificial Intelligence.

References

  1. HIV Diagnoses|HIV in the US|HIV Statistics Center|HIV|CDC. Available online: https://www.cdc.gov/hiv/statistics/overview/in-us/diagnoses.html (accessed on 11 April 2024).
  2. Roche, S.D.; Ekwunife, O.I.; Mendonca, R.; Kwach, B.; Omollo, V.; Zhang, S.; Ongwen, P.; Hattery, D.; Smedinghoff, S.; Morris, S.; et al. Measuring the performance of computer vision artificial intelligence to interpret images of HIV self-testing results. Front. Public Health 2024, 12, 1334881. [Google Scholar] [CrossRef] [PubMed]
  3. Young, S.D.; Yu, W.; Wang, W. Toward Automating HIV Identification: Machine Learning for Rapid Identification of HIV-related Social Media Data. J. Acquir. Immune. Defic. Syndr. 2017, 74 (Suppl. S2), S128–S131. [Google Scholar] [CrossRef] [PubMed]
  4. Balzer, L.B.; Havlir, D.V.; Kamya, M.R.; Chamie, G.; Charlebois, E.D.; Clark, T.D.; Koss, C.; Kwarisiima, D.; Ayieko, J.; Sang, N.; et al. Machine Learning to Identify Persons at High-Risk of Human Immunodeficiency Virus Acquisition in Rural Kenya and Uganda. Clin. Infect. Dis. 2019, 71, 2326–2333. [Google Scholar] [CrossRef] [PubMed]
  5. Peng, M.L.; Wickersham, J.; Altice, F.L.; Shrestha, R.; Azwa, I.; Zhou, X.; Ab Halim, M.A.; Ikhtiaruddin, W.M.; Tee, V.; Kamarulzaman, A.; et al. Formative Evaluation of the Acceptance of HIV Prevention Artificial Intelligence Chatbots By Men Who Have Sex with Men in Malaysia: Focus Group Study. JMIR Form. Res. 2022, 6, e42055. [Google Scholar] [CrossRef] [PubMed]
  6. Mello, M.M.; Wang, C.J. Ethics and governance for digital disease surveillance. Science 2020, 368, 951–954. [Google Scholar] [CrossRef] [PubMed]
  7. Gilkey, M.B.; Marcus, J.L.; Garrell, J.M.; Powell, V.E.; Maloney, K.M.; Krakower, D.S. Using HIV Risk Prediction Tools to Identify Candidates for Pre-Exposure Prophylaxis: Perspectives from Patients and Primary Care Providers. AIDS Patient Care STDs 2019, 33, 372–378. [Google Scholar] [CrossRef] [PubMed]
  8. Palanica, A.; Flaschner, P.; Thommandram, A.; Li, M.; Fossat, Y. Physicians’ Perceptions of Chatbots in Health Care: Cross-Sectional Web-Based Survey. J. Med. Internet Res. 2019, 21, e12887. [Google Scholar] [CrossRef] [PubMed]
  9. Romm, T. Facebook Ads Push Misinformation about HIV Prevention Drugs, LGBT Activists Say, ‘Harming Public Health’. Washington Post. 9 December 2019. Available online: https://www.washingtonpost.com/technology/2019/12/09/facebook-ads-are-pushing-misinformation-about-hiv-prevention-drugs-lgbt-activists-say-harming-public-health/ (accessed on 11 April 2024).
  10. Nadarzynski, T.; Miles, O.; Cowie, A.; Ridge, D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit. Health 2019, 5, 2055207619871808. [Google Scholar] [CrossRef] [PubMed]
  11. Xiao, Z.; Liao, Q.V.; Zhou, M.; Grandison, T.; Li, Y. Powering an AI Chatbot with Expert Sourcing to Support Credible Health Information Access. In Proceedings of the 28th International Conference on Intelligent User Interfaces, Sydney, Australia, 27–31 March 2023; IUI ’23; Association for Computing Machinery: New York, NY, USA, 2023; pp. 2–18. [Google Scholar] [CrossRef]
  12. Deiana, G.; Dettori, M.; Arghittu, A.; Azara, A.; Gabutti, G.; Castiglia, P. Artificial Intelligence and Public Health: Evaluating ChatGPT Responses to Vaccination Myths and Misconceptions. Vaccines 2023, 11, 1217. [Google Scholar] [CrossRef] [PubMed]
  13. Molldrem, S.; Smith, A.K.J.; Subrahmanyam, V. Toward Consent in Molecular HIV Surveillance?: Perspectives of Critical Stakeholders. AJOB Empir. Bioeth. 2024, 15, 66–79. [Google Scholar] [CrossRef] [PubMed]
  14. Davis, S.L.M.; Pham, T.; Kpodo, I.; Imalingat, T.; Muthui, A.K.; Mjwana, N.; Sandset, T.; Ayeh, E.; Dong, D.D.; Large, K.; et al. Digital health and human rights of young adults in Ghana, Kenya and Vietnam: A qualitative participatory action research study. BMJ Glob. Health 2023, 8, e011254. [Google Scholar] [CrossRef]
  15. Raza, A. Secure and Privacy-Preserving Federated Learning with Explainable Artificial Intelligence for Smart Healthcare System. Ph.D. Thesis, Université de Lille, Lille, France; University of Kent, Canterbury, UK, 2023. Available online: https://theses.hal.science/tel-04398455 (accessed on 11 April 2024).
  16. Van Nguyen, T.P.; Yang, W.; Tang, Z.; Xia, X.; Mullens, A.B.; Dean, J.A.; Li, Y. Lightweight federated learning for STIs/HIV prediction. Sci. Rep. 2024, 14, 6560. [Google Scholar] [CrossRef] [PubMed]
  17. Shaban-Nejad, A.; Michalowski, M.; Brownstein, J.; Buckeridge, D. Guest Editorial Explainable AI: Towards Fairness, Accountability, Transparency and Trust in Healthcare. IEEE J. Biomed. Health Inform. 2021, 25, 2374–2375. [Google Scholar] [CrossRef]
  18. Zhao, J.; Wang, T.; Yatskar, M.; Ordonez, V.; Chang, K.-W. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017; Palmer, M., Hwa, R., Riedel, S., Eds.; Association for Computational Linguistics: Copenhagen, Denmark, 2017; pp. 2979–2989. [Google Scholar] [CrossRef]
  19. Buolamwini, J.; Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR, New York, NY, USA, 23–24 February 2018; pp. 77–91. Available online: https://proceedings.mlr.press/v81/buolamwini18a.html (accessed on 11 April 2024).
  20. Use of Electronic Health Record Data and Machine Learning to Identify Candidates for HIV Pre-Exposure Prophylaxis: A Modelling Study—The Lancet HIV. Available online: https://www.thelancet.com/journals/lanhiv/article/PIIS2352-3018(19)30137-7/abstract (accessed on 11 April 2024).
  21. Lancki, N.; Almirol, E.; Alon, L.; McNulty, M.; Schneider, J. PrEP guidelines have low sensitivity for identifying seroconverters in a sample of Young Black men who have sex with men in Chicago. AIDS 2018, 32, 383–392. [Google Scholar] [CrossRef] [PubMed]
  22. Assessing the Performance of 3 Human Immunodeficiency Virus…: Sexually Transmitted Diseases. Available online: https://journals.lww.com/stdjournal/fulltext/2017/05000/assessing_the_performance_of_3_human.8.aspx (accessed on 11 April 2024).
  23. Artificial Intelligence and Machine Learning for HIV Prevention: Emerging Approaches to Ending the Epidemic|Current HIV/AIDS Reports. Available online: https://link.springer.com/article/10.1007/s11904-020-00490-6 (accessed on 11 April 2024).
  24. Tomasev, N.; McKee, K.R.; Kay, J.; Mohamed, S. Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, 19–21 May 2021; AIES ’21; Association for Computing Machinery: New York, NY, USA, 2021; pp. 254–265. [Google Scholar] [CrossRef]
  25. Offering Women with PrEP with Education, Shared Decision-Making and Trauma-Informed Care: The OPENS Trial|Global Research Projects. Available online: https://globalprojects.ucsf.edu/project/offering-women-prep-education-shared-decision-making-and-trauma-informed-care-opens-trial (accessed on 11 April 2024).
  26. Wilandika, A.; Yusuf, A. The Driving Factors of Social Stigma Against People with HIV/AIDS: An Integrative Review. Malaysian J. Med. Health Sci. 2023, 19, 164–172. [Google Scholar]
  27. Jamrozik, E.; Munung, N.S.; Abeler-Dorner, L.; Parker, M. Public health use of HIV phylogenetic data in sub-Saharan Africa: Ethical issues. BMJ Glob. Health 2023, 8, e011884. [Google Scholar] [CrossRef] [PubMed]
  28. Molldrem, S.; Smith, A.K.J. Health policy counterpublics: Enacting collective resistances to US molecular HIV surveillance and cluster detection and response programs. Soc. Stud. Sci. 2023, 03063127231211933. [Google Scholar] [CrossRef] [PubMed]
  29. Watson, M. Perspectives on the Use of Molecular HIV Data for Public Health. Electronic Theses and Dissertations. 2023. Available online: https://digitalcommons.georgiasouthern.edu/etd/2688 (accessed on 11 April 2024).
  30. Dwork, C. Differential Privacy. In Automata, Languages and Programming; Bugliesi, M., Preneel, B., Sassone, V., Wegener, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–12. [Google Scholar] [CrossRef]
  31. Zia, M.T.; Khan, M.A.; El-Sayed, H. Application of Differential Privacy Approach in Healthcare Data—A Case Study. In Proceedings of the 2020 14th International Conference on Innovations in Information Technology (IIT), Al Ain, United Arab Emirates, 17–18 November 2020; pp. 35–39. [Google Scholar] [CrossRef]
  32. Anjum, A.; Malik, S.U.R.; Choo, K.-K.R.; Khan, A.; Haroon, A.; Khan, S.; Khan, S.U.; Ahmad, N.; Raza, B. An efficient privacy mechanism for electronic health records. Comput. Secur. 2018, 72, 196–211. [Google Scholar] [CrossRef]
  33. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards Transparency by Design for Artificial Intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361. [Google Scholar] [CrossRef] [PubMed]
  34. Marent, B.; Henwood, F.; Darking, M. Participation through the lens of care: Situated accountabilities in the codesign of a digital health platform for HIV care. Soc. Sci. Med. 2023, 337, 116307. [Google Scholar] [CrossRef]
  35. Exploring Patient Perspectives on How They Can and Should Be Engaged in the Development of Artificial Intelligence (AI) Applications in Health Care|BMC Health Services Research|Full Text. Available online: https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-023-10098-2 (accessed on 11 April 2024).
  36. Winter, P.; Carusi, A. Professional expectations and patient expectations concerning the development of Artificial Intelligence (AI) for the early diagnosis of Pulmonary Hypertension (PH). J. Responsible Technol. 2022, 12, 100052. [Google Scholar] [CrossRef] [PubMed]
  37. Kling, S.; Singh, S.; Burgess, T.L.; Nair, G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. S. Afr. J. Sci. 2023, 119, 1–3. [Google Scholar] [CrossRef]
  38. Chatbots RESET: A Framework for Governing Responsible Use of Conversational AI in Healthcare. World Economic Forum. Available online: https://www.weforum.org/publications/chatbots-reset-a-framework-for-governing-responsible-use-of-conversational-ai-in-healthcare/ (accessed on 11 April 2024).
  39. van Heerden, A.; Bosman, S.; Swendeman, D.; Comulada, W.S. Chatbots for HIV Prevention and Care: A Narrative Review. Curr. HIV/AIDS Rep. 2023, 20, 481–486. [Google Scholar] [CrossRef]
  40. Nolan, P. Artificial Intelligence in Medicine—Is too Much Transparency a Good Thing? 2023. Available online: https://journals.sagepub.com/doi/10.1177/00258172221141243?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed (accessed on 11 April 2024).
Table 1. Rubric for informed consent regarding AI for HIV.

6Ws | Considerations
Who | Who are the stakeholders?
How | How should informed consent be rolled out? What medium or method should be used to convey the essential information that stakeholders need to be aware of?
What | What information will be distributed to the stakeholders for consent?
When | When will the stakeholders be prompted to consent? Upon initial diagnosis or at other points in the timeline?
Where | Where does informed consent take place? Should it occur only in in-person settings, or can it be more flexible?
Why | Why do stakeholders deem informed consent important and valuable? Why do they consent in the first place?
