AI Fairness in Data Management and Analytics: A Review on Challenges, Methodologies and Applications
Abstract
1. Introduction
1.1. Background
1.2. Directions
- Algorithmic bias mitigation [18,19,20]: Exploration of techniques such as pre-processing, in-processing, and post-processing to mitigate algorithmic biases, together with a critical analysis of their effectiveness in different scenarios, offering insights into the trade-offs between bias reduction and model performance (see the first sketch after this list).
- Fair representation learning [21,22,23]: Introduction of techniques for learning fair representations, including adversarial debiasing and adversarial learning frameworks, and investigation into their potential for producing fair and informative representations. This fosters a deeper comprehension of their role in mitigating biases, clarifies the true sources of disparities, and aids in the design of more targeted interventions (see the second sketch after this list).
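As a concrete illustration of the pre-processing family mentioned above, the sketch below reweighs training examples so that the protected attribute and the label appear statistically independent before a standard classifier is fit. This is a minimal sketch in the spirit of reweighing-style pre-processing, not the specific method of [18,19,20]; the column names, the synthetic data, and the use of scikit-learn are illustrative assumptions.

```python
# Minimal sketch: pre-processing bias mitigation via reweighing.
# The columns `sex` (protected attribute), `x1` (feature), and `label`
# (outcome), as well as the random data, are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "x1": rng.normal(size=n),
    "label": rng.integers(0, 2, n),
})

# Weight each (group, label) cell by P(A)·P(Y) / P(A, Y) so that the
# protected attribute and the label look independent in the weighted data.
p_group = df.groupby("sex")["label"].transform("size") / n
p_label = df.groupby("label")["sex"].transform("size") / n
p_joint = df.groupby(["sex", "label"])["x1"].transform("size") / n
weights = (p_group * p_label) / p_joint

# Train an ordinary classifier with the fairness-motivated sample weights.
clf = LogisticRegression()
clf.fit(df[["x1"]], df["label"], sample_weight=weights)
```

In-processing methods would instead add a fairness term to the training objective itself, while post-processing methods adjust the fitted model's outputs, for example by choosing group-specific decision thresholds.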
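The adversarial debiasing idea can be sketched just as briefly: an encoder feeds both a task head and an adversary that tries to recover the protected attribute from the learned representation, and the encoder is additionally trained to fool that adversary. The sketch below assumes PyTorch and synthetic data; the layer sizes, the trade-off weight `lam`, and the alternating update scheme are illustrative assumptions rather than the configuration used in [21,22,23].

```python
# Minimal sketch: learning a fair representation with an adversary.
# Shapes, names, and data are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, h = 512, 10, 16
x = torch.randn(n, d)                  # features
y = torch.randint(0, 2, (n,)).float()  # task label
a = torch.randint(0, 2, (n,)).float()  # protected attribute

encoder = nn.Sequential(nn.Linear(d, h), nn.ReLU())
task_head = nn.Linear(h, 1)
adversary = nn.Linear(h, 1)

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness/accuracy trade-off (illustrative)

for step in range(200):
    # (1) Adversary: predict the protected attribute from the frozen representation.
    z = encoder(x).detach()
    adv_loss = bce(adversary(z).squeeze(1), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # (2) Encoder + task head: solve the task while making the adversary fail.
    z = encoder(x)
    task_loss = bce(task_head(z).squeeze(1), y)
    fool_loss = bce(adversary(z).squeeze(1), a)
    main_loss = task_loss - lam * fool_loss
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
```

After training, the adversary can no longer easily predict the protected attribute from `encoder(x)`, which is the sense in which such representations are fair while remaining informative for the downstream task.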
1.3. Scope
1.4. Contributions
1.5. Organization of This Article
2. Preliminary
2.1. Status Quo
2.2. Review Methodology
2.2.1. Materials
2.2.2. Criteria
- Duplication: We strive to offer diverse and original coverage. To avoid redundancy, we check that each work we include does not duplicate material already in our collection.
- Ineligible content: Our selection process also evaluates whether a candidate work meets our eligibility criteria, including adherence to our guidelines and standards.
- Publishing time: We value timeliness and relevance, prioritizing materials that are current and aligned with the most recent developments and trends in the respective field.
- Quality of publication: Ensuring the quality of the included content is of utmost importance; we assess the accuracy, credibility, and overall value of each work against our quality standards.
- Accessibility: Our goal is to make information accessible to a wide range of readers, so we select materials that are well structured, clear, and easily understandable, catering to readers with varying levels of expertise.
- Similarity of content: While covering a broad spectrum of topics, we also strive for variety and distinctiveness in our selection, presenting diverse perspectives and insights to enrich the reader's experience.
2.3. Limitations
3. Definition and Problems
3.1. Definition
3.2. Problems
4. Bias Analysis
4.1. Data Bias
4.2. Algorithmic Bias
4.3. User Interaction Bias
5. Fair Training
5.1. Fair Training Methods
5.2. Pre-Processing Fairness
5.3. In-Processing Fairness
5.4. Post-Processing Fairness
5.5. Regularization-Based Fairness
5.6. Counterfactual Fairness
6. Discussion
6.1. Fair Data Collection
6.2. Regular Auditing and Monitoring
7. AI Fairness in Practice
7.1. Social Administration
7.1.1. Health Care
7.1.2. Education
7.1.3. Criminal Justice and Sentencing
7.2. Business
7.2.1. Hiring and Recruiting
7.2.2. Loan and Credit Decisions
7.2.3. Online Advertising
7.2.4. Customer Service
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Angerschmid, A.; Zhou, J.; Theuermann, K.; Chen, F.; Holzinger, A. Fairness and explanation in AI-informed decision making. Mach. Learn. Knowl. Extr. 2022, 4, 556–579. [Google Scholar] [CrossRef]
- Kratsch, W.; Manderscheid, J.; Röglinger, M.; Seyfried, J. Machine learning in business process monitoring: A comparison of deep learning and classical approaches used for outcome prediction. Bus. Inf. Syst. Eng. 2021, 63, 261–276. [Google Scholar] [CrossRef]
- Kraus, M.; Feuerriegel, S.; Oztekin, A. Deep learning in business analytics and operations research: Models, applications and managerial implications. Eur. J. Oper. Res. 2020, 281, 628–641. [Google Scholar] [CrossRef]
- Varona, D.; Suárez, J.L. Discrimination, bias, fairness, and trustworthy AI. Appl. Sci. 2022, 12, 5826. [Google Scholar] [CrossRef]
- Saghiri, A.M.; Vahidipour, S.M.; Jabbarpour, M.R.; Sookhak, M.; Forestiero, A. A survey of Artificial Intelligence challenges: Analyzing the definitions, relationships, and evolutions. Appl. Sci. 2022, 12, 4054. [Google Scholar] [CrossRef]
- Barocas, S.; Selbst, A.D. Big data’s disparate impact. Calif. Law Rev. 2016, 104, 671–732. [Google Scholar] [CrossRef]
- Corsello, A.; Santangelo, A. May Artificial Intelligence Influence Future Pediatric Research?—The Case of ChatGPT. Children 2023, 10, 757. [Google Scholar] [CrossRef] [PubMed]
- Von Zahn, M.; Feuerriegel, S.; Kuehl, N. The cost of fairness in AI: Evidence from e-commerce. Bus. Inf. Syst. Eng. 2021, 64, 335–348. [Google Scholar] [CrossRef]
- Liu, L.T.; Dean, S.; Rolf, E.; Simchowitz, M.; Hardt, M. Delayed impact of fair machine learning. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 3150–3158. [Google Scholar]
- O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown Publishing Group: New York, NY, USA, 2016. [Google Scholar]
- Hardt, M.; Price, E.; Srebro, N. Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3323–3331. [Google Scholar]
- Trewin, S. AI fairness for people with disabilities: Point of view. arXiv 2018, arXiv:1811.10670. [Google Scholar]
- Kodiyan, A.A. An overview of ethical issues in using AI systems in hiring with a case study of Amazon’s AI based hiring tool. Researchgate Prepr. 2019, 1–19. [Google Scholar]
- Righetti, L.; Madhavan, R.; Chatila, R. Unintended consequences of biased robotic and Artificial Intelligence systems [ethical, legal, and societal issues]. IEEE Robot. Autom. Mag. 2019, 26, 11–13. [Google Scholar] [CrossRef]
- Garg, P.; Villasenor, J.; Foggo, V. Fairness metrics: A comparative analysis. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3662–3666. [Google Scholar]
- Mehrotra, A.; Sachs, J.; Celis, L.E. Revisiting Group Fairness Metrics: The Effect of Networks. Proc. Acm Hum. Comput. Interact. 2022, 6, 1–29. [Google Scholar] [CrossRef]
- Ezzeldin, Y.H.; Yan, S.; He, C.; Ferrara, E.; Avestimehr, A.S. Fairfed: Enabling group fairness in federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 7494–7502. [Google Scholar]
- Hooker, S. Moving beyond “algorithmic bias is a data problem”. Patterns 2021, 2, 100241. [Google Scholar] [CrossRef]
- Amini, A.; Soleimany, A.P.; Schwarting, W.; Bhatia, S.N.; Rus, D. Uncovering and mitigating algorithmic bias through learned latent structure. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 289–295. [Google Scholar]
- Yang, J.; Soltan, A.A.; Eyre, D.W.; Yang, Y.; Clifton, D.A. An adversarial training framework for mitigating algorithmic biases in clinical machine learning. NPJ Digit. Med. 2023, 6, 55. [Google Scholar] [CrossRef]
- Li, S. Towards Trustworthy Representation Learning. In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), Minneapolis, MN, USA, 27–29 April 2023; SIAM: Philadelphia, PA, USA, 2023; pp. 957–960. [Google Scholar]
- Creager, E.; Madras, D.; Jacobsen, J.H.; Weis, M.; Swersky, K.; Pitassi, T.; Zemel, R. Flexibly fair representation learning by disentanglement. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 10–15 June 2019; pp. 1436–1445. [Google Scholar]
- McNamara, D.; Ong, C.S.; Williamson, R.C. Costs and benefits of fair representation learning. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 263–270. [Google Scholar]
- Sahlgren, O. The politics and reciprocal (re) configuration of accountability and fairness in data-driven education. Learn. Media Technol. 2023, 48, 95–108. [Google Scholar] [CrossRef]
- Ravishankar, P.; Mo, Q.; McFowland III, E.; Neill, D.B. Provable Detection of Propagating Sampling Bias in Prediction Models. Proc. AAAI Conf. Artif. Intell. 2023, 37, 9562–9569. [Google Scholar] [CrossRef]
- Park, J.; Ellezhuthil, R.D.; Isaac, J.; Mergerson, C.; Feldman, L.; Singh, V. Misinformation Detection Algorithms and Fairness across Political Ideologies: The Impact of Article Level Labeling. In Proceedings of the 15th ACM Web Science Conference 2023, Austin, TX, USA, 30 April–1 May 2023; pp. 107–116. [Google Scholar]
- Friedrich, J. Primary error detection and minimization (PEDMIN) strategies in social cognition: A reinterpretation of confirmation bias phenomena. Psychol. Rev. 1993, 100, 298. [Google Scholar] [CrossRef] [PubMed]
- Frincke, D.; Tobin, D.; McConnell, J.; Marconi, J.; Polla, D. A framework for cooperative intrusion detection. In Proceedings of the 21st NIST-NCSC National Information Systems Security Conference, Arlington, VA, USA, 2–8 April 1998; pp. 361–373. [Google Scholar]
- Estivill-Castro, V.; Brankovic, L. Data swapping: Balancing privacy against precision in mining for logic rules. In International Conference on Data Warehousing and Knowledge Discovery; Springer: Berlin/Heidelberg, Germany, 1999; pp. 389–398. [Google Scholar]
- Bellamy, R.K.; Dey, K.; Hind, M.; Hoffman, S.C.; Houde, S.; Kannan, K.; Lohia, P.; Martino, J.; Mehta, S.; Mojsilovic, A.; et al. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv 2018, arXiv:1810.01943. [Google Scholar]
- Zhang, Y.; Bellamy, R.K.; Singh, M.; Liao, Q.V. Introduction to AI fairness. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2020; pp. 1–4. [Google Scholar]
- Mahoney, T.; Varshney, K.; Hind, M. AI Fairness; O’Reilly Media Incorporated: Sebastopol, CA, USA, 2020. [Google Scholar]
- Mosteiro, P.; Kuiper, J.; Masthoff, J.; Scheepers, F.; Spruit, M. Bias discovery in machine learning models for mental health. Information 2022, 13, 237. [Google Scholar] [CrossRef]
- Wing, J.M. Trustworthy AI. Commun. ACM 2021, 64, 64–71. [Google Scholar] [CrossRef]
- Percy, C.; Dragicevic, S.; Sarkar, S.; d’Avila Garcez, A. Accountability in AI: From principles to industry-specific accreditation. AI Commun. 2021, 34, 181–196. [Google Scholar] [CrossRef]
- Benjamins, R.; Barbado, A.; Sierra, D. Responsible AI by design in practice. arXiv 2019, arXiv:1909.12838. [Google Scholar]
- Dignum, V. The myth of complete AI-fairness. In Proceedings of the Artificial Intelligence in Medicine: 19th International Conference on Artificial Intelligence in Medicine, AIME 2021, Virtual, 15–18 June 2021; Springer: Cham, Switzerland, 2021; pp. 3–8. [Google Scholar]
- Silberg, J.; Manyika, J. Notes from the AI Frontier: Tackling Bias in AI (and in Humans); McKinsey Global Institute: San Francisco, CA, USA, 2019; Volume 1. [Google Scholar]
- Bird, S.; Kenthapadi, K.; Kiciman, E.; Mitchell, M. Fairness-aware machine learning: Practical challenges and lessons learned. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 834–835. [Google Scholar]
- Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 January 2012; pp. 214–226. [Google Scholar]
- Islam, R.; Keya, K.N.; Pan, S.; Sarwate, A.D.; Foulds, J.R. Differential Fairness: An Intersectional Framework for Fair AI. Entropy 2023, 25, 660. [Google Scholar] [CrossRef] [PubMed]
- Barocas, S.; Hardt, M.; Narayanan, A. Fairness in machine learning. Nips Tutor. 2017, 1, 2017. [Google Scholar]
- Zafar, M.B.; Valera, I.; Rogriguez, M.G.; Gummadi, K.P. Fairness constraints: Mechanisms for fair classification. In Proceedings of the Artificial Intelligence and Statistics, PMLR, Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 962–970. [Google Scholar]
- Cornacchia, G.; Anelli, V.W.; Biancofiore, G.M.; Narducci, F.; Pomo, C.; Ragone, A.; Di Sciascio, E. Auditing fairness under unawareness through counterfactual reasoning. Inf. Process. Manag. 2023, 60, 103224. [Google Scholar] [CrossRef]
- Kusner, M.J.; Loftus, J.; Russell, C.; Silva, R. Counterfactual fairness. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Feldman, M.; Friedler, S.A.; Moeller, J.; Scheidegger, C.; Venkatasubramanian, S. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 11–14 August 2015; pp. 259–268. [Google Scholar]
- Kearns, M.; Neel, S.; Roth, A.; Wu, Z.S. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 2564–2572. [Google Scholar]
- Fleisher, W. What’s fair about individual fairness? In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 19–21 May 2021; pp. 480–490. [Google Scholar]
- Mukherjee, D.; Yurochkin, M.; Banerjee, M.; Sun, Y. Two simple ways to learn individual fairness metrics from data. In Proceedings of the International Conference on Machine Learning, PMLR, Copenhagen, Denmark, 16–19 December 2020; pp. 7097–7107. [Google Scholar]
- Dwork, C.; Ilvento, C. Group fairness under composition. In Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT* 2018), New York, NY, USA, 23–24 February 2018; Volume 3. [Google Scholar]
- Binns, R. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 514–524. [Google Scholar]
- Chen, R.J.; Wang, J.J.; Williamson, D.F.; Chen, T.Y.; Lipkova, J.; Lu, M.Y.; Sahai, S.; Mahmood, F. Algorithmic fairness in Artificial Intelligence for medicine and healthcare. Nat. Biomed. Eng. 2023, 7, 719–742. [Google Scholar] [CrossRef]
- Sloan, R.H.; Warner, R. Beyond bias: Artificial Intelligence and social justice. Va. J. Law Technol. 2020, 24, 1. [Google Scholar] [CrossRef]
- Feuerriegel, S.; Dolata, M.; Schwabe, G. Fair AI: Challenges and opportunities. Bus. Inf. Syst. Eng. 2020, 62, 379–384. [Google Scholar] [CrossRef]
- Bing, L.; Pettit, B.; Slavinski, I. Incomparable punishments: How economic inequality contributes to the disparate impact of legal fines and fees. RSF Russell Sage Found. J. Soc. Sci. 2022, 8, 118–136. [Google Scholar] [CrossRef] [PubMed]
- Wang, L.; Zhu, H. How are ML-Based Online Content Moderation Systems Actually Used? Studying Community Size, Local Activity, and Disparate Treatment. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 824–838. [Google Scholar]
- Tom, D.; Computing, D. Eliminating Disparate Treatment in Modeling Default of Credit Card Clients; Technical Report; Center for Open Science: Charlottesville, VA, USA, 2023. [Google Scholar]
- Shui, C.; Xu, G.; Chen, Q.; Li, J.; Ling, C.X.; Arbel, T.; Wang, B.; Gagné, C. On learning fairness and accuracy on multiple subgroups. Adv. Neural Inf. Process. Syst. 2022, 35, 34121–34135. [Google Scholar]
- Mayernik, M.S. Open data: Accountability and transparency. Big Data Soc. 2017, 4, 2053951717718853. [Google Scholar] [CrossRef]
- Zhou, N.; Zhang, Z.; Nair, V.N.; Singhal, H.; Chen, J.; Sudjianto, A. Bias, Fairness, and Accountability with AI and ML Algorithms. arXiv 2021, arXiv:2105.06558. [Google Scholar]
- Shin, D. User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
- Sokol, K.; Hepburn, A.; Poyiadzi, R.; Clifford, M.; Santos-Rodriguez, R.; Flach, P. Fat forensics: A python toolbox for implementing and deploying fairness, accountability and transparency algorithms in predictive systems. arXiv 2022, arXiv:2209.03805. [Google Scholar] [CrossRef]
- Gevaert, C.M.; Carman, M.; Rosman, B.; Georgiadou, Y.; Soden, R. Fairness and accountability of AI in disaster risk management: Opportunities and challenges. Patterns 2021, 2, 100363. [Google Scholar] [CrossRef]
- Morris, M.R. AI and accessibility. Commun. ACM 2020, 63, 35–37. [Google Scholar] [CrossRef]
- Israni, S.T.; Matheny, M.E.; Matlow, R.; Whicher, D. Equity, inclusivity, and innovative digital technologies to improve adolescent and young adult health. J. Adolesc. Health 2020, 67, S4–S6. [Google Scholar] [CrossRef]
- Ntoutsi, E.; Fafalios, P.; Gadiraju, U.; Iosifidis, V.; Nejdl, W.; Vidal, M.E.; Ruggieri, S.; Turini, F.; Papadopoulos, S.; Krasanakis, E.; et al. Bias in data-driven Artificial Intelligence systems—An introductory survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1356. [Google Scholar] [CrossRef]
- Baeza-Yates, R. Bias on the web. Commun. ACM 2018, 61, 54–61. [Google Scholar] [CrossRef]
- Pessach, D.; Shmueli, E. Improving fairness of Artificial Intelligence algorithms in Privileged-Group Selection Bias data settings. Expert Syst. Appl. 2021, 185, 115667. [Google Scholar] [CrossRef]
- Wang, Y.; Singh, L. Analyzing the impact of missing values and selection bias on fairness. Int. J. Data Sci. Anal. 2021, 12, 101–119. [Google Scholar] [CrossRef]
- Russell, G.; Mandy, W.; Elliott, D.; White, R.; Pittwood, T.; Ford, T. Selection bias on intellectual ability in autism research: A cross-sectional review and meta-analysis. Mol. Autism 2019, 10, 1–10. [Google Scholar] [CrossRef]
- Bolukbasi, T.; Chang, K.W.; Zou, J.Y.; Saligrama, V.; Kalai, A.T. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar]
- Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 2021, 54, 1–35. [Google Scholar] [CrossRef]
- Torralba, A.; Efros, A.A. Unbiased look at dataset bias. In Proceedings of the CVPR, Colorado Springs, CO, USA, 20–25 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1521–1528. [Google Scholar]
- Liao, Y.; Naghizadeh, P. The impacts of labeling biases on fairness criteria. In Proceedings of the 10th International Conference on Learning Representations, ICLR, Virtual, 25–29 April 2022. [Google Scholar]
- Paulus, J.K.; Kent, D.M. Predictably unequal: Understanding and addressing concerns that algorithmic clinical prediction may increase health disparities. NPJ Digit. Med. 2020, 3, 99. [Google Scholar] [CrossRef]
- Zhao, J.; Wang, T.; Yatskar, M.; Ordonez, V.; Chang, K.W. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv 2017, arXiv:1707.09457. [Google Scholar]
- Yang, N.; Yuan, D.; Liu, C.Z.; Deng, Y.; Bao, W. FedIL: Federated Incremental Learning from Decentralized Unlabeled Data with Convergence Analysis. arXiv 2023, arXiv:2302.11823. [Google Scholar]
- Tripathi, S.; Musiolik, T.H. Fairness and ethics in Artificial Intelligence-based medical imaging. In Research Anthology on Improving Medical Imaging Techniques for Analysis and Intervention; IGI Global: Hershey, PA, USA, 2023; pp. 79–90. [Google Scholar]
- Mashhadi, A.; Kyllo, A.; Parizi, R.M. Fairness in Federated Learning for Spatial-Temporal Applications. arXiv 2022, arXiv:2201.06598. [Google Scholar]
- Zhao, D.; Yu, G.; Xu, P.; Luo, M. Equivalence between dropout and data augmentation: A mathematical check. Neural Netw. 2019, 115, 82–89. [Google Scholar] [CrossRef] [PubMed]
- Chun, J.S.; Brockner, J.; De Cremer, D. How temporal and social comparisons in performance evaluation affect fairness perceptions. Organ. Behav. Hum. Decis. Process. 2018, 145, 1–15. [Google Scholar] [CrossRef]
- Asiedu, M.N.; Dieng, A.; Oppong, A.; Nagawa, M.; Koyejo, S.; Heller, K. Globalizing Fairness Attributes in Machine Learning: A Case Study on Health in Africa. arXiv 2023, arXiv:2304.02190. [Google Scholar]
- Hutiri, W.T.; Ding, A.Y. Bias in automated speaker recognition. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 230–247. [Google Scholar]
- Makhlouf, K.; Zhioua, S.; Palamidessi, C. Machine learning fairness notions: Bridging the gap with real-world applications. Inf. Process. Manag. 2021, 58, 102642. [Google Scholar] [CrossRef]
- Kallus, N.; Zhou, A. Residual unfairness in fair machine learning from prejudiced data. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 2439–2448. [Google Scholar]
- Yang, N.; Yuan, D.; Zhang, Y.; Deng, Y.; Bao, W. Asynchronous Semi-Supervised Federated Learning with Provable Convergence in Edge Computing. IEEE Netw. 2022, 36, 136–143. [Google Scholar] [CrossRef]
- So, W.; Lohia, P.; Pimplikar, R.; Hosoi, A.; D’Ignazio, C. Beyond Fairness: Reparative Algorithms to Address Historical Injustices of Housing Discrimination in the US. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 988–1004. [Google Scholar]
- Alikhademi, K.; Drobina, E.; Prioleau, D.; Richardson, B.; Purves, D.; Gilbert, J.E. A review of predictive policing from the perspective of fairness. Artif. Intell. Law 2022, 30, 1–17. [Google Scholar] [CrossRef]
- Rajkomar, A.; Hardt, M.; Howell, M.D.; Corrado, G.; Chin, M.H. Ensuring fairness in machine learning to advance health equity. Ann. Intern. Med. 2018, 169, 866–872. [Google Scholar] [CrossRef]
- Woo, S.E.; LeBreton, J.M.; Keith, M.G.; Tay, L. Bias, fairness, and validity in graduate-school admissions: A psychometric perspective. Perspect. Psychol. Sci. 2023, 18, 3–31. [Google Scholar] [CrossRef]
- Weerts, H.; Pfisterer, F.; Feurer, M.; Eggensperger, K.; Bergman, E.; Awad, N.; Vanschoren, J.; Pechenizkiy, M.; Bischl, B.; Hutter, F. Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML. arXiv 2023, arXiv:2303.08485. [Google Scholar]
- Hauer, K.E.; Park, Y.S.; Bullock, J.L.; Tekian, A. “My Assessments Are Biased!” Measurement and Sociocultural Approaches to Achieve Fairness in Assessment in Medical Education. Acad. Med. J. Assoc. Am. Med. Coll. 2023. online ahead of print. [Google Scholar]
- Chen, Y.; Mahoney, C.; Grasso, I.; Wali, E.; Matthews, A.; Middleton, T.; Njie, M.; Matthews, J. Gender bias and under-representation in natural language processing across human languages. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 19–21 May 2021; pp. 24–34. [Google Scholar]
- Chai, J.; Wang, X. Fairness with adaptive weights. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 2853–2866. [Google Scholar]
- Zhou, Q.; Mareček, J.; Shorten, R. Fairness in Forecasting of Observations of Linear Dynamical Systems. J. Artif. Intell. Res. 2023, 76, 1247–1280. [Google Scholar] [CrossRef]
- Spinelli, I.; Scardapane, S.; Hussain, A.; Uncini, A. Fairdrop: Biased edge dropout for enhancing fairness in graph representation learning. IEEE Trans. Artif. Intell. 2021, 3, 344–354. [Google Scholar] [CrossRef]
- Yu, C.; Liao, W. Professionalism and homophily bias: A study of Airbnb stay choice and review positivity. Int. J. Hosp. Manag. 2023, 110, 103433. [Google Scholar] [CrossRef]
- Lerchenmueller, M.; Hoisl, K.; Schmallenbach, L. Homophily, biased attention, and the gender gap in science. In Academy of Management Proceedings; Academy of Management Briarcliff Manor: New York, NY, USA, 2019; Volume 2019, p. 14784. [Google Scholar]
- Vogrin, M.; Wood, G.; Schmickl, T. Confirmation Bias as a Mechanism to Focus Attention Enhances Signal Detection. J. Artif. Soc. Soc. Simul. 2023, 26, 2. [Google Scholar] [CrossRef]
- Kulkarni, A.; Shivananda, A.; Manure, A. Actions, Biases, and Human-in-the-Loop. In Introduction to Prescriptive AI: A Primer for Decision Intelligence Solutioning with Python; Springer: Berkeley, CA, USA, 2023; pp. 125–142. [Google Scholar]
- Gwebu, K.L.; Wang, J.; Zifla, E. Can warnings curb the spread of fake news? The interplay between warning, trust and confirmation bias. Behav. Inf. Technol. 2022, 41, 3552–3573. [Google Scholar] [CrossRef]
- Miller, A.C. Confronting confirmation bias: Giving truth a fighting chance in the information age. Soc. Educ. 2016, 80, 276–279. [Google Scholar]
- Ghazimatin, A.; Kleindessner, M.; Russell, C.; Abedjan, Z.; Golebiowski, J. Measuring fairness of rankings under noisy sensitive information. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 2263–2279. [Google Scholar]
- Warner, R.; Sloan, R.H. Making Artificial Intelligence transparent: Fairness and the problem of proxy variables. Crim. Justice Ethics 2021, 40, 23–39. [Google Scholar] [CrossRef]
- Mazilu, L.; Paton, N.W.; Konstantinou, N.; Fernandes, A.A. Fairness in data wrangling. In Proceedings of the 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 11–13 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 341–348. [Google Scholar]
- Caliskan, A.; Bryson, J.J.; Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 2017, 356, 183–186. [Google Scholar] [CrossRef] [PubMed]
- Helms, J.E. Fairness is not validity or cultural bias in racial-group assessment: A quantitative perspective. Am. Psychol. 2006, 61, 845. [Google Scholar] [CrossRef]
- Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. IJCAI 2017, 17, 4691–4697. [Google Scholar]
- Kordzadeh, N.; Ghasemaghaei, M. Algorithmic bias: Review, synthesis, and future research directions. Eur. J. Inf. Syst. 2022, 31, 388–409. [Google Scholar] [CrossRef]
- Shen, X.; Plested, J.; Caldwell, S.; Gedeon, T. Exploring biases and prejudice of facial synthesis via semantic latent space. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–8. [Google Scholar]
- Garcia, M. Racist in the Machine. World Policy J. 2016, 33, 111–117. [Google Scholar] [CrossRef]
- Heffernan, T. Sexism, racism, prejudice, and bias: A literature review and synthesis of research surrounding student evaluations of courses and teaching. Assess. Eval. High. Educ. 2022, 47, 144–154. [Google Scholar] [CrossRef]
- Prabhu, A.; Dognin, C.; Singh, M. Sampling bias in deep active classification: An empirical study. arXiv 2019, arXiv:1909.09389. [Google Scholar]
- Cortes, C.; Mohri, M. Domain adaptation and sample bias correction theory and algorithm for regression. Theor. Comput. Sci. 2014, 519, 103–126. [Google Scholar] [CrossRef]
- Griffith, G.J.; Morris, T.T.; Tudball, M.J.; Herbert, A.; Mancano, G.; Pike, L.; Sharp, G.C.; Sterne, J.; Palmer, T.M.; Davey Smith, G.; et al. Collider bias undermines our understanding of COVID-19 disease risk and severity. Nat. Commun. 2020, 11, 5749. [Google Scholar] [CrossRef]
- Kleinberg, J.; Mullainathan, S.; Raghavan, M. Inherent trade-offs in the fair determination of risk scores. arXiv 2016, arXiv:1609.05807. [Google Scholar]
- Mansoury, M.; Abdollahpouri, H.; Pechenizkiy, M.; Mobasher, B.; Burke, R. Feedback loop and bias amplification in recommender systems. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual, 19–23 October 2020; pp. 2145–2148. [Google Scholar]
- Pan, W.; Cui, S.; Wen, H.; Chen, K.; Zhang, C.; Wang, F. Correcting the user feedback-loop bias for recommendation systems. arXiv 2021, arXiv:2109.06037. [Google Scholar]
- Taori, R.; Hashimoto, T. Data feedback loops: Model-driven amplification of dataset biases. In Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 33883–33920. [Google Scholar]
- Vokinger, K.N.; Feuerriegel, S.; Kesselheim, A.S. Mitigating bias in machine learning for medicine. Commun. Med. 2021, 1, 25. [Google Scholar] [CrossRef]
- Kuhlman, C.; Jackson, L.; Chunara, R. No computation without representation: Avoiding data and algorithm biases through diversity. arXiv 2020, arXiv:2002.11836. [Google Scholar]
- Raub, M. Bots, bias and big data: Artificial Intelligence, algorithmic bias and disparate impact liability in hiring practices. Ark. L. Rev. 2018, 71, 529. [Google Scholar]
- Norori, N.; Hu, Q.; Aellen, F.M.; Faraci, F.D.; Tzovara, A. Addressing bias in big data and AI for health care: A call for open science. Patterns 2021, 2, 100347. [Google Scholar] [CrossRef]
- Kafai, Y.; Proctor, C.; Lui, D. From theory bias to theory dialogue: Embracing cognitive, situated, and critical framings of computational thinking in K-12 CS education. ACM Inroads 2020, 11, 44–53. [Google Scholar] [CrossRef]
- Celi, L.A.; Cellini, J.; Charpignon, M.L.; Dee, E.C.; Dernoncourt, F.; Eber, R.; Mitchell, W.G.; Moukheiber, L.; Schirmer, J.; Situ, J.; et al. Sources of bias in Artificial Intelligence that perpetuate healthcare disparities—A global review. PLoS Digit. Health 2022, 1, e0000022. [Google Scholar] [CrossRef] [PubMed]
- Schemmer, M.; Kühl, N.; Benz, C.; Satzger, G. On the influence of explainable AI on automation bias. arXiv 2022, arXiv:2204.08859. [Google Scholar]
- Alon-Barkat, S.; Busuioc, M. Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. J. Public Adm. Res. Theory 2023, 33, 153–169. [Google Scholar] [CrossRef]
- Jones-Jang, S.M.; Park, Y.J. How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. J. Comput. Mediat. Commun. 2023, 28, zmac029. [Google Scholar] [CrossRef]
- Strauß, S. Deep automation bias: How to tackle a wicked problem of ai? Big Data Cogn. Comput. 2021, 5, 18. [Google Scholar] [CrossRef]
- Raisch, S.; Krakowski, S. Artificial Intelligence and management: The automation–augmentation paradox. Acad. Manag. Rev. 2021, 46, 192–210. [Google Scholar] [CrossRef]
- Lyons, J.B.; Guznov, S.Y. Individual differences in human–machine trust: A multi-study look at the perfect automation schema. Theor. Issues Ergon. Sci. 2019, 20, 440–458. [Google Scholar] [CrossRef]
- Nakao, Y.; Stumpf, S.; Ahmed, S.; Naseer, A.; Strappelli, L. Toward involving end-users in interactive human-in-the-loop AI fairness. ACM Trans. Interact. Intell. Syst. (TiiS) 2022, 12, 1–30. [Google Scholar] [CrossRef]
- Yarger, L.; Cobb Payton, F.; Neupane, B. Algorithmic equity in the hiring of underrepresented IT job candidates. Online Inf. Rev. 2020, 44, 383–395. [Google Scholar] [CrossRef]
- Zhou, Y.; Kantarcioglu, M.; Clifton, C. On Improving Fairness of AI Models with Synthetic Minority Oversampling Techniques. In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), Minneapolis, MN, USA, 27–29 April 2023; SIAM: Philadelphia, PA, USA, 2023; pp. 874–882. [Google Scholar]
- Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
- Calmon, F.; Wei, D.; Vinzamuri, B.; Natesan Ramamurthy, K.; Varshney, K.R. Optimized pre-processing for discrimination prevention. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Zhang, B.H.; Lemoine, B.; Mitchell, M. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; pp. 335–340. [Google Scholar]
- Chiappa, S. Path-specific counterfactual fairness. AAAI Conf. Artif. Intell. 2019, 33, 7801–7808. [Google Scholar] [CrossRef]
- Sun, Y.; Haghighat, F.; Fung, B.C. Trade-off between accuracy and fairness of data-driven building and indoor environment models: A comparative study of pre-processing methods. Energy 2022, 239, 122273. [Google Scholar] [CrossRef]
- Sun, Y.; Fung, B.C.; Haghighat, F. The generalizability of pre-processing techniques on the accuracy and fairness of data-driven building models: A case study. Energy Build. 2022, 268, 112204. [Google Scholar] [CrossRef]
- Wan, M.; Zha, D.; Liu, N.; Zou, N. In-processing modeling techniques for machine learning fairness: A survey. ACM Trans. Knowl. Discov. Data 2023, 17, 1–27. [Google Scholar] [CrossRef]
- Sun, Y.; Fung, B.C.; Haghighat, F. In-Processing fairness improvement methods for regression Data-Driven building Models: Achieving uniform energy prediction. Energy Build. 2022, 277, 112565. [Google Scholar] [CrossRef]
- Petersen, F.; Mukherjee, D.; Sun, Y.; Yurochkin, M. Post-processing for individual fairness. Adv. Neural Inf. Process. Syst. 2021, 34, 25944–25955. [Google Scholar]
- Lohia, P.K.; Ramamurthy, K.N.; Bhide, M.; Saha, D.; Varshney, K.R.; Puri, R. Bias mitigation post-processing for individual and group fairness. In Proceedings of the Icassp 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2847–2851. [Google Scholar]
- Putzel, P.; Lee, S. Blackbox post-processing for multiclass fairness. arXiv 2022, arXiv:2201.04461. [Google Scholar]
- Jung, S.; Park, T.; Chun, S.; Moon, T. Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization. arXiv 2023, arXiv:2303.00442. [Google Scholar]
- Lal, G.R.; Geyik, S.C.; Kenthapadi, K. Fairness-aware online personalization. arXiv 2020, arXiv:2007.15270. [Google Scholar]
- Wu, Y.; Zhang, L.; Wu, X. Counterfactual fairness: Unidentification, bound and algorithm. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019. [Google Scholar]
- Cheong, J.; Kalkan, S.; Gunes, H. Counterfactual fairness for facial expression recognition. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 245–261. [Google Scholar]
- Wang, X.; Li, B.; Su, X.; Peng, H.; Wang, L.; Lu, C.; Wang, C. Autonomous dispatch trajectory planning on flight deck: A search-resampling-optimization framework. Eng. Appl. Artif. Intell. 2023, 119, 105792. [Google Scholar] [CrossRef]
- Xie, S.M.; Santurkar, S.; Ma, T.; Liang, P. Data selection for language models via importance resampling. arXiv 2023, arXiv:2302.03169. [Google Scholar]
- Khushi, M.; Shaukat, K.; Alam, T.M.; Hameed, I.A.; Uddin, S.; Luo, S.; Yang, X.; Reyes, M.C. A comparative performance analysis of data resampling methods on imbalance medical data. IEEE Access 2021, 9, 109960–109975. [Google Scholar] [CrossRef]
- Ghorbani, R.; Ghousi, R. Comparing different resampling methods in predicting students’ performance using machine learning techniques. IEEE Access 2020, 8, 67899–67911. [Google Scholar] [CrossRef]
- He, E.; Xie, Y.; Liu, L.; Chen, W.; Jin, Z.; Jia, X. Physics Guided Neural Networks for Time-Aware Fairness: An Application in Crop Yield Prediction. AAAI Conf. Artif. Intell. 2023, 37, 14223–14231. [Google Scholar] [CrossRef]
- Wang, S.; Wang, B.; Zhang, Z.; Heidari, A.A.; Chen, H. Class-aware sample reweighting optimal transport for multi-source domain adaptation. Neurocomputing 2023, 523, 213–223. [Google Scholar] [CrossRef]
- Song, P.; Li, P.; Dai, L.; Wang, T.; Chen, Z. Boosting R-CNN: Reweighting R-CNN samples by RPN’s error for underwater object detection. Neurocomputing 2023, 530, 150–164. [Google Scholar] [CrossRef]
- Jin, M.; Ju, C.J.T.; Chen, Z.; Liu, Y.C.; Droppo, J.; Stolcke, A. Adversarial reweighting for speaker verification fairness. arXiv 2022, arXiv:2207.07776. [Google Scholar]
- Kieninger, S.; Donati, L.; Keller, B.G. Dynamical reweighting methods for Markov models. Curr. Opin. Struct. Biol. 2020, 61, 124–131. [Google Scholar] [CrossRef]
- Zhou, X.; Lin, Y.; Pi, R.; Zhang, W.; Xu, R.; Cui, P.; Zhang, T. Model agnostic sample reweighting for out-of-distribution learning. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 27203–27221. [Google Scholar]
- Khalifa, N.E.; Loey, M.; Mirjalili, S. A comprehensive survey of recent trends in deep learning for digital images augmentation. Artif. Intell. Rev. 2022, 55, 2351–2377. [Google Scholar] [CrossRef]
- Pastaltzidis, I.; Dimitriou, N.; Quezada-Tavarez, K.; Aidinlis, S.; Marquenie, T.; Gurzawska, A.; Tzovaras, D. Data augmentation for fairness-aware machine learning: Preventing algorithmic bias in law enforcement systems. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 2302–2314. [Google Scholar]
- Kose, O.D.; Shen, Y. Fair node representation learning via adaptive data augmentation. arXiv 2022, arXiv:2201.08549. [Google Scholar]
- Zhang, Y.; Sang, J. Towards accuracy-fairness paradox: Adversarial example-based data augmentation for visual debiasing. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 4346–4354. [Google Scholar]
- Zheng, L.; Zhu, Y.; He, J. Fairness-aware Multi-view Clustering. In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), Minneapolis, MN, USA, 27–29 April 2023; SIAM: Philadelphia, PA, USA, 2023; pp. 856–864. [Google Scholar]
- Le Quy, T.; Friege, G.; Ntoutsi, E. A Review of Clustering Models in Educational Data Science Toward Fairness-Aware Learning. In Educational Data Science: Essentials, Approaches, and Tendencies: Proactive Education based on Empirical Big Data Evidence; Springer: Singapore, 2023; pp. 43–94. [Google Scholar]
- Chierichetti, F.; Kumar, R.; Lattanzi, S.; Vassilvitskii, S. Fair clustering through fairlets. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Kamishima, T.; Akaho, S.; Asoh, H.; Sakuma, J. Fairness-aware classifier with prejudice remover regularizer. In Proceedings of the Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2012, Bristol, UK, 24–28 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 35–50. [Google Scholar]
- Chakraborty, J.; Majumder, S.; Menzies, T. Bias in machine learning software: Why? how? what to do? In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Athens, Greece, 23–28 August 2021; pp. 429–440. [Google Scholar]
- Blagus, R.; Lusa, L. SMOTE for high-dimensional class-imbalanced data. BMC Bioinform. 2013, 14, 106. [Google Scholar] [CrossRef] [PubMed]
- Blagus, R.; Lusa, L. Evaluation of smote for high-dimensional class-imbalanced microarray data. In Proceedings of the 2012 11th International Conference on Machine Learning and Applications, Boca Raton, FL, USA, 12–15 December 2012; IEEE: Piscataway, NJ, USA, 2012; Volume 2, pp. 89–94. [Google Scholar]
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
- Zhao, W.; Alwidian, S.; Mahmoud, Q.H. Adversarial Training Methods for Deep Learning: A Systematic Review. Algorithms 2022, 15, 283. [Google Scholar] [CrossRef]
- Bai, T.; Luo, J.; Zhao, J.; Wen, B.; Wang, Q. Recent advances in adversarial training for adversarial robustness. arXiv 2021, arXiv:2102.01356. [Google Scholar]
- Wong, E.; Rice, L.; Kolter, J.Z. Fast is better than free: Revisiting adversarial training. arXiv 2020, arXiv:2001.03994. [Google Scholar]
- Andriushchenko, M.; Flammarion, N. Understanding and improving fast adversarial training. Adv. Neural Inf. Process. Syst. 2020, 33, 16048–16059. [Google Scholar]
- Shafahi, A.; Najibi, M.; Ghiasi, M.A.; Xu, Z.; Dickerson, J.; Studer, C.; Davis, L.S.; Taylor, G.; Goldstein, T. Adversarial training for free! Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
- Lim, J.; Kim, Y.; Kim, B.; Ahn, C.; Shin, J.; Yang, E.; Han, S. BiasAdv: Bias-Adversarial Augmentation for Model Debiasing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 3832–3841. [Google Scholar]
- Hong, J.; Zhu, Z.; Yu, S.; Wang, Z.; Dodge, H.H.; Zhou, J. Federated adversarial debiasing for fair and transferable representations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 617–627. [Google Scholar]
- Darlow, L.; Jastrzębski, S.; Storkey, A. Latent adversarial debiasing: Mitigating collider bias in deep neural networks. arXiv 2020, arXiv:2011.11486. [Google Scholar]
- Mishler, A.; Kennedy, E.H.; Chouldechova, A. Fairness in risk assessment instruments: Post-processing to achieve counterfactual equalized odds. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event/Toronto, ON, Canada, 3–10 March 2021; pp. 386–400. [Google Scholar]
- Roy, S.; Salimi, B. Causal inference in data analysis with applications to fairness and explanations. In Reasoning Web. Causality, Explanations and Declarative Knowledge: 18th International Summer School 2022, Berlin, Germany, 27–30 September 2022; Springer: Cham, Switzerland, 2023; pp. 105–131. [Google Scholar]
- Madras, D.; Creager, E.; Pitassi, T.; Zemel, R. Fairness through causal awareness: Learning causal latent-variable models for biased data. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; pp. 349–358. [Google Scholar]
- Loftus, J.R.; Russell, C.; Kusner, M.J.; Silva, R. Causal reasoning for algorithmic fairness. arXiv 2018, arXiv:1805.05859. [Google Scholar]
- Hinnefeld, J.H.; Cooman, P.; Mammo, N.; Deese, R. Evaluating fairness metrics in the presence of dataset bias. arXiv 2018, arXiv:1809.09245. [Google Scholar]
- Modén, M.U.; Lundin, J.; Tallvid, M.; Ponti, M. Involving teachers in meta-design of AI to ensure situated fairness. Proceedings 2022, 1613, 0073. [Google Scholar]
- Zhao, C.; Li, C.; Li, J.; Chen, F. Fair meta-learning for few-shot classification. In Proceedings of the 2020 IEEE International Conference on Knowledge Graph (ICKG), Nanjing, China, 9–11 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 275–282. [Google Scholar]
- Hsu, B.; Chen, X.; Han, Y.; Namkoong, H.; Basu, K. An Operational Perspective to Fairness Interventions: Where and How to Intervene. arXiv 2023, arXiv:2302.01574. [Google Scholar]
- Salvador, T.; Cairns, S.; Voleti, V.; Marshall, N.; Oberman, A. Faircal: Fairness calibration for face verification. arXiv 2021, arXiv:2106.03761. [Google Scholar]
- Noriega-Campero, A.; Bakker, M.A.; Garcia-Bulle, B.; Pentland, A. Active fairness in algorithmic decision making. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 77–83. [Google Scholar]
- Pleiss, G.; Raghavan, M.; Wu, F.; Kleinberg, J.; Weinberger, K.Q. On fairness and calibration. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Tahir, A.; Cheng, L.; Liu, H. Fairness through Aleatoric Uncertainty. arXiv 2023, arXiv:2304.03646. [Google Scholar]
- Tubella, A.A.; Barsotti, F.; Koçer, R.G.; Mendez, J.A. Ethical implications of fairness interventions: What might be hidden behind engineering choices? Ethics Inf. Technol. 2022, 24, 12. [Google Scholar] [CrossRef]
- Kamishima, T.; Akaho, S.; Asoh, H.; Sakuma, J. Model-based and actual independence for fairness-aware classification. Data Min. Knowl. Discov. 2018, 32, 258–286. [Google Scholar] [CrossRef]
- Kasmi, M.L. Machine Learning Fairness in Finance: An Application to Credit Scoring. Ph.D. Thesis, Tilburg University, Tilburg, The Netherlands, 2021. [Google Scholar]
- Zhang, T.; Zhu, T.; Li, J.; Han, M.; Zhou, W.; Yu, P.S. Fairness in semi-supervised learning: Unlabeled data help to reduce discrimination. IEEE Trans. Knowl. Data Eng. 2020, 34, 1763–1774. [Google Scholar] [CrossRef]
- Caton, S.; Haas, C. Fairness in machine learning: A survey. arXiv 2020, arXiv:2010.04053. [Google Scholar] [CrossRef]
- Small, E.A.; Sokol, K.; Manning, D.; Salim, F.D.; Chan, J. Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness. arXiv 2023, arXiv:2304.09779. [Google Scholar]
- Jang, T.; Shi, P.; Wang, X. Group-aware threshold adaptation for fair classification. AAAI Conf. Artif. Intell. 2022, 36, 6988–6995. [Google Scholar] [CrossRef]
- Nguyen, D.; Gupta, S.; Rana, S.; Shilton, A.; Venkatesh, S. Fairness improvement for black-box classifiers with Gaussian process. Inf. Sci. 2021, 576, 542–556. [Google Scholar] [CrossRef]
- Iosifidis, V.; Fetahu, B.; Ntoutsi, E. Fae: A fairness-aware ensemble framework. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1375–1380. [Google Scholar]
- Zhong, M.; Tandon, R. Learning Fair Classifiers via Min-Max F-divergence Regularization. arXiv 2023, arXiv:2306.16552. [Google Scholar]
- Nandy, P.; Diciccio, C.; Venugopalan, D.; Logan, H.; Basu, K.; El Karoui, N. Achieving Fairness via Post-Processing in Web-Scale Recommender Systems. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 715–725. [Google Scholar]
- Boratto, L.; Fenu, G.; Marras, M. Interplay between upsampling and regularization for provider fairness in recommender systems. User Model. User Adapt. Interact. 2021, 31, 421–455. [Google Scholar] [CrossRef]
- Yao, S.; Huang, B. Beyond parity: Fairness objectives for collaborative filtering. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Yu, B.; Wu, J.; Ma, J.; Zhu, Z. Tangent-normal adversarial regularization for semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10676–10684. [Google Scholar]
- Sato, M.; Suzuki, J.; Kiyono, S. Effective adversarial regularization for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 204–210. [Google Scholar]
- Nasr, M.; Shokri, R.; Houmansadr, A. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 634–646. [Google Scholar]
- Mertikopoulos, P.; Papadimitriou, C.; Piliouras, G. Cycles in adversarial regularized learning. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–10 January 2018; SIAM: Philadelphia, PA, USA, 2018; pp. 2703–2717. [Google Scholar]
- Du, M.; Yang, F.; Zou, N.; Hu, X. Fairness in deep learning: A computational perspective. IEEE Intell. Syst. 2020, 36, 25–34. [Google Scholar] [CrossRef]
- Horesh, Y.; Haas, N.; Mishraky, E.; Resheff, Y.S.; Meir Lador, S. Paired-consistency: An example-based model-agnostic approach to fairness regularization in machine learning. In Proceedings of the Machine Learning and Knowledge Discovery in Databases: International Workshops of ECML PKDD 2019, Würzburg, Germany, 16–20 September 2019; Springer: Cham, Switzerland, 2020; pp. 590–604. [Google Scholar]
- Lohaus, M.; Kleindessner, M.; Kenthapadi, K.; Locatello, F.; Russell, C. Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks. Adv. Neural Inf. Process. Syst. 2022, 35, 16548–16562. [Google Scholar]
- Romano, Y.; Bates, S.; Candes, E. Achieving equalized odds by resampling sensitive attributes. Adv. Neural Inf. Process. Syst. 2020, 33, 361–371. [Google Scholar]
- Cho, J.; Hwang, G.; Suh, C. A fair classifier using mutual information. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 2521–2526. [Google Scholar]
- Wieling, M.; Nerbonne, J.; Baayen, R.H. Quantitative social dialectology: Explaining linguistic variation geographically and socially. PLoS ONE 2011, 6, e23613. [Google Scholar] [CrossRef]
- Bhanot, K.; Qi, M.; Erickson, J.S.; Guyon, I.; Bennett, K.P. The problem of fairness in synthetic healthcare data. Entropy 2021, 23, 1165. [Google Scholar] [CrossRef]
- Brusaferri, A.; Matteucci, M.; Spinelli, S.; Vitali, A. Probabilistic electric load forecasting through Bayesian mixture density networks. Appl. Energy 2022, 309, 118341. [Google Scholar] [CrossRef]
- Errica, F.; Bacciu, D.; Micheli, A. Graph mixture density networks. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 3025–3035. [Google Scholar]
- Makansi, O.; Ilg, E.; Cicek, O.; Brox, T. Overcoming limitations of mixture density networks: A sampling and fitting framework for multimodal future prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7144–7153. [Google Scholar]
- John, P.G.; Vijaykeerthy, D.; Saha, D. Verifying individual fairness in machine learning models. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, PMLR, Virtual, 3–6 August 2020; pp. 749–758. [Google Scholar]
- Han, X.; Baldwin, T.; Cohn, T. Towards equal opportunity fairness through adversarial learning. arXiv 2022, arXiv:2203.06317. [Google Scholar]
- Shen, A.; Han, X.; Cohn, T.; Baldwin, T.; Frermann, L. Optimising equal opportunity fairness in model training. arXiv 2022, arXiv:2205.02393. [Google Scholar]
- Verma, S.; Rubin, J. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, Gothenburg, Sweden, 29 May 2018; pp. 1–7. [Google Scholar]
- Balashankar, A.; Wang, X.; Packer, B.; Thain, N.; Chi, E.; Beutel, A. Can we improve model robustness through secondary attribute counterfactuals? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Virtual, 7–11 November 2021; pp. 4701–4712. [Google Scholar]
- Dong, Z.; Zhu, H.; Cheng, P.; Feng, X.; Cai, G.; He, X.; Xu, J.; Wen, J. Counterfactual learning for recommender system. In Proceedings of the 14th ACM Conference on Recommender Systems, Virtual Event, Brazil, 22–26 September 2020; pp. 568–569. [Google Scholar]
- Veitch, V.; D’Amour, A.; Yadlowsky, S.; Eisenstein, J. Counterfactual invariance to spurious correlations in text classification. Adv. Neural Inf. Process. Syst. 2021, 34, 16196–16208. [Google Scholar]
- Chang, Y.C.; Lu, C.J. Oblivious polynomial evaluation and oblivious neural learning. In Proceedings of the Advances in Cryptology—ASIACRYPT 2001: 7th International Conference on the Theory and Application of Cryptology and Information Security, Gold Coast, Australia, 9–13 December 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 369–384. [Google Scholar]
- Meister, M.; Sheikholeslami, S.; Andersson, R.; Ormenisan, A.A.; Dowling, J. Towards distribution transparency for supervised ML with oblivious training functions. In Proceedings of the Workshop MLOps Syst, Austin, TX, USA, 2–4 March 2020; pp. 1–3. [Google Scholar]
- Liu, J.; Juuti, M.; Lu, Y.; Asokan, N. Oblivious neural network predictions via minionn transformations. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 619–631. [Google Scholar]
- Goel, N.; Yaghini, M.; Faltings, B. Non-discriminatory machine learning through convex fairness criteria. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; p. 116. [Google Scholar]
- Makhlouf, K.; Zhioua, S.; Palamidessi, C. Survey on causal-based machine learning fairness notions. arXiv 2020, arXiv:2010.09553. [Google Scholar]
- Gölz, P.; Kahng, A.; Procaccia, A.D. Paradoxes in fair machine learning. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
- Ferryman, K.; Pitcan, M. Fairness in Precision Medicine; Data and Society Research Institute: New York, NY, USA, 2018. [Google Scholar]
- Dempsey, W.; Foster, I.; Fraser, S.; Kesselman, C. Sharing begins at home: How continuous and ubiquitous FAIRness can enhance research productivity and data reuse. Harv. Data Sci. Rev. 2022, 4, 10–11. [Google Scholar] [CrossRef]
- Durand, C.M.; Segev, D.; Sugarman, J. Realizing HOPE: The ethics of organ transplantation from HIV-positive donors. Ann. Intern. Med. 2016, 165, 138–142. [Google Scholar] [CrossRef]
- Rubinstein, Y.R.; McInnes, P. NIH/NCATS/GRDR® Common Data Elements: A leading force for standardized data collection. Contemp. Clin. Trials 2015, 42, 78–80. [Google Scholar] [CrossRef]
- Frick, K.D. Micro-costing quantity data collection methods. Med. Care 2009, 47, S76. [Google Scholar] [CrossRef] [PubMed]
- Rothstein, M.A. Informed consent for secondary research under the new NIH data sharing policy. J. Law Med. Ethics 2021, 49, 489–494. [Google Scholar] [CrossRef] [PubMed]
- Greely, H.T.; Grady, C.; Ramos, K.M.; Chiong, W.; Eberwine, J.; Farahany, N.A.; Johnson, L.S.M.; Hyman, B.T.; Hyman, S.E.; Rommelfanger, K.S.; et al. Neuroethics guiding principles for the NIH BRAIN initiative. J. Neurosci. 2018, 38, 10586. [Google Scholar] [CrossRef] [PubMed]
- Nijhawan, L.P.; Janodia, M.D.; Muddukrishna, B.; Bhat, K.M.; Bairy, K.L.; Udupa, N.; Musmade, P.B. Informed consent: Issues and challenges. J. Adv. Pharm. Technol. Res. 2013, 4, 134. [Google Scholar]
- Elliot, M.; Mackey, E.; O’Hara, K.; Tudor, C. The Anonymisation Decision-Making Framework; UKAN: Manchester, UK, 2016; p. 171. [Google Scholar]
- Rosner, G. De-Identification as Public Policy. J. Data Prot. Priv. 2019, 3, 1–18. [Google Scholar]
- Moretón, A.; Jaramillo, A. Anonymisation and re-identification risk for voice data. Eur. Data Prot. L. Rev. 2021, 7, 274. [Google Scholar] [CrossRef]
- Rumbold, J.M.; Pierscionek, B.K. A critique of the regulation of data science in healthcare research in the European Union. BMC Med. Ethics 2017, 18, 27. [Google Scholar] [CrossRef] [PubMed]
- Stalla-Bourdillon, S.; Knight, A. Anonymous data v. personal data-false debate: An EU perspective on anonymization, pseudonymization and personal data. Wis. Int’l LJ 2016, 34, 284. [Google Scholar]
- Ilavsky, J. Nika: Software for two-dimensional data reduction. J. Appl. Crystallogr. 2012, 45, 324–328. [Google Scholar] [CrossRef]
- Fietzke, J.; Liebetrau, V.; Günther, D.; Gürs, K.; Hametner, K.; Zumholz, K.; Hansteen, T.; Eisenhauer, A. An alternative data acquisition and evaluation strategy for improved isotope ratio precision using LA-MC-ICP-MS applied to stable and radiogenic strontium isotopes in carbonates. J. Anal. At. Spectrom. 2008, 23, 955–961. [Google Scholar] [CrossRef]
- Gwynne, S. Conventions in the Collection and Use of Human Performance Data; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2010; pp. 10–928.
- Buckleton, J.S.; Bright, J.A.; Cheng, K.; Budowle, B.; Coble, M.D. NIST interlaboratory studies involving DNA mixtures (MIX13): A modern analysis. Forensic Sci. Int. Genet. 2018, 37, 172–179. [Google Scholar] [CrossRef] [PubMed]
- Sydes, M.R.; Johnson, A.L.; Meredith, S.K.; Rauchenberger, M.; South, A.; Parmar, M.K. Sharing data from clinical trials: The rationale for a controlled access approach. Trials 2015, 16, 104. [Google Scholar] [CrossRef] [PubMed]
- Abdul Razack, H.I.; Aranjani, J.M.; Mathew, S.T. Clinical trial transparency regulations: Implications to various scholarly publishing stakeholders. Sci. Public Policy 2022, 49, 951–961. [Google Scholar] [CrossRef]
- Alemayehu, D.; Anziano, R.J.; Levenstein, M. Perspectives on clinical trial data transparency and disclosure. Contemp. Clin. Trials 2014, 39, 28–33. [Google Scholar] [CrossRef]
- Joint Task Force Transformation Initiative. Security and privacy controls for federal information systems and organizations. NIST Spec. Publ. 2013, 800, 8–13. [Google Scholar]
- Joint Task Force Transformation Initiative. Assessing security and privacy controls in federal information systems and organizations: Building effective assessment plans. NIST Spec. Publ. 2014, 800, 53A. [Google Scholar]
- Dempsey, K.; Witte, G.; Rike, D. Summary of NIST SP 800-53, Revision 4: Security and Privacy Controls for Federal Information Systems and Organizations; Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2014.
- Passi, S.; Jackson, S.J. Trust in data science: Collaboration, translation, and accountability in corporate data science projects. Proc. ACM Hum. Comput. Interact. 2018, 2, 1–28. [Google Scholar] [CrossRef]
- Hutt, E.; Polikoff, M.S. Toward a framework for public accountability in education reform. Educ. Res. 2020, 49, 503–511. [Google Scholar] [CrossRef]
- Carle, S.D. A social movement history of Title VII Disparate Impact analysis. Fla. L. Rev. 2011, 63, 251. [Google Scholar] [CrossRef]
- Griffith, D.; McKinney, B. Using Disparate Impact Analysis to Develop Anti-Racist Policies: An Application to Coronavirus Liability Waivers. J. High. Educ. Manag. 2021, 36, 104–116. [Google Scholar]
- Liu, S.; Ge, Y.; Xu, S.; Zhang, Y.; Marian, A. Fairness-aware federated matrix factorization. In Proceedings of the 16th ACM Conference on Recommender Systems, Seattle, WA, USA, 18–22 September 2022; pp. 168–178. [Google Scholar]
- Gao, R.; Ge, Y.; Shah, C. FAIR: Fairness-aware information retrieval evaluation. J. Assoc. Inf. Sci. Technol. 2022, 73, 1461–1473. [Google Scholar] [CrossRef]
- Zhang, W.; Ntoutsi, E. Faht: An adaptive fairness-aware decision tree classifier. arXiv 2019, arXiv:1907.07237. [Google Scholar]
- Serna, I.; DeAlcala, D.; Morales, A.; Fierrez, J.; Ortega-Garcia, J. IFBiD: Inference-free bias detection. arXiv 2021, arXiv:2109.04374. [Google Scholar]
- Li, B.; Peng, H.; Sainju, R.; Yang, J.; Yang, L.; Liang, Y.; Jiang, W.; Wang, B.; Liu, H.; Ding, C. Detecting gender bias in transformer-based models: A case study on BERT. arXiv 2021, arXiv:2110.15733. [Google Scholar]
- Constantin, R.; Dück, M.; Alexandrov, A.; Matošević, P.; Keidar, D.; El-Assady, M. How Do Algorithmic Fairness Metrics Align with Human Judgement? A Mixed-Initiative System for Contextualized Fairness Assessment. In Proceedings of the 2022 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX), Oklahoma City, OK, USA, 16 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
- Goel, Z. Algorithmic Fairness Final Report.
- Bird, S.; Dudík, M.; Edgar, R.; Horn, B.; Lutz, R.; Milan, V.; Sameki, M.; Wallach, H.; Walker, K. Fairlearn: A toolkit for assessing and improving fairness in AI. Microsoft Tech. Rep. 2020. [Google Scholar]
- Jethani, N.; Sudarshan, M.; Aphinyanaphongs, Y.; Ranganath, R. Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Virtual, 13–15 April 2021; pp. 1459–1467. [Google Scholar]
- Stiglic, G.; Kocbek, P.; Fijacko, N.; Zitnik, M.; Verbert, K.; Cilar, L. Interpretability of machine learning-based prediction models in healthcare. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1379. [Google Scholar] [CrossRef]
- Moraffah, R.; Karami, M.; Guo, R.; Raglin, A.; Liu, H. Causal interpretability for machine learning-problems, methods and evaluation. ACM SIGKDD Explor. Newsl. 2020, 22, 18–33. [Google Scholar] [CrossRef]
- Jacovi, A.; Swayamdipta, S.; Ravfogel, S.; Elazar, Y.; Choi, Y.; Goldberg, Y. Contrastive explanations for model interpretability. arXiv 2021, arXiv:2103.01378. [Google Scholar]
- Jeffries, A.C.; Wallace, L.; Coutts, A.J.; McLaren, S.J.; McCall, A.; Impellizzeri, F.M. Athlete-reported outcome measures for monitoring training responses: A systematic review of risk of bias and measurement property quality according to the COSMIN guidelines. Int. J. Sport. Physiol. Perform. 2020, 15, 1203–1215. [Google Scholar] [CrossRef] [PubMed]
- Oliveira-Rodrigues, C.; Correia, A.M.; Valente, R.; Gil, Á.; Gandra, M.; Liberal, M.; Rosso, M.; Pierce, G.; Sousa-Pinto, I. Assessing data bias in visual surveys from a cetacean monitoring programme. Sci. Data 2022, 9, 682. [Google Scholar] [CrossRef]
- Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI), and higher education: A systematic review. Comput. Educ. Artif. Intell. 2023, 5, 100152. [Google Scholar] [CrossRef]
- Marcinkowski, F.; Kieslich, K.; Starke, C.; Lünich, M. Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 122–130. [Google Scholar]
- Kizilcec, R.F.; Lee, H. Algorithmic fairness in education. In The Ethics of Artificial Intelligence in Education; Routledge: Boca Raton, FL, USA, 2022; pp. 174–202. [Google Scholar]
- Mashhadi, A.; Zolyomi, A.; Quedado, J. A Case Study of Integrating Fairness Visualization Tools in Machine Learning Education. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7. [Google Scholar]
- Fenu, G.; Galici, R.; Marras, M. Experts’ view on challenges and needs for fairness in artificial intelligence for education. In International Conference on Artificial Intelligence in Education; Springer: Cham, Switzerland, 2022; pp. 243–255. [Google Scholar]
- Chen, R.J.; Chen, T.Y.; Lipkova, J.; Wang, J.J.; Williamson, D.F.; Lu, M.Y.; Sahai, S.; Mahmood, F. Algorithm fairness in ai for medicine and healthcare. arXiv 2021, arXiv:2110.00603. [Google Scholar]
- Gichoya, J.W.; McCoy, L.G.; Celi, L.A.; Ghassemi, M. Equity in essence: A call for operationalising fairness in machine learning for healthcare. BMJ Health Care Inform. 2021, 28, e100289. [Google Scholar] [CrossRef]
- Johnson, K.B.; Wei, W.Q.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision medicine, AI, and the future of personalized health care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef]
- Chiao, V. Fairness, accountability and transparency: Notes on algorithmic decision-making in criminal justice. Int. J. Law Context 2019, 15, 126–139. [Google Scholar] [CrossRef]
- Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine bias. In Ethics of Data and Analytics; Auerbach Publications: Boca Raton, FL, USA, 2022; pp. 254–264. [Google Scholar]
- Berk, R.; Heidari, H.; Jabbari, S.; Kearns, M.; Roth, A. Fairness in criminal justice risk assessments: The state of the art. Sociol. Methods Res. 2021, 50, 3–44. [Google Scholar] [CrossRef]
- Mujtaba, D.F.; Mahapatra, N.R. Ethical considerations in AI-based recruitment. In Proceedings of the 2019 IEEE International Symposium on Technology and Society (ISTAS), Medford, MA, USA, 15–16 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7. [Google Scholar]
- Hunkenschroer, A.L.; Luetge, C. Ethics of AI-enabled recruiting and selection: A review and research agenda. J. Bus. Ethics 2022, 178, 977–1007. [Google Scholar] [CrossRef]
- Nugent, S.E.; Scott-Parker, S. Recruitment AI has a Disability Problem: Anticipating and mitigating unfair automated hiring decisions. In Towards Trustworthy Artificial Intelligent Systems; Springer: Cham, Switzerland, 2022; pp. 85–96. [Google Scholar]
- Hurlin, C.; Pérignon, C.; Saurin, S. The fairness of credit scoring models. arXiv 2022, arXiv:2205.10200. [Google Scholar] [CrossRef]
- Gemalmaz, M.A.; Yin, M. Understanding Decision Subjects’ Fairness Perceptions and Retention in Repeated Interactions with AI-Based Decision Systems. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK, 19–21 May 2021; pp. 295–306. [Google Scholar]
- Genovesi, S.; Mönig, J.M.; Schmitz, A.; Poretschkin, M.; Akila, M.; Kahdan, M.; Kleiner, R.; Krieger, L.; Zimmermann, A. Standardizing fairness-evaluation procedures: Interdisciplinary insights on machine learning algorithms in creditworthiness assessments for small personal loans. AI Ethics 2023, 1–17. [Google Scholar] [CrossRef]
- Hiller, J.S. Fairness in the eyes of the beholder: AI, fairness, and alternative credit scoring. W. Va. L. Rev. 2020, 123, 907. [Google Scholar]
- Kumar, I.E.; Hines, K.E.; Dickerson, J.P. Equalizing credit opportunity in algorithms: Aligning algorithmic fairness research with us fair lending regulation. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK, 19–21 May 2021; pp. 357–368. [Google Scholar]
- Moldovan, D. Algorithmic decision making methods for fair credit scoring. IEEE Access 2023, 11, 59729–59743. [Google Scholar] [CrossRef]
- Rodgers, W.; Nguyen, T. Advertising benefits from ethical Artificial Intelligence algorithmic purchase decision pathways. J. Bus. Ethics 2022, 178, 1043–1061. [Google Scholar] [CrossRef]
- Yuan, D. Artificial Intelligence, Fairness and Productivity. Ph.D. Thesis, University of Pittsburgh, Pittsburgh, PA, USA, 2023. [Google Scholar]
- Bateni, A.; Chan, M.C.; Eitel-Porter, R. AI fairness: From principles to practice. arXiv 2022, arXiv:2207.09833. [Google Scholar]
- Rossi, F. Building trust in Artificial Intelligence. J. Int. Aff. 2018, 72, 127–134. [Google Scholar]
- Bang, J.; Kim, S.; Nam, J.W.; Yang, D.G. Ethical chatbot design for reducing negative effects of biased data and unethical conversations. In Proceedings of the 2021 International Conference on Platform Technology and Service (PlatCon), Jeju, Republic of Korea, 23–25 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar]
- Følstad, A.; Araujo, T.; Law, E.L.C.; Brandtzaeg, P.B.; Papadopoulos, S.; Reis, L.; Baez, M.; Laban, G.; McAllister, P.; Ischen, C.; et al. Future directions for chatbot research: An interdisciplinary research agenda. Computing 2021, 103, 2915–2942. [Google Scholar] [CrossRef]
- Lewicki, K.; Lee, M.S.A.; Cobbe, J.; Singh, J. Out of Context: Investigating the Bias and Fairness Concerns of “Artificial Intelligence as a Service”. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–17. [Google Scholar]
- Chen, Q.; Lu, Y.; Gong, Y.; Xiong, J. Can AI chatbots help retain customers? Impact of AI service quality on customer loyalty. Internet Res. 2023. [Google Scholar] [CrossRef]
- Chen, Y.; Jensen, S.; Albert, L.J.; Gupta, S.; Lee, T. Artificial Intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Inf. Syst. Front. 2023, 25, 161–182. [Google Scholar] [CrossRef]
- Simbeck, K. FAccT-Check on AI regulation: Systematic Evaluation of AI Regulation on the Example of the Legislation on the Use of AI in the Public Sector in the German Federal State of Schleswig-Holstein. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 89–96. [Google Scholar]
- Srivastava, B.; Rossi, F.; Usmani, S.; Bernagozzi, M. Personalized chatbot trustworthiness ratings. IEEE Trans. Technol. Soc. 2020, 1, 184–192. [Google Scholar] [CrossRef]
- Hulsen, T. Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare. AI 2023, 4, 652–666. [Google Scholar] [CrossRef]
- Chen, Z. Collaboration among recruiters and Artificial Intelligence: Removing human prejudices in employment. Cogn. Technol. Work. 2023, 25, 135–149. [Google Scholar] [CrossRef] [PubMed]
- Rieskamp, J.; Hofeditz, L.; Mirbabaie, M.; Stieglitz, S. Approaches to improve fairness when deploying ai-based algorithms in hiring—Using a systematic literature review to guide future research. In Proceedings of the 56th Hawaii International Conference on System Sciences, HICSS 2023, Maui, HI, USA, 3–6 January 2023. [Google Scholar]
- Hunkenschroer, A.L.; Kriebitz, A. Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI Ethics 2023, 3, 199–213. [Google Scholar] [CrossRef] [PubMed]
- Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics; Auerbach Publications: Boca Raton, FL, USA, 2022; pp. 296–299. [Google Scholar]
- Hunkenschroer, A.L.; Lütge, C. How to improve fairness perceptions of AI in hiring: The crucial role of positioning and sensitization. AI Ethics J. 2021, 2, 1–19. [Google Scholar] [CrossRef]
Fairness Definition | Description | Objective | References |
---|---|---|---|
Individual Fairness | Similarity at the individual level | Treat similar individuals similarly | [40,41,48,49] |
Group Fairness | Equitable outcomes for demographic groups | Avoid disparities among groups | [42,50,51] |
Fairness through Unawareness | Ignoring sensitive attributes | Treat individuals as if attributes are unknown | [44,45,52] |
Equality of Opportunity | Equal chances for similar qualifications | Ensure equal chances for outcomes | [11,53,54] |
Disparate Impact | Disproportionate negative effects | Evaluate disparities in outcomes | [6,46,55] |
Disparate Treatment | Explicit unequal treatment | Detect explicit biases in treatment | [43,56,57] |
Subgroup Fairness | Fairness at the intersection of multiple attributes | Consider fairness for multiple groups | [47,58] |
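As an illustrative, non-prescriptive example, the following Python sketch computes three widely used group metrics that correspond to the notions in the table above: the demographic parity difference, the disparate impact ratio, and the equal opportunity difference. The function and variable names are hypothetical and the data are synthetic.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Compute common group-fairness metrics for a binary classifier.

    y_true, y_pred: arrays of 0/1 labels and predictions.
    group: array of group identifiers (e.g., 'A', 'B') per instance.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        mask = group == g
        rates[g] = {
            "selection_rate": y_pred[mask].mean(),              # P(yhat=1 | group)
            "tpr": y_pred[mask & (y_true == 1)].mean(),         # true positive rate
        }

    sel = [rates[g]["selection_rate"] for g in rates]
    tprs = [rates[g]["tpr"] for g in rates]
    return {
        "per_group": rates,
        # Demographic parity difference: gap in selection rates across groups.
        "demographic_parity_diff": max(sel) - min(sel),
        # Disparate impact ratio: the "80% rule" compares min/max selection rates.
        "disparate_impact_ratio": (min(sel) / max(sel)) if max(sel) > 0 else float("nan"),
        # Equal opportunity difference: gap in TPRs for the positive class.
        "equal_opportunity_diff": max(tprs) - min(tprs),
    }

# Toy usage
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(group_fairness_report(y_true, y_pred, group))
```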
Data Bias | Definition | Main Cause | Impact on AI | References |
---|---|---|---|---|
Selection Bias | Certain groups are over/under-represented | Biased data collection process | AI models may not be representative, leading to biased decisions | [68,69,70,71] |
Sampling Bias | Data are not a random sample | Incomplete or biased sampling | Poor generalization to new data, biased predictions | [25,72,73] |
Labeling Bias | Errors in data labeling | Annotators’ biases or societal stereotypes | AI models learn and perpetuate biased labels | [26,74,75,76] |
Temporal Bias | Historical societal biases | Outdated data reflecting past biases | AI models may reinforce outdated biases | [78,79,80,81] |
Aggregation Bias | Data combined from multiple sources | Differing biases in individual sources | AI models may produce skewed outcomes due to biased data | [82,83,84,85] |
Historical Bias | Training data reflect past societal biases | Biases inherited from historical societal discrimination | Model may perpetuate historical biases and reinforce inequalities | [52,87,88,89] |
Measurement Bias | Errors or inaccuracies in data collection | Data collection process introduces measurement errors | Model learns from flawed data, leading to inaccurate predictions | [4,90,91,92] |
Confirmation Bias | Focus on specific patterns or attributes | Data collection or algorithmic bias towards specific features | Model may overlook relevant information and reinforce existing biases | [27,99,100,101,102] |
Proxy Bias | Indirect reliance on sensitive attributes | Use of correlated proxy variables instead of sensitive attributes | Model indirectly relies on sensitive information, leading to biased outcomes | [42,103,104,105] |
Cultural Bias | Data reflect cultural norms and values | Cultural influences in data collection or annotation | Model predictions may be biased for individuals from different cultural backgrounds | [72,106,107] |
Under-representation Bias | Certain groups are significantly underrepresented | Low representation of certain groups in the training data | Model performance is poorer for underrepresented groups | [93,94,95] |
Homophily Bias | Predictions based on similarity between instances | Tendency of models to make predictions based on similarity | Model may reinforce existing patterns and exacerbate biases | [96,97,98] |
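Several of the data biases listed above, in particular selection and under-representation bias, can be surfaced with simple descriptive checks before any model is trained. The sketch below is a minimal, hypothetical example that compares the group shares observed in a collected dataset against reference population shares and flags under-representation; a real audit would use domain-appropriate reference statistics and thresholds.

```python
import pandas as pd

def representation_gap(sample, reference_shares, column):
    """Compare group shares in a dataset against reference population shares.

    sample: DataFrame holding the collected data.
    reference_shares: dict mapping group -> expected population share.
    column: name of the group / sensitive-attribute column.
    """
    observed = sample[column].value_counts(normalize=True)
    rows = []
    for grp, expected in reference_shares.items():
        obs = float(observed.get(grp, 0.0))
        rows.append({
            "group": grp,
            "observed_share": obs,
            "expected_share": expected,
            "ratio": obs / expected if expected else float("nan"),
        })
    report = pd.DataFrame(rows)
    # Flag under-representation when a group appears at < 80% of its expected share.
    report["under_represented"] = report["ratio"] < 0.8
    return report

# Toy example with hypothetical reference shares
data = pd.DataFrame({"gender": ["F"] * 20 + ["M"] * 80})
print(representation_gap(data, {"F": 0.5, "M": 0.5}, "gender"))
```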
Algorithmic Bias | Definition | Main Cause | Impact on AI | References |
---|---|---|---|---|
Prejudice Bias | AI models trained on biased data | Biased training data and societal prejudices | Reinforces biases, leads to discriminatory outcomes | [76,110,111,112] |
Sampling Bias | Data do not represent the target population | Incomplete or skewed sampling methods | Poor generalization, biased predictions | [85,113,114,115] |
Feedback Loop Bias | Self-reinforcing bias cycle in AI predictions | Biased predictions influencing biased feedback | Amplifies biases, perpetuates discrimination | [116,117,118,119,120] |
Lack of Diversity Bias | Training on limited or homogeneous datasets | Insufficient representation of diverse groups | Performs poorly for underrepresented groups | [40,121,122,123,124,125] |
Automation Bias | Human over-reliance on AI decisions | Blind trust in AI without critical evaluation | Perpetuates biases without human intervention | [126,127,128,129,130,131] |
Bias Type | Definition | Main Cause | Impact on AI | Reference |
---|---|---|---|---|
User Feedback Bias | User Feedback Bias occurs when biased user feedback or responses influence the behavior of AI systems. | Biased user feedback or responses can be influenced by users’ subjective preferences, opinions, or prejudices. The AI system learns from this feedback and incorporates it into its decision-making process. | AI models may generate biased predictions and decisions based on the biased feedback, potentially leading to unequal treatment of certain user groups. User satisfaction and trust in the AI system can be affected by biased outputs. | [116,117,118] |
Biases from Underrepresented or Biased User Data | This bias arises when the data collected from users lack diversity or contain inherent biases, which can lead to biased model predictions and decisions that disproportionately affect certain user groups. | Lack of diversity or inherent biases in user data can result from biased data collection practices, data preprocessing, or historical biases reflected in the data. | AI systems trained on biased user data may produce unfair outcomes, disproportionately impacting specific user groups. Biases in data can lead to the perpetuation and amplification of existing inequalities. | [133,134,135] |
Automation Bias in Human–AI Interaction | Automation bias refers to biased decision making by users when utilizing AI systems, potentially influencing the AI system’s outcomes and recommendations. | Automation bias can occur when users over-rely on AI recommendations without critically evaluating or verifying the results. Human trust in AI systems and the perceived authority of the AI can contribute to automation bias. | Automation bias can lead to the uncritical acceptance of AI-generated outputs, even when they are biased or inaccurate. It may result in erroneous or unfair decisions based on AI recommendations. Awareness of automation bias is crucial to avoid blindly accepting AI decisions without human oversight. | [126,128,129] |
Fair Training Method | Definition | Implementation | Key Features | References |
---|---|---|---|---|
Pre-processing Fairness | Modifying training data before feeding into the model | Re-sampling, re-weighting, data augmentation | Addresses bias at the data level | [136,139,140] |
In-processing Fairness | Modifying learning algorithms or objective functions | Adversarial training, adversarial debiasing | Simultaneously optimizes for accuracy and fairness | [137,141,142] |
Post-processing Fairness | Adjusting the model’s predictions after training | Re-ranking, calibration | Does not require access to the model’s internals | [46,143,144,145] |
Regularization-based Fairness | Adding fairness constraints to the optimization process | Penalty terms in the loss function | Can be combined with various learning algorithms | [43,146,147] |
Counterfactual Fairness | Measuring fairness based on changes in sensitive attributes | Counterfactual reasoning | Focuses on individual-level fairness | [45,148,149] |
Pre-Processing Fairness Method | Features | Pros | Cons | References |
---|---|---|---|---|
Re-sampling Techniques | Balance representation of different groups | Simple and easy to implement | May lead to loss of information and increased computation | [150,151,152,153] |
Re-weighting Techniques | Assign higher weights to underrepresented groups | Does not alter the original dataset | Requires careful selection of appropriate weights | [154,155,156,157,158,159] |
Data Augmentation | Generate synthetic data to increase representation | Increases the diversity of the training dataset | Synthetic data may not fully represent real-world samples | [160,161,162,163] |
Fairness-aware Clustering | Cluster data points while maintaining fairness | Incorporates fairness constraints during clustering | May not guarantee perfect fairness in all clusters | [164,165,166,167] |
Synthetic Minority Over-sampling Technique (SMOTE) | Generate synthetic samples for the minority class | Addresses class imbalance | May result in overfitting or noisy samples | [168,169,170,171] |
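As a concrete illustration of the re-weighting row above, the following sketch computes instance weights in the spirit of the classic reweighing scheme, where each (group, label) cell receives weight P(group)P(label)/P(group, label) so that the sensitive attribute and the label become statistically independent in the weighted training set. The column names and data are hypothetical.

```python
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Instance weights w(g, y) = P(g) * P(y) / P(g, y), which up-weight
    under-represented (group, label) combinations."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Toy example; the resulting weights can be passed to most scikit-learn
# estimators via the sample_weight argument of fit().
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   1,   0,   1],
})
df["weight"] = reweighing_weights(df, "gender", "hired")
print(df)
```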
In-Processing Fairness Method | Features | Pros | Cons | References |
---|---|---|---|---|
Adversarial Training | Adversarial component to minimize bias impact | Enhances model’s fairness while maintaining accuracy | Sensitive to adversarial attacks, requires additional computational resources | [172,173,174,175,176] |
Adversarial Debiasing | Adversarial network to remove sensitive attributes | Simultaneously reduces bias and improves model’s fairness | Adversarial training challenges, potential loss of predictive performance | [137,177,178,179] |
Equalized Odds Post-processing | Adjust model predictions to ensure equalized odds | Guarantees fairness in binary classification tasks | May lead to suboptimal trade-offs between fairness and model performance | [11,144,177,180] |
Causal Learning for Fairness | Focus on causal relationships to adjust for bias | Addresses confounding factors to achieve fairer predictions | Requires causal assumptions, may be limited by data availability | [45,181,182,183,184] |
Meta Fairness | Learns fairness-aware optimization algorithm | Adapts fairness-accuracy trade-off to changing requirements | Complexity in learning the optimization algorithm, potential increased complexity | [163,185,186] |
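The adversarial debiasing idea in the table above can be summarized in a few dozen lines. The PyTorch sketch below is a deliberately small, assumption-laden illustration rather than any cited implementation: a predictor is trained on the task while an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is rewarded for fooling the adversary. Practical systems add mini-batching, projection terms, and careful tuning of the trade-off weight.

```python
import torch
import torch.nn as nn

# Hypothetical synthetic data: X features, y task labels, s sensitive attribute.
torch.manual_seed(0)
X = torch.randn(256, 8)
s = (torch.rand(256) < 0.5).float()
y = ((X[:, 0] + 0.5 * s + 0.1 * torch.randn(256)) > 0).float()

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the adversarial (fairness) term

for epoch in range(200):
    # 1) Update the adversary: predict the sensitive attribute from the
    #    predictor's output. A strong adversary means the output leaks s.
    with torch.no_grad():
        task_logit = predictor(X)
    adv_logit = adversary(task_logit)
    adv_loss = bce(adv_logit.squeeze(1), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Update the predictor: fit the task while fooling the adversary,
    #    i.e., subtract the adversary's loss from the objective.
    task_logit = predictor(X)
    adv_logit = adversary(task_logit)
    pred_loss = bce(task_logit.squeeze(1), y) - lam * bce(adv_logit.squeeze(1), s)
    opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()
```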
Post-Processing Fairness Method | Features | Pros | Cons | References |
---|---|---|---|---|
Equalized Odds Post-processing | Adjust model predictions to ensure equalized odds | Ensures equalized false positive and true positive rates across groups | May lead to suboptimal trade-offs between fairness and model performance | [11,144,177,180] |
Calibration Post-processing | Calibrates model’s predicted probabilities | Improves fairness by aligning confidence scores with true likelihood | Calibration may not entirely remove bias from the model | [187,188,189,190] |
Reject Option Classification (ROC) Post-processing | Introduces a “reject” option in classification decisions | Allows the model to abstain from predictions in high fairness concern cases | May lead to lower accuracy due to abstaining from predictions | [144,191,192,193] |
Preferential Sampling Post-processing | Modifies the training data distribution by resampling instances | Improves fairness by adjusting the representation of different groups | May not address the root causes of bias in the model | [194,195,196] |
Threshold Optimization Post-processing | Adjusts decision thresholds for fairness and accuracy trade-off | Allows fine-tuning of fairness and performance balance | May not fully eliminate all biases in the model | [197,198,199,200] |
Regularization Post-processing | Applies fairness constraints during model training | Encourages fairness during the optimization process | Fairness constraints might impact model performance | [201,202,203,204] |
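Threshold optimization, listed above, is among the simplest post-processing steps: the trained model is left untouched and only per-group decision thresholds are adjusted. The sketch below chooses thresholds so that each group's true positive rate approaches a common target, an equal-opportunity style criterion; the scores, labels, and target rate are hypothetical.

```python
import numpy as np

def equal_opportunity_thresholds(scores, y_true, group, target_tpr=0.8):
    """Pick a per-group decision threshold so each group's true positive
    rate is as close as possible to target_tpr."""
    scores, y_true, group = map(np.asarray, (scores, y_true, group))
    thresholds = {}
    for g in np.unique(group):
        pos_scores = scores[(group == g) & (y_true == 1)]
        candidates = np.unique(pos_scores)
        # TPR at threshold t is the fraction of positive-class scores >= t.
        tprs = np.array([(pos_scores >= t).mean() for t in candidates])
        thresholds[g] = candidates[np.argmin(np.abs(tprs - target_tpr))]
    return thresholds

def apply_thresholds(scores, group, thresholds):
    scores, group = np.asarray(scores), np.asarray(group)
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])

# Toy usage with hypothetical model scores
scores = np.array([0.9, 0.7, 0.4, 0.8, 0.6, 0.3, 0.55, 0.2])
y_true = np.array([1,   1,   0,   1,   1,   0,   1,    0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B",  "B"])
th = equal_opportunity_thresholds(scores, y_true, group, target_tpr=0.75)
print(th, apply_thresholds(scores, group, th))
```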
Regularization-Based Fairness Method | Features | Pros | Cons | References |
---|---|---|---|---|
Adversarial Regularization | Introduces adversarial component | Encourages disentanglement of sensitive attributes | Computationally expensive | [205,206,207,208] |
Demographic Parity Regularization | Enforces similar distributions across groups | Addresses group fairness | May lead to accuracy trade-offs | [201,204,209,210,211] |
Equalized Odds Regularization | Ensures similar false/true positive rates | Emphasizes fairness in both rates | May lead to accuracy trade-offs | [201,212,213] |
Covariate Shift Regularization | Reduces impact of biased/underrepresented subgroups | Addresses bias due to distributional differences | Sensitive to noise in the data | [214,215] |
Mixture Density Network Regularization | Models uncertainty in predictions | Provides probabilistic approach to fairness regularization | Requires larger amount of data to estimate probability distributions | [216,217,218] |
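A minimal example of regularization-based fairness is to add a demographic parity penalty to an ordinary logistic regression loss. The PyTorch sketch below does exactly that on synthetic data; the penalty weight and the data generation are illustrative assumptions, and stronger penalties generally trade some accuracy for a smaller selection rate gap.

```python
import torch

# Hypothetical synthetic data: X features, y labels, s binary sensitive attribute.
torch.manual_seed(0)
X = torch.randn(500, 5)
s = (torch.rand(500) < 0.4).float()
y = ((X[:, 0] + s + 0.3 * torch.randn(500)) > 0.5).float()

w = torch.zeros(5, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.1)
lam = 2.0  # weight of the fairness penalty term

for step in range(300):
    p = torch.sigmoid(X @ w + b)  # predicted probabilities
    task_loss = torch.nn.functional.binary_cross_entropy(p, y)
    # Demographic-parity penalty: squared gap between the average predicted
    # probability of the two groups defined by the sensitive attribute.
    gap = p[s == 1].mean() - p[s == 0].mean()
    loss = task_loss + lam * gap ** 2
    opt.zero_grad(); loss.backward(); opt.step()

final_p = torch.sigmoid(X @ w + b)
print("selection-rate gap after training:",
      float(final_p[s == 1].mean() - final_p[s == 0].mean()))
```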
Counterfactual Fairness Method | Features | Pros | Cons | References |
---|---|---|---|---|
Individual Fairness | Focuses on treating similar individuals similarly based on their features | Considers fairness at the individual level, promoting personalized fairness | Defining similarity metrics and enforcing individual fairness can be challenging | [40,196,219] |
Equal Opportunity Fairness | Minimizes disparate impact on true positive rates across sensitive attribute groups | Targets fairness in favor of historically disadvantaged groups | May neglect other fairness concerns, such as false positive rates or overall accuracy | [220,221,222] |
Equalized Odds Fairness | Aims for similar false positive and true positive rates across sensitive attribute groups | Addresses fairness in both false positives and false negatives | May lead to accuracy trade-offs between groups | [229,230,231] |
Reweighted Counterfactual Fairness | Assigns different weights to instances based on similarity to counterfactual scenarios | Provides better fairness control by adjusting instance weights | Determining appropriate weights and balancing fairness and accuracy can be challenging | [223,224,225] |
Oblivious Training | Trains the model to be ignorant of certain sensitive attributes during learning | Offers a simple and effective way to mitigate the impact of sensitive attributes | May result in lower model performance when sensitive attributes are relevant to the task | [226,227,228] |
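Counterfactual fairness strictly requires a causal model of how the sensitive attribute influences the other features. The sketch below implements only a naive flip test, flipping the sensitive attribute while holding every other feature fixed, which is a crude approximation sometimes used as a first screen because it ignores causal descendants of the attribute; the data and names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical tabular data: the last column is a binary sensitive attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
s = rng.integers(0, 2, size=400)
X_full = np.column_stack([X, s])
y = (X[:, 0] + 0.8 * s + rng.normal(scale=0.3, size=400) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_full, y)

def naive_flip_test(model, X_full, sensitive_idx):
    """Flip the sensitive attribute, keep all other features fixed, and
    measure how often the prediction changes. This only approximates
    counterfactual fairness, since downstream effects of the attribute
    on other features are not modelled."""
    X_flip = X_full.copy()
    X_flip[:, sensitive_idx] = 1 - X_flip[:, sensitive_idx]
    changed = model.predict(X_full) != model.predict(X_flip)
    return changed.mean()

print("fraction of decisions that flip:",
      naive_flip_test(model, X_full, sensitive_idx=4))
```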
Method Category | Features | Pros | Cons | References |
---|---|---|---|---|
Informed Consent | Obtain explicit consent from participants | Respects individual autonomy | May lead to selection bias | [232,233,234] |
Informed Consent | Clear explanation of data collection purpose | Builds trust with participants | Consent may not always be fully informed | [235,236] |
Informed Consent | Informed of potential risks | | Difficulties with complex research studies | [237,238,239] |
Privacy and Anonymity | Data anonymization, aggregation, de-identification | Protects participant privacy | Reduced utility of anonymized data | [240,241] |
Privacy and Anonymity | Prevents re-identification of individuals | Minimizes risk of data breaches | Challenges in preserving data utility | [242,243,244] |
Data Minimization | Collect only necessary data | Reduces data collection and storage costs | Limited data for certain analyses | [28,245,246] |
Data Minimization | Avoid gathering excessive/inappropriate data | Mitigates privacy risks | Potential loss of insights | [247,248] |
Transparency | Clear communication of data collection process | Builds trust with data subjects | May lead to privacy concerns | [249,250,251] |
Transparency | Information on methods and data use | Increases data sharing and collaboration | Difficulties in balancing transparency | [249,250,251] |
Data Security | Encryption, access controls, security audits | Protects data from unauthorized access | Implementation costs | [252,253,254] |
Data Security | Safeguards data from breaches | Prevents data manipulation and tampering | Potential usability impact | [252,253,254] |
Accuracy and Accountability | Processes for data accuracy and accountability | Ensures reliability of data | Requires resource allocation for auditing | [24,255,256] |
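For the privacy and anonymity rows above, one concrete check that is easy to automate is the k-anonymity level of a data release: the size of the smallest group of records sharing the same quasi-identifier combination. The pandas sketch below computes it and shows how a simple generalization step (bucketing ages) can raise k; the column names and example records are illustrative assumptions.

```python
import pandas as pd

def k_anonymity(df, quasi_identifiers):
    """k-anonymity level: size of the smallest equivalence class over the
    quasi-identifiers. A small k means individuals are easier to re-identify."""
    return int(df.groupby(quasi_identifiers).size().min())

def generalize_age(df, width=10):
    """Simple generalization: bucket exact ages into decade ranges."""
    out = df.copy()
    out["age"] = (out["age"] // width * width).astype(str) + "s"
    return out

# Toy example with hypothetical quasi-identifiers
records = pd.DataFrame({
    "age": [23, 24, 25, 36, 37, 38, 36, 24],
    "zip": ["10001", "10001", "10001", "10002", "10002", "10002", "10002", "10001"],
    "diagnosis": ["flu", "flu", "cold", "flu", "cold", "flu", "flu", "cold"],
})
print("k before generalization:", k_anonymity(records, ["age", "zip"]))
print("k after generalization:",
      k_anonymity(generalize_age(records), ["age", "zip"]))
```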
Method | Features | Pros | Cons | References |
---|---|---|---|---|
Disparate Impact Analysis | Measures disparate impact ratios | Easy to implement and interpret | Only captures one aspect of fairness (impact ratios) | [6,257,258] |
Fairness-aware Performance Metrics | Simultaneously evaluates accuracy and fairness | Provides a holistic view of model performance and fairness | Choice of fairness metric may not fully capture desired notions of fairness | [259,260,261] |
Bias Detection Techniques | Identifies biases in data or model predictions | Alerts to potential fairness issues early | May require domain expertise for interpreting and addressing identified biases | [71,262,263] |
Algorithmic Fairness Dashboards | Real-time visualizations and metrics for monitoring | Enables continuous fairness monitoring | Complexity in designing comprehensive dashboards | [264,265,266] |
Model Explanation and Interpretability | Provides insights into decision-making | Facilitates understanding of model behavior and potential biases | May not fully capture complex interactions in the model, leading to limited interpretability | [267,268,269,270] |
Continual Bias Monitoring | Ongoing and regular assessment | Detects and addresses emerging fairness issues over time | May require significant resources for continuous monitoring | [47,271,272] |
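Continual bias monitoring, the last row above, can be prototyped as a small auditing job over a decision log. The sketch below computes the disparate impact ratio per time window and raises an alert when it falls below the four-fifths threshold; the log schema, window size, and threshold are assumptions made for illustration.

```python
import pandas as pd

def monitor_disparate_impact(log, group_col, decision_col, time_col,
                             freq="W", threshold=0.8):
    """Audit a decision log over time windows and flag periods in which the
    disparate-impact ratio (min/max group selection rate) drops below the
    chosen threshold (0.8 mirrors the common four-fifths rule)."""
    log = log.copy()
    log[time_col] = pd.to_datetime(log[time_col])
    alerts = []
    for window, chunk in log.set_index(time_col).groupby(pd.Grouper(freq=freq)):
        if chunk.empty:
            continue
        rates = chunk.groupby(group_col)[decision_col].mean()
        ratio = rates.min() / rates.max() if rates.max() > 0 else 1.0
        alerts.append({"window": window, "di_ratio": ratio,
                       "alert": ratio < threshold})
    return pd.DataFrame(alerts)

# Toy decision log (hypothetical column names): fairness degrades halfway through.
log = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=120, freq="D"),
    "group": ["A", "B"] * 60,
    "approved": ([1, 1] * 30) + ([1, 0] * 30),
})
print(monitor_disparate_impact(log, "group", "approved", "timestamp"))
```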
Application | Issues | Mechanism | Opportunities | Challenges |
---|---|---|---|---|
Health Care | Racial and gender biases in diagnosis and treatment. Unequal healthcare due to socioeconomic factors. | Building diverse, representative datasets. Personalized treatment plans based on individual characteristics. | Enhancing healthcare access and outcomes for all individuals. Reducing healthcare disparities. | Ensuring patient privacy and data security. Addressing biases in data collection and data sources. |
Education | Bias in admissions and resource allocation. Unequal access to quality education. | Fair criteria for admissions and resource allocation. Personalized learning for individual needs. Identifying and assisting at-risk students. | Reducing educational disparities. Enhancing learning outcomes for all students. | Ethical considerations regarding data privacy in educational settings. Avoiding undue focus on standardized testing. |
Criminal Justice and Sentencing | Racial bias in predictive policing and sentencing. Unfair allocation of resources for crime prevention. | Focus on rehabilitation, with regular auditing, model updates, and transparency in decision making. | Reducing biased arrests and sentencing. Allocating resources more efficiently. | The ethical implications of using AI in criminal justice. Ensuring model accountability and avoiding “tech-washing”. |
Application | Issues | Mechanism | Opportunities | Challenges |
---|---|---|---|---|
Recruiting | Bias in job ads and candidate selection. Lack of diversity in hiring. | Debiasing job descriptions, screening candidates with identifiable information removed, and diversifying training data. | Increasing workforce diversity. Reducing hiring discrimination. | Balancing fairness and competence. Ensuring fairness across different demographics. |
Lending and Credit Decisions | Discrimination in loan approvals. Lack of transparency in decision making. | Implementing fairness-aware algorithms, explaining model decisions, and using alternative data to assess creditworthiness. | Expanding access to credit for marginalized groups. Improving overall lending practices. | Striking a balance between fairness and risk assessment. Handling potential adversarial attacks on models. |
Online Advertising | Targeting ads based on sensitive attributes. Reinforcing stereotypes through ad delivery. | Differential privacy to protect user data, screening for biased messaging, and providing users with preference controls. | Improving user experience and privacy protection. Fostering a positive brand image. | The balance between targeted ads and user privacy. Identifying and addressing hidden biases in ad delivery. |
Customer Service and Chatbots | Biased responses and inappropriate interactions. Limited understanding of diverse linguistic expressions. | Training chatbots on inclusive and diverse datasets, with reinforcement learning from feedback on bot behavior to improve interactions. | Enhancing user experience and customer satisfaction. Scaling customer support efficiently. | Minimizing harmful or offensive responses. Dealing with novel inputs and out-of-distribution data. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).