Article

A Multi-Objective Framework for Balancing Fairness and Accuracy in Debiasing Machine Learning Models

by Rashmi Nagpal 1, Ariba Khan 1, Mihir Borkar 2 and Amar Gupta 1,*

1 Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA 02139, USA
2 Department of Computer Science, San Jose State University, San Jose, CA 95112, USA
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2024, 6(3), 2130-2148; https://doi.org/10.3390/make6030105
Submission received: 11 July 2024 / Revised: 9 September 2024 / Accepted: 10 September 2024 / Published: 20 September 2024

Abstract

Machine learning algorithms significantly impact decision-making in high-stakes domains, necessitating a balance between fairness and accuracy. This study introduces an in-processing, multi-objective framework that leverages the Reject Option Classification (ROC) algorithm to simultaneously optimize fairness and accuracy while safeguarding protected attributes such as age and gender. Our approach seeks a multi-objective optimization solution that balances accuracy, group fairness loss, and individual fairness loss. The framework integrates fairness objectives without relying on a weighted summation method, instead focusing on directly optimizing the trade-offs. Empirical evaluations on publicly available datasets, including German Credit, Adult Income, and COMPAS, reveal several significant findings: the ROC-based approach demonstrates superior performance, achieving an accuracy of 94.29%, an individual fairness loss of 0.04, and a group fairness loss of 0.06 on the German Credit dataset. These results underscore the effectiveness of our framework, particularly the ROC component, in enhancing both the fairness and performance of machine learning models.

1. Introduction

Machine learning algorithms have revolutionized decision-making in many high-impact industries like finance, healthcare, and safety. However, decisions made by such systems can have adverse effects, making it essential to develop new techniques that ensure the decisions made are both accurate and fair. Research in this domain advances Artificial Intelligence by emphasizing the importance of fairness in AI systems amidst the growing use of automated decision-making processes. Research has repeatedly shown [1] that ML algorithms are often biased towards privileged groups and fail to ensure fairness for unprivileged groups. Modern ML algorithms prioritize high accuracy by focusing on a single, unconstrained objective, making them unable to account for the many real-world constraints that must be met when deploying an ML model. For instance, the COMPAS score, used in American courts to predict the likelihood of recidivism, is a widely recognized instance of algorithmic discrimination resulting from disregard of fairness constraints [2]. It frequently predicts black defendants to be at significantly higher risk than white defendants, even when the former group does not re-offend. This can reinforce systemic bias through a negative feedback loop and result in increasingly biased predictions over time. As a result, there is heightened interest from academia and industry toward developing ML algorithms that are both unbiased and precise.
To address bias in ML models, many existing pipelines suffer from a fairness-accuracy trade-off, where increasing fairness can lead to a decrease in accuracy and vice versa. For instance, an algorithm fine-tuned for maximum accuracy may inadvertently perpetuate bias against specific groups. Conversely, an algorithm designed to achieve fairness may compromise accuracy by making incorrect classifications for particular data points. The complexity of debiasing stems from its nature as a multi-objective problem rather than a single-objective problem, highlighting the necessity for refined and advanced AI methodologies equipped to tackle such challenges.
We investigate the potential of multi-objective optimization to improve model accuracy while simultaneously addressing multiple, possibly conflicting fairness metrics on three benchmark datasets: German Credit [3], Adult Income [4], and COMPAS [5]. This study introduces a novel in-processing, multi-objective framework that utilizes the Reject Option Classification algorithm to balance fairness and accuracy without penalizing protected attributes such as age and gender. Our methodology demonstrates its effectiveness through comprehensive evaluations on the German Credit, Adult Income, and COMPAS datasets. While the existing literature often focuses on optimizing for group fairness alone, our framework integrates both individual and group fairness with accuracy, showcasing ROC’s superior performance across these datasets. This contrasts with prior work that predominantly addressed group fairness (e.g., [6,7,8,9]).
In summary, the key contributions of this research include:
  • The development of a robust fairness framework employing multi-objective optimization to simultaneously optimize accuracy and fairness without penalizing protective attributes.
  • A comprehensive experimental analysis conducted across various datasets, comparing our framework to baseline methods, which demonstrates its capacity to increase fairness while significantly improving performance.
This paper is structured as follows: Section 2 offers a literature review that delves into prior research on fairness and optimization. In Section 3, we articulate the background of our framework and the fairness definitions used throughout our work. Section 4 provides the experimental setup of our framework along with the dataset particulars. Section 5 gives a comprehensive overview of our research findings, which focus on performance in terms of accuracy, individual fairness, and achieved group fairness for sensitive features like race and gender. Section 6 presents the conclusions of our paper. Lastly, in Section 7, we explore potential research directions for our work.

2. Related Work

2.1. Pre-Processing Fairness Techniques

The growing importance of fairness in ML has led to increasing research that seeks to address existing biases in decision-making [10] and statistical limitations of fairness definitions [11]. To comprehensively address these biases, fairness techniques are broadly classified into three categories: pre-processing, in-processing, and post-processing [12]. Within this, individual and group fairness emerge as two pivotal dimensions illuminating different facets of fair decision-making. Group fairness aims to ensure nondiscrimination across protected groups, optimizing metrics, such as disparate impact [13], and individual fairness implies that fair ML models should treat comparable users similarly [14].
Before training an ML model, pre-processing techniques modify the input data to remove biases. Techniques such as re-sampling, avoidance of sensitive data collection, and probabilistic data transformations have been employed to create fair representations of the data [15,16]. While these approaches have demonstrated promise, their focus on the input data can limit their ability to address biases that emerge during the training process [17].

2.2. In-Processing Fairness Techniques

In-processing techniques attempt to address biases by modifying the learning algorithm itself. These methods often involve directly incorporating fairness techniques into the optimization process [18,19]. One popular approach is to employ adversarial training, which seeks to minimize a classifier’s ability to predict sensitive attributes while maintaining high accuracy on the task of interest [20,21]. However, these methods tend to focus on single-objective optimization or concentrate on a single sensitive attribute, thus limiting their ability to achieve fairness across multiple objectives and attributes [22].

2.3. Post-Processing Fairness Techniques

Post-processing techniques modify the predictions of a trained ML model to ensure fairness. They include threshold adjustment and calibration techniques, which aim to align the classifier’s output distribution with the desired fairness criteria [23]. While these methods can be effective in certain scenarios, their reliance on a fixed model and a large amount of sensitive data can limit their ability to optimize group fairness, individual fairness, and accuracy simultaneously [24]. Therefore, there is a need to capture individual fairness, which indirectly improves group fairness by reducing information on protected attributes [25]. Post-processing can also be cast as a single-objective, graph-based problem that preserves the desired “treat similar individuals similarly” interpretation [26].

2.4. Multi-Objective Optimization Techniques

Although single-objective approaches have been commonly employed in fairness research, multi-objective optimization has emerged as a promising alternative to address the inherent trade-offs between fairness and accuracy [27]. Approaches such as the use of gradient-based optimization have been proposed to achieve this balance [28]. However, these methods often focus on single fairness objectives or single sensitive attributes, which may not be sufficient for addressing fairness in complex scenarios [29].

2.5. Recent Advances in Multi-Objective Optimization

The multi-objective optimization approach proves to be the most effective in ensuring fairness, as seen in [30,31,32,33]. The multi-objective optimization framework of [34] can ensure that results are fair to various demographic groups by using a combination of statistical and ML techniques. This approach can also be used to solve the Pareto fair-dimensionality-reduction problem by adopting an adaptive gradient-based algorithm [35]. Additionally, Table 1 summarizes and highlights the challenges of achieving a balance between fairness and accuracy in machine learning, underscoring the relevance of our framework.

2.6. Bayesian Approaches in Machine Learning

In addition to multi-objective optimization techniques, recent advances in Bayesian approaches have shown promise in various downstream machine learning tasks, including positional data analysis. For instance, Bayesian model-assessment algorithms have been developed for diffusing-particle data, as illustrated in recent works [44,45]. These methods focus on probabilistically assessing model performance and handling uncertainty in complex datasets, which could be instrumental in balancing fairness and accuracy when integrating fairness metrics into a multi-objective optimization framework. The study in [46] enhances machine-learning techniques for decoding anomalous diffusion in single-particle-tracking data by incorporating uncertainty estimates using a Bayesian deep-learning approach, thus providing accurate predictions and insights into the diffusion models. The probabilistic nature of these approaches aligns with the broader goal of ensuring fair decision-making across various sensitive groups by accounting for uncertainties inherent in real-world datasets.
Despite the significant strides made by existing methods in addressing fairness in machine learning, there remains a need for a comprehensive framework that can concurrently optimize multiple fairness objectives while considering multiple protected attributes [47,48]. The proposed approach in this study builds upon the strengths of in-processing techniques and multi-objective optimization, aiming to fill this gap and offer a more versatile fairness framework. We introduce a multi-objective framework utilizing the Reject Option Classification algorithm to optimize fairness and accuracy in machine learning models, addressing critical trade-offs inherent in high-stakes decision-making. Unlike traditional methods that rely on weighted summation to balance these objectives, our approach optimizes fairness without compromising model performance. Conventional techniques typically apply fairness constraints during the pre-processing, in-processing, or post-processing stage and often struggle with setting subjective weights for conflicting objectives; in contrast, our approach incorporates the accuracy-fairness trade-off directly into the optimization process, providing a more effective and nuanced solution. Empirical evaluations on datasets such as German Credit, COMPAS, and Adult Income demonstrate our framework’s ability to maintain high accuracy while significantly enhancing fairness metrics. Our approach manages the fairness-accuracy trade-off better than existing methods such as stochastic multi-objective approaches for optimizing fairness and accuracy [49] and fairness through awareness [50].

3. Background

3.1. Fairness Definitions

Model fairness in supervised learning aims to use labelled training data to predict a label $Y \in \{0,1\}$ from a feature vector $X$. The fairness of the prediction $\hat{Y}$ is evaluated with respect to sensitive groups (e.g., gender or race) identified by a sensitive attribute $A$ (e.g., female or black). The sensitive attribute is binary, $A \in \{0,1\}$, where $A = 0$ represents the unprivileged group (e.g., female) and $A = 1$ represents the privileged group (e.g., male).

3.1.1. Group Fairness Loss

Group fairness loss ensures a predictive model performs equitably across different demographic groups. While the literature commonly discusses several definitions of group fairness, including statistical parity, equal opportunity difference, and predictive parity, our study focuses exclusively on statistical parity as the group fairness loss objective.
Statistical Parity quantifies the deviation in predictive outcomes between different groups. Specifically, it measures how much a model’s predictions differ between groups defined by a sensitive attribute. The statistical parity loss of a binary predictor  Y ^ is defined as:
$$\text{Statistical Parity Loss} = \left| \Pr(\hat{Y} = 1 \mid A = 1) - \Pr(\hat{Y} = 1 \mid A = 0) \right|$$
A lower statistical parity loss indicates more equitable treatment across groups, aligning with the principle of group fairness. Our study did not incorporate other group fairness measures, such as predictive parity or equalized odds, focusing solely on minimizing statistical parity loss.
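As a concrete illustration, the statistical parity loss above can be computed directly from hard predictions and a binary sensitive attribute. The sketch below is ours; the function name and array-based interface are illustrative rather than the paper's implementation.

```python
import numpy as np

def statistical_parity_loss(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the
    privileged (A = 1) and unprivileged (A = 0) groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_privileged = y_pred[sensitive == 1].mean()
    rate_unprivileged = y_pred[sensitive == 0].mean()
    return abs(rate_privileged - rate_unprivileged)

# Toy example: positive rate is 2/3 for A = 1 and 1/3 for A = 0, so the loss is 1/3.
print(statistical_parity_loss([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0]))
```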

3.1.2. Individual Fairness Loss

Individual fairness loss ensures that similar individuals receive similar predictions, irrespective of their group membership or other characteristics. For this study, we utilize Counterfactual Fairness to evaluate individual fairness loss.
Counterfactual Fairness is achieved if the model’s prediction for an individual remains consistent across counterfactual scenarios where only the sensitive attribute changes. Formally, an individual x with features X and sensitive attribute A is counterfactually fair if:
$$\text{Counterfactual Fairness Loss} = \left| \hat{Y}(x, A) - \hat{Y}(x, A') \right|$$
where $\hat{Y}$ is the model’s prediction, $x$ denotes the features, $A$ represents the current sensitive attribute, and $A'$ is the counterfactual sensitive attribute. A lower loss value indicates a higher degree of counterfactual fairness.
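A minimal sketch of how this counterfactual fairness loss can be estimated for a binary sensitive attribute: flip the attribute, keep all other features fixed, and average the absolute change in predictions. The function name and the assumption that the features are a pandas DataFrame are ours.

```python
import numpy as np

def counterfactual_fairness_loss(model, X, sensitive_col):
    """Mean absolute change in predictions when the binary sensitive
    attribute is flipped and all other features are held fixed."""
    X_counterfactual = X.copy()
    X_counterfactual[sensitive_col] = 1 - X_counterfactual[sensitive_col]  # A -> A'
    y_hat = model.predict(X)
    y_hat_cf = model.predict(X_counterfactual)
    return np.abs(y_hat - y_hat_cf).mean()
```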

3.2. Multi-Objective Optimization

Multi-objective optimization (MOO) involves solving problems that require the simultaneous optimization of multiple conflicting objectives. Unlike single-objective optimization, which focuses on maximizing or minimizing a single metric, MOO considers multiple objectives and seeks a set of solutions that offer the best possible trade-offs among the objectives. These solutions are known as the Pareto front, where no other solutions are strictly better in all objectives.
In our study, the objectives include accuracy, individual fairness loss, and group fairness loss. The trade-offs among these objectives are crucial since improving one might degrade the others. Instead of reducing the MOO problem to a single-objective problem by aggregating the objectives into a weighted sum, we handle them separately to maintain the multi-objective nature of the problem.
The multi-objective optimization problem can be formulated as:
$$\min_{h} \; O_{\mathrm{fair}}(h, \mu) = \left( O_{\mathrm{accuracy}}(h),\; O_{\mathrm{individual\_fairness}}(h, \mu),\; O_{\mathrm{group\_fairness}}(h, \mu) \right)$$
where $h$ represents the decision variables (in this case, the model parameters), and $\mu$ represents the weights or penalties associated with fairness constraints.
We proposed a novel approach in which these objectives are optimized using the Reject Option Classification (ROC) method, combined with Particle Swarm Optimization (PSO) and NSGA-II. These algorithms help find the Pareto front, allowing for the exploration of the best trade-offs between accuracy and fairness.
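To make the three objectives concrete, the sketch below evaluates a candidate model as the tuple (error, individual fairness loss, group fairness loss) defined above, so that an optimizer such as NSGA-II or PSO can trade them off without collapsing them into a weighted sum. The function and its DataFrame-based interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def fairness_objectives(model, X, y, sensitive_col):
    """Return (error, individual fairness loss, group fairness loss)."""
    y_hat = model.predict(X)
    error = 1.0 - accuracy_score(y, y_hat)

    # Group fairness: statistical parity loss (Section 3.1.1).
    A = X[sensitive_col].to_numpy()
    group_loss = abs(y_hat[A == 1].mean() - y_hat[A == 0].mean())

    # Individual fairness: counterfactual fairness loss (Section 3.1.2).
    X_cf = X.copy()
    X_cf[sensitive_col] = 1 - X_cf[sensitive_col]
    individual_loss = np.abs(y_hat - model.predict(X_cf)).mean()

    return error, individual_loss, group_loss
```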

4. Experiment

4.1. Data

In our study, we conducted a comprehensive experimental analysis using the German Credit, COMPAS, and Adult Income datasets. The German Credit dataset consists of 1000 instances and 20 attributes, including credit amount, duration, and payment status, with protected attributes such as gender and age. The dataset exhibited a significant class imbalance, which we addressed using the SMOTE-Tomek technique. This method combines the Synthetic Minority Over-sampling Technique, which oversamples the minority class, with Tomek Links, which removes majority-class samples closest to the minority class, thereby effectively balancing the classes. The Adult Income dataset contains 14 features, encompassing both categorical and numerical variables such as age, education, and occupation, which were used to predict whether an individual earns more than $50K annually. The COMPAS recidivism dataset includes information on over 10,000 criminal defendants in Broward County, Florida. It features attributes such as race, sex, age, criminal history, and charge degree, with a binary target variable indicating whether the defendant recidivated within two years.
We performed an 80–20 split following pre-processing for all three datasets to create training and testing sets, ensuring robust evaluation across different fairness metrics.
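A possible sketch of this preparation step using imbalanced-learn and scikit-learn is shown below; it assumes the categorical features have already been encoded numerically, and the seed value is illustrative.

```python
from imblearn.combine import SMOTETomek
from sklearn.model_selection import train_test_split

def prepare_dataset(X, y, seed=32):
    """Balance the classes with SMOTE-Tomek, then create an 80-20 train/test split."""
    X_balanced, y_balanced = SMOTETomek(random_state=seed).fit_resample(X, y)
    return train_test_split(X_balanced, y_balanced, test_size=0.2, random_state=seed)

# X_train, X_test, y_train, y_test = prepare_dataset(X, y)
```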

4.2. Models Explored

4.2.1. NSGA-II Algorithm

The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is a popular evolutionary algorithm designed for multi-objective optimization problems. NSGA-II employs genetic operators, such as selection, crossover, and mutation, to evolve a population of candidate solutions towards the Pareto front.
Mathematically, a non-dominated sorting algorithm can be described as:
Let $P$ be the population of $n$ candidate solutions, each with $m$ objectives. For each solution $i$, define the sets $S_i$ and $n_i$ as follows:
$$S_i = \{ j \in P \mid i \text{ dominates } j \}, \qquad n_i = \left| \{ j \in P \mid j \text{ dominates } i \} \right|$$
In the above equation, $S_i$ is the set of solutions dominated by solution $i$, and $n_i$ is the number of solutions that dominate solution $i$. Next, we define the first front $F_1$ as the set of solutions that are not dominated by any other solution in $P$; $F_1$ can be expressed as:
$$F_1 = \{ i \in P \mid n_i = 0 \}$$
Once the first front is identified, a similar procedure yields the next front $F_2$, which contains the solutions that are dominated by exactly one solution, namely a solution in $F_1$; $F_2$ can be expressed as:
$$F_2 = \{ i \in P \mid n_i = 1,\ \exists\, j \in F_1 \text{ such that } j \text{ dominates } i \}$$
We then continued this process to identify all the fronts in the population, and the final result is a set of fronts, where the first front contains the non-dominated solution.
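The front-construction procedure above corresponds to the standard fast non-dominated sort. A compact sketch is given below, assuming all objectives are minimized and each solution is represented by its objective vector; the function names are ours.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Split a list of objective vectors into Pareto fronts F1, F2, ..."""
    n = len(objectives)
    S = [set() for _ in range(n)]      # S[i]: indices of solutions dominated by i
    counts = [0] * n                   # counts[i]: number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                S[i].add(j)
            elif dominates(objectives[j], objectives[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]                 # drop the trailing empty front
```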

4.2.2. Particle Swarm Optimization

Particle Swarm Optimization is a global optimization algorithm designed to find the optimal value of a multi-dimensional objective function. In our case, PSO is used to optimize hyperparameters for the XGBoost classifier, aiming to enhance accuracy and AUC without compromising fairness.
Mathematically, let $x_i^t$ be the position of particle $i$ at time $t$, with $v_i^t$ as its velocity. The best position found by particle $i$ in dimension $j$ is denoted by $pbest_{ij}$, and the best overall position in dimension $j$ by $gbest_j$.
Key updates in PSO are:
  • Global Best Position:
    $$gbest_j = \begin{cases} pbest_{ij} & \text{if } f(pbest_{ij}) > f(gbest_j) \\ gbest_j & \text{otherwise} \end{cases}$$
  • Velocity Update:
    $$v_{ij}^{t+1} = w\, v_{ij}^{t} + c_1 r_1 \left( pbest_{ij} - x_{ij}^{t} \right) + c_2 r_2 \left( gbest_j - x_{ij}^{t} \right)$$
  • Position Update:
    $$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}$$
Here, $i$ and $j$ index the particles and dimensions, respectively, with $N$ as the swarm size and $d$ as the problem’s dimensionality.
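One PSO iteration under these update rules might look as follows; the inertia and acceleration coefficients shown are common defaults for illustration, not values reported in the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update for the whole swarm.
    x, v, pbest have shape (N, d); gbest has shape (d,)."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```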

4.2.3. Differential Evolution

The Differential Evolution algorithm is a population-based optimization method used to solve complex multi-objective problems. DE works by evolving a population of candidate solutions through mutation, crossover, and selection. Each candidate solution, called a vector, is updated iteratively based on the difference between randomly selected pairs of vectors.
Mathematically, the mutation operation is defined as:
$$v_i(t+1) = x_{r_1}(t) + F \cdot \left( x_{r_2}(t) - x_{r_3}(t) \right)$$
where $v_i(t+1)$ is the mutant vector, $x_{r_1}(t)$, $x_{r_2}(t)$, $x_{r_3}(t)$ are randomly selected vectors from the population at iteration $t$, and $F$ is a scaling factor.
The crossover step generates a trial vector $u_i(t+1)$:
$$u_{ij}(t+1) = \begin{cases} v_{ij}(t+1) & \text{if } \text{rand}_j \le C_r \text{ or } j = j_{\text{rand}} \\ x_{ij}(t) & \text{otherwise} \end{cases}$$
where $C_r$ is the crossover probability, and $j_{\text{rand}}$ is a random index ensuring that at least one component is inherited from the mutant vector.
Finally, in the selection step, the trial vector $u_i(t+1)$ is compared with the target vector $x_i(t)$, and the better one is selected for the next generation:
$$x_i(t+1) = \begin{cases} u_i(t+1) & \text{if } f(u_i(t+1)) \le f(x_i(t)) \\ x_i(t) & \text{otherwise} \end{cases}$$
Here, $f$ denotes the objective function to be minimized.
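A sketch of one DE generation (the common rand/1/bin variant) implementing the mutation, crossover, and selection steps above; the values of F and Cr are illustrative, and a population of at least four vectors is assumed.

```python
import numpy as np

def de_step(pop, fitness, objective, F=0.8, Cr=0.9, rng=np.random.default_rng(0)):
    """One Differential Evolution generation over a population of shape (N, d)."""
    N, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(N):
        # pick three distinct vectors different from the target (requires N >= 4)
        r1, r2, r3 = rng.choice([k for k in range(N) if k != i], size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])           # mutation
        cross = rng.random(d) <= Cr
        cross[rng.integers(d)] = True                        # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])              # binomial crossover
        f_trial = objective(trial)
        if f_trial <= fitness[i]:                            # greedy selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```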

4.2.4. Genetic Algorithm

The Genetic Algorithm is a bio-inspired optimization technique that mimics the process of natural selection to find optimal solutions to complex problems. GA operates on a population of candidate solutions called chromosomes, which evolve over successive generations through selection, crossover, and mutation.
Mathematically, the selection process is often based on fitness-proportionate selection, where the probability of selecting an individual $i$ is:
$$P(i) = \frac{f_i}{\sum_{j=1}^{N} f_j}$$
where $f_i$ is the fitness of the $i$-th chromosome, and $N$ is the population size.
The crossover operation combines two parent chromosomes to produce offspring. For single-point crossover, if $p_1$ and $p_2$ are two parent chromosomes, then the offspring $o_1$ and $o_2$ are given by:
$$o_1 = \left[ p_1^{1:k},\, p_2^{k+1:n} \right], \qquad o_2 = \left[ p_2^{1:k},\, p_1^{k+1:n} \right]$$
where $k$ is the crossover point, and $n$ is the chromosome length.
Mutation introduces diversity by randomly altering the gene values of offspring with a small probability. If $o_{ij}$ represents the $j$-th gene of the $i$-th offspring, the mutation operation can be represented as:
$$o_{ij}^{\text{new}} = \begin{cases} \text{random value} & \text{if } \text{rand} \le P_m \\ o_{ij} & \text{otherwise} \end{cases}$$
where $P_m$ is the mutation probability.
The new population is formed by selecting the best chromosomes from the current population and the newly generated offspring, ensuring that the population evolves towards better solutions over time.
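The three GA operators can be sketched as below for real-valued chromosomes; the gene bounds and mutation rate are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_select(pop, fitness):
    """Fitness-proportionate selection of one parent (fitness assumed positive)."""
    probs = fitness / fitness.sum()
    return pop[rng.choice(len(pop), p=probs)]

def single_point_crossover(p1, p2):
    """Swap the tails of two parent chromosomes at a random crossover point."""
    k = rng.integers(1, len(p1))
    return (np.concatenate([p1[:k], p2[k:]]),
            np.concatenate([p2[:k], p1[k:]]))

def mutate(child, p_m=0.05, low=0.0, high=1.0):
    """Replace each gene with a random value with small probability p_m."""
    mask = rng.random(len(child)) < p_m
    child = child.copy()
    child[mask] = rng.uniform(low, high, mask.sum())
    return child
```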

4.2.5. Nelder Mead Algorithm

The Nelder-Mead algorithm is a derivative-free optimization technique used to find the minimum of an objective function in a multidimensional space. It operates on a simplex, which is a geometric figure with n + 1 vertices for an n-dimensional problem. The algorithm iteratively updates the simplex by performing reflection, expansion, contraction, or shrinkage, depending on the function values at the vertices.
Mathematically, the reflection step is defined as:
$$x_r = x_c + \alpha \left( x_c - x_h \right)$$
where $x_r$ is the reflected point, $x_c$ is the centroid of all points except the worst point $x_h$, and $\alpha$ is the reflection coefficient.
If the reflection is successful, the algorithm may perform an expansion:
$$x_e = x_c + \gamma \left( x_r - x_c \right)$$
where $\gamma$ is the expansion coefficient.
In the case of failure, a contraction may be attempted:
$$x_{\text{con}} = x_c + \rho \left( x_h - x_c \right)$$
where $x_{\text{con}}$ is the contracted point and $\rho$ is the contraction coefficient.
This process is repeated until convergence is achieved, with the simplex gradually moving towards the function’s minimum.
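In practice the Nelder-Mead procedure is rarely hand-coded; SciPy's implementation can be called directly. The quadratic stand-in objective below is purely illustrative; in the paper's setting the objective would instead train and score a classifier for a given (learning rate, max depth, number of estimators) triple.

```python
from scipy.optimize import minimize

# Stand-in objective over (learning_rate, max_depth, n_estimators);
# its minimum lies near (0.1, 8, 100) purely for demonstration.
def objective(params):
    lr, depth, n_est = params
    return (lr - 0.1) ** 2 + (depth - 8) ** 2 + (n_est - 100) ** 2 / 1e4

result = minimize(objective, x0=[0.3, 4.0, 50.0], method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 500})
print(result.x)   # converges near (0.1, 8, 100)
```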

4.2.6. Reject Option Classification

The Reject Option Classification technique allows us to simultaneously optimize multiple objectives, such as accuracy and fairness, without sacrificing one for the other. The basic idea is to introduce a third class of “rejected samples” in addition to the “good” and “bad” score classes. By incorporating ROC, we adapt the objective function to penalize misclassifying rejected samples, allowing us to fine-tune the balance between accuracy and fairness. Intuitively, let us say that we have a multi-objective problem with two main objectives: maximizing the accuracy and minimizing the percentage of the misclassified instances from the rejected class; our optimization can be formulated as:
$$\text{maximize} \quad f_1(x) = \text{accuracy}(x)$$
$$\text{minimize} \quad f_2(x) = \frac{\text{misclassified rejected instances}(x)}{\text{total rejected instances}(x)}$$
subject to:
$$\text{total rejected instances}(x) \le \alpha \times \text{total instances}, \qquad x \in \mathcal{X}$$
Note that this is a two-objective optimization problem, where $f_1(x)$ is the accuracy of the classifier and $f_2(x)$ is the misclassification rate of rejected instances. The optimization is subject to the constraint that the total number of rejected instances should be at most $\alpha$ times the total number of instances in the dataset, where $\alpha$ is a pre-specified parameter. The decision variable $x$ belongs to the feasible set $\mathcal{X}$.
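One way the reject option can be realized on top of a probabilistic classifier is sketched below: low-confidence predictions form the rejected set, and accuracy is reported separately for accepted and rejected instances. The reading of the 0.65 threshold as a confidence cut-off is our assumption, not a statement of the paper's exact implementation.

```python
import numpy as np

def reject_option_split(model, X, y, threshold=0.65):
    """Partition test instances into accepted (confident) and rejected
    (low-confidence) sets and report accuracy on each subset.
    An instance is rejected when max(p, 1 - p) < threshold,
    i.e. its positive-class probability lies in (0.35, 0.65)."""
    y = np.asarray(y)
    p = model.predict_proba(X)[:, 1]
    rejected = np.maximum(p, 1 - p) < threshold
    y_hat = (p >= 0.5).astype(int)
    acc_accepted = (y_hat[~rejected] == y[~rejected]).mean()
    acc_rejected = (y_hat[rejected] == y[rejected]).mean() if rejected.any() else float("nan")
    return rejected, acc_accepted, acc_rejected
```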

4.3. Training Setup

The focus of this study is to simultaneously optimize the accuracy and fairness objectives while considering two sensitive features: gender and age. We employed tree-based classifiers, such as XGBoost, Decision Trees, and Random Forest, to find the best baseline model. Next, we employed the NSGA-II algorithm with a population size of 5000 and a maximum of 1000 generations, setting the crossover probability to 0.9 and the mutation probability to 0.1. For Particle Swarm Optimization, the training ran for 10 epochs with 2 particles and 3 dimensions. The Genetic Algorithm was configured with 10 generations and 10 solutions per population, optimizing learning rate, max depth, and number of estimators to balance accuracy and fairness. The Differential Evolution algorithm was used with bounds set for learning rate, max depth, and the number of estimators, aiming to minimize a weighted objective combining accuracy and statistical parity. Finally, the Nelder-Mead optimization was run with an initial guess and adjusted for learning rate, max depth, and the number of estimators to find the optimal configuration, focusing on improving fairness without significantly compromising accuracy. For Reject Option Classification, we defined a reject option threshold of 0.65 to compute accuracy, individual fairness loss, and counterfactual fairness loss for rejected and accepted instances.

4.4. Evaluation Metrics

We assessed model performance using accuracy, recall, precision, and F1-Score metrics. Accuracy gauges overall correctness, while recall emphasizes true positive identification. Precision focuses on minimizing false positives, and F1-Score balances precision and recall. Additionally, we employed the AUC metric to evaluate classifier performance, measuring the trade-off between sensitivity and specificity across classification thresholds, with a higher AUC value indicating better classifier performance.
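For reference, all of these metrics are available in scikit-learn; the small helper below (ours) gathers them for a set of hard labels and positive-class scores.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """y_pred: hard labels; y_score: predicted probability of the positive class."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
    }
```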

5. Results

5.1. Baseline Results

We evaluated several classification algorithms to establish robust baselines for our fairness-aware framework, focusing on tree-based methods like Decision Trees, Random Forest, and XGBoost, known for handling complex, multi-attribute data. These algorithms were selected for their ability to capture intricate data relationships and align with multi-objective optimization. We also included non-tree-based methods like AdaBoost and Naive Bayes to compare broadly. This diverse set of baselines ensures a comprehensive assessment of our framework’s performance and fairness improvements.

5.1.1. German Credit Dataset

Table 2 compares five different ML classifiers—Adaboost, Decision Trees, Naive Bayes, Random Forest, and XGBoost classifier—based on their accuracy, precision, recall, and F1-score. XGBoost emerged as the top performer with an Accuracy of 82.0%, a Precision of 85.7%, and a Recall of 89.3%, resulting in an F1-Score of 87.5%. Random Forest also demonstrated strong performance, achieving an Accuracy of 80.0% and the highest Recall of 90.7%, leading to an F1-Score of 86.7%. While AdaBoost, Decision Trees, and Naive Bayes performed well, they were outpaced by the Random Forest and XGBoost ensemble methods. These results indicate that XGBoost is highly effective at balancing precision and recall, making it an intuitive choice as our primary baseline model.
In Figure 1, AdaBoost, Random Forest, and XGBoost exhibit relatively higher AUC values, ranging from 0.82 to 0.83, compared to Decision Trees, which have a moderate AUC value of 0.69. The blue dotted line represents the ROC curve of a random classifier, which has an equal chance of correctly classifying true positives and incorrectly classifying false positives; this diagonal therefore corresponds to random guessing. Any model that performs better than random guessing will have its ROC curve above this line.

5.1.2. COMPAS Dataset

In Table 3, XGBoost emerged as the top-performing classifier with an accuracy of 66.72%, precision of 63.43%, and an F1-Score of 61.84%, demonstrating a strong balance between precision and recall. Adaboost also performed well, closely following XGBoost in accuracy and precision. In contrast, Decision Trees exhibited the lowest accuracy at 63.08% and the lowest recall at 56.52%, indicating its comparatively lower performance across these metrics. These results highlight the effectiveness of ensemble methods like XGBoost and Adaboost over simpler models such as Decision Trees in predicting recidivism with the COMPAS dataset.
In Figure 2, XGBoost, Random Forest, and AdaBoost have relatively higher AUC values, ranging from 0.71 to 0.73. The blue dotted line again represents the ROC curve of a random classifier; any model that performs better than a random classifier will have its ROC curve above this line.

5.1.3. Adult Income Dataset

As shown in Table 4, XGBoost achieved the highest performance on the Adult Income dataset with an accuracy of 86.35%, precision of 72.67%, and an F1-Score of 71.09%, making it the most effective model. Random Forest followed closely with 84.83% accuracy and an F1-Score of 67.31%. Adaboost also performed well, achieving 83.80% accuracy and a balanced F1-Score of 66.96%. In contrast, Naive Bayes showed the weakest performance. These results highlight the effectiveness of ensemble methods, especially XGBoost, for this classification task.
In Figure 3, AdaBoost, Random Forest, and XGBoost have relatively higher AUC values, ranging from 0.90 to 0.92. This indicates that these algorithms perform well in distinguishing between good and bad credit instances, compared to Decision Trees, which have a moderate AUC value of 0.75.

5.2. Multi-Objective Optimization

In our study, we performed multi-objective optimization experiments across three datasets—German Credit, Adult Income, and COMPAS—to explore our approach to minimizing model error while simultaneously addressing group and individual fairness loss objectives. These experiments aimed to achieve fairness without negatively impacting sensitive attributes such as gender and age. We evaluated the model’s individual fairness loss (counterfactual fairness loss) and group fairness loss (statistical parity loss).

5.2.1. German Credit Dataset

Table 5 compares the performance metrics—accuracy, individual fairness loss, and group fairness loss—for the German Credit dataset across different multi-objective optimization algorithms. The NSGA-II algorithm achieved an accuracy of 80%, slightly lower than the baseline XGBoost model, with an individual fairness loss of 0.03 and a group fairness loss of 0.04, indicating a modest disparity in predictions. In contrast, PSO outperformed NSGA-II with an accuracy of 83%, accompanied by a lower individual fairness loss of 0.02 and a consistent group fairness loss of 0.04, suggesting more stable predictions across sensitive attributes. Differential Evolution further improved the accuracy to 84.59%, with an individual fairness loss of 0.02, though it exhibited a higher group fairness loss of 0.08, indicating a trade-off between accuracy and fairness metrics. The Genetic Algorithm, while achieving a lower accuracy of 76.10%, demonstrated the lowest individual fairness loss at 0.01, but with a slightly higher group fairness loss of 0.06. Nelder-Mead showed balanced performance with an accuracy of 79.50%, and individual and group fairness losses of 0.02 and 0.06, respectively. Lastly, the Reject Option Classification (ROC) applied to the XGBoost model significantly enhanced the accuracy to 94.29% for accepted instances, with an individual fairness loss of 0.04 and a group fairness loss of 0.06. Interestingly, the rejected instances exhibited a reduced accuracy of 75.38%, with an individual fairness loss of 0.01, underscoring the effectiveness of ROC in minimizing disparities.

5.2.2. COMPAS Dataset

Table 6 summarizes the performance of various multi-objective optimization algorithms on the COMPAS dataset. The NSGA-II algorithm achieved an accuracy of 68.60%, with an individual fairness loss of 0.18 and a group fairness loss of 0.29. In comparison, Particle Swarm Optimization (PSO) showed slightly lower accuracy at 66.80% and a higher individual fairness loss of 0.27, but an improved group fairness loss of 0.19. Differential Evolution and the Genetic Algorithm achieved similar results, with accuracies of 65.59% and 65.10%, respectively, and both showed minimal individual and group fairness losses of 0.01 and 0.06. Nelder-Mead performed moderately, with an accuracy of 67.77%, a higher individual fairness loss of 0.24, and a low group fairness loss of 0.06. Finally, the Reject Option Classification (ROC) improved accuracy to 71.43% for accepted instances, with an individual fairness loss of 0.41 and a strong reduction in group fairness loss to 0.01. The rejected instances displayed a lower accuracy of 64.00%, with an individual fairness loss of 0.14 and a group fairness loss of 0.11, illustrating ROC’s effectiveness in balancing accuracy and fairness on this dataset.

5.2.3. Adult Income Dataset

Table 7 highlights the results for the Adult Income dataset. The NSGA-II algorithm achieved an accuracy of 84.08%, with an individual fairness loss of 0.27 and a group fairness loss of 0.39, reflecting a balance between accuracy and fairness. PSO marginally outperformed NSGA-II with an accuracy of 84.57%, slightly better individual fairness loss at 0.26, and a significant improvement in group fairness loss at 0.02, indicating more equitable treatment across gender subgroups. Differential Evolution and Genetic Algorithm demonstrated lower accuracies of 75.05% and 75.59%, respectively, but with minimal individual fairness losses of 0.01, and moderate group fairness losses. Nelder-Mead showed a balanced accuracy of 76.88%, with both fairness losses remaining relatively low. For the ROC method, the accepted instances maintained a high accuracy of 84.3%, with improved fairness metrics, while the rejected instances exhibited a substantial accuracy increase to 90.3%, with individual fairness loss dropping to 0.01 and group fairness loss remaining at 0.02, underscoring ROC’s effectiveness in reducing disparities.

5.2.4. Proposed Methodology

The proposed framework, outlined in Algorithm 1 and shown in Figure 4 below, aims to address the fairness-accuracy trade-off in a machine learning model. This is demonstrated using XGBoost with a seed value of 32, a maximum depth of 8, a learning rate of 0.1, 100 estimators, and a binary logistic objective in a two-class classification context. The algorithm begins by initializing parameters and iterates until convergence is achieved. During each iteration, the dataset is balanced using the SMOTE-Tomek technique to address class imbalance, and sensitive attributes in the test data are flipped for fairness evaluation. The algorithm adjusts model weights based on a defined threshold and minimizes the fairness objectives while training an XGBoost classifier. Finally, it sorts the predicted probabilities, ultimately optimizing the fairness objectives to achieve a balance between fairness and accuracy in model predictions.
Algorithm 1: Fairness-Accuracy Trade-off with ROC on baseline (XGBoost).
    Data: data $X$, target $Y$, sensitive attributes $A$, initial weights $\mu$, rejection threshold $\alpha$, multi-objectives $O_{\mathrm{fair}}(\cdot)$
    Result: Optimized fairness objectives $O_{\mathrm{fair}}(h, \mu)$
    Initialize: $\alpha \leftarrow 0.65$
    repeat
    [Loop body shown as a figure in the original: balance the data with SMOTE-Tomek, flip the sensitive attributes in the test data, adjust the model weights against the threshold, train the XGBoost classifier, and evaluate the fairness objectives.]
    until convergence;
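Since the loop body of Algorithm 1 appears as a figure in the original, the sketch below condenses one pass of the procedure described above into Python. It omits the outer convergence loop and the weight adjustment, assumes the features are a pandas DataFrame with a binary, numerically encoded sensitive column, and uses the hyperparameters reported in the text (seed 32, max depth 8, learning rate 0.1, 100 estimators, binary logistic objective, reject threshold 0.65).

```python
import numpy as np
import xgboost as xgb
from imblearn.combine import SMOTETomek

def fairness_accuracy_tradeoff(X_train, y_train, X_test, y_test,
                               sensitive_col, alpha=0.65, seed=32):
    """Hedged sketch of one pass of Algorithm 1: balance the data, train the
    XGBoost baseline, apply the reject option, and report the fairness objectives."""
    # 1. Balance the training data with SMOTE-Tomek.
    X_bal, y_bal = SMOTETomek(random_state=seed).fit_resample(X_train, y_train)

    # 2. Train the baseline with the hyperparameters reported in the paper.
    clf = xgb.XGBClassifier(max_depth=8, learning_rate=0.1, n_estimators=100,
                            objective="binary:logistic", random_state=seed)
    clf.fit(X_bal, y_bal)

    # 3. Reject option: low-confidence predictions form the rejected set.
    p = clf.predict_proba(X_test)[:, 1]
    rejected = np.maximum(p, 1 - p) < alpha
    y_hat = (p >= 0.5).astype(int)

    # 4. Counterfactual (individual) and statistical parity (group) losses.
    X_cf = X_test.copy()
    X_cf[sensitive_col] = 1 - X_cf[sensitive_col]
    ind_loss = np.abs(y_hat - clf.predict(X_cf)).mean()
    A = X_test[sensitive_col].to_numpy()
    grp_loss = abs(y_hat[A == 1].mean() - y_hat[A == 0].mean())

    y_test = np.asarray(y_test)
    acc_accepted = (y_hat[~rejected] == y_test[~rejected]).mean()
    return acc_accepted, ind_loss, grp_loss
```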

5.2.5. Time Complexity Analysis

Our proposed framework involves multiple steps. For instance, filtering operations exhibit a time complexity of $O(n)$, where $n$ is the number of rows. One-hot encoding of categorical features introduces additional columns, resulting in a complexity of $O(n \cdot k)$, with $k$ as the number of categorical features. Training the XGBoost model typically has a complexity of $O(n \cdot m \cdot \log n)$, where $m$ is the number of features, reflecting the computational effort of decision tree growth. Prediction complexity is $O(n \cdot m \cdot T)$, with $T$ as the average number of trees. Fairness calculations, including accuracy and statistical parity, have a complexity of $O(n)$, while counterfactual fairness, involving additional predictions and sorting, has a complexity of $O(n \cdot T + n \log n)$. Overall, the training process is $O(n \cdot m \cdot \log n)$, prediction is $O(n \cdot m \cdot T)$, and the fairness calculations combine to a time complexity of $O(n \cdot T + n \log n)$.

5.2.6. Running Time Analysis

For the time analysis, we compared the duration taken by the model in different stages, excluding data balancing time. The results, visualized in Figure 5, reveal that the COMPAS dataset requires significantly more time for model training, which could be attributed to the complexity and size of the dataset. In contrast, the German Credit dataset exhibits relatively low computational demand across all stages. This comparison allows us to understand the computational efficiency of the proposed method in real-world scenarios.

6. Conclusions

This study presents a robust framework to address the significant challenge of balancing fairness—individual and group—while maintaining high accuracy in machine learning models. By leveraging a multi-objective framework and Reject Option Classification, we have demonstrated that it is possible to mitigate bias related to sensitive attributes such as gender and age without sacrificing model performance. Our extensive experiments across three datasets—German Credit, COMPAS, and Adult Income—highlight the adaptability and robustness of the framework. Integrating various multi-objective optimization algorithms, such as NSGA-II, Particle Swarm Optimization, Differential Evolution, and Genetic Algorithms, revealed that the ROC method consistently outperforms these baselines across all datasets. For the German Credit dataset, ROC achieved the highest accuracy of 94.29% for accepted instances while maintaining a low individual fairness loss of 0.04 and group fairness loss of 0.06. This demonstrates the framework’s ability to handle complex fairness-accuracy trade-offs, as even the rejected instances showed acceptable performance (75.38% accuracy). On the COMPAS dataset, ROC improved accuracy to 71.43% for accepted instances and reduced group fairness loss to 0.01, with an individual fairness loss of 0.41. For the Adult Income dataset, ROC maintained a strong accuracy of 84.3% with fairness metrics improved across the board (individual fairness loss: 0.25, group fairness loss: 0.02), underscoring the scalability of the method across diverse domains. This study contributes to ongoing research in mitigating biases in predictive models and offers a reliable solution for balancing fairness and accuracy in real-world applications.

7. Future Work

We plan to explore and analyze the integration of additional optimization algorithms to enhance the framework’s performance and robustness, and to investigate the impact of different fairness definitions and constraints. We plan to incorporate advanced optimization techniques, such as reinforcement learning-based methods, to enhance the balance between fairness and accuracy. Additionally, dynamic weight adjustment strategies within the multi-objective optimization process could be explored to make the fairness-accuracy trade-off more adaptive and responsive to varying contexts. Another critical direction is expanding the definition of fairness used in the study. While our work has focused on counterfactual fairness and statistical parity, there is potential to investigate how other fairness metrics, including equal opportunity, disparate impact, and demographic parity, influence the model’s performance. Exploring these alternative definitions could provide deeper insights into fairness dynamics across different domains and datasets. Given the promising results on the static datasets used in this study, future research could extend this framework to real-time decision-making environments, such as credit lending or judicial systems, where fairness considerations are crucial. This would involve assessing the computational efficiency of the proposed framework in dynamic, real-time settings and exploring fairness constraints in streaming models to ensure fairness is maintained over evolving, real-time data streams. Furthermore, this work has focused primarily on fairness concerning individual sensitive attributes like gender and age. A natural extension would be to examine intersectionality—how fairness holds up when multiple sensitive attributes intersect, such as gender and race. This would provide a more comprehensive understanding of the framework’s capabilities in ensuring fairness across diverse subgroups. Finally, we plan to integrate explainability into our proposed framework, which would be another important direction for future research. Providing transparent insights into how decisions are made, especially regarding which instances are rejected or accepted, would improve the model’s trustworthiness and support regulatory compliance. This is particularly relevant in high-stakes domains like healthcare and criminal justice, where explainability and fairness are critical.
The code for our paper can be found here: https://github.com/RN0311/Multiobjective-Optimization-For-Debiasing-Credit-System/tree/main (accessed on 15 September 2024).

Author Contributions

Conceptualization, R.N. and A.K.; methodology, R.N. and A.K.; implementation, R.N.; validation, R.N. and A.K.; investigation, R.N., A.K. and M.B.; writing—original draft preparation, R.N., A.K. and M.B.; writing—review and editing, R.N., A.K., M.B. and A.G.; visualization, R.N.; supervision, A.G.; project administration, A.G.; funding acquisition, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially funded by two industry alliances at the MIT Computer Science and Artificial Intelligence Lab (CSAIL): the Machine Learning Applications consortium of MIT CSAIL, whose sponsors are Ernst and Young, British Telecom, CISCO, and SAP; and the Future of Data Consortium, which has four sponsors: CapitalOne, Fidelity, MassMutual, and Visa.

Data Availability Statement

We used the German Credit dataset for our experiments, which is freely available at https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data (accessed on 1 May 2024), the Adult Income dataset, which is available at https://archive.ics.uci.edu/dataset/2/adult (accessed on 1 May 2024), and the COMPAS dataset, which is also open-source and publicly available at https://github.com/propublica/compas-analysis (accessed on 1 May 2024).

Acknowledgments

The authors thank individuals from over a dozen organizations for their suggestions. In particular, the authors thank Sebastian Gould from Arrow Electronics; Andrew Langworthy, Detlef Nauck, and Steve Whittaker from British Telecommunications; Ali Payani and Ramana Kompella from Cisco; Rasoul Shahsavarifar and Vishal Gossain from Ernst and Young; Felix Sasaki from SAP; and Gabriel Bustamante from Itaú-Unibanco for reviewing the manuscript and providing their suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mehrabi, N.; Morstatter, F.; Saxena, N.A.; Lerman, K.; Galstyan, A.G. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. (CSUR) 2019, 54, 1–35. [Google Scholar] [CrossRef]
  2. Castelnovo, A.; Crupi, R.; Greco, G.; Regoli, D.; Penco, I.G.; Cosentini, A.C. A clarification of the nuances in the fairness metrics landscape. Sci. Rep. 2022, 12, 4209. [Google Scholar] [CrossRef] [PubMed]
  3. Hoffman, P.; Bach, K. Statlog (German Credit Data); UCI Machine Learning Repository: Irvine, CA, USA, 1994; Available online: https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data (accessed on 1 May 2024).
  4. Dua, D.; Graff, C. UCI Machine Learning Repository: Adult Data Set; UCI Machine Learning Repository: Irvine, CA, USA, 2019; Available online: https://archive.ics.uci.edu/dataset/2/adult (accessed on 1 May 2024).
  5. Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine Bias, Propublica. 2016. Available online: https://github.com/propublica/compas-analysis (accessed on 1 May 2024).
  6. Martinez, N.; Bertran, M.; Sapiro, G. Minimax Pareto Fairness: A Multi-Objective Perspective. Proc. Mach. Learn. Res. 2020, 119, 6755–6764. [Google Scholar] [PubMed] [PubMed Central]
  7. Friedrich, T.; Horoba, C.; Neumann, F. Illustration of fairness in evolutionary multi-objective optimization. Theor. Comput. Sci. 2011, 412, 1546–1556. [Google Scholar] [CrossRef]
  8. Padh, K.; Antognini, D.; Glaude, E.L.; Faltings, B.; Musat, C.C. Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, Online, 3–6 August 2020. [Google Scholar]
  9. Wu, H.; Ma, C.; Mitra, B.; Diaz, F.; Liu, X. A multi-objective optimization framework for multi-stakeholder fairness-aware recommendation. ACM Trans. Inf. Syst. 2022, 41, 1–29. [Google Scholar] [CrossRef]
  10. Biswas, S.; Lima, F.W.S.; Flomenbom, O. Are Socio-Econo-Physical Models Better to Explain Biases in Societies? Rep. Adv. Phys. Sci. 2018, 2, 1850006. [Google Scholar] [CrossRef]
  11. Corbett-Davies, S.; Goel, S. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv 2018, arXiv:1808.00023. [Google Scholar] [CrossRef]
  12. Barocas, S.; Hardt, M.; Narayanan, A. Fairness and Machine Learning. 2019. Available online: https://fairmlbook.org/ (accessed on 23 June 2024).
  13. Friedler, S.A.; Scheidegger, C.; Venkatasubramanian, S.; Choudhary, S.; Hamilton, E.P.; Roth, D. A comparative study of fairness-enhancing interventions in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; ACM: New York, NY, USA, 2019; pp. 329–338. [Google Scholar] [CrossRef]
  14. Mukherjee, D.; Yurochkin, M.; Banerjee, M.; Sun, Y. Two Simple Ways to Learn Individual Fairness Metrics from Data. 2020. Available online: https://proceedings.mlr.press/v119/mukherjee20a/mukherjee20a.pdf (accessed on 1 June 2024).
  15. Kamiran, F.; Calders, T. Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst. 2012, 33, 1–33. [Google Scholar] [CrossRef]
  16. Calmon, F.; Wei, D.; Vinzamuri, B.; Ramamurthy, K.N.; Varshney, K.R. Optimized pre-processing for discrimination prevention. Adv. Neural Inf. Process. Syst. 2017, 30, 3992–4001. [Google Scholar]
  17. Hardt, M.; Price, E.; Srebro, N. Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3315–3323. [Google Scholar]
  18. Agarwal, A.; Beygelzimer, A.; Dudík, M.; Langford, J.; Wallach, H. A reductions approach to fair classification. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 60–69. [Google Scholar]
  19. Goel, S.; Cotter, A.; Gupta, M. Model selection for fairness in linear classification. In Proceedings of the 2018 IEEE Data Science Workshop (DSW), Lausanne, Switzerland, 4–6 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 17–21. [Google Scholar]
  20. Zhang, B.H.; Lemoine, B.; Mitchell, M. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; pp. 335–340. [Google Scholar]
  21. Edwards, H.; Storkey, A. Censoring representations with an adversary. arXiv 2015, arXiv:1511.05897. [Google Scholar]
  22. Kearns, M.; Neel, S.; Roth, A.; Wu, Z.S. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 2564–2572. [Google Scholar]
  23. Pleiss, G.; Raghavan, M.; Wu, F.; Kleinberg, J.; Weinberger, K.Q. On fairness and calibration. Adv. Neural Inf. Process. Syst. 2017, 30, 5680–5689. [Google Scholar]
  24. Kim, M.P.; Ghorbani, A.; Zou, J. Multiaccuracy: Black-box post-processing for fairness in classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 247–254. [Google Scholar]
  25. Binns, R. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020. [Google Scholar] [CrossRef]
  26. Petersen, F.; Mukherjee, D.; Sun, Y.; Yurochkin, M. Post-processing for individual fairness. Adv. Neural Inf. Process. Syst. 2021, 34, 25944–25955. [Google Scholar] [CrossRef]
  27. Foulds, J.; Pan, S.J. A meta-algorithm for multi-objective optimization: Building fair and accurate models. In Proceedings of the 2018 SIAM International Conference on Data Mining, San Diego, CA, USA, 3–5 May 2018; pp. 711–719. [Google Scholar]
  28. Pessach, D.; Rabinowitz, N.; Rosenschein, J.S. Multi-objective optimization for fair machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6842–6849. [Google Scholar]
  29. Lin, X.; Chen, H.; Pei, C.; Sun, F.; Xiao, X.; Sun, H.; Zhang, Y.; Ou, W.; Jiang, P. A pareto-efficient algorithm for multiple objective optimization in e-commerce recommendation. In Proceedings of the 13th ACM Conference on Recommender Systems, Copenhagen, Denmark, 16–20 September 2019. [Google Scholar]
  30. Wu, H.; Ma, C.; Mitra, B.; Diaz, F.; Liu, X. Multi-FR: A Multi-Objective Optimization Method for Achieving Two-sided Fairness in E-commerce Recommendation. arXiv 2021, arXiv:2105.02951. [Google Scholar]
  31. Zhang, Q.; Liu, J.; Zhang, Z.-Q.; Wen, J.; Mao, B.; Yao, X. Fairer Machine Learning through Multi-objective Evolutionary Learning. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2021; pp. 111–123. [Google Scholar]
  32. Hu, H.; Tian, S.; Guo, Q.; Ouyang, A. An Adaptive Hybrid PSO Multi-Objective Optimization Algorithm for Constrained Optimization Problems. Int. J. Pattern Recognit. Artif. Intell. 2015, 29, 1559009. [Google Scholar] [CrossRef]
  33. Leng, R.; Ouyang, A.; Liu, Y.; Yuan, L.; Wu, Z. A Multi-Objective Particle Swarm Optimization Based on Grid Distance. Int. J. Pattern Recognit. Artif. Intell. 2019, 34, 2059008. [Google Scholar] [CrossRef]
  34. Yu, G.; Ma, L.; Du, W.; Du, W.; Jin, Y. Towards Fairness-Aware Multi-Objective Optimization. arXiv 2022, arXiv:2207.12138. [Google Scholar]
  35. Kamani, M.M.; Haddadpour, F.; Forsati, R. Efficient fair principal component analysis. Mach. Learn. 2022, 111, 3671–3702. [Google Scholar] [CrossRef]
  36. Ashktorab, Z.; Hoover, B.; Agarwal, M.; Dugan, C.; Geyer, W.; Yang, B.H.; Yurochkin, M. Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, Hamburg, Germany, 23–28 April 2023; pp. 1–20. [Google Scholar] [CrossRef]
  37. Dressel, J.; Farid, H. The accuracy, fairness, and limits of predicting recidivism. Sci. Adv. 2018, 4, eaao5580. [Google Scholar] [CrossRef]
  38. Fabris, A.; Mishler, A.; Gottardi, S.; Carletti, M.; Daicampi, M.; Susto, A.G.; Silvello, G. Algorithmic Audit of Italian Car Insurance: Evidence of Unfairness in Access and Pricing. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, Virtual Event, 19–21 May 2021; pp. 458–468. [Google Scholar] [CrossRef]
  39. Zemel, R.; Wu, Y.; Swersky, K.; Pitassi, T.; Dwork, C. Learning fair representations. In Proceedings of the 30th International Conference on International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 28. [Google Scholar]
  40. Pierson, E. Demographics and discussion influence views on algorithmic fairness. arXiv 2017, arXiv:1712.09124. [Google Scholar] [CrossRef]
  41. Veale, M.; Kleek, V.M.; Binns, R. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, Montreal, QC, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar] [CrossRef]
  42. Fleisher, W. What’s Fair about Individual Fairness? In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, Virtual Event, 19–21 May 2021; pp. 480–490. [Google Scholar] [CrossRef]
  43. Caton, S.; Haas, C. Fairness in Machine Learning: A Survey. ACM Comput. Surv. 2020, 56, 1–38. [Google Scholar] [CrossRef]
  44. Thapa, S.; Lomholt, M.A.; Krog, J.; Cherstvy, A.G.; Metzler, R. Bayesian analysis of single-particle tracking data using the nested-sampling algorithm: Maximum-likelihood model selection applied to stochastic-diffusivity data. Phys. Chem. Chem. Phys. 2018, 20, 29018–29037. [Google Scholar] [CrossRef] [PubMed]
  45. Seckler, H.; Metzler, R. Bayesian deep learning for error estimation in the analysis of anomalous diffusion. Nat. Commun. 2022, 13, 6717. [Google Scholar] [CrossRef] [PubMed]
  46. Muñoz-Gil, G.; Garcia-March, M.A.; Manzo, C.; Martín-Guerrero, J.D.; Lewenstein, M. Single trajectory characterization via machine learning. New. J. Phys. 2020, 22, 013010. [Google Scholar] [CrossRef]
  47. Verma, S.; Rubin, J. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, Gothenburg, Sweden, 29 May 2018; pp. 1–7. [Google Scholar]
  48. Balashankar, A.; Ramakrishnan, N.; Ravindran, B. PaReCo: A Pareto-based multi-objective approach for achieving fairness in machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6834–6841. [Google Scholar]
  49. Liu, S.; Vicente, L.N. Accuracy and fairness trade-offs in machine learning: A stochastic multi-objective approach. Comput. Manag. Sci. 2020, 19, 513–537. [Google Scholar] [CrossRef]
  50. Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R. Fairness through awareness. In Proceedings of the Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 January 2012; pp. 214–226. [Google Scholar]
Figure 1. Baseline model—German Credit dataset.
Figure 2. Baseline model—COMPAS dataset.
Figure 3. Baseline model—Adult Income dataset.
Figure 4. Proposed multi-objective optimization approach.
Figure 5. Running time analysis across various datasets.
Table 1. Summary table.

Related Research | Advantages | Disadvantages
Ashktorab et al. [36] | Provides insights into practitioner perspectives on various fairness definitions and evaluations. | Limited sample size may affect the generalizability of findings.
Julia et al. [37] | Evaluates the balance between accuracy and fairness in predictive models for recidivism. | Highlights the challenge of achieving both accuracy and fairness simultaneously.
Alessandro et al. [38] | Provides empirical evidence of unfair practices in algorithmic decision-making in insurance. | Focused on a specific case study, limiting broader applicability of findings.
Zemel et al. [39] | Offers a method to learn representations that satisfy fairness constraints while maintaining utility. | Complexity in ensuring that representations remain useful while enforcing fairness.
Pierson et al. [40] | Explores how demographic factors influence perceptions of fairness in algorithms. | Findings vary significantly based on the demographic context of the study.
Veale et al. [41] | Identifies design needs for ensuring fairness in high-stakes decisions made by algorithms. | Highlights implementation challenges in bureaucratic settings.
Fleisher et al. [42] | Discusses the philosophical underpinnings of individual fairness, highlighting its limitations. | Individual fairness does not adequately address group disparities.
Caton et al. [43] | Summarizes diverse definitions and metrics of fairness, contributing to a clearer framework for understanding fairness. | Practical applications of various metrics vary widely.
Table 2. Comparison of Accuracy, Precision, Recall and F1-Score of various classifiers for the German Credit dataset.

Classification Model | Accuracy | Precision | Recall | F1-Score
Adaboost | 78.5% | 85.5% | 83.68% | 84.58%
Decision Trees | 70.5% | 80.1% | 77.3% | 78.7%
Naive Bayes | 72.0% | 82.4% | 76.5% | 79.4%
Random Forest | 80.0% | 83.1% | 90.7% | 86.7%
XGBoost | 82.0% | 85.7% | 89.3% | 87.5%
Table 3. Comparison of Accuracy, Precision, Recall and F1-Score of various classifiers for the COMPAS dataset.

Classification Model | Accuracy | Precision | Recall | F1-Score
Adaboost | 66.40% | 62.52% | 61.96% | 62.24%
Decision Trees | 63.08% | 59.09% | 56.52% | 57.78%
Naive Bayes | 64.05% | 60.23% | 57.61% | 58.89%
Random Forest | 64.21% | 60.00% | 59.78% | 59.89%
XGBoost | 66.72% | 63.43% | 60.33% | 61.84%
Table 4. Comparison of Accuracy, Precision, Recall and F1-Score of various classifiers for the Adult Income dataset.

Classification Model | Accuracy | Precision | Recall | F1-Score
Adaboost | 83.80% | 65.91% | 68.05% | 66.96%
Decision Trees | 81.38% | 60.98% | 63.27% | 62.11%
Naive Bayes | 79.66% | 65.97% | 32.34% | 43.40%
Random Forest | 84.83% | 70.09% | 64.74% | 67.31%
XGBoost | 86.35% | 72.67% | 69.57% | 71.09%
Table 5. Performance results of different multi-objective optimization algorithms by sensitive subgroup (Gender—Male or Female) compared to the ROC on the baseline XGBoost model—German Credit dataset.

Algorithm | Acc. | Ind. Fairness Loss—Gen | Group Fairness Loss—Gen
NSGA-II | 80.01 | 0.03 | 0.04
Particle Swarm Optimization | 83.03 | 0.02 | 0.04
Differential Evolution | 84.59 | 0.02 | 0.08
Genetic Algorithm | 76.10 | 0.01 | 0.06
Nelder Mead | 79.50 | 0.02 | 0.06
Reject Option Classification (Accepted Instances) | 94.29 | 0.04 | 0.06
Reject Option Classification (Rejected Instances) | 75.38 | 0.01 | 0.06
Table 6. Performance results of different multi-objective optimization algorithms by sensitive subgroup (Gender—Male or Female) compared to the ROC on the baseline XGBoost model—COMPAS dataset.

Algorithm | Acc. | Ind. Fairness Loss—Gen | Group Fairness Loss—Gen
NSGA-II | 68.60 | 0.18 | 0.29
Particle Swarm Optimization | 66.80 | 0.27 | 0.19
Differential Evolution | 65.59 | 0.01 | 0.06
Genetic Algorithm | 65.10 | 0.01 | 0.06
Nelder Mead | 67.77 | 0.24 | 0.06
Reject Option Classification (Accepted Instances) | 71.43 | 0.41 | 0.01
Reject Option Classification (Rejected Instances) | 64.00 | 0.14 | 0.11
Table 7. Performance results of different multi-objective optimization algorithms by sensitive subgroup (Gender—Male or Female) compared to the ROC on the baseline XGBoost model—Adult Income dataset.

Algorithm | Overall Acc. | Ind. Fairness Loss—Gen | Group Fairness Loss—Gen
NSGA-II | 84.08 | 0.27 | 0.39
Particle Swarm Optimization | 84.57 | 0.26 | 0.02
Differential Evolution | 75.05 | 0.01 | 0.06
Genetic Algorithm | 75.59 | 0.01 | 0.03
Nelder Mead | 76.88 | 0.02 | 0.06
Reject Option Classification (Accepted Instances) | 84.3 | 0.25 | 0.02
Reject Option Classification (Rejected Instances) | 90.3 | 0.01 | 0.02