Article

Can Data and Machine Learning Change the Future of Basic Income Models? A Bayesian Belief Networks Approach

Hamed Khalili
Research Group E-Government, Faculty of Computer Science, University of Koblenz, D-56070 Koblenz, Germany
Submission received: 21 December 2023 / Revised: 10 January 2024 / Accepted: 12 January 2024 / Published: 23 January 2024

Abstract

Appeals to governments to implement basic income are widespread. The theoretical background of the basic income notion prescribes only transferring equal amounts to individuals irrespective of their specific attributes. However, the most recent basic income initiatives around the world are attached to certain rules regarding the attributes of households. This approach faces significant challenges in appropriately recognizing vulnerable groups. A possible alternative to setting rules regarding the welfare attributes of households is to employ artificial intelligence algorithms that can process unprecedented amounts of data. Can integrating machine learning change the future of basic income by predicting households vulnerable to future poverty? In this paper, we utilize multidimensional and longitudinal welfare data comprising one and a half million individuals and a Bayesian belief network approach to examine the feasibility of predicting households’ vulnerability to future poverty based on their existing welfare attributes.

Graphical Abstract

1. Introduction

The idea of basic income, a minimum income transferred by the state to each member of a society, is widespread. Appeals to governments to implement basic income programs are being made today, including in the United Kingdom [1,2], Germany [3,4], and Spain [5,6]. In addition to the major programs and plans, there are a large number of small-scale pilot projects, mostly revolving around several experiments in the United States [7], which serve as controlled scientific trials to capture the potential ups and downs of implementing this idea [8]. A complete list of major implemented or ongoing basic income programs can be found in the World Bank study [9]. Basic income systems per definition do not attach any specific attributes such as age, marital status, gender, health status, social class, etc., to any individual as eligibility criteria [8,10,11,12,13,14]. In addition, basic income should be paid uniformly to each person in the society [10,12,13,14,15,16,17].
The idea of paying a uniformly distributed basic income to all members of a society might improve the quality of life and reduce poverty; however, there are still open theoretical debates [7,18,19,20,21] regarding the financing of a broad basic income program. If the amount transferred equally to all individuals is set too low, it is insufficient to reduce poverty. On the other hand, setting the cash transfer paid to each individual too high can become extremely costly and infeasible in the face of governments’ budget constraints [22].
One possible idea to work around basic income’s budget constraint dilemma is the smart prediction of households vulnerable to future poverty.
In this paper, we proceed along the notion of vulnerability to future poverty [23] to identify households deserving of receiving basic income. We apply Bayesian belief networks (BBNs) to predict vulnerable households, thereby extending the scarce research literature [24] on the application of Bayesian networks to economic analysis and policy. The reason for selecting BBNs is their interpretability. Contrary to black-box machine learning models (e.g., artificial neural networks), BBNs do not require a post hoc explanation [25] in order to be understood by a human, as in BBNs the probabilistic dependencies among the explanatory and dependent variables are made tractable via a directed acyclic graph. BBNs’ interpretability makes them an excellent paradigm for interpretable artificial intelligence [26]. Second, while massive panel data are rarely available in developing countries, we design our experiments on thirty welfare attributes of one and a half million individuals from the first real basic income experiment in the world, in Iran, which enriches the robustness of the outcomes. Third, while none of the existing literature on vulnerability to poverty explores vulnerability across time using longitudinal data, our study investigates the feasibility of predicting vulnerable households in a future time step by incorporating the households’ existing welfare attributes in multiple preceding time steps.
The remainder of the paper is organized as follows. In Section 2, the paper’s idea is contextualized in the existing literature. Section 3 elucidates the main welfare attributes of the individuals within the source data of the research. How the Bayesian model is constructed and analyzed is explained in Section 4. The results of the analysis are presented in Section 5. Concluding remarks are highlighted in Section 6.

2. Literature Background

The evidence of the expensiveness of basic income comes not only from a theoretical perspective but also from empirical experience. Ref. [18] estimates that a broad basic income program not attached to social and demographic variables would cost about twice as much as all existing transfers in the United States. A universal, no-questions-asked public transfer to everyone would necessitate significant tax rises, as well as reductions in essential existing benefits [20]. Ref. [19] predicts that implementing a broad basic income program would increase tax rates for below-median-income workers by up to 80 percent if the basic income level is set at one-half of Canada’s median income. Ref. [21] projects that if the Chinese government had decided in 2014 to pay every adult a monthly income of 336 RMB (if living in urban areas) or 231 RMB (if living in rural areas), this would have required a yearly government expenditure of 3.472 trillion RMB, equivalent to approximately 5.46% of overall Chinese GDP and almost half of overall Chinese government expenditure.
Iran is known as the first country in the world to provide a de facto basic income, according to the World Bank’s definition [9], to all its citizens. In December 2010, Iran launched a cash transfer program that paid every Iranian residing in the country the equivalent of $40–45 a month, unconditionally. The program, while still continuing after thirteen years, has lost much of its desired effect as the purchasing power of the transfers has been largely eroded by inflation. It is now seen as insufficient for vulnerable households and simultaneously of little value for the relatively wealthier households, while straining the government’s budget given its large aggregate size.
Consequently, in recent years, it has become inevitable for the Iranian administration to pursue a basic income scheme that incorporates a household eligibility examination. Apart from Iran’s experience, the most recent or currently ongoing basic income initiatives around the world are attached to certain socioeconomic conditions for selecting the eligible receivers [7].
In recent times, the Iranian government has been considering a set of rules regarding the welfare attributes of households to determine their eligibility. This approach faces significant challenges in appropriately recognizing the vulnerable groups. A possible alternative to setting rules regarding the welfare attributes of households is to employ machine learning algorithms that can process unprecedented amounts of data. Can integrating machine learning change the future of basic income by the smart prediction of vulnerable households? In light of the Iranian evidence, we identify this question as a research gap in the existing literature on basic income and examine it as a counterfactual scenario for the future.
Prediction of vulnerable households requires an exact definition of the concept of vulnerability.
The literature on poverty [27] highlights a basic distinction between the concepts of poverty and vulnerability. Poverty can be measured based on monetary poverty measurements or based on multidimensional poverty measurements [28]. Monetary poverty concerns people whose disposable income (the money available for spending or saving after tax, social transfers, and other deductions) is below a certain threshold, e.g., the poverty line. Multidimensional poverty measurements consider multiple well-being measures, e.g., education, health services, etc., alongside monetary measures for an appropriate assessment of poverty. While a multidimensional definition of poverty appears to be more promising in a comprehensive sense [29], in this paper we work first with the more understandable definition: the monetary poverty measurement.
Once poverty is defined, vulnerability can be measured as the risk that non-poor people fall below a certain welfare threshold, e.g., the poverty line, in the future, or the risk that poor people remain poor in the future [30,31]. Hence, vulnerability must be distinguished from poverty as it measures the ex-ante risk of being poor, that is, before the uncertainty is resolved [32,33,34]. In light of the above definitions, in this paper we aim at employing machine learning to predict the posterior probability of a household’s not-observable vulnerability to future poverty by inputting a set of its present observable welfare attributes.
We use a monetary parameter as the poverty line for households to be the criterion for receiving basic income. The monetary criterion to be compared with the selected poverty line is the average cash accessibility of a household, expressed as the average account balance of the household. The average account balance of a household is the total amount of money that remains, on average, in the bank accounts of all members of a household after all deposits and credits have been balanced against any charges or debits. This parameter is presumed to be suitable to represent the cash accessibility of a household that is under examination to receive further cash within a basic income program. Hence, we refer to this measure as the cash accessibility of a household throughout. To predict this parameter, the administration considers a complete set of observable welfare attributes of that family within recent years. The machine learning algorithm employed by the administration then supports the administration by predicting the cash accessibility of that household in the future. The decision regarding eligibility is then finalized based on the probabilistic outcome of the machine learning model, together with a probability line set by the administration. For example, if the machine learning algorithm predicts a family to be 80% vulnerable to future poverty and 20% not, then it is up to the above-mentioned government probability line whether households with an 80% vulnerability probability are eligible or, for example, only households with a 90% vulnerability probability. In this paper, we design experiments to examine whether we achieve high accuracies in the prediction of households vulnerable to future poverty by changing the critical cash accessibility threshold value (i.e., the selected poverty line), as well as by changing the classification probability thresholds (the government’s selected probability line rule).
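To make the decision rule concrete, the following minimal Python sketch applies a hypothetical probability line to a predicted vulnerability probability; the function and parameter names are illustrative and not part of the paper’s implementation.

```python
def is_eligible(p_vulnerable: float, probability_line: float = 0.8) -> bool:
    """Return True if the predicted vulnerability probability meets the administration's probability line."""
    return p_vulnerable >= probability_line

# A household predicted 80% vulnerable is eligible under an 80% line but not under a 90% line.
print(is_eligible(0.80, probability_line=0.80))  # True
print(is_eligible(0.80, probability_line=0.90))  # False
```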
Several studies in the recent poverty literature propose linking households’ observable welfare attributes to the probability of being vulnerable to future poverty. Refs. [35,36] obtain the conditional probability of being vulnerable in various welfare dimensions via a Probit or Logit model. The approach of [35] measures vulnerability as the probability of being multidimensionally poor in the aggregate, by assigning deprivation scores to a total set of vulnerabilities, and outputs only one probability through a Probit model, regardless of the specific welfare dimensions. However, this estimation method does not account for the different qualities of the vulnerabilities in different dimensions of well-being. That is, it omits the fact that, in addition to the deprivation score a household has, the composition of the deprivation set underlying that score also matters. The approach of [36] estimates the probabilities of being vulnerable in each one of the welfare indicators, disaggregated by components, and thus provides distinct evidence with regard to vulnerabilities in different dimensions of well-being. However, this estimation method might not manage to compute a flawless aggregate welfare estimation.
Ref. [23] proposes a Bayesian belief network to predict the probability of being multidimensionally poor. In contrast to Probit and Logit models, Bayesian belief networks [37] incorporate the conditional connections of a set of multidimensional welfare attributes in a graphical network via Bayes’ theorem [38]. Bayesian networks are more appropriate for multidimensional welfare estimation than Logit and Probit models, which can only face a multidimensional problem through one or several one-dimensional solutions [23].
In this paper, we proceed along this recent pathway of the literature on vulnerability to poverty, in the context of the first real basic income experiment in the world, by examining the efficiency of a Bayesian approach to forecast households vulnerable to future poverty on a complete set of panel data comprising thirty relevant welfare attributes of one and a half million individuals.

3. Data

The anonymized welfare data of 1.5 million randomly chosen individual Iranian citizens, provided by Iran’s Ministry of Cooperatives, Labour, and Social Welfare, are utilized in this paper. The 30 distinct registered attributes for each individual are shown in Table 1. Note that this set of welfare attributes was purposefully selected by the aforementioned Iranian authority with the objective of developing practical inferences about who should be excluded from the (originally) widely designed basic income transfer. Hence, we do not perform a feature selection step as the first stage of the pipeline of our study; the relevance of the 30 given attributes is primarily taken for granted. However, the importance values of the attributes are showcased during the analysis based on specific parameter settings in Section 4. Each row in the source data table belongs to exactly one person and contains the welfare information of that person in 30 distinct columns. We did not utilize this data table directly because, in line with the existing literature, we believe in a more meaningful parameter to evaluate each individual’s welfare, i.e., the aggregation of individuals’ welfare attributes within their corresponding household. Over the key identifier Parent ID, we assigned each of the 1.5 million individual persons to their corresponding unique household, which resulted in exactly five hundred thousand households in total. Out of the individual-level data, we generated a new table named Household_welfare_data. In the aggregation process, we added the welfare values of the individual persons within a family (e.g., car numbers and car values) together and averaged the sum over the number of family members. The aggregation was carried out with the exception of person ID, parent ID, age, gender, and living place; these variables are not to be summed and hence are represented by the parent’s information in the Household_welfare_data. Finally, due to the existence of 8280 NaN values in the column related to the question of living in the city or not, we dropped the corresponding rows to produce a data table consisting of 491,720 rows (households) × 30 columns (welfare attributes).
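The aggregation step can be sketched as follows in Python with pandas. The file name and column names (person_id, parent_id, age, gender, live_in_city) are hypothetical placeholders standing in for the actual fields of the source table, and the assumption that the household head’s row has person_id equal to parent_id is made only for illustration.

```python
import pandas as pd

# Load the (hypothetical) individual-level welfare table.
individual_df = pd.read_csv("individual_welfare_data.csv")

id_like_cols = ["person_id", "parent_id", "age", "gender", "live_in_city"]
welfare_cols = [c for c in individual_df.columns if c not in id_like_cols]

# Sum each welfare attribute within a household and average over the number of
# family members (i.e., take the per-member mean).
household_welfare = individual_df.groupby("parent_id")[welfare_cols].mean()

# Non-summable variables are represented by the parent's record
# (here assumed to be the row where person_id == parent_id).
parents = individual_df[individual_df["person_id"] == individual_df["parent_id"]]
household_welfare = household_welfare.join(
    parents.set_index("parent_id")[["age", "gender", "live_in_city"]]
)

# Drop households with a missing city/rural indicator, as described above.
household_welfare = household_welfare.dropna(subset=["live_in_city"])
print(household_welfare.shape)
```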

4. Bayesian Network Model

A Bayesian belief network (BBN) model [39] is represented by a graphical network consisting of probabilistic relationships among a set of variables. It comprises a directed acyclic graph (DAG) with nodes representing the variables and arcs representing conditional dependencies between the connected nodes. Bayes’ theorem defines the relationships between variables [40]. The main objective of BBNs is to infer the posterior probability distribution of a set of presumably not completely observable variables after observing a set of observable variables.
Formally, the parents of a node $X_i$, denoted $\mathrm{Pa}(X_i)$, are all the nodes with arcs pointing to $X_i$, while the children of $X_i$ are all nodes towards which $X_i$ has outgoing arcs. The fact that the DAG is acyclic guarantees the existence of at least one topological ordering of the variables $X_1, \ldots, X_n$ such that $\{X_1, \ldots, X_{i-1}\}$ covers only non-descendants of $X_i$. Thus, applying the chain rule, the joint probability of the set of values $\{X_1, \ldots, X_n\}$ in a BBN is expressed in Equation (1):
$$p(X_1, \ldots, X_n) = p(X_1)\, p(X_2 \mid X_1)\, p(X_3 \mid X_1, X_2) \cdots p(X_n \mid X_1, \ldots, X_{n-1}) \quad (1)$$
As each node is conditionally independent of its non-descendants given its parents, removing all non-descendants other than the parents from each node’s conditioning set yields the joint probability in Equation (2):
$$p(X_1, \ldots, X_n) = p(X_1 \mid \mathrm{Pa}(X_1))\, p(X_2 \mid \mathrm{Pa}(X_2)) \cdots p(X_n \mid \mathrm{Pa}(X_n)) \quad (2)$$
Hence, the joint probability of any arbitrary set of variables in a BBN can be inferred from the local conditional probability tables (CPTs) of each of the BBN’s variables given its parents’ values [26].
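As a small illustrative example (not taken from the paper), consider a three-node DAG $A \rightarrow B \rightarrow C$ with $\mathrm{Pa}(A) = \emptyset$, $\mathrm{Pa}(B) = \{A\}$, and $\mathrm{Pa}(C) = \{B\}$. Equation (2) then reduces to
$$p(A, B, C) = p(A)\, p(B \mid A)\, p(C \mid B),$$
so the full joint distribution is recovered from three small CPTs instead of one table over all value combinations of $A$, $B$, and $C$.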
Further explanation of Bayesian belief networks and how they are utilized can be found in [41].
In our investigation, the 30 variables in Table 1 are selected to be the main components of the Bayesian network. The variable corresponding to row 30 of Table 1, i.e., the average balance of the entire family members’ accounts within the period 20 March 2019–20 March 2020, is the key dependent variable of our study. In a certain year, this variable represents the average remaining total amount of money accessible in the bank accounts of the entire members of a family through that year, after all debits and credits have been considered. This is presumed to be the criterion for a household to receive further cash in the form of a basic income transfer. For example, if the administration decides on 20 March 2019 upon the eligibility of a household to receive the basic income within the period 20 March 2019–20 March 2020, it uses the aggregated values of the welfare attributes of the entire members of that family by means of their banking records from 20 March 2016 until 20 March 2019 (rows 18–29 of Table 1), as well as the non-banking welfare attributes of that household on the day of decision making (rows 3–17 of Table 1), to assess the household’s posterior probability of having access to cash within the upcoming period. As individual banking records can be interpreted as sensitive information and might not be applicable in all circumstances, we design the experiments in this paper once with banking records and once without banking records.
Constructing a Bayesian belief network requires three steps. First, as Bayesian networks conventionally use labeled variables whose domain is a finite set of labels, we must discretize the data space for all variables. In our study, if a welfare variable is greater than or equal to a certain threshold th_v, it is labeled as negative (by assumption), and if it is smaller than th_v, it is labeled as positive (by assumption). To examine the impact of setting different values of th_v, we incorporate deciles. A decile is the result of splitting the ranked data of each variable into 10 equally large subsections so that each subsection represents 1/10 of the data of that variable. In each experiment of our study, we set the splitting threshold to one of the 9 in-between boundary values of the 10 identified deciles. Thus, the n-th decile boundary splits the entire data of a certain variable in Table 1 into the negatives, i.e., the part with values greater than or equal to that boundary (the upper (10 − n)/10 of the ranked data), and the positives, i.e., the part with values smaller than that boundary (the lower n/10 of the ranked data). For example, th_v (n = 5) splits the data of a variable into values less than the median (positives) and values greater than or equal to the median (negatives). Each time, we set the variables’ splitting threshold in line with a certain decile and apply the same decile number n to split the data of all 30 variables. The splitting of variables is performed with the exception of gender and living place, which are binary variables on their own.
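A minimal sketch of this labelling step in Python, assuming the hypothetical household_welfare DataFrame from the earlier sketch; the column names and the positive/negative label strings are illustrative.

```python
import pandas as pd

def discretize_by_decile(df: pd.DataFrame, n: int, binary_cols=("gender", "live_in_city")) -> pd.DataFrame:
    """Label each variable 'positive' if below the n-th decile boundary, else 'negative'."""
    labeled = pd.DataFrame(index=df.index)
    for col in df.columns:
        if col in binary_cols:
            labeled[col] = df[col].astype(str)       # binary variables keep their own two labels
        else:
            th_v = df[col].quantile(n / 10)          # the n-th decile boundary of the ranked data
            labeled[col] = (df[col] < th_v).map({True: "positive", False: "negative"})
    return labeled

# Median split (n = 5), as used in the experiment of Section 4.
labeled_data = discretize_by_decile(household_welfare, n=5)
```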
In the second step of constructing a BBN, we estimate a DAG that reveals the dependencies between the variables given the labeled data [42]. In our study, we do not impose any prior knowledge to derive our networks’ DAGs. To estimate the DAGs we use the Hill Climbing Search algorithm [43]. This algorithm undertakes a greedy local search that starts from a disconnected DAG consisting of all 30 variables and proceeds by iteratively performing single-edge manipulations that maximally increase the value of a score function. The score function maps DAGs to a numerical score, which measures how well a DAG fits the given data table. In this study, the BIC score [44] is used, which is a log-likelihood score with an additional penalty for network complexity to avoid overfitting. pyAgrum 1.9.0 (https://agrum.gitlab.io/pages/reference.html, accessed on 1 January 2024) on the Jupyter framework is applied to estimate the DAG, as well as for the subsequent steps up to the Bayesian inferences throughout this study.
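The structure-learning step can be sketched roughly as follows with pyAgrum, assuming the labeled data from the previous sketch have been written to a CSV file; the file name is hypothetical, and the calls reflect pyAgrum’s BNLearner interface as we understand it rather than the paper’s actual code.

```python
import pyAgrum as gum

# Write the (hypothetical) labeled table to disk so the learner can read it.
labeled_data.to_csv("household_welfare_labeled.csv", index=False)

learner = gum.BNLearner("household_welfare_labeled.csv")
learner.useGreedyHillClimbing()   # Hill Climbing Search over single-edge manipulations
learner.useScoreBIC()             # BIC: log-likelihood penalized for network complexity
bn = learner.learnBN()            # estimates the DAG and fills in the CPTs

print(bn)                         # summary of nodes, arcs, and CPT dimensions
```

Note that learnBN() also estimates the conditional probability tables from the labeled data, which corresponds to the third step described below.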
In the third step, we compute the conditional probability tables (CPTs) of the individual variables, given the DAG and the labeled data.
Upon completion of the third step, the BBN is complete and can be used to make inferences regarding the posterior probabilities of the variables of concern.
As mentioned above, in this paper we are pursuing the feasibility of obtaining reliable inferences regarding the cash accessibility posterior probabilities of any household in an upcoming year of interest by inputting a set of the household’s welfare attributes to the BBN.
We design experiments that split the variable average accounts balance within the period 20 March 2019–20 March 2020 (the key variable of our study) according to the 9 in-between boundary values of the 10 deciles, each time into the corresponding negative and positive subsections, and examine how well the BBN can distinguish households positioned at or above the threshold th_v (negatives) from households positioned below the threshold th_v (positives). As the BBN model outputs probabilistic values linked to being negative or positive, we must decide upon a probability threshold th_p, upon which we (i.e., the administration) classify a household as a positive type if the predicted posterior probability of the positives exceeds th_p, and classify it as a negative if the predicted posterior probability of positives for that household does not exceed th_p. The default th_p for interpreting probabilities as class labels is 0.5. However, tuning th_p to increase the preciseness of predictions necessitates observing changes in the accuracy of the BBN model in predicting the negative and positive values of the target variable while moving th_p, for example, from 0.0 to 0.9 in small (e.g., 0.1) incremental steps. Therefore, to analyze the accuracies, we apply the receiver operating characteristic (ROC) curve [45] as well as the precision-recall (PR) curve [46].
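The threshold analysis can be sketched in Python with scikit-learn, assuming two hypothetical arrays: y_true with the 0/1 (negative/positive) test labels and p_pos with the BBN’s posterior probability of the positive class for each test household.

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, auc

fpr, tpr, roc_thresholds = roc_curve(y_true, p_pos)
precision, recall, pr_thresholds = precision_recall_curve(y_true, p_pos)

roc_auc = auc(fpr, tpr)

# Optimal ROC threshold: best balance between true- and false-positive rates (Youden's J).
best_roc_thp = roc_thresholds[np.argmax(tpr - fpr)]

# Optimal PR threshold: best precision/recall balance, expressed by the f1_score.
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
best_pr_thp = pr_thresholds[np.argmax(f1)]

print(f"AUC_ROC={roc_auc:.3f}, tp_ROC={best_roc_thp:.3f}, tp_PR={best_pr_thp:.3f}")
```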
Before presenting the results in Section 5, we explain the applied metrics for assessing the feasibility of accurate classification of eligible households using a special case from the experimental design of our paper.

Classification of Households According to Above- and Below-Median Cash Availability

In this subsection, we examine how the population with below-median average cash access is distinguished from the population with above-median average cash access. The threshold th_v (n = 5) is set to the cash level that separates the lower five deciles (positives) from the upper five deciles (negatives). We split the data of the rest of the variables into negatives and positives based on their median levels, accordingly, as described in the previous section. The BBN model is trained on the labeled data of the 30 variables in line with th_v (n = 5) and the Hill Climbing Search algorithm over 80% of the 491,720 rows × 30 columns of data. The BBN’s DAG is presented in Figure 1.
We use the remaining 20% of the data table as the test set. The left and right panels of Figure 2 illustrate the ROC and PR metrics of the test set, respectively. To interpret these accuracy measures, we first note the following definitions, as well as Equations (3)–(10).
  • True negative (TN): if the target value is negative and the predicted value is negative.
  • True positive (TP): if the target value is positive and the predicted value is positive.
  • False negative (FN): if the target value is positive and the predicted value is negative.
  • False positive (FP): if the target value is negative and the predicted value is positive.
$$\text{True positive rate} = \frac{TP\ \text{count}}{TP\ \text{count} + FN\ \text{count}} \quad (3)$$
$$\text{False positive rate} = \frac{FP\ \text{count}}{FP\ \text{count} + TN\ \text{count}} \quad (4)$$
$$\text{True negative rate} = \frac{TN\ \text{count}}{TN\ \text{count} + FP\ \text{count}} \quad (5)$$
$$\text{False negative rate} = \frac{FN\ \text{count}}{FN\ \text{count} + TP\ \text{count}} \quad (6)$$
$$\text{Recall} = \text{True positive rate} \quad (7)$$
$$\text{Precision} = \frac{TP\ \text{count}}{TP\ \text{count} + FP\ \text{count}} \quad (8)$$
$$\text{f1\_score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (9)$$
$$\text{accuracy\_total} = \frac{TP\ \text{count} + TN\ \text{count}}{TP\ \text{count} + TN\ \text{count} + FP\ \text{count} + FN\ \text{count}} \quad (10)$$
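For readers who want to recompute these quantities from the confusion-matrix counts reported later in Tables 3 and 4, a small helper mirroring Equations (3)–(10) could look as follows; the function and variable names are our own.

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the metrics of Equations (3)-(10) from raw confusion-matrix counts."""
    tpr = tp / (tp + fn)                              # true positive rate = recall, Eqs. (3) and (7)
    fpr = fp / (fp + tn)                              # false positive rate, Eq. (4)
    tnr = tn / (tn + fp)                              # true negative rate, Eq. (5)
    fnr = fn / (fn + tp)                              # false negative rate, Eq. (6)
    precision = tp / (tp + fp)                        # Eq. (8)
    f1 = 2 * precision * tpr / (precision + tpr)      # Eq. (9)
    accuracy_total = (tp + tn) / (tp + tn + fp + fn)  # Eq. (10)
    return {"TPR": tpr, "FPR": fpr, "TNR": tnr, "FNR": fnr,
            "precision": precision, "f1_score": f1, "accuracy_total": accuracy_total}
```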
The ROC curve depicts the contrast between the true-positive rate and the false-positive rate as the probability threshold th_p changes. The PR curve depicts the possible trade-off between recall and precision as th_p changes. Note that precision describes how precise the model is when it predicts a class to be positive, whereas recall describes how well the model succeeds in covering the positives that should be correctly predicted. The PR curve becomes more meaningful when there are moderate to large imbalances between the amounts of data in the negative and positive classes, e.g., when we seek to distinguish the population in the lowest decile (n = 1, positives) from the remaining nine deciles (negatives).
The AUC is the integral of the area under the ROC or PR curve, respectively, and is a metric for evaluating the accuracy of the model over the entire range of possible th_p values. The f1_score is the harmonic mean of the precision and recall metrics; note that the f1_score does not incorporate the True negative count. The accuracy_total represents the overall accurateness of the model without being broken down into the negative and positive subsections.
The blue point in Figure 2 is the optimal PR threshold that results in the best balance between the precision and recall metrics as expressed by the f1_score. The red point in Figure 2 is the optimal ROC threshold that results in the best balance between the true- and false-positive rates. The ROC and PR curves in Figure 2 show a th_p of around 0.425–0.492 as the optimum threshold, which delivers a balanced accuracy and preciseness in predicting the positive class. With that th_p, we are able to cover between 80 and 90 percent of precisely predicted positives, i.e., households with below-median cash accessibility. By setting non-optimal th_p values deviating from the optimal value, we can increase the recognition of true-positive households to levels higher than, e.g., 90%; however, we then have to accept additional False positives (in ROC), as well as reduced precision (in PR).
Note that most of the indicators in our study concern the fine-tuned detection of positives and not negatives, per definition. This is presumed to be legitimate in our study as the first concern of basic income programs is the detection of positives (i.e., the people relatively vulnerable to future poverty) and not negatives.
Depending on the government’s budget constraints, political administrations might also be interested (besides the optimal thresholds) in the range of non-optimal threshold values, as they can choose threshold values encompassing higher than, e.g., 90% recognition of True positives (which promises a higher recognition rate of lower-income groups compared to the level corresponding to the optimal threshold) at the cost of allocating extra budget to be distributed to False positives. This trade-off between the recognition of negatives and positives in the test set of the Household_welfare_data, obtained by altering the th_p threshold from 0.0 to 0.9 in small (0.1) incremental steps, and its relationship with the accuracy_total, is represented in Figure 3.
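A sketch of this threshold sweep, reusing the hypothetical y_true and p_pos arrays introduced earlier, could look as follows; it simply tabulates the true-positive rate, false-positive rate, and overall accuracy at each th_p step.

```python
import numpy as np

for th_p in np.arange(0.0, 1.0, 0.1):
    y_pred = (p_pos >= th_p).astype(int)              # classify as positive if posterior exceeds th_p
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    acc = (tp + tn) / (tp + tn + fp + fn)
    print(f"th_p={th_p:.1f}  TPR={tpr:.3f}  FPR={fpr:.3f}  accuracy_total={acc:.3f}")
```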
As individual banking records can be interpreted as sensitive information and might not be applicable, we replicate the classification of the households in the test set according to above- and below-median cash availability without the recent years’ banking records (with the exception of the average balance of the entire family members’ accounts, which is incorporated only in the training step). Note that the banking records of recent years play a crucial role in predicting the households’ cash access, as is evident from the depiction of the importance of the welfare attributes in Figure 4.
Each panel in Figure 4 describes the change in the posterior probability of the dependent variable of our study (household cash accessibility) being classified as negative or positive (on the vertical axis) when evidence from a single explanatory variable is provided in the form of probability x for that variable being negative and 1 − x for it being positive, with x incremented along the horizontal axis from 0.0 to 1.0 in small (0.01) steps. The absolute difference between the maximum and the minimum of the posterior probability of negative cash access obtained by changing the value of the explanatory variable along the horizontal axis is shown in parentheses above each explanatory variable’s panel and is a criterion for assessing how important that variable is in shaping a prediction for the dependent variable. The panels are sorted from left to right and top to bottom by increasing importance values. As is evident from Figure 4, the banking records (rows 18–29 of Table 1, shown in the lower four rows of Figure 4) play a greater role in predicting the posteriors than the non-banking welfare attributes of the household (rows 3–17 of Table 1, shown in the first four rows of Figure 4). Hence, it can rationally be expected that erasing the banking records will reduce the model’s accuracy metrics.
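The importance measure behind Figure 4 can be sketched roughly as follows with pyAgrum’s inference engine; the variable names, the label order assumed for the soft evidence, and the dict-style posterior lookup are our assumptions about the interface rather than the paper’s actual code.

```python
import numpy as np
import pyAgrum as gum

def importance(bn, target: str, explanatory: str, steps: int = 101) -> float:
    """Spread of the target's posterior P(negative) while sweeping soft evidence on one variable."""
    posteriors = []
    for x in np.linspace(0.0, 1.0, steps):
        ie = gum.LazyPropagation(bn)
        # Soft (likelihood) evidence: weight x on the "negative" label, 1 - x on "positive";
        # the [negative, positive] ordering is assumed here.
        ie.setEvidence({explanatory: [x, 1.0 - x]})
        ie.makeInference()
        posteriors.append(ie.posterior(target)[{target: "negative"}])
    return max(posteriors) - min(posteriors)  # larger spread = more important variable

# Hypothetical usage: rank all explanatory variables by their importance for the target.
# scores = {v: importance(bn, "avg_balance_2019_2020", v) for v in explanatory_vars}
```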
The reduced BBN model (BBN_2), obtained by removing the recent years’ banking records, is trained on the labeled data of 14 variables in line with th_v (n = 5) and the Hill Climbing Search algorithm over 80% of the 491,720 rows × 14 columns of data. BBN_2’s DAG is presented in Figure 5.
The PR and ROC curves, together with the AUC and f1_score values in Figure 6, indicate that relatively precise predictions remain feasible after erasing the banking records with th_v (n = 5) and with th_p set to its optimal values. The indicators in Figure 6, however, imply lower preciseness compared to Figure 2, as expected.
The trade-off between the recognition of negatives and positives (in the case of removing the banking records from the households’ eligibility question in the test set of the Household_welfare_data), obtained by altering the th_p threshold from 0.0 to 0.9 in small (0.1) incremental steps, and its relationship with the accuracy_total, is represented in Figure 7. It is evident that, in this case, the administration has less room to move within the range of non-optimal threshold values since, in contrast to Figure 3, the represented True- and False-positive rate curves do not drift as far from each other. If the government, for example, decides to choose threshold values that achieve higher than 90% recognition of True positives in this case (which promises a higher recognition rate of lower-income groups), it must take into account allocating extra budget to be distributed to more than 60% False positives, who are, indeed, not eligible to be receivers of the basic income.

5. Results

The results of examining the feasibility of distinguishing lower-cash-accessible groups (positives) from higher-cash-accessible groups (negatives) by setting various cash accessibility thresholds th(n) and various distinguishing probability thresholds tp(n) are presented in Table 3 (where banking and non-banking welfare records of households are incorporated) and Table 4 (where only non-banking welfare records of households are incorporated). Each column represents one distinct decile number th_v, which can be the possible poverty boundary with regard to cash accessibility used to define the negatives and positives. Each of the first nine rows represents one distinct probability threshold th_p, upon which the government can decide to classify a household as a positive type if the predicted posterior probability of the positives exceeds th_p. Each cell within the first nine rows and nine columns represents the result of the BBN models’ predictions regarding 1000 randomly chosen persons from the test set, arranged as a confusion matrix as explained in Table 2.
The tp_ROC, tp_PR, AUC_ROC, AUC_PR, f1_score_ROC, f1_score_PR, and max_accuracy rows represent the optimal accuracy indicators corresponding to the entire test set within each column. The max_accuracy describes the maximum of the overall accuracy (accuracy_total) we can achieve in delivering correct predictions within each th_v(n).
The applied evaluation metrics reveal that, first of all, the probability of proper recognition of all vulnerable households without error by using the BBN is vanishingly small. This is especially a matter of concern due to the emergence of false-negative counts, i.e., vulnerable households that are mistakenly detected as wealthy classes, in almost all experiments. The rare results without False negatives involved comprise corner solutions, for example, tp(n = 1) and th(n = 9), which describe a situation where the administration is almost at the point of approximating a basic income system that, per definition, covers the entire population.
In both Table 3 and Table 4, the minimum level of max_accuracy appears when the threshold for distinguishing positives from negatives is set at the median cash accessibility level, e.g., th(n = 5), or next to it. The max_accuracy increases when we move towards distinguishing the extremely-high-cash-accessible groups, e.g., th(n = 9), from the rest of society or towards distinguishing the extra-low-cash-accessible groups, e.g., th(n = 1), from the rest of society. This relatively higher overall feasibility of appropriate predictions that distinguish extreme groups from the rest is also evident from the parameter AUC_ROC in Table 3 and Table 4. However, the high total accuracies obtained from the detection of extreme groups do not mean equal preciseness with regard to positives and negatives. This is revealed by observing the f1_scores obtained at the optimal threshold levels. The f1_score_ROC and f1_score_PR decrease if we move from th(n = 9) to th(n = 1). This mainly goes back to the increase in False negative counts and can be made evident by looking at the False negative counts within each row. That is, by setting the threshold at the left hand side of the deciles range, e.g., th(n = 1), we are capable of recognizing a relatively high number of negative-marked households; however, due to the imbalance in the data (a higher proportion of negatives), some predictions regarding truly positive households, which are the main targets of the basic income, are false. The problem of False-negative counts becomes less severe when setting the threshold at the right hand side of the deciles range, e.g., th(n = 9). In this case, all indicators, i.e., AUC_ROC, AUC_PR, f1_score_ROC, f1_score_PR, and max_accuracy, indicate satisfactory predictions. Regardless of the question of the optimum decile number th_v, the question of which probability threshold th_p should be set to achieve the maximum accuracy of detection can be answered to some extent by deviating from the optimal tp_ROC and tp_PR levels. A government can deviate from the optimal th_p levels, which often happen to be around 0.4, i.e., tp(n = 4), in our research, and set extremely low classification probability thresholds by reducing th_p to levels lower than the optimum, for example, to tp(n = 1, 2, or 3), to achieve the minimum possible number of False negative counts. However, this tolerance often comes at the cost of allocating extra budget to be distributed to the False positives. The room that administrations have to move back and forth in the range of non-optimal tp_ROC and tp_PR threshold values is wider in cases where high-resolution welfare attributes of the households are available (e.g., through including the households’ bank records) compared to cases working with a relatively limited number of welfare attributes (e.g., through excluding the households’ bank records). This is evident from the slopes of the True-positive and False-positive count curves in Figure 3 (relatively sharp curves) and Figure 7 (relatively mild curves).

6. Further Discussion and Conclusions

The theoretical notion of basic income prescribes, per definition, transferring equal amounts of money to individuals irrespective of their specific attributes. However, the practical implementation of basic income proposals can necessitate setting smart criteria attached to specific attributes of households for them to become eligible receivers. In this paper, we posed the question of whether machine learning can resolve this inconsistency between theory and practice. Can integrating machine learning change the future of basic income by confidently excluding society’s relatively wealthy groups from a basic income program while simultaneously letting the basic income program run broadly for the rest of society?
We analyzed this question by utilizing multidimensional and longitudinal welfare data comprising one and a half million individuals and a Bayesian belief network approach to examine the feasibility of predicting households’ vulnerability to future poverty based on the households’ existing welfare attributes.
We first converted the individual-level data to the household level and set the cash availability level of a household as the criterion upon which governments can decide whether a household is included in the list of cash transfer recipients within the context of a basic income program. We designed experiments to observe how precisely an administration can distinguish the relatively vulnerable groups of society from the relatively wealthier groups by employing a Bayesian belief model. To determine optimal feasible solutions, we changed the cash accessibility thresholds, as well as the classification probability thresholds, in small increments. The experiments were carried out once with a comprehensive set of household welfare attributes, in particular including their banking records, and once with a limited set of the households’ welfare attributes, i.e., without their banking records. We utilized standard machine learning metrics to evaluate the results of the experiments. The main emphasis of the metrics is put on the recognition of the relatively vulnerable groups, which are marked as positives throughout the study.
The metrics reveal that the probability of proper recognition of all vulnerable households without error by using Bayesian networks is indeed vanishingly small. The rare results without False negatives involved comprise merely corner solutions, which are equivalent to a situation where the administration is almost at the point of approximating a basic income system that distributes uniform cash values to all households.
However, the different metrics applied in our study show that it is to some extent possible to converge toward a balanced solution between a highly precise prediction of the relatively wealthier groups and the lowest possible error regarding false-negative counts. Three experimental set-ups in our study yielded near-optimal solutions: first, when we set the cash accessibility threshold criteria close to the deciles at the right hand side of the median level; second, when we set the classification probability threshold lower than the optimal classification probability thresholds; and third, when we incorporated more data into the welfare attribute profiles of the households, for example, by considering the households’ banking records. There are caveats for each of these experimental set-ups. First, the low precision in recognizing vulnerable groups when setting the threshold at the left hand side of the deciles range could have been triggered in our study by the imbalance in the training sample. This issue might theoretically be resolved by incorporating extra data from society’s vulnerable groups to be represented in the machine learning model’s training procedure or by using resampling or penalizing learning techniques. Yet, note that as the main goal of most basic income systems is to cover a broad range of society, setting the practical cash accessibility threshold close to the deciles at the right hand side of the median level makes sense if we assume that the real poverty line in various countries lies not far from the median-level people.
Then, setting the classification probability threshold lower than the optimal classification probability thresholds seems to be essential to obtain maximal recognition levels of truly vulnerable groups, at the cost of accepting extra government budget to be allocated to the basic income program. Furthermore, incorporating individual (or household-level) banking records in a machine learning algorithm to increase its preciseness is a subject of discussion outside of our paper’s scope, as individual banking records can be interpreted as sensitive information and might not be applicable in all circumstances. In all, the solution achieved in our study might be interpreted as a preliminary step, which is still not satisfactory due to the existence of a small percentage of False-negative individuals who could be falsely recognized and be disadvantaged by the households’ eligibility examination within a basic income system. Indeed, the mis-recognition of even an extremely small number of persons vulnerable to poverty can give a misleading impression with regard to the feasibility of integrating machine learning into the notion of basic income as a guarantee against the existence of people vulnerable to poverty in society. However, this does not mean that reaching an optimized solution by incorporating machine learning is not obtainable. We merely utilized one method, i.e., Bayesian networks, in our application, with the advantage of achieving interpretable results in a graphical form.
The need to discretize continuous variables is a limitation of Bayesian networks that may influence the accuracy of the results. The application of several other machine learning methods, especially deep neural network models, may produce outcomes with high accuracies as well. Achieving a high degree of preciseness by using the data set of this paper together with high interpretability by using other machine learning models remains a further step of our research. There are, furthermore, some other limits in the design of our study’s model, e.g., in the modelling of the time factor. While we incorporated information regarding the previous years’ welfare profiles to predict future welfare levels, we did not explicitly model the consecutive time points as influencing factors in the Bayesian network. Capturing the dynamics of the welfare dimensions through time can be achieved by applying other machine learning models, e.g., recurrent deep learning approaches and/or dynamic Bayesian belief networks, which are capable of relating variables to each other over adjacent time steps. In addition, while we used a monetary poverty measurement as the dependent variable of our study, i.e., a poverty line, applying a broader range of welfare variables to be predicted within a multidimensional vulnerability to future poverty concept can be considered another frontier of research to be accomplished.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data supporting reported results can be downloaded at: https://gitlab.uni-koblenz.de/hamedkhalili/bi (accessed on 10 January 2024).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jordan, B. The low road to basic income? Tax-beneft integration in the UK. J. Soc. Policy 2012, 41, 1–17. [Google Scholar]
  2. Mori, I. Half of UK Adults Would Support Universal Basic Income in Principle. Polling Commissioned by the Institute for Policy Research, University of Bath. 2017. Available online: https://www.ipsos.com/en-uk/half-uk-adults-would-support-universal-basic-income-principle (accessed on 1 January 2024).
  3. Coalition Agreement SPD, The Greens and FDP. Mehr Fortschritt Wagen. 2021. Available online: https://www.spd.de/fileadmin/Dokumente/Koalitionsvertrag/Koalitionsvertrag_2021-2025.pdf (accessed on 1 January 2024).
  4. Scientific Advisory Board at the Federal Ministry. Unconditional Basic Income. 2021. Available online: bmf-wissenschaftlicher-beirat.de (accessed on 1 January 2024).
  5. De Durana, A.; Rodrigu, G. New Developments in the National Guaranteed Minimum Income Scheme in Spain. European Social Policy Network (ESPN); European Commission: Brussels, Belgium, 2021. [Google Scholar]
  6. Perkiö, J. Basic income proposals in Finland, Germany and Spain. European Network for Alternative Thinking and Political Dialogue. 2013. Available online: https://www.researchgate.net/publication/260763235_Discussion_Paper_No_2_Basic_Income_Proposals_in_Finland_Germany_and_Spain/link/0f3175322d59949d21000000/download?_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6InB1YmxpY2F0aW9uIiwicGFnZSI6InB1YmxpY2F0aW9uIn19 (accessed on 1 January 2024).
  7. Yang, J.; Mohan, G.; Pipil, S.; Fukushi, K. Review on basic income (BI): Its theories and empirical cases. J. Soc. Econ. Dev. 2021, 23, 203–239. [Google Scholar]
  8. Widerquist, K. Perspectives on the guaranteed income, part I. J. Econ. Issues 2001, 35, 749–757. [Google Scholar]
  9. Gentilini, U.; Grosh, M.; Rigolini, J.; Yemtsov, R. Exploring Universal Basic Income; A Guide to Navigating Concepts, Evidence, and Practices; World Bank: Washington, DC, USA, 2020. [Google Scholar]
  10. Bill, J. The prospects for basic income. Soc. Policy Adm. 1988, 22, 115–123. [Google Scholar]
  11. Pateman, C. Democratizing citizenship: Some advantages of a basic income. Politics Soc. 2004, 32, 89–105. [Google Scholar] [CrossRef]
  12. Raventós, D. Basic Income: The Material Conditions of Freedom; Pluto Press: London, UK, 2007. [Google Scholar]
  13. Van der Veen, R. Real freedom versus reciprocity: Competing views on the justice of unconditional basic income. Political Stud. 1998, 46, 140–163. [Google Scholar]
  14. Van Parijs, P. Why surfers should be fed: The liberal case for an unconditional basic income. Philos. Public Aff. 1991, 20, 101–131. [Google Scholar]
  15. Lovett, F. Domination and distributive justice. J. Politics 2009, 71, 817–830. [Google Scholar]
  16. Standing, G. The precariat: From denizens to citizens? Polity 2012, 44, 588–608. [Google Scholar] [CrossRef]
  17. Von Gliszczynski, M. Social protection and basic income in global policy. Glob. Soc. Policy 2017, 17, 98–100. [Google Scholar] [CrossRef]
  18. Hoynes, H.; Rothstein, J. Universal Basic Income in the United States and Advanced Countries. Annu. Rev. Econ. 2019, 11, 929–958. [Google Scholar] [CrossRef]
  19. Jackson, A. Basic income: A social democratic perspective. Glob. Soc. Policy 2017, 17, 101–104. [Google Scholar] [CrossRef]
  20. OECD. Basic Income as a Policy Option: Can It Add Up? OECD: Paris, France, 2017. [Google Scholar]
  21. Zheng, Y.; Guerriero, M.; Lopez, E.; Haverman, P. Universal Basic Income; A Working Paper; UNDP China Office: Beijing, China, 2020.
  22. Fitzpatrick, T. Freedom and Security: An Introduction to the Basic Income Debate; Macmillan Press: London, UK, 1999. [Google Scholar]
  23. Gallardo, M. Measuring vulnerability to multidimensional poverty with Bayesian network classifiers. Econ. Anal. Policy 2022, 73, 492–512. [Google Scholar] [CrossRef]
  24. Ceriani, L.; Gigliarano, C. Multidimensional well-being: A Bayesian networks approach. Soc. Indic. Res. 2020, 152, 237–263. [Google Scholar] [CrossRef]
  25. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
  26. Mihaljević, B.; Bielza, C.; Larrañaga, P. Bayesian networks for interpretable machine learning and optimization. Neurocomputing 2021, 456, 648–665. [Google Scholar] [CrossRef]
  27. Gallardo, M. Identifying vulnerability to poverty: A critical survey. J. Econ. Surv. 2018, 32, 1074–1105. [Google Scholar] [CrossRef]
  28. Salecker, L.; Ahmadov, A.K.; Karimli, L. Contrasting Monetary and Multidimensional Poverty Measures in a Low-Income Sub-Saharan African Country. Soc. Indic. Res. 2020, 151, 547–574. [Google Scholar] [CrossRef]
  29. Bossert, W.; Chakravarty, S.; D’Ambrosio, C. Multidimensional poverty and material deprivation with discrete data. Rev. Income Wealth 2013, 59, 29–43. [Google Scholar]
  30. Chaudhuri, S.; Jalan, J.; Suryahadi, A. Assessing Household Vulnerability to Poverty from Cross-Sectional Data: A Methodology and Estimates from Indonesia; Department of Economics Discussion Paper Series; Columbia University: New York, NY, USA, 2002; Volume 102. [Google Scholar]
  31. Christiaensen, L.; Subbarao, K. Towards an understanding of household vulnerability in rural Kenya. J. Afr. Econ. 2005, 14, 520–558. [Google Scholar] [CrossRef]
  32. Calvo, C.; Dercon, S. Measuring Individual Vulnerability; Discussion Paper Series 229; Department of Economics, University of Oxford: Oxford, UK, 2005. [Google Scholar]
  33. Calvo, C.; Dercon, S. Vulnerability to Poverty, No 2007-03, CSAE Working Paper Series, Centre for the Study of African Economies, University of Oxford. 2007. Available online: https://EconPapers.repec.org/RePEc:csa:wpaper:2007-03 (accessed on 1 January 2024).
  34. Calvo, C.; Dercon, S. Vulnerability to individual and aggregate poverty. Soc. Choice Welf. 2013, 41, 721–740. [Google Scholar]
  35. Feeny, S.; McDonald, L. Vulnerability to multidimensional poverty: Findings from households in Melanesia. J. Dev. Stud. 2016, 52, 447–464. [Google Scholar] [CrossRef]
  36. Gallardo, M. Measuring vulnerability to multidimensional poverty. Soc. Indic. Res. 2020, 148, 67–103. [Google Scholar] [CrossRef]
  37. Grover, J. A Literature Review of Bayes’ Theorem and Bayesian Belief Networks (BBN). In Strategic Economic Decision-Making; Springer: New York, NY, USA, 2012; pp. 11–27. [Google Scholar]
  38. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
  39. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, 1st ed.; Morgan Kaufmann: San Francisco, CA, USA, 1988. [Google Scholar]
  40. Puga, J.; Krzywinski, M.; Altman, N. Bayes’ theorem. Nat. Methods 2015, 12, 277–278. [Google Scholar] [CrossRef]
  41. Barbrook-Johnson, P.; Penn, A.S. Bayesian Belief Networks. In Systems Mapping; Palgrave Macmillan: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  42. Neapolitan, R.E. Learning Bayesian Networks; Northeastern Illinois University: Chicago, IL, USA, 2003; Available online: http://www.cs.technion.ac.il/~dang/books/Learning%20Bayesian%20Networks(Neapolitan,%20Richard).pdf (accessed on 1 January 2024).
  43. Tsamardinos, I.; Brown, L.E.; Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach. Learn. 2006, 65, 31–78. [Google Scholar] [CrossRef]
  44. Koller, D.; Friedman, N. Probabilistic Graphical Models—Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009; Available online: http://mitp-content-server.mit.edu:18180/books/content/sectbyfn?collid=books_pres_0&id=7953&fn=9780262013192_sch_0001.pdf (accessed on 1 January 2024).
  45. Fawcett, T. An Introduction to ROC Analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  46. Powers, D.M.W. Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation. J. Mach. Learn. Technol. 2011, 2, 37–63. [Google Scholar]
Figure 1. The Bayesian belief network’s directed acyclic graph incorporating 30 welfare variables.
Figure 2. The test set’s ROC and PR metrics.
Figure 3. Government’s play room to recognize higher True positive rates.
Figure 4. Welfare attributes’ importance in providing evidence regarding negatives and positives.
Figure 5. The Bayesian belief network’s directed acyclic graph incorporating non-banking welfare variables.
Figure 6. The test set’s ROC and PR metrics by incorporating non-banking welfare variables.
Figure 7. Government’s play room to recognize higher True-positive rates by incorporating non-banking welfare variables.
Table 1. The 30 distinct types of registered information for each individual. Note that the dates are Iranian calendar dates corresponding to the beginning and the end of each Iranian year 1395, 1396, 1397, and 1398, converted to the European equivalents 2016–2017, 2017–2018, 2018–2019, and 2019–2020, respectively.
Person’s family profile and gender
  1. Person ID [integer].
  2. Parent ID [integer].
  3. Age [integer].
  4. Gender [binary].
Person’s living place
  5. Live in the city or not [binary].
Person’s income
  6. Total annual salary [float].
  7. Has a trade union license [binary].
  8. Is an employed taxable person [binary].
Person’s insurance and retirement status
  9. Has health insurance [binary].
  10. Is a pension fund insurer [binary].
  11. Is a pension fund retiree [binary].
Person’s transport and trips
  12. Number of foreign air trips [integer].
  13. Number of foreign land trips [integer].
  14. Total number of cars [integer].
  15. Total value of cars [float].
Person’s special health issues
  16. Is a special patient [binary].
  17. Is a disabled person [binary].
Person’s bank account records of the recent years
  18. Total income from bank interest within 20 March 2016–20 March 2017 [float].
  19. Total creditor turnover within 20 March 2016–20 March 2017 [float].
  20. Total debt within 20 March 2016–20 March 2017 [float].
  21. Average accounts balance within 20 March 2016–20 March 2017 [float].
  22. Total income from bank interest within 20 March 2017–20 March 2018 [float].
  23. Total creditor turnover within 20 March 2017–20 March 2018 [float].
  24. Total debt within 20 March 2017–20 March 2018 [float].
  25. Average accounts balance within 20 March 2017–20 March 2018 [float].
  26. Total income from bank interest within 20 March 2018–20 March 2019 [float].
  27. Total creditor turnover within 20 March 2018–20 March 2019 [float].
  28. Total debt within 20 March 2018–20 March 2019 [float].
  29. Average accounts balance within 20 March 2018–20 March 2019 [float].
  30. Average accounts balance within 20 March 2019–20 March 2020 [float].
Table 2. Explanation of the cells represented in Table 3 and Table 4 as confusion matrices.
Column: th(n = i); Row: tp(n = i)
TP count out of 1000 | FP count out of 1000
FN count out of 1000 | TN count out of 1000
Table 3. Feasibility of distinguishing lower-cash-accessible groups (positives) from higher-cash-accessible groups (negatives) by setting various cash accessibility thresholds th(n) and various distinguishing probability thresholds tp(n) if bank records are incorporated.
Index | th(n = 1) | th(n = 2) | th(n = 3) | th(n = 4) | th(n = 5) | th(n = 6) | th(n = 7) | th(n = 8) | th(n = 9)
tp(n = 1)11010920424729929338232147534259231969524581616490787
297522052922386172801117228765421806
tp(n = 2)964518710327113336219946119056119768718281212990273
43816376735054637402253243320914117653520
tp(n = 3)933817168248793251224481195411366671208009389756
4682353708736007447938395532703417918891037
tp(n = 4)873216262244693151014298852899655897877588934
52829627147761084500574266630746210311071859
tp(n = 5)78271565423962304924167251684642767826288528
61834687228261795509704427832259223361202265
tp(n = 6)602114642229522898239862506330626697755887624
79840787349262711051988452768875230431243169
tp(n = 7)43121172619229257583685048055605647605386621
9684910775012965014254311846411435196235581294172
tp(n = 8)391266814017193403042841444551437283784419
100849158768181662206561182486180362150256901456374
tp(n = 9)0000946108101991429320446246432480311
13986122477622767329159128750030138625527517515810482
tp_ROC | 0.074 | 0.185 | 0.265 | 0.366 | 0.474 | 0.627 | 0.740 | 0.821 | 0.910
tp_PR | 0.378 | 0.379 | 0.397 | 0.405 | 0.430 | 0.486 | 0.463 | 0.491 | 0.429
AUC_ROC | 0.907 | 0.897 | 0.897 | 0.897 | 0.894 | 0.897 | 0.900 | 0.909 | 0.918
AUC_PR | 0.653 | 0.760 | 0.804 | 0.852 | 0.885 | 0.917 | 0.924 | 0.968 | 0.985
f1_score_ROC | 0.557 | 0.686 | 0.763 | 0.804 | 0.835 | 0.864 | 0.881 | 0.903 | 0.924
f1_score_PR | 0.668 | 0.73 | 0.77 | 0.805 | 0.836 | 0.871 | 0.901 | 0.935 | 0.967
max_accuracy | 0.916 | 0.88 | 0.856 | 0.815 | 0.858 | 0.838 | 0.865 | 0.902 | 0.95
Table 4. Feasibility of distinguishing lower-cash-accessible groups (positives) from higher-cash-accessible groups (negatives) by setting various cash accessibility thresholds th(n) and various distinguishing probability thresholds tp(n) if bank records are not incorporated.
Index | th(n = 1) | th(n = 2) | th(n = 3) | th(n = 4) | th(n = 5) | th(n = 6) | th(n = 7) | th(n = 8) | th(n = 9)
tp(n = 1)9827418849630253438651452143059638370728480819290486
27601163273161595445120180000
tp(n = 2)811291612882703413694065093845853397012638061879040
447464350835354222031691126472925960
tp(n = 3)455811912623923932329047630556827568823980317590294
808178567066456683194917029128205351722
tp(n = 4)334711210118816326020345323554023266822279716590189
928289269511753213140672240571714070112737
tp(n = 5)00103851418522213037914949118262417177814589684
125875101711164610169479146326106221841213047812
tp(n = 6)000012166140643079940011958113375312488880
12587520479618462925154521837619728412715955681616
tp(n = 7)0000361392381864832677486846969585969
125875204796296682299571339427271326222208112974527
tp(n = 8)0000001251292716227357536076181245
1258752047963056953796043964484353763512392011319251
tp(n = 9)000000000010312253351972035
12587520479630569539160952547558740058628747317318461
tp_ROC | 0.127 | 0.255 | 0.338 | 0.408 | 0.524 | 0.609 | 0.720 | 0.793 | 0.933
tp_PR | 0.251 | 0.255 | 0.311 | 0.338 | 0.363 | 0.370 | 0.416 | 0.447 | 0.480
AUC_ROC | 0.826 | 0.794 | 0.777 | 0.766 | 0.761 | 0.758 | 0.765 | 0.775 | 0.783
AUC_PR | 0.367 | 0.504 | 0.584 | 0.669 | 0.743 | 0.798 | 0.867 | 0.921 | 0.963
f1_score_ROC | 0.412 | 0.538 | 0.604 | 0.65 | 0.693 | 0.735 | 0.766 | 0.813 | 0.863
f1_score_PR | 0.441 | 0.538 | 0.605 | 0.662 | 0.726 | 0.785 | 0.841 | 0.897 | 0.948
max_accuracy | 0.875 | 0.814 | 0.751 | 0.701 | 0.705 | 0.712 | 0.745 | 0.825 | 0.908

