Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

An open-source code implementation of our scheme is available at: http://fatecomputing.mpisws.org/
Abstract
Automated data-driven decision making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
© 2017 International World Wide Web Conference Committee (IW3C2), published under Creative Commons CC BY 4.0 License.
WWW 2017, April 3–7, 2017, Perth, Australia.
ACM 978-1-4503-4913-0/17/04.
http://dx.doi.org/10.1145/3038912.3052660
1 Introduction
The emergence and widespread usage of automated data-driven decision making systems in a wide variety of applications, ranging from content recommendations to pre-trial risk assessment, has raised concerns about their potential unfairness towards people with certain traits [8, 22, 24, 27]. Anti-discrimination laws in various countries prohibit unfair treatment of individuals based on specific traits, also called sensitive attributes (e.g., gender, race). These laws typically distinguish between two different notions of unfairness [5], namely, disparate treatment and disparate impact. More specifically, there is disparate treatment when the decisions an individual user receives change with changes to her sensitive attribute information, and there is disparate impact when the decision outcomes disproportionately benefit or hurt members of certain sensitive attribute value groups. A number of recent studies [10, 21, 29], including our own prior work [28], have focused on designing decision making systems that avoid one or both of these types of unfairness.
These prior designs have attempted to tackle unfairness in decision making scenarios where the historical decisions in the training data are biased (i.e., groups of people with certain sensitive attributes may have historically received unfair treatment) and there is no ground truth about the correctness of the historical decisions (i.e., one cannot tell whether a historical decision used during the training phase was right or wrong). However, when the ground truth for historical decisions is available, disproportionately beneficial outcomes for certain sensitive attribute value groups can be justified and explained by means of the ground truth. Therefore, disparate impact would not be a suitable notion of unfairness in such scenarios.
In this paper, we propose an alternative notion of unfairness, disparate mistreatment, especially well-suited for scenarios where ground truth is available for the historical decisions used during the training phase. We say that a decision making process suffers from disparate mistreatment with respect to a given sensitive attribute (e.g., race) if the misclassification rates differ for groups of people having different values of that sensitive attribute (e.g., blacks and whites). For example, in the case of the NYPD Stop-question-and-frisk program (SQF) [1], where pedestrians are stopped on the suspicion of possessing an illegal weapon [12], having different weapon discovery rates for different races would constitute a case of disparate mistreatment.
Figure 1: Ground truth and decisions of three fictitious classifiers (C1, C2, C3) on whether to stop a pedestrian.

User       Clothing  Prox.   Ground Truth    Classifier's Decision to Stop
(Gender)   Bulge     Crime   (Has Weapon)    C1    C2    C3
Male 1     1         1       1 (✓)           1     1     1
Male 2     1         0       1 (✓)           1     1     0
Male 3     0         1       0 (✗)           1     0     1
Female 1   1         1       1 (✓)           1     0     1
Female 2   1         0       0 (✗)           1     1     1
Female 3   0         0       1 (✓)           0     1     0

Suffers from:   Disp. Treat.   Disp. Imp.   Disp. Mist.
C1              ✗              ✓            ✓
C2              ✓              ✗            ✓
C3              ✓              ✗            ✗
In addition to misclassifications in general, depending on the application scenario, one might want to measure disparate mistreatment with respect to different kinds of misclassifications. For example, in pre-trial risk assessments, the decision making process might only be required to ensure that the false positive rates are equal for all groups, since it may be more acceptable to let a guilty person go than to incarcerate an innocent person. (Footnote: "It is better that ten guilty persons escape than that one innocent suffer" (William Blackstone).) On the other hand, in loan approval systems, one might instead favor a decision making process in which the false negative rates are equal, to ensure that deserving (positive class) people with a certain sensitive attribute value are not denied (negative class) loans disproportionately. Similarly, depending on the application scenario at hand and the cost of each type of misclassification, one may choose to measure disparate mistreatment using false discovery and false omission rates, instead of false positive and false negative rates (see Table 1).
In the remainder of the paper, we first formalize disparate treatment, disparate impact and disparate mistreatment in the context of (binary) classification. Then, we introduce intuitive measures of disparate mistreatment for decision boundary-based classifiers and show that, for a wide variety of linear and nonlinear classifiers, these measures can be incorporated into their formulation as convex-concave constraints. The resulting formulation can be solved efficiently using recent advances in convex-concave programming [26]. Finally, we experiment with synthetic as well as real-world datasets and show that our methodology can be effectively used to avoid disparate mistreatment.
2 Background and Related Work
In this section, we first elaborate on the three different notions of unfairness in automated decision making systems using an illustrative example and then provide an overview of the related literature.
Disparate mistreatment. Intuitively, disparate mistreatment can arise in any automated decision making system whose outputs (or decisions) are not perfectly (i.e., 100%) accurate. For example, consider a decision making system that uses a logistic regression classifier to provide binary outputs (say, positive and negative) on a set of people. If the items in the training data with positive and negative class labels are not linearly separable, as is often the case in many real-world application scenarios, the system will misclassify (i.e., produce false positives, false negatives, or both, on) some people. In this context, the misclassification rates may be different for groups of people having different values of sensitive attributes (e.g., males and females; blacks and whites) and thus disparate mistreatment may arise.
Figure 1 provides an example of decision making systems (classifiers) with and without disparate mistreatment. In all cases, the classifiers need to decide whether to stop a pedestrian—on the suspicion of possessing an illegal weapon—using a set of features such as bulge in clothing and proximity to a crime scene. The "ground truth" on whether a pedestrian actually possesses an illegal weapon is also shown. We show decisions made by three different classifiers C1, C2 and C3. We deem C1 and C2 unfair due to disparate mistreatment because their rates of erroneous decisions for males and females are different: C1 has different false negative rates for males and females (0 and 1/2, respectively), whereas C2 has different false positive rates (0 and 1) as well as different false negative rates (0 and 1/2) for males and females.
Disparate treatment. In contrast to disparate mistreatment, disparate treatment arises when a decision making system provides different outputs for groups of people with the same (or similar) values of non-sensitive attributes (or features) but different values of sensitive attributes.
In Figure 1, we deem C2 and C3 to be unfair due to disparate treatment since C2's (C3's) decisions for Male 1 and Female 1 (Male 2 and Female 2) are different even though they have the same values of non-sensitive attributes. Here, disparate treatment corresponds to a very intuitive notion of fairness: two otherwise similar persons should not be treated differently solely because of a difference in gender.
Disparate impact. Finally, disparate impact arises when a decision making system provides outputs that benefit (hurt) a group of people sharing a value of sensitive attribute more frequently than other groups of people.
In Figure 1, assuming that a pedestrian benefits from a decision of not being stopped, we deem C1 as unfair due to disparate impact because the fractions of males and females that were stopped are different (1 and 2/3, respectively).
Application scenarios for disparate impact vs. disparate mistreatment. Note that unlike in the case of disparate mistreatment, the notion of disparate impact is independent of the "ground truth" information about the decisions, i.e., whether or not the decisions are correct or valid. Thus, the notion of disparate impact is particularly appealing in application scenarios where ground truth information for decisions does not exist and the historical decisions used during training are not reliable and thus cannot be trusted. Unreliability of historical decisions for automated decision making systems is particularly concerning in scenarios like recruiting or loan approvals, where biased judgments by humans in the past may be used when training classifiers for the future. In such application scenarios, it is hard to distinguish correct and incorrect decisions, making it hard to assess or use disparate mistreatment as a notion of fairness.
However, in scenarios where ground truth information for decisions can be obtained, disparate impact can be quite misleading as a notion of fairness. That is, in scenarios where the validity of decisions can be reliably ascertained, it would be possible to distinguish disproportionality in decision outcomes for sensitive groups that arises from justifiable reasons (e.g., qualification of the candidates) and disproportionality that arises for non-justifiable reasons (i.e., discrimination against certain groups). By requiring decision outcomes to be proportional, disparate impact risks introducing reverse-discrimination against qualified candidates. Such practices have previously been deemed unlawful by courts (Ricci vs. DeStefano, 2009). In contrast, when the correctness of decisions can be determined, disparate mistreatment can be accurately assessed and also avoids reverse-discrimination, making it a more appealing notion of fairness.
Related Work. There have been a number of studies, including our own prior work [28], proposing methods for detecting [10, 21, 23, 25] and removing [9, 10, 13, 16, 17, 23, 28, 29] unfairness when it is defined in terms of disparate treatment, disparate impact or both. However, as pointed out earlier, the disparate impact notion might be less meaningful in scenarios where ground truth decisions are available.
A number of previous studies have pointed out racial disparities in both automated [4] as well as human [12, 14] decision making systems related to criminal justice. For example, a recent work by Goel et al. [12] detects racial disparities in the NYPD SQF program, inspired by a notion of unfairness similar to our notion of disparate mistreatment. More specifically, it uses ground truth (stops leading to successful discovery of an illegal weapon on the suspect) to show that blacks were treated unfairly, since false positive rates in stops were higher for them than for whites. The study's findings provide further justification for the need for data-driven decision making systems without disparate mistreatment.
A recent work by Hardt et al. [15] (conducted concurrently with our work) proposes a method to achieve a fairness notion equivalent to our notion of disparate mistreatment. This method works by post-processing the probability estimates of an unfair classifier to learn different decision thresholds for different sensitive attribute value groups, and applying these group-specific thresholds at decision making time. Since this method requires the sensitive attribute information at decision time, it cannot be used in cases where sensitive attribute information is unavailable (e.g., due to privacy reasons) or prohibited from being used due to disparate treatment laws [5].
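The group-specific thresholding idea can be sketched in a few lines; the following is our own minimal illustration, not Hardt et al.'s implementation, and the scores and thresholds are hypothetical:

```python
import numpy as np

def apply_group_thresholds(scores, z, thresholds):
    """Post-process probability estimates with a separate decision
    threshold per sensitive-attribute group (z is needed at decision time)."""
    scores = np.asarray(scores, dtype=float)
    z = np.asarray(z)
    yhat = np.empty(len(scores), dtype=int)
    for group, t in thresholds.items():
        mask = (z == group)
        yhat[mask] = np.where(scores[mask] >= t, 1, -1)
    return yhat

# Hypothetical scores and groups; the thresholds 0.4 / 0.6 are made up.
scores = [0.30, 0.50, 0.70, 0.30, 0.50, 0.70]
z      = [0,    0,    0,    1,    1,    1]
print(apply_group_thresholds(scores, z, {0: 0.4, 1: 0.6}))
# → [-1  1  1 -1 -1  1]
```

Note that the function needs `z` as an input, which is exactly why this approach cannot avoid disparate treatment.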
3 Formalizing Notions of Fairness
In a binary classification task, the goal is to learn a mapping $f(\mathbf{x})$ between user feature vectors $\mathbf{x} \in \mathbb{R}^d$ and class labels $y \in \{-1, 1\}$. Learning this mapping is often achieved by finding a decision boundary, parameterized by $\boldsymbol{\theta}$, in the feature space that minimizes a certain loss $L(\boldsymbol{\theta})$, i.e., $\boldsymbol{\theta}^* = \operatorname{argmin}_{\boldsymbol{\theta}} L(\boldsymbol{\theta})$, computed on a training dataset $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$. Then, for a given unseen feature vector $\mathbf{x}$, the classifier predicts the class label $\hat{y} = 1$ if $d_{\boldsymbol{\theta}}(\mathbf{x}) \ge 0$ and $\hat{y} = -1$ otherwise, where $d_{\boldsymbol{\theta}}(\mathbf{x})$ denotes the signed distance from $\mathbf{x}$ to the decision boundary. Assume that each user has an associated sensitive feature $z$. For ease of exposition, we assume $z$ to be binary, i.e., $z \in \{0, 1\}$. However, our setup can be easily generalized to categorical as well as multiple sensitive features.
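For a linear model, the prediction rule can be sketched as follows (a minimal illustration of ours; the names and values are made up):

```python
import numpy as np

# For a linear boundary, the signed distance is d_theta(x) = theta . x
# (with any intercept folded into theta as an extra coordinate).
def predict(theta, X):
    d = X @ theta                     # signed distances to the boundary
    return np.where(d >= 0, 1, -1)    # yhat = 1 iff d_theta(x) >= 0

theta = np.array([1.0, -2.0])
X = np.array([[3.0, 1.0], [1.0, 1.0]])
print(predict(theta, X))              # distances are 1.0 and -1.0 → [ 1 -1]
```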
Table 1: Confusion matrix for binary classification and the misclassification measures derived from it.

                          Predicted label ŷ = 1     Predicted label ŷ = −1
True label y = 1          True positive              False negative
True label y = −1         False positive             True negative

Row-wise rates: False Negative Rate (FNR) = P(ŷ = −1 | y = 1); False Positive Rate (FPR) = P(ŷ = 1 | y = −1).
Column-wise rates: False Discovery Rate (FDR) = P(y = −1 | ŷ = 1); False Omission Rate (FOR) = P(y = 1 | ŷ = −1).
Overall Misclassification Rate (OMR) = P(ŷ ≠ y).
Given the above terminology, we can formally express the absence of disparate treatment, disparate impact and disparate mistreatment as follows:
Existing notion 1: Avoiding disparate treatment. A binary classifier does not suffer from disparate treatment if:

$$P(\hat{y} \mid \mathbf{x}, z) = P(\hat{y} \mid \mathbf{x}) \qquad (1)$$

That is, if the probability that the classifier outputs a specific value of $\hat{y}$ given a feature vector $\mathbf{x}$ does not change after observing the sensitive feature $z$, there is no disparate treatment.
Existing notion 2: Avoiding disparate impact. A binary classifier does not suffer from disparate impact if:

$$P(\hat{y} = 1 \mid z = 0) = P(\hat{y} = 1 \mid z = 1) \qquad (2)$$

That is, if the probability that a classifier assigns a user to the positive class, $\hat{y} = 1$, is the same for both values of the sensitive feature $z$, then there is no disparate impact.
New notion 3: Avoiding disparate mistreatment.
A binary classifier does not suffer from disparate mistreatment if the misclassification rates for different groups of people having different values of the sensitive feature are the same. Table 1 describes various ways of measuring misclassification rates.
Specifically, misclassification rates can be measured as fractions over the class distribution in the ground truth labels, i.e., as false positive and false negative rates, or over the class distribution in the predicted labels, i.e., as false omission and false discovery rates.
(Footnote: In prediction tasks where a positive prediction entails a large cost (e.g., the cost involved in the treatment of a disease), one might be more interested in measuring error rates as fractions over the class distribution in the predicted labels, rather than over the class distribution in the ground truth labels, e.g., to ensure that the false discovery rates, instead of false positive rates, are the same for all groups.)
Consequently, the absence of disparate mistreatment in a binary classification task can be specified with respect to the different misclassification measures as follows:

overall misclassification rate (OMR):
$$P(\hat{y} \neq y \mid z = 0) = P(\hat{y} \neq y \mid z = 1) \qquad (3)$$

false positive rate (FPR):
$$P(\hat{y} \neq y \mid y = -1, z = 0) = P(\hat{y} \neq y \mid y = -1, z = 1) \qquad (4)$$

false negative rate (FNR):
$$P(\hat{y} \neq y \mid y = 1, z = 0) = P(\hat{y} \neq y \mid y = 1, z = 1) \qquad (5)$$

false omission rate (FOR):
$$P(\hat{y} \neq y \mid \hat{y} = -1, z = 0) = P(\hat{y} \neq y \mid \hat{y} = -1, z = 1) \qquad (6)$$

false discovery rate (FDR):
$$P(\hat{y} \neq y \mid \hat{y} = 1, z = 0) = P(\hat{y} \neq y \mid \hat{y} = 1, z = 1) \qquad (7)$$
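These five conditions can be checked empirically on a labeled sample. The following sketch (function names are ours) computes the between-group gap for each measure:

```python
import numpy as np

def rate(err_mask, cond_mask):
    """P(misclassified | cond): mean of err over rows where cond holds."""
    return float(err_mask[cond_mask].mean())

def mistreatment_gaps(y, yhat, z):
    """Gap (group z=0 minus group z=1) for each misclassification measure."""
    y, yhat, z = map(np.asarray, (y, yhat, z))
    err = (yhat != y)
    gaps = {}
    for name, cond in [("OMR", np.ones_like(y, bool)),
                       ("FPR", y == -1), ("FNR", y == 1),
                       ("FOR", yhat == -1), ("FDR", yhat == 1)]:
        gaps[name] = rate(err, cond & (z == 0)) - rate(err, cond & (z == 1))
    return gaps

# Hypothetical labels and predictions: group z=0 suffers all the errors.
y    = np.array([ 1,  1, -1, -1,  1,  1, -1, -1])
yhat = np.array([ 1, -1,  1, -1,  1,  1, -1, -1])
z    = np.array([ 0,  0,  0,  0,  1,  1,  1,  1])
print(mistreatment_gaps(y, yhat, z))   # every gap is 0.5 here
```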
In the following section, we introduce a method to eliminate disparate mistreatment from decision boundary-based classifiers when disparate mistreatment is defined in terms of the overall misclassification rate, false positive rate and false negative rate. Eliminating disparate mistreatment when it is defined in terms of false discovery rate and false omission rate presents significant additional challenges due to the computational complexities involved, and we leave it as a direction to be thoroughly explored in future work.
Satisfying multiple fairness notions simultaneously. In certain application scenarios, it might be desirable to satisfy more than one of the notions of fairness defined above in Eqs. (1)–(7). In this paper, we consider scenarios where we attempt to avoid disparate treatment as well as disparate mistreatment measured as overall misclassification rate, false positive rate and false negative rate simultaneously, i.e., satisfy Eqs. (1) and (3)–(5).
Some recent works [7, 18] have investigated the impossibility of simultaneously satisfying multiple notions of fairness. Chouldechova [7] and Kleinberg et al. [18] show that, when the fraction of users with positive class labels differs between members of different sensitive attribute value groups, it is impossible to construct classifiers that are equally well-calibrated (where well-calibration essentially measures the false discovery and false omission rates of a classifier) and also satisfy the equal false positive and false negative rate criterion (except for a "dumb" classifier that assigns all examples to a single class). These results suggest that satisfying all five criteria of disparate mistreatment (Table 1) simultaneously is impossible when the underlying distribution of data is different for different groups. However, in practice, it may still be interesting to explore the best, even if imperfect, extent of fairness a classifier can achieve. In the next section, we allow for bounded imperfections in our new fairness notions by allowing the left- and right-hand sides of Eqs. (3)–(5) to differ by no more than a small threshold.
4 Classifiers without Disparate Mistreatment
In this section, we describe how to train decision boundary-based classifiers (e.g., logistic regression, SVMs) that do not suffer from disparate mistreatment. These classifiers generally learn the optimal decision boundary by minimizing a convex loss $L(\boldsymbol{\theta})$. The convexity of the loss ensures that a global optimum can be found efficiently. In order to ensure that the learned boundary is fair, i.e., it does not suffer from disparate mistreatment, one could incorporate the appropriate condition from Eqs. (3)–(5) (based on which kind of misclassifications disparate mistreatment is being defined for) into the classifier formulation. For example:
$$\begin{array}{ll} \min_{\boldsymbol{\theta}} & L(\boldsymbol{\theta}) \\ \text{s.t.} & P(\hat{y} \neq y \mid z = 0) - P(\hat{y} \neq y \mid z = 1) \le \epsilon, \\ & P(\hat{y} \neq y \mid z = 0) - P(\hat{y} \neq y \mid z = 1) \ge -\epsilon, \end{array} \qquad (8)$$

where $\epsilon \in \mathbb{R}^+$, and the smaller $\epsilon$ is, the more fair the decision boundary would be. The above formulation ensures that the classifier chooses the optimal decision boundary within the space of fair boundaries specified by the constraints. However, since the conditions in Eqs. (3)–(5) are, in general, non-convex in $\boldsymbol{\theta}$, solving the constrained optimization problem defined by (8) seems difficult.
To overcome the above difficulty, we propose a tractable proxy, inspired by the disparate impact proxy proposed by Zafar et al. [28]. In particular, we propose to measure disparate mistreatment using the covariance between the users' sensitive attributes and the signed distance between the feature vectors of misclassified users and the classifier decision boundary, i.e.:
$$\operatorname{Cov}(z, g_{\boldsymbol{\theta}}(y, \mathbf{x})) = \mathbb{E}\left[(z - \bar{z})\, g_{\boldsymbol{\theta}}(y, \mathbf{x})\right] - \mathbb{E}\left[z - \bar{z}\right] \bar{g}_{\boldsymbol{\theta}}(y, \mathbf{x}) \approx \frac{1}{N} \sum_{i=1}^{N} (z_i - \bar{z})\, g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \qquad (9)$$

where the term $\mathbb{E}\left[z - \bar{z}\right] \bar{g}_{\boldsymbol{\theta}}(y, \mathbf{x})$ cancels out since $\mathbb{E}\left[z - \bar{z}\right] = 0$, and the function $g_{\boldsymbol{\theta}}(y, \mathbf{x})$ is defined as:

$$g_{\boldsymbol{\theta}}(y, \mathbf{x}) = \min\left(0,\; y\, d_{\boldsymbol{\theta}}(\mathbf{x})\right) \qquad (10)$$
$$g_{\boldsymbol{\theta}}(y, \mathbf{x}) = \min\left(0,\; \tfrac{1 - y}{2}\, y\, d_{\boldsymbol{\theta}}(\mathbf{x})\right) \qquad (11)$$
$$g_{\boldsymbol{\theta}}(y, \mathbf{x}) = \min\left(0,\; \tfrac{1 + y}{2}\, y\, d_{\boldsymbol{\theta}}(\mathbf{x})\right) \qquad (12)$$

which approximate, respectively, the conditions in Eqs. (3)–(5). Note that, if a decision boundary satisfies Eqs. (3)–(5), the covariance defined above for that boundary will be close to zero, i.e., $\operatorname{Cov}(z, g_{\boldsymbol{\theta}}(y, \mathbf{x})) \approx 0$. Moreover, in linear models for classification, such as logistic regression or linear SVMs, the decision boundary is simply the hyperplane defined by $\boldsymbol{\theta}^T \mathbf{x} = 0$, and therefore $d_{\boldsymbol{\theta}}(\mathbf{x}) = \boldsymbol{\theta}^T \mathbf{x}$.
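The covariance proxy is straightforward to compute; here is a small sketch of ours, assuming a linear boundary so that the signed distance is a dot product (names and the toy values are made up):

```python
import numpy as np

# Signed distance for a linear boundary: d_theta(x) = theta . x
def dist(theta, X):
    return X @ theta

# g_theta from Eqs. (10)-(12): only misclassified points contribute,
# and the FPR/FNR variants restrict attention to one true class.
def g(theta, X, y, mode="OMR"):
    yd = y * dist(theta, X)
    if mode == "FPR":
        yd = (1 - y) / 2 * yd      # keeps only y = -1 points
    elif mode == "FNR":
        yd = (1 + y) / 2 * yd      # keeps only y = +1 points
    return np.minimum(0.0, yd)

def cov_proxy(theta, X, y, z, mode="OMR"):
    # Eq. (9): (1/N) * sum_i (z_i - zbar) * g_theta(y_i, x_i)
    return float(np.mean((z - z.mean()) * g(theta, X, y, mode)))

# Tiny hypothetical example: group z=1 holds one of the misclassified points.
theta = np.array([1.0, 0.0])
X = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
z = np.array([0.0, 0.0, 0.0, 1.0])
print(cov_proxy(theta, X, y, z))          # OMR proxy → 0.125
```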
Given the above proxy, one can rewrite (8) as:

$$\begin{array}{ll} \min_{\boldsymbol{\theta}} & L(\boldsymbol{\theta}) \\ \text{s.t.} & \frac{1}{N} \sum_{i=1}^{N} (z_i - \bar{z})\, g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \le c, \\ & \frac{1}{N} \sum_{i=1}^{N} (z_i - \bar{z})\, g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \ge -c, \end{array} \qquad (13)$$

where the covariance threshold $c \in \mathbb{R}^+$ controls how much disparate mistreatment the boundary may exhibit. (Footnote: Note that if one wants to have both equal false positive and equal false negative rates, one can apply separate constraints with $g_{\boldsymbol{\theta}}$ defined as in both (11) and (12).)
Solving the problem efficiently. While the constraints proposed in (13) can be an effective proxy for fairness, they are still non-convex, making it challenging to solve the optimization problem in (13) efficiently. Next, we will convert these constraints into a Disciplined Convex-Concave Program (DCCP), which can be solved efficiently by leveraging recent advances in convex-concave programming [26].
First, consider the constraint described in (13), i.e.,

$$\sum_{i=1}^{N} (z_i - \bar{z})\, g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \;\bowtie\; c \qquad (14)$$

where $\bowtie$ may denote '$\le$' or '$\ge$'. Also, we drop the constant $\frac{1}{N}$ for the sake of simplicity. Since the sensitive feature is binary, i.e., $z \in \{0, 1\}$, we can split the sum in the above expression into two terms:
$$\sum_{i \in \mathcal{D}_0} (0 - \bar{z})\, g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) + \sum_{i \in \mathcal{D}_1} (1 - \bar{z})\, g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \;\bowtie\; c \qquad (15)$$

where $\mathcal{D}_0$ and $\mathcal{D}_1$ are the subsets of the training dataset taking values $z = 0$ and $z = 1$, respectively. Define $N_0 = |\mathcal{D}_0|$ and $N_1 = |\mathcal{D}_1|$; then $\bar{z} = \frac{N_1}{N}$, and one can rewrite (15) as:

$$-\frac{N_1}{N} \sum_{i \in \mathcal{D}_0} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) + \frac{N_0}{N} \sum_{i \in \mathcal{D}_1} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \;\bowtie\; c \qquad (16)$$

which, given that $g_{\boldsymbol{\theta}}$ is concave in $\boldsymbol{\theta}$ (for a linear $d_{\boldsymbol{\theta}}$ it is a minimum of linear functions), results in a convex-concave (or, difference of convex) function.
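The equivalence between the weighted sum over all points and the split form can be sanity-checked numerically; a quick illustration with random stand-in values for $g_{\boldsymbol{\theta}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=10).astype(float)
gvals = rng.normal(size=10)          # stand-in values for g_theta(y_i, x_i)

# Left side: sum_i (z_i - zbar) g_i, before splitting
lhs = float(np.sum((z - z.mean()) * gvals))

# Right side: the split form of Eq. (16), using zbar = N1 / N
N = len(z)
N0, N1 = int(np.sum(z == 0)), int(np.sum(z == 1))
rhs = float(-(N1 / N) * gvals[z == 0].sum() + (N0 / N) * gvals[z == 1].sum())

print(np.isclose(lhs, rhs))          # True: the two forms agree exactly
```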
Finally, we can rewrite the problem defined by (13) as:

$$\begin{array}{ll} \min_{\boldsymbol{\theta}} & L(\boldsymbol{\theta}) \\ \text{s.t.} & -\frac{N_1}{N} \sum_{i \in \mathcal{D}_0} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) + \frac{N_0}{N} \sum_{i \in \mathcal{D}_1} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \le c, \\ & -\frac{N_1}{N} \sum_{i \in \mathcal{D}_0} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) + \frac{N_0}{N} \sum_{i \in \mathcal{D}_1} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \ge -c, \end{array} \qquad (17)$$

which is a Disciplined Convex-Concave Program (DCCP) for any convex loss $L(\boldsymbol{\theta})$, and can be efficiently solved using well-known heuristics such as the one proposed by Shen et al. [26]. Next, we particularize the formulation given by (17) for a logistic regression classifier [6]. (Footnote: Our fairness constraints can be easily incorporated into other boundary-based classifiers such as (non)linear SVMs.)
Logistic regression without disparate mistreatment. In logistic regression, the optimal decision boundary can be found by solving a maximum likelihood problem of the form $\min_{\boldsymbol{\theta}} -\sum_{i=1}^{N} \log p(y_i \mid \mathbf{x}_i, \boldsymbol{\theta})$ in the training phase. Hence, a fair logistic regressor can be trained by solving the following constrained optimization problem:

$$\begin{array}{ll} \min_{\boldsymbol{\theta}} & -\sum_{i=1}^{N} \log p(y_i \mid \mathbf{x}_i, \boldsymbol{\theta}) \\ \text{s.t.} & -\frac{N_1}{N} \sum_{i \in \mathcal{D}_0} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) + \frac{N_0}{N} \sum_{i \in \mathcal{D}_1} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \le c, \\ & -\frac{N_1}{N} \sum_{i \in \mathcal{D}_0} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) + \frac{N_0}{N} \sum_{i \in \mathcal{D}_1} g_{\boldsymbol{\theta}}(y_i, \mathbf{x}_i) \ge -c. \end{array} \qquad (18)$$
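The paper solves (18) as a constrained DCCP. As a rough, non-equivalent illustration of the same idea, the following sketch instead adds the OMR covariance proxy as a penalty term and minimizes with plain subgradient descent; this is a simplification of ours, not the authors' method, and all hyperparameters and toy values are arbitrary assumptions:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_penalized(X, y, z, lam=0.0, lr=0.1, iters=2000):
    """Logistic regression (labels in {-1, +1}) plus a penalty
    lam * |covariance proxy| using the OMR proxy of Eq. (9).
    A subgradient-descent simplification, for illustration only."""
    n, dim = X.shape
    theta = np.zeros(dim)
    zc = z - z.mean()
    for _ in range(iters):
        # gradient of the average logistic loss
        p = sigmoid(y * (X @ theta))
        grad = -(X.T @ (y * (1 - p))) / n
        # subgradient of |(1/n) sum_i (z_i - zbar) min(0, y_i theta.x_i)|
        yd = y * (X @ theta)
        active = (yd < 0).astype(float)       # misclassified points
        cov = np.mean(zc * np.minimum(0.0, yd))
        grad += lam * np.sign(cov) * (X.T @ (zc * y * active)) / n
        theta -= lr * grad
    return theta

# Hypothetical linearly separable toy data (values are made up).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
z = np.array([0.0, 1.0, 0.0, 1.0])
theta = train_penalized(X, y, z, lam=1.0)
loss = float(np.mean(np.log1p(np.exp(-y * (X @ theta)))))
print(theta, loss)
```

A penalty does not guarantee the hard bound $c$ of (18); for that, the DCCP formulation above is the appropriate tool.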
Simultaneously removing disparate treatment. Note that the above formulation for removing disparate mistreatment provides the flexibility to remove disparate treatment as well. That is, since our formulation does not require the sensitive attribute information at decision time, by keeping the feature vectors $\mathbf{x}$ disjoint from the sensitive attribute $z$, one can remove disparate mistreatment and disparate treatment simultaneously.
5 Evaluation
In this section, we conduct experiments on synthetic as well as real-world datasets to evaluate the effectiveness of our scheme in controlling disparate mistreatment. To this end, we first generate several synthetic datasets that illustrate different variations of disparate mistreatment and show that our method can effectively remove disparate mistreatment in each of the variations, often at a small cost in accuracy. Then, we conduct experiments on the ProPublica COMPAS dataset [19] to show the effectiveness of our method on a real-world dataset. On both the synthetic and real-world datasets, we compare the performance of our scheme with a baseline algorithm and a recently proposed method [15].
All of our experiments are conducted using logistic regression classifiers. To ensure the robustness of the experimental findings, for all of the datasets, we repeatedly (five times) split the data uniformly at random into train (50%) and test (50%) sets and report the average statistics for accuracy and fairness.
Evaluation metrics. In this evaluation, we consider that one wants to remove disparate mistreatment when it is measured in terms of false positive rate and false negative rate (Eqs. (4) and (5)). Specifically, we quantify the disparate mistreatment incurred by a classifier as the between-group differences in these rates:

$$D_{\text{FPR}} = \text{FPR}_{z=0} - \text{FPR}_{z=1}, \qquad D_{\text{FNR}} = \text{FNR}_{z=0} - \text{FNR}_{z=1},$$

where the closer the values of $D_{\text{FPR}}$ and $D_{\text{FNR}}$ are to 0, the lower the degree of disparate mistreatment.
5.1 Experiments on synthetic data
In this section, we empirically study the tradeoff between fairness and accuracy in a classifier that suffers from disparate mistreatment. To this end, we first start with a simple scenario in which the classifier is unfair in terms of only false positive rate or false negative rate. Then, we focus on a more complex scenario in which the classifier is unfair in terms of both.
5.1.1 Disparate mistreatment on only false positive rate or false negative rate
The first scenario considers a case where a classifier trained on the ground truth data leads to disparate mistreatment in terms of only the false positive rate (false negative rate), while being fair with respect to the false negative rate (false positive rate), i.e., $D_{\text{FPR}} \neq 0$ and $D_{\text{FNR}} = 0$ (or, alternatively, $D_{\text{FNR}} \neq 0$ and $D_{\text{FPR}} = 0$).
Experimental setup. We first generate binary class labels () and corresponding sensitive attribute values (), both uniformly at random, and assign a twodimensional user feature vector () to each of the points. To ensure different distributions for negative classes of the two sensitive attribute value groups (so that the two groups have different false positive rates), the user feature vectors are sampled from the following distributions (we sample points from each distribution):
Next, we train an (unconstrained) logistic regression classifier on this data. The classifier is able to achieve an accuracy of . However, due to the difference in feature distributions for the two sensitive attribute value groups, it achieves and , which constitutes a clear case of disparate mistreatment in terms of false positive rate.
We then train several logistic regression classifiers on the same training data subject to fairness constraints on false positive rate, \ie, we train a logistic regressor by solving problem (18), where is given by Eq. (11). Each classifier constrains the false positive rate covariance () with a multiplicative factor () of the covariance of the unconstrained classifier (), that is, . Ideally, a smaller , and hence a smaller , would result in more fair outcomes.
Results. Figure 2 summarizes the results for this scenario by showing (a) the relation between the decision-boundary covariance and the false positive rates for both sensitive attribute values; (b) the trade-off between accuracy and fairness; and (c) the decision boundaries for both the unconstrained classifier (solid) and the fair constrained classifier (dashed). In this figure, we observe that: i) as the fairness constraint value goes to zero, the false positive rates for both groups ( and ) converge, and hence the outcomes of the classifier become more fair, i.e., , while remains close to zero (the invariance of may however change depending on the underlying distribution of the data); ii) ensuring lower values of disparate mistreatment leads to a larger drop in accuracy.
5.1.2 Disparate mistreatment on both false positive rate and false negative rate
In this section, we consider a more complex scenario, where the outcomes of the classifier suffer from disparate mistreatment with respect to both the false positive rate and the false negative rate, i.e., both $D_{\text{FPR}}$ and $D_{\text{FNR}}$ are non-zero. This scenario can in turn be split into two cases:
I. $D_{\text{FPR}}$ and $D_{\text{FNR}}$ have opposite signs, i.e., the decision boundary disproportionately favors subjects from a certain sensitive attribute value group to be in the positive class (even when such assignments are misclassifications) while disproportionately assigning the subjects from the other group to the negative class. As a result, the false positive rate for one group is higher than for the other, while the false negative rate for the same group is lower.
II. $D_{\text{FPR}}$ and $D_{\text{FNR}}$ have the same sign, i.e., both the false positive as well as the false negative rate are higher for a certain sensitive attribute value group. Such cases might arise in scenarios where a certain group is harder to classify than the other.
Next, we experiment with each of the above cases separately.
— Case I: To simulate this scenario, we first generate samples from each of the following distributions:
An unconstrained logistic regression classifier on this dataset attains an overall accuracy of but leads to a false positive rate of and (i.e., ) for the sensitive attribute groups and , respectively; and false negative rates of and (i.e., ). Finally, we train three different fair classifiers, with fairness constraints on (i) false positive rates, with $g_{\boldsymbol{\theta}}$ given by Eq. (11); (ii) false negative rates, with $g_{\boldsymbol{\theta}}$ given by Eq. (12); and (iii) both false positive and false negative rates, with separate constraints for $g_{\boldsymbol{\theta}}$ given by Eq. (11) and Eq. (12).
Results. Figure 3 summarizes the results for this scenario by showing the decision boundaries for the unconstrained classifier (solid) and the constrained fair classifiers. Here, we can observe several interesting patterns. First, removing disparate mistreatment on only the false positive rate causes a rotation in the decision boundary that moves previously misclassified examples from the group with the higher false positive rate into the negative class, decreasing their false positive rate. However, in the process, it also moves previously well-classified examples from that group into the negative class, increasing their false negative rate. As a consequence, controlling disparate mistreatment on the false positive rate (Figure 3(a)) also removes disparate mistreatment on the false negative rate. A similar effect occurs when we control disparate mistreatment only with respect to the false negative rate (Figure 3(b)), which therefore provides similar results as the classifier constrained on both false positive and false negative rates (Figure 3(c)). This effect is explained by the distribution of the data, where the centroids of the clusters for one group are shifted with respect to the ones for the other group.
— Case II: To simulate the scenario where both and have the same sign, we generate samples from each of the following distributions:
Then, we train an unconstrained logistic regression classifier on this dataset. It attains an accuracy of but leads to and , resulting in disparate mistreatment in terms of both false positive and false negative rates. Then, similarly to the previous scenario, we train three different kinds of constrained classifiers to remove disparate mistreatment on (i) the false positive rate, (ii) the false negative rate, and (iii) both.
Results. Figure 4 summarizes the results by showing the decision boundaries for both the unconstrained classifier (solid) and the fair constrained classifier (dashed) when controlling for disparate mistreatment with respect to the false positive rate, the false negative rate, and both, respectively. We observe several interesting patterns. First, controlling disparate mistreatment for only the false positive rate (false negative rate) leads to a minor drop in accuracy, but can exacerbate the disparate mistreatment on the false negative rate (false positive rate). For example, while the decision boundary is moved to control for disparate mistreatment on the false negative rate, that is, to ensure that more examples with are well-classified in the positive class (reducing the false negative rate), it also moves previously well-classified negative examples into the positive class, hence increasing the false positive rate. A similar phenomenon occurs when controlling disparate mistreatment with respect to only the false positive rate. As a consequence, controlling for both types of disparate mistreatment simultaneously brings and close to zero, but causes a large drop in accuracy.
5.1.3 Performance Comparison
In this section, we compare the performance of our scheme with two different methods on the synthetic datasets described above. In particular, we compare the performance of the following approaches:
Our method: implements our scheme to avoid disparate treatment and disparate mistreatment simultaneously. Disparate mistreatment is avoided by using fairness constraints (as described in Sections 5.1.1 and 5.1.2). Disparate treatment is avoided by ensuring that sensitive attribute information is not used while making decisions, i.e., by keeping the user feature vectors ($\mathbf{x}$) and the sensitive features ($z$) disjoint. All the explanatory simulations on synthetic data shown earlier (Sections 5.1.1 and 5.1.2) implement this scheme.
Our method_{sen}: implements our scheme to avoid disparate mistreatment only. The user feature vectors () and the sensitive features () are not disjoint, that is, is used as a learnable feature. Therefore, the sensitive attribute information is used for decision making, resulting in disparate treatment.
Hardt et al. [15]: operates by postprocessing the outcomes of an unfair classifier (logistic regression in this case) and using different decision thresholds for different sensitive attribute value groups to achieve fairness. By construction, it needs the sensitive attribute information while making decisions, and hence cannot avoid disparate treatment.
Baseline: tries to remove disparate mistreatment by introducing different penalties for misclassified data points with different sensitive attribute values during the training phase. Specifically, it proceeds in two steps. First, it trains an (unfair) classifier minimizing a loss function (\eg, logistic loss) over the training data. Next, it selects the set of misclassified data points from the sensitive attribute value group with the higher error rate. For example, if one wants to remove disparate mistreatment with respect to false positive rate and (which means the false positive rate for points with is higher than that of ), it selects the set of misclassified data points in the training set having and . Then, it iteratively retrains the classifier with increasingly higher penalties on this set of data points until a certain fairness level is achieved on the training set (until ). The algorithm is summarized in Figure 1, particularized to ensure fairness in terms of false positive rate. The process extends naturally to fairness in terms of false negative rate, or both false positive and false negative rates. Like Our method, the baseline does not use sensitive attribute information while making decisions.
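The baseline's reweighting loop can be sketched as follows. This is our own minimal numpy illustration on made-up data, not the authors' implementation; the weight multiplier, stopping threshold, and synthetic setup are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: one feature x, labels y, and a binary sensitive attribute
# z that is NOT a learnable feature. Negatives of the z = 0 group are shifted
# right, so they attract more false positives than the z = 1 group.
n = 4000
z = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)
x = y + 0.7 * (z == 0) * (1 - y) + rng.normal(0, 1, n)
X = np.column_stack([x, np.ones(n)])        # feature + intercept

def fit_weighted_logreg(X, y, w, lr=0.05, iters=1000):
    """Weighted logistic regression trained by plain gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))
        theta -= lr * (X.T @ (w * (p - y))) / len(y)
    return theta

def group_fpr(theta, group):
    """False positive rate over the rows selected by `group`."""
    yhat = X @ theta >= 0
    neg = group & (y == 0)
    return np.mean(yhat[neg])

w = np.ones(n)
gaps = []
for _ in range(15):                          # iterative retraining loop
    theta = fit_weighted_logreg(X, y, w)
    gap = group_fpr(theta, z == 0) - group_fpr(theta, z == 1)
    gaps.append(abs(gap))
    if abs(gap) <= 0.02:                     # target fairness level reached
        break
    # Up-weight currently misclassified negatives of the higher-FPR group.
    worse = (z == 0) if gap > 0 else (z == 1)
    fp = worse & (y == 0) & (X @ theta >= 0)
    w[fp] *= 1.3
# The FPR gap between the two groups shrinks across retraining rounds.
```

As in the paper's discussion, because z is not a feature, the reweighting moves a single shared boundary, which is why this strategy can trade away accuracy or even fail on some data distributions.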
Comparison results. Table 2 shows the performance comparison for all the methods on the three synthetic datasets described above. We observe that, while all four methods mostly achieve similar levels of fairness, they do so at different costs in terms of accuracy. Both Our method_{sen} and Hardt et al., which use sensitive feature information while making decisions, show the best accuracy (due to the additional information available to them). However, as explained earlier, these two methods suffer from disparate treatment. On the other hand, the implementation of our scheme that simultaneously removes disparate mistreatment and disparate treatment (Our method) does so with a further accuracy drop of only with respect to the above two methods that cause disparate treatment. Finally, the baseline is sometimes unable to achieve fairness. When it does achieve fairness, it does so at a (sometimes much) greater cost in accuracy than the competing methods.
In summary, our method achieves the same performance as Hardt et al. when making use of the same information in the data, \ie, nonsensitive as well as sensitive features. However, in contrast to Hardt et al., it also allows us to simultaneously remove both disparate mistreatment and disparate treatment at a small additional cost in terms of accuracy.
Table 2. Columns: FPR constraints, FNR constraints, Both constraints. Rows, per dataset: Our method, Our method_{sen}, Baseline, Hardt et al.
Datasets: Synthetic setting 1 (Figure 2); Synthetic setting 2 (Figure 3); Synthetic setting 3 (Figure 4); ProPublica COMPAS (Section 5.2; Our method_{sen}, Baseline, and Hardt et al. only).
5.2 Real world dataset: ProPublica COMPAS
In this section, we experiment with the COMPAS risk assessment dataset compiled by ProPublica [19] and show that our method can significantly reduce disparate mistreatment at a modest cost in terms of accuracy.
Dataset and experimental setup. ProPublica compiled a list of all criminal offenders screened through the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) tool in Broward County, Florida, during 2013–2014. (Footnote 5: COMPAS predicts the recidivism risk, on a scale of 1–10, of a criminal offender by analyzing answers to 137 questions pertaining to the offender's criminal history and behavioral patterns [2].) The data includes information on the offenders' demographic features (gender, race, age), criminal history (the charge for which the person was arrested, the number of prior offenses) and the risk score assigned to the offender by COMPAS. ProPublica also collected the ground truth on whether or not these individuals actually recidivated within two years after the screening. For more information about the data collection, we point the reader to the detailed description in [20]. Some follow-up discussion on this dataset can be found in [3, 11].
In this analysis, for simplicity, we only consider a subset of offenders whose race was either black or white. Recidivism rates for the two groups are shown in Table 3.
Using this ground truth, we build an unconstrained logistic regression classifier to predict whether an offender will (positive class) or will not (negative class) recidivate within two years. The set of features used in the classification task is described in Table 4. (Footnote 6: Notice that the goal of this section is not to analyze the best set of features for recidivism prediction; rather, we focus on showing that our method can effectively remove disparate mistreatment in a given dataset. Hence, we chose to use the same set of features as ProPublica used in their analysis.) (Footnote 7: Since race is one of the features in the learnable set, we additionally assume that all the methods have access to the sensitive attributes while making decisions.)
The (unconstrained) logistic regression classifier leads to an accuracy of . However, the classifier yields false positive rates of and , respectively, for blacks and whites (\ie, ), and false negative rates of and (\ie, ). These results constitute a clear case of disparate mistreatment in terms of both false positive rate and false negative rate. The classifier puts one group (blacks) at a relative disadvantage by disproportionately misclassifying negative (did not recidivate) examples from this group into the positive (did recidivate) class. This disproportional assignment results in a significantly higher false positive rate for blacks as compared to whites. On the other hand, the classifier puts the other group (whites) at a relative advantage by disproportionately misclassifying positive (did recidivate) examples from this group into the negative (did not recidivate) class, resulting in a higher false negative rate. Note that this scenario resembles our synthetic example Case I in Section 5.1.2.
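Group-conditional error rates like the ones reported above can be computed directly from predictions. The helper below and its toy labels are our own illustration, not the COMPAS data:

```python
import numpy as np

def group_error_rates(y_true, y_pred, z):
    """Per-group false positive and false negative rates for a binary
    sensitive attribute z (e.g., race). Assumes each group contains both
    true negatives and true positives."""
    out = {}
    for g in np.unique(z):
        m = z == g
        neg, pos = m & (y_true == 0), m & (y_true == 1)
        out[g] = {"FPR": np.mean(y_pred[neg] == 1),   # P(yhat=1 | y=0, z=g)
                  "FNR": np.mean(y_pred[pos] == 0)}   # P(yhat=0 | y=1, z=g)
    return out

# Toy example with made-up labels:
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
z      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
rates = group_error_rates(y_true, y_pred, z)
# rates[g]["FPR"] - rates[h]["FPR"] is the disparate mistreatment gap on FPR.
```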
Finally, we train logistic regression classifiers with three types of constraints: constraints on false positive rate, false negative rate, and on both.
Results. Table 2 (last block) summarizes the results by showing the tradeoff between fairness and accuracy achieved by our method, the method by Hardt et al., and the baseline. Similarly to the results in Section 5.1.2, we observe that, for all three methods, controlling for disparate mistreatment on the false positive rate (false negative rate) also helps decrease disparate mistreatment on the false negative rate (false positive rate). Moreover, all three methods are able to achieve similar accuracy for a given level of fairness.
Additionally, we observe that our method (as well as the baseline) does not completely remove disparate mistreatment, \ie, it does not drive the differences in false positive and false negative rates to zero in any of the cases. This is probably due to the relatively small size of the dataset (Footnote 8: examples in the training set.) and hence a smaller ratio between the number of training examples and the number of learnable features, which hinders a robust estimate of the misclassification covariance (Eqs. 11 and 12). This highlights the fact that our method can suffer from reduced performance on small datasets. In scenarios with sufficiently large training datasets, we expect more reliable covariance estimates and, hence, better performance from our method. On the other hand, the method by Hardt et al. is able to drive both differences to zero while controlling for disparate mistreatment on both false positive and false negative rates (Table 2), albeit at a considerable drop in accuracy. Since this method operates on data of much smaller dimensionality (the final classifier's probability estimates), it is not expected to suffer as much from the small size of the dataset as our method or the baseline (which depend on the misclassification covariance computed on the feature set).
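The empirical covariance estimate discussed above can be sketched in a few lines. The rendering below is a loose, hypothetical illustration, assuming labels y in {-1, +1}, a linear score theta^T x, and a simple misclassification proxy g = min(0, y * theta^T x); the paper's exact constraint functions (Eqs. 11 and 12) differ in detail.

```python
import numpy as np

def mistreatment_cov(theta, X, y, z):
    """Monte Carlo estimate of Cov(z, g_theta) over a training set, where
    g_theta(x, y) = min(0, y * theta^T x) is nonzero only for misclassified
    points (labels y in {-1, +1}). This is an illustrative proxy, not the
    paper's exact per-constraint g."""
    g = np.minimum(0.0, y * (X @ theta))
    return np.mean((z - z.mean()) * g)

# Toy check on made-up data: when the classifier makes no errors the
# covariance is exactly zero; when its errors concentrate on one group,
# the covariance moves away from zero.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 2))
theta = np.array([1.0, 0.0])
y = np.where(X[:, 0] >= 0, 1, -1)            # perfectly separated by theta
z = (rng.random(1000) < 0.5).astype(float)   # random group membership
cov_fair = mistreatment_cov(theta, X, y, z)  # no errors -> 0.0

y_biased = np.where(z == 1, -y, y)           # all z = 1 points misclassified
cov_unfair = mistreatment_cov(theta, X, y_biased, z)
```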
Table 3. Recidivism rates by race. Columns: Yes, No, Total. Rows: Black, White, Total.
Table 4. Feature: Description.
Age Category: , between and ,
Gender: Male or Female
Race: White or Black
Priors Count: 0–37
Charge Degree: Misdemeanor or Felony
2-year-rec. (target feature): Whether (+ve) or not (-ve) the defendant recidivated within two years
6 Discussion and future work
As shown in Section 5, the method proposed in this paper provides a flexible trade-off between disparate-mistreatment-based fairness and accuracy. It also makes it possible to avoid disparate mistreatment and disparate treatment simultaneously. This feature may be especially useful in scenarios where the sensitive attribute information is not available (\eg, for privacy reasons) or is prohibited from being used due to disparate treatment laws [5].
Although we proposed fair classifier formulations to remove disparate mistreatment only on false positive and false negative rates, as described in Section 3, disparate mistreatment can also be measured with respect to false discovery and false omission rates. Extending our current formulation to include false discovery and false omission rates is a non-trivial task due to the computational complexities involved. A natural extension of this work would be to include these other measures of disparate mistreatment in our fair classifier formulation.
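For reference, false discovery and false omission rates condition on the predicted label rather than the true one, which is part of what makes constraining them harder. A small numpy sketch, with toy data and naming of our own:

```python
import numpy as np

def group_fdr_for(y_true, y_pred, z, g):
    """False discovery rate P(y=0 | yhat=1) and false omission rate
    P(y=1 | yhat=0) for the group with sensitive attribute value g.
    Unlike FPR/FNR, these condition on the *predicted* label; assumes
    the group has both predicted positives and predicted negatives."""
    m = z == g
    pred_pos, pred_neg = m & (y_pred == 1), m & (y_pred == 0)
    fdr = np.mean(y_true[pred_pos] == 0)
    fomr = np.mean(y_true[pred_neg] == 1)
    return fdr, fomr

# Toy example (made-up labels):
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1])
z      = np.array([0, 0, 0, 1, 1, 1])
fdr0, for0 = group_fdr_for(y_true, y_pred, z, 0)
```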
Finally, we would like to point out that the current formulation of fairness constraints may suffer from the following limitations. First, the proposed formulation for training fair classifiers is not a convex program but a disciplined convex-concave program (DCCP), which can be solved efficiently using heuristic-based methods [26]. While these methods have been shown to work well in practice, unlike convex optimization they do not provide any guarantees on the global optimality of the solution. Second, since computing the analytical covariance in the fairness constraints is not trivial, we approximate it through the Monte Carlo covariance on the training set (Eq. 9). While this approximation is expected to work well when a reasonable amount of training data is available, it might be inaccurate for smaller datasets.
References
 [1] Stop-and-frisk in New York City. https://en.wikipedia.org/wiki/Stop-and-frisk_in_New_York_City.
 [2] https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html, 2016.
 [3] J. Angwin and J. Larson. Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say. http://bit.ly/2iTc4B9.
 [4] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, 2016.
 [5] S. Barocas and A. D. Selbst. Big Data’s Disparate Impact. California Law Review, 2016.
 [6] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
 [7] A. Chouldechova. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. arXiv preprint, arXiv:1610.07524, 2016.
 [8] K. Crawford. Artificial Intelligence’s White Guy Problem. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
 [9] C. Dwork, M. Hardt, T. Pitassi, and O. Reingold. Fairness Through Awareness. In ITCS, 2012.
 [10] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and Removing Disparate Impact. In KDD, 2015.
 [11] A. W. Flores, C. T. Lowenkamp, and K. Bechtel. False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks.”. 2016.
 [12] S. Goel, J. M. Rao, and R. Shroff. Precinct or Prejudice? Understanding Racial Disparities in New York City’s StopandFrisk Policy. Annals of Applied Statistics, 2015.
 [13] G. Goh, A. Cotter, M. Gupta, and M. Friedlander. Satisfying Realworld Goals with Dataset Constraints. In NIPS, 2016.
 [14] G. Ridgeway and J. M. MacDonald. Doubly Robust Internal Benchmarking and False Discovery Rates for Detecting Racial Bias in Police Stops. Journal of the American Statistical Association, 2009.
 [15] M. Hardt, E. Price, and N. Srebro. Equality of Opportunity in Supervised Learning. In NIPS, 2016.
 [16] F. Kamiran and T. Calders. Classification with No Discrimination by Preferential Sampling. In BENELEARN, 2010.
 [17] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Fairnessaware Classifier with Prejudice Remover Regularizer. In PADM, 2011.
 [18] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent TradeOffs in the Fair Determination of Risk Scores. In ITCS, 2017.
 [19] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. https://github.com/propublica/compas-analysis, 2016.
 [20] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. How We Analyzed the COMPAS Recidivism Algorithm. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm, 2016.
 [21] B. T. Luong, S. Ruggieri, and F. Turini. kNN as an Implementation of Situation Testing for Discrimination Discovery and Prevention. In KDD, 2011.
 [22] C. Muñoz, M. Smith, and D. Patil. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President. The White House., 2016.
 [23] D. Pedreschi, S. Ruggieri, and F. Turini. Discriminationaware Data Mining. In KDD, 2008.
 [24] J. Podesta, P. Pritzker, E. Moniz, J. Holdren, and J. Zients. Big Data: Seizing Opportunities, Preserving Values. Executive Office of the President. The White House., 2014.
 [25] A. Romei and S. Ruggieri. A Multidisciplinary Survey on Discrimination Analysis. KER, 2014.
 [26] X. Shen, S. Diamond, Y. Gu, and S. Boyd. Disciplined ConvexConcave Programming. arXiv:1604.02639, 2016.
 [27] L. Sweeney. Discrimination in Online Ad Delivery. ACM Queue, 2013.
 [28] M. B. Zafar, I. V. Martinez, M. G. Rodriguez, and K. P. Gummadi. Fairness Constraints: Mechanisms for Fair Classification. In AISTATS, 2017.
 [29] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning Fair Representations. In ICML, 2013.