Algorithmic Design: Fairness Versus Accuracy

In recent years, algorithms have been used in many areas of life, including as support for important decisions, for example in hiring, providing medical care, or approving loans.
Because these decisions are consequential, it is important to ensure that the algorithms' error rates do not vary significantly across different groups of the population.

We present a framework in which an algorithm recommends a decision about an individual, for example whether to hire, based on observed covariates about that individual, for example the opinions of previous employers. The algorithm's error for an individual is evaluated with a general loss function, which depends on the recommended decision and on the individual's unobserved type, for example later promotion or dismissal from the new position.
Finally, the errors are aggregated over pre-defined groups of the population, for example by gender, where a group's error is the expected error over all of its members.

Given two pairs of group errors, pair A and pair B, we say that A Pareto-dominates B if it holds that

  • For each group, the group error in A is lower than the corresponding group error in B, i.e. the accuracy of A is higher than the accuracy of B.
  • The difference between the two group errors in A is lower than the difference in B, i.e. A is fairer across the groups than B (a small code sketch of this check follows the list).
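A minimal Python sketch of this check (not from the article; the helper name pareto_dominates and the example numbers are hypothetical), using the weak-inequality form with at least one strict comparison, as in the formal definition given later:

    def pareto_dominates(e, e_prime):
        """Return True if error pair e Pareto-dominates error pair e_prime.

        Each pair is (error of group 1, error of group 2). Dominance requires
        weakly lower error for each group (higher accuracy) and a weakly smaller
        gap between the two group errors (higher fairness), with at least one
        comparison strict.
        """
        lower_errors = e[0] <= e_prime[0] and e[1] <= e_prime[1]
        fairer = abs(e[0] - e[1]) <= abs(e_prime[0] - e_prime[1])
        strict = (e[0] < e_prime[0] or e[1] < e_prime[1]
                  or abs(e[0] - e[1]) < abs(e_prime[0] - e_prime[1]))
        return lower_errors and fairer and strict

    # (0.2, 0.25) dominates (0.3, 0.4): both errors and the gap are smaller.
    print(pareto_dominates((0.2, 0.25), (0.3, 0.4)))   # True
    # Neither of these dominates the other: the first is more accurate, the second fairer.
    print(pareto_dominates((0.3, 0.1), (0.35, 0.3)))   # False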

An important point: we do not take a stance on how to weigh fairness against accuracy; instead, we look for algorithms that cannot be improved in both criteria simultaneously, i.e. algorithms on the Pareto frontier.


Based on the article "Algorithmic Design: Fairness Versus Accuracy".[1]

Example of Fairness-Accuracy tradeoff

  • No information: treat everyone the same. Equal error rate of 7/15 for both populations.
  • Access to one covariate: a tradeoff appears; the algorithm can achieve more accuracy with less fairness, or more fairness with less accuracy.
  • Access to both covariates: improves both fairness and accuracy, so there is no tradeoff.

Framework

Consider a population of subjects, where each subject has the following attributes:

  • a type $y \in \mathcal{Y}$
  • a group $g \in \{r, b\}$
  • a covariate (feature) vector $x \in \mathcal{X}$, where $\mathcal{X}$ is a finite set.

X is observed by the algorithm's designer; Y and G are not directly observed (but may be revealed by X).

Y, G, X are random variables, and the joint distribution $(Y, G, X) \sim \mathbb{P}$ is known to the designer.

The designer chooses an algorithm $a : \mathcal{X} \to \Delta(\mathcal{A})$ that assigns an outcome (possibly at random) to each covariate value, and hence to each individual.

The loss function of the algorithm is $\ell : \mathcal{A} \times \mathcal{Y} \to \mathbb{R}$.

The loss $\ell(a, y)$ can be interpreted as one of the following:

  • The social cost of assigning outcome a to an individual of type y.
  • The negative of the subject's payoff (ignoring the true type).
Definition
The error for group $g$ given an algorithm $a$ is $e_g(a) = \mathbb{E}[\ell(a(X), Y) \mid G = g]$, i.e., the average loss for subjects in group g.

$e(a) = (e_r(a), e_b(a))$ is an error pair.
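When the joint distribution is supported on finitely many (y, g, x) triples, the group errors can be computed by averaging the loss within each group. A minimal Python sketch; the distribution, the 0-1 loss, and the fixed algorithm are illustrative assumptions, not taken from the article:

    from collections import defaultdict

    # Joint distribution over finitely many (y, g, x) triples: {(y, g, x): probability}.
    # Types y in {0, 1}, groups g in {"r", "b"}, covariate x in {0, 1}; numbers are made up.
    P = {
        (1, "r", 1): 0.20, (0, "r", 1): 0.05,
        (1, "r", 0): 0.05, (0, "r", 0): 0.20,
        (1, "b", 1): 0.10, (0, "b", 1): 0.15,
        (1, "b", 0): 0.10, (0, "b", 0): 0.15,
    }

    def loss(a, y):
        """0-1 loss: cost 1 if the assigned outcome differs from the true type."""
        return 0.0 if a == y else 1.0

    def algorithm(x):
        """A fixed algorithm mapping each covariate value to an outcome."""
        return 1 if x == 1 else 0

    def group_errors(P, algorithm, loss):
        """Return {g: average loss of the algorithm among subjects with G = g}."""
        total_loss, total_prob = defaultdict(float), defaultdict(float)
        for (y, g, x), prob in P.items():
            total_loss[g] += prob * loss(algorithm(x), y)
            total_prob[g] += prob
        return {g: total_loss[g] / total_prob[g] for g in total_prob}

    print(group_errors(P, algorithm, loss))   # {'r': 0.2, 'b': 0.5} for these numbers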

Fairness-Accuracy Pareto Frontier

File:Pareto dominance.png
e pareto-dominates e'
Definition
An error pair $e = (e_r, e_b)$ Pareto-dominates an error pair $e' = (e'_r, e'_b)$, written $e \succ e'$, if

$e_r \le e'_r$ and $e_b \le e'_b$ (higher accuracy) and
$|e_r - e_b| \le |e'_r - e'_b|$ (higher fairness),
with at least one inequality strict.

Let $\mathcal{A}_X$ denote the set of all algorithms $a : \mathcal{X} \to \Delta(\mathcal{A})$.

File:Important points.png
(important points of the feasible set; in this example, the point optimal for group b)
Definition
The feasible set given X is $\mathcal{F}(X) = \{(e_r(a), e_b(a)) : a \in \mathcal{A}_X\}$.

$\mathcal{F}(X)$ is a convex polygon, assuming that X is finite-valued.

The Pareto set given X is $P(X) = \{e \in \mathcal{F}(X) : e \text{ is Pareto-undominated in } \mathcal{F}(X)\}$,
i.e., an error pair belongs to P(X) if no other feasible error pair Pareto-dominates it.
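With finite covariate and outcome sets, the error pairs of all deterministic algorithms can be enumerated and the Pareto-undominated ones kept; randomized algorithms then fill in the convex hull, giving the polygon $\mathcal{F}(X)$. A sketch under the same hypothetical distribution as above:

    from itertools import product
    from collections import defaultdict

    # Illustrative joint distribution over (y, g, x); the numbers are made up.
    P = {
        (1, "r", 1): 0.20, (0, "r", 1): 0.05,
        (1, "r", 0): 0.05, (0, "r", 0): 0.20,
        (1, "b", 1): 0.10, (0, "b", 1): 0.15,
        (1, "b", 0): 0.10, (0, "b", 0): 0.15,
    }
    X_VALUES, OUTCOMES = [0, 1], [0, 1]
    loss = lambda a, y: 0.0 if a == y else 1.0   # 0-1 loss

    def error_pair(assignment):
        """Error pair (e_r, e_b) of the deterministic algorithm x -> assignment[x]."""
        num, den = defaultdict(float), defaultdict(float)
        for (y, g, x), prob in P.items():
            num[g] += prob * loss(assignment[x], y)
            den[g] += prob
        return (num["r"] / den["r"], num["b"] / den["b"])

    def dominates(e, e2):
        """Weak Pareto domination of e over e2, as in the definition above."""
        weak = e[0] <= e2[0] and e[1] <= e2[1] and abs(e[0] - e[1]) <= abs(e2[0] - e2[1])
        return weak and e != e2

    # Enumerate every deterministic algorithm, i.e. every map X_VALUES -> OUTCOMES.
    feasible = [error_pair(dict(zip(X_VALUES, outcomes)))
                for outcomes in product(OUTCOMES, repeat=len(X_VALUES))]

    pareto = [e for e in feasible if not any(dominates(e2, e) for e2 in feasible)]
    print("feasible error pairs:", sorted(set(feasible)))   # 4 pairs for this example
    print("Pareto-undominated  :", sorted(set(pareto)))     # [(0.2, 0.5), (0.5, 0.4)]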

Fix covariate X.

  • An optimal point for group $g$ (a feasible error pair minimizing that group's error) is $x_g \in \arg\min_{e \in \mathcal{F}(X)} e_g$.
  • An optimal point for fairness is $x_F \in \arg\min_{e \in \mathcal{F}(X)} |e_r - e_b|$.
  • Ties are broken in favor of accuracy (a small sketch computing these points from a finite feasible set follows this list).
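Given a list of feasible error pairs, these reference points can be picked out directly. The sketch below reuses the deterministic error pairs from the previous example (so it ignores randomization) and reads "in favor of accuracy" as "lower total error", which is an assumption:

    # Feasible error pairs (e_r, e_b) from the previous enumeration (deterministic only).
    feasible = [(0.5, 0.4), (0.5, 0.6), (0.2, 0.5), (0.8, 0.5)]

    # Optimal point for group r: minimal e_r, ties broken by lower total error.
    x_r = min(feasible, key=lambda e: (e[0], e[0] + e[1]))
    # Optimal point for group b: minimal e_b, ties broken by lower total error.
    x_b = min(feasible, key=lambda e: (e[1], e[0] + e[1]))
    # Fairest point: minimal gap |e_r - e_b|, ties broken by lower total error.
    x_F = min(feasible, key=lambda e: (abs(e[0] - e[1]), e[0] + e[1]))

    print("x_r =", x_r, " x_b =", x_b, " x_F =", x_F)
    # In this particular example the fairest point coincides with x_b.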
P(X) for group-balanced
Definition
Covariate X is
  • r-skewed if $e_r \le e_b$ both at $x_b$ and at $x_F$,
  • b-skewed if $e_b \le e_r$ both at $x_r$ and at $x_F$,
  • group-balanced otherwise.
Theorem
P(X) is the lower boundary of $\mathcal{F}(X)$ between
  • $x_g$ and $x_F$ if X is g-skewed,
  • $x_r$ and $x_b$ if X is group-balanced.
Definition
X exhibits a strong accuracy-fairness conflict if there are two points $e, e' \in P(X)$ satisfying $e_r < e'_r$ and $e_b < e'_b$ but $|e'_r - e'_b| < |e_r - e_b|$, i.e., one frontier point has strictly lower errors for both groups while the other is strictly fairer.

Example of a strong accuracy-fairness conflict:

  • Compare e' = (1/2, 1/2) versus e = (1/3, 1/4).
  • e' involves higher errors for both groups, but improves on fairness (its gap is 0, versus 1/12 for e).
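The numbers in this example can be checked directly; the small sketch below verifies that neither pair Pareto-dominates the other, so both can lie on the frontier:

    from fractions import Fraction as F

    e       = (F(1, 3), F(1, 4))   # lower error for both groups, gap 1/12
    e_prime = (F(1, 2), F(1, 2))   # higher errors, but perfectly fair (gap 0)

    def dominates(a, b):
        weak = a[0] <= b[0] and a[1] <= b[1] and abs(a[0] - a[1]) <= abs(b[0] - b[1])
        return weak and a != b

    print(dominates(e, e_prime))   # False: e is less fair, so it does not dominate
    print(dominates(e_prime, e))   # False: e_prime has higher errors for both groups
    print(abs(e[0] - e[1]), abs(e_prime[0] - e_prime[1]))   # gaps 1/12 and 0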
Corollary
Suppose $x_F$ is distinct from $x_r$ and $x_b$; then X exhibits a strong accuracy-fairness conflict if and only if it is group-skewed.

Bayes Design

Under Bayes design, the designer can only control the inputs to the algorithm; the algorithm itself is then chosen to be Bayes-optimal given those inputs.

Definition
A garbling is any mapping $T : \mathcal{X} \to \Delta(\mathcal{T})$ from old covariate values to distributions over new covariate values in some set $\mathcal{T}$.

Examples of garblings (a small sketch of these maps follows the list):

  • No change: $T(x) = x$ w.p. 1.
  • Drop an input: $x = (x_1, x_2)$ and $T(x) = x_1$ w.p. 1.
  • Add noise: $T(x) = x + \varepsilon$, where $\varepsilon$ is independent noise.
  • No information: $T(x) = t_0$ w.p. 1 for all x ($T$ is constant).
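A garbling can be represented as a map from each covariate value to a dictionary of probabilities over new values. The sketch below mirrors the examples above with hypothetical helper names; the noise example uses a random flip of a binary covariate as a discrete stand-in for additive noise:

    import random

    # A garbling maps each covariate value x to a probability distribution over
    # new values, represented here as a dict {new_value: probability}.

    def no_change(x):
        return {x: 1.0}                    # keep the covariate as is, w.p. 1

    def drop_second_input(x):
        x1, _x2 = x
        return {x1: 1.0}                   # forward only the first coordinate

    def no_information(x):
        return {"nothing": 1.0}            # constant signal: all values pooled

    def flip_noise(x, flip_prob=0.2):
        # Discrete stand-in for "add noise" on a binary covariate x in {0, 1}.
        return {x: 1.0 - flip_prob, 1 - x: flip_prob}

    def draw(garbling, x):
        """Sample one realization of the garbled signal T(x)."""
        values, probs = zip(*garbling(x).items())
        return random.choices(values, weights=probs, k=1)[0]

    print(no_change(1), drop_second_input((1, 0)), no_information(1), flip_noise(1))
    print(draw(flip_noise, 1))             # a noisy report of x = 1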

For each garbling T, let $a_T$ be the algorithm that maps each realization of T(X) into the Bayes-optimal action,

$a_T(t) \in \arg\min_{a \in \mathcal{A}} \mathbb{E}[\ell(a, Y) \mid T(X) = t]$, i.e., the choice of a utilitarian agent.
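For each realization of the garbled signal, the induced algorithm picks the action with the smallest posterior expected loss. A minimal sketch, reusing the same hypothetical distribution and 0-1 loss as in the earlier examples:

    from collections import defaultdict

    # Illustrative joint distribution over (y, g, x), as in the earlier sketches.
    P = {
        (1, "r", 1): 0.20, (0, "r", 1): 0.05,
        (1, "r", 0): 0.05, (0, "r", 0): 0.20,
        (1, "b", 1): 0.10, (0, "b", 1): 0.15,
        (1, "b", 0): 0.10, (0, "b", 0): 0.15,
    }
    OUTCOMES = [0, 1]
    loss = lambda a, y: 0.0 if a == y else 1.0

    def bayes_optimal(P, garbling):
        """For each realization t of the garbled signal, pick argmin_a E[loss(a, Y) | T(X) = t]."""
        # joint[t][y] accumulates P(T(X) = t, Y = y)
        joint = defaultdict(lambda: defaultdict(float))
        for (y, g, x), prob in P.items():
            for t, q in garbling(x).items():
                joint[t][y] += prob * q
        return {t: min(OUTCOMES, key=lambda a: sum(p * loss(a, y) for y, p in by_y.items()))
                for t, by_y in joint.items()}

    full_information = lambda x: {x: 1.0}
    no_information = lambda x: {"nothing": 1.0}

    print(bayes_optimal(P, full_information))   # one action per covariate value, e.g. {1: 1, 0: 0}
    print(bayes_optimal(P, no_information))     # a single constant action, e.g. {'nothing': 0}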

Definition
The feasible set under Bayes design given X is $\mathcal{F}^*(X) = \{(e_r(a_T), e_b(a_T)) : T \text{ is a garbling}\}$.

The Pareto set under Bayes design given X is $P^*(X) = \{e \in \mathcal{F}^*(X) : e \text{ is Pareto-undominated in } \mathcal{F}^*(X)\}$.

Definition
Let .
File:P under bayes design.png
Proposition
  • If X is g-skewed, then iff aggregate error at and is weakly less than .
  • If X is group-balanced, then iff aggregate error at and is weakly less than .
Definition
Let ,

where the aggregate error in the population is the population-weighted average of the group errors, $\Pr(G = r)\, e_r + \Pr(G = b)\, e_b$.

Lemma
and

Special Cases

X Reveals G: The Frontier is Rawlsian

File:X reveals G.png
X reveals G
Definition
G is revealed by X if the conditional distribution of G given $X = x$ is degenerate for every $x \in \mathcal{X}$.
Proposition
If G is revealed by X, then $\mathcal{F}(X)$ is a rectangle whose sides are parallel to the axes, and P(X) is the line segment from the corner at which both groups attain their minimal feasible errors (so that $x_r = x_b$) to the fairest point $x_F$.

The frontier is Rawlsian:

  • The disadvantaged group gets its minimal feasible error at every Pareto point: a Rawlsian designer is indifferent between all Pareto points.
  • The advantaged group's error depends on the designer's preferences:
    • A utilitarian designer prefers the corner point $x_r = x_b$ (minimal aggregate error).
    • An egalitarian designer prefers the fairest point $x_F$ (a numerical check of the rectangle case follows this list).
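This rectangle case can also be checked numerically: discretize a rectangle of error pairs, keep the Pareto-undominated ones, and observe that they form exactly the segment described above. The rectangle bounds below are made up for illustration:

    from itertools import product

    # Hypothetical rectangle when G is revealed by X: group r's error ranges over
    # [0.10, 0.60], group b's over [0.30, 0.70], so group b is disadvantaged.
    grid_r = [round(0.10 + 0.05 * i, 2) for i in range(11)]   # 0.10, 0.15, ..., 0.60
    grid_b = [round(0.30 + 0.05 * i, 2) for i in range(9)]    # 0.30, 0.35, ..., 0.70

    def dominates(a, b):
        weak = a[0] <= b[0] and a[1] <= b[1] and abs(a[0] - a[1]) <= abs(b[0] - b[1])
        return weak and a != b

    rectangle = list(product(grid_r, grid_b))
    pareto = [e for e in rectangle if not any(dominates(e2, e) for e2 in rectangle)]
    print(sorted(pareto))
    # Output: [(0.1, 0.3), (0.15, 0.3), (0.2, 0.3), (0.25, 0.3), (0.3, 0.3)]
    # Group b (disadvantaged) stays at its minimal error 0.30 at every Pareto point,
    # while group r's error runs from its own minimum 0.10 up to the fairest point (0.3, 0.3).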

Comparing P(X) and P(X,G)

X is group-balanced

File:Comparing P(X) and P(X, G) - X is group-balanced.png
X is group-balanced

The pair (X, G) reveals G, so the new feasible set $\mathcal{F}(X, G)$ is a rectangle.

Lemma
The left and bottom boundaries of $\mathcal{F}(X, G)$ pass through $x_r$ and $x_b$ (the group-optimal points of $\mathcal{F}(X)$), respectively.

The new frontier does not overlap the old frontier, so using G gives a uniform Pareto improvement.
This means that regardless of the designer's fairness-accuracy preferences, using G in some way can improve the payoff.

X is group-skewed

File:Comparing P(X) and P(X, G) - X is group-skewed.png
X is group-skewed

The new feasible set is a rectangle whose left and bottom boundaries pass through $x_r$ and $x_b$, respectively.
The new frontier intersects the old frontier, so this is not a uniform Pareto improvement.

References

  1. Annie Liang, Jay Lu, Xiaosheng Mu (2021). Algorithmic Design: Fairness Versus Accuracy.

