
Fairness in AI-based systems


As part of the “big-data revolution” of the last decade, many decisions that were traditionally made by human decision makers have been automated and are now performed by end-to-end automated systems. These systems rely on algorithms shaped both by the large amounts of data collected by an organization and by heuristics that the mechanism designers believed would correlate with the organization's goals. For example, data is processed through algorithms to predict which products and ads best match a user's preferences, job applications are filtered to increase efficiency in large workplaces such as corporations, and universities have started to automate the process of offering admission to applicants. Because these algorithms are highly productive for the organizations that run them, they are sometimes deployed without sufficient caution regarding social fairness. Fairness was already a major concern when such decisions were made by humans, since they affect the lives of many people; today, however, these decisions are increasingly made by automated systems whose designers in engineering and computer science departments did not necessarily think enough about the systems' growing impact on social fairness. Consequently, computer scientists and economists have recently begun to research how to design mechanisms that can measure and ensure social fairness in these automated systems.[1][2]

Mathematical Setup[edit]

Job hiring[edit]

The mathematical setup for fairness in hiring is as follows. Let Y denote whether a job applicant is of strong or weak quality, and let the action a denote whether the applicant is hired or rejected. The covariates X include possibly relevant attributes of the applicant such as past work history, resume, and references.[3]

Thus, in order to reflect fairness in the hiring process, a natural choice for the loss function in the above applications is a weighted misclassification loss,

\ell(y, a) = c_{\mathrm{FP}} \, \mathbf{1}\{a = 1,\ y = 0\} + c_{\mathrm{FN}} \, \mathbf{1}\{a = 0,\ y = 1\},

where the parameters c_FP and c_FN respectively denote the cost of a false positive (e.g., denying bail to a low-risk individual) and a false negative (e.g., granting bail to a high-risk individual).
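The following is a minimal sketch (in Python, not taken from the cited papers) of how such a weighted loss could be computed and compared across two applicant groups; the cost values and the toy data are illustrative assumptions.

```python
# A minimal sketch (not from the cited papers) of the weighted
# misclassification loss described above, evaluated per group.
# The cost values c_fp and c_fn are illustrative assumptions.

def weighted_loss(y_true, action, c_fp=1.0, c_fn=5.0):
    """Average cost of false positives and false negatives."""
    costs = [
        c_fp if (a == 1 and y == 0) else
        c_fn if (a == 0 and y == 1) else 0.0
        for y, a in zip(y_true, action)
    ]
    return sum(costs) / len(costs)

# Hypothetical hiring decisions for two applicant groups.
y_group_a = [1, 0, 1, 1, 0]; a_group_a = [1, 0, 1, 0, 0]
y_group_b = [1, 0, 1, 1, 0]; a_group_b = [0, 0, 1, 0, 1]

print(weighted_loss(y_group_a, a_group_a))  # loss for group A
print(weighted_loss(y_group_b, a_group_b))  # loss for group B; a large gap signals unequal error burdens
```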

Examples of general issues in ML fairness[edit]

Many large vendors of content or products rely on automated recommendation systems to decide which product or ad a user will see first among countless options. These systems must predict whether a user will be interested in a given product, and they must allocate the user's attention to the items or websites deemed most relevant. In addition, such systems have a dramatic influence on people, since they control the information people receive, their economic opportunities, and the technologies and new products they will buy.[4][5] Modern advertising vendors therefore need to incorporate fairness mechanisms into their recommendation systems to ensure they do not harm some groups through individualized treatment that is not necessarily good for them. For example, a recommendation system such as Amazon's Alexa may recommend less healthy products to a low-income person simply because they are slightly cheaper, whereas without such personalized algorithms everyone would have had the same likelihood of being offered the same products.

One challenge for fairness mechanisms is that they have to take a broad look at the final results of the system in order to make sure those results are actually fair. For example, if a recommendation system has been designed to show men and women the same job ads, this does not necessarily mean the actual outcome is fully controlled by that mechanism: the job ads must compete with other candidate ads, and a surplus of other ads targeting women, for example, can cause the job ads to be shown to women less often.
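The sketch below (Python, with hypothetical impression records) illustrates the kind of outcome-level audit this implies: measuring who actually saw a job ad, rather than how it was targeted.

```python
# A minimal sketch (hypothetical data) of an outcome-level audit:
# even if an ad is targeted equally, delivery can be skewed, so the
# check is on who actually saw it.

from collections import Counter

# Each impression record: (user_gender, ad_id)
impressions = [
    ("f", "job_ad"), ("m", "job_ad"), ("m", "job_ad"),
    ("m", "job_ad"), ("f", "other_ad"), ("f", "other_ad"),
    ("m", "other_ad"), ("f", "other_ad"),
]

shown = Counter(g for g, ad in impressions if ad == "job_ad")
eligible = Counter(g for g, _ in impressions)

for group in eligible:
    rate = shown[group] / eligible[group]
    print(f"job ad delivery rate for {group}: {rate:.2f}")
# A large gap between groups indicates the delivered outcome is not fair
# even though the targeting configuration was neutral.
```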

Fairness Trade-offs[edit]

A common issue in mechanism design for fairness is the trade-off between objectives that sometimes stand in each other's way. There are two types of such trade-offs. First, both objectives may be ethical fairness objectives that contradict each other; for example, how should an algorithm rank two candidates if one of them is better for the objective of gender equality and the other is better for the objective of diversity? Second, fairness objectives may interfere with other business objectives such as utility maximization, and the designer must then find an acceptable operating point at which both the fairness and the business objectives are reasonably satisfied.
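A minimal sketch of one common way to handle the second kind of trade-off, assuming a scalarized objective with an explicit weight lam; the candidate rankings, scores, and weight values are illustrative assumptions, not a method from the cited sources.

```python
# A minimal sketch (illustrative only) of scalarizing a business
# objective and a fairness objective into a single score, so the
# trade-off is controlled by an explicit weight lam.

def combined_score(utility, fairness_gap, lam=0.5):
    """Higher is better: utility minus a penalty on the fairness gap."""
    return utility - lam * fairness_gap

# Two hypothetical candidate rankings produced by a system.
options = {
    "ranking_1": {"utility": 0.92, "fairness_gap": 0.30},
    "ranking_2": {"utility": 0.88, "fairness_gap": 0.05},
}

for lam in (0.0, 0.5, 2.0):
    best = max(options, key=lambda k: combined_score(**options[k], lam=lam))
    print(f"lambda={lam}: choose {best}")
# With lam=0 pure utility wins; as lam grows, the fairer ranking is preferred.
```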

Online Advertising[edit]

The growing field of online advertising has made ad targeting far more efficient than in the traditional physical advertising market. Major online advertising platforms such as Google and Facebook cannot use sensitive user information for this kind of targeting, but they can still make use of less sensitive information such as zip code or gender.[6] As a result, their ad-delivery mechanisms can be skewed by biases that affect the lives of many millions of people, who may receive ads that give them fewer opportunities for good jobs, education, housing, credit, and so on.[7][8]
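The following sketch, on synthetic records, illustrates why a "non-sensitive" attribute such as zip code can act as a proxy: if the protected group can be guessed from the zip code alone, targeting on zip code can reproduce targeting on the protected attribute. The data and the simple majority-vote check are assumptions for illustration.

```python
# A minimal sketch (synthetic data) of a proxy check: how well does a
# "non-sensitive" feature such as zip code predict a protected attribute?
# If prediction is easy, targeting on zip code can reproduce targeting
# on the protected attribute.

from collections import Counter, defaultdict

records = [  # (zip_code, protected_group) -- hypothetical
    ("11111", "A"), ("11111", "A"), ("11111", "A"), ("11111", "B"),
    ("22222", "B"), ("22222", "B"), ("22222", "B"), ("22222", "A"),
]

by_zip = defaultdict(Counter)
for zip_code, group in records:
    by_zip[zip_code][group] += 1

# Accuracy of guessing the group from the majority group in each zip code.
correct = sum(c.most_common(1)[0][1] for c in by_zip.values())
print(f"proxy accuracy: {correct / len(records):.2f}")
# 0.75 here: zip code alone recovers the protected group most of the time.
```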

Admissions in education[edit]

Algorithms for choosing the best candidates are also used in the education system, as major universities filter applications using machine learning tools. These algorithms may again contain inherent biases inherited from the data they were trained on, perpetuating discrimination that is already reflected in that data. Juan Gilbert, a computer scientist and researcher at the University of Florida, has proposed an algorithmic alternative: a software tool called Applications Quest that encourages diversity without giving preference to race. A few schools, such as Clemson University and Auburn University, use his tool in their admissions.[9]

Labor markets and gig economy[edit]

Recently, many researchers have expressed concern that the gig economy will reflect these biases even more strongly in the emerging digital economy, the so-called Web3.[10] On digital work platforms such as TaskRabbit and Fiverr, machine learning algorithms determine which candidate is presented first for which job offers, so there is serious concern about the fairness of algorithms that are becoming a substantial part of systems that may turn into major players in the modern labor market. Some researchers[11] have collected evidence of bias in employers' reviews of gig workers on online labor platforms, examining how gender and racial bias affect workers' popularity on a platform as reflected in the traffic they receive. On Airbnb, researchers have shown evidence of discrimination against African-American hosts and guests. Following this, Airbnb has offered funding for researchers willing to study discrimination on its platform.[12]
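The sketch below, with fabricated numbers, shows the general shape of such an audit (it is not the cited studies' code): comparing average ratings and traffic received by workers across perceived demographic groups.

```python
# A minimal sketch (fabricated numbers) of the kind of audit cited above:
# compare average ratings and views received by workers across
# perceived demographic groups on a gig platform.

from statistics import mean

workers = [  # (group, avg_rating, profile_views) -- hypothetical
    ("group_x", 4.8, 120), ("group_x", 4.6, 90), ("group_x", 4.9, 150),
    ("group_y", 4.7, 60),  ("group_y", 4.8, 45), ("group_y", 4.5, 70),
]

for g in ("group_x", "group_y"):
    ratings = [r for grp, r, _ in workers if grp == g]
    views = [v for grp, _, v in workers if grp == g]
    print(f"{g}: mean rating {mean(ratings):.2f}, mean views {mean(views):.0f}")
# Similar ratings but very different traffic would suggest the ranking
# algorithm, not customer satisfaction, drives the disparity.
```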

Criminal Justice[edit]

Machine learning techniques have motivated several attempts to analyze the fairness properties of predictive and statistical models, especially in critical application domains such as criminal justice. Unsurprisingly, relying on such models in practice can end up reinforcing underlying racial biases, because the training data of these systems usually contains those biases as well; injustices of the past are thereby carried into the future. Unfortunately, a theoretical understanding of fairness in risk assessment is not by itself sufficient to justify adopting such systems; only practical fairness mechanisms can close this gap for such a critical application.
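A minimal sketch of one such practical check, on toy data: comparing false positive rates of a risk tool between two groups, i.e. how often members of each group are labelled high risk despite not reoffending. The data and threshold choices are hypothetical.

```python
# A minimal sketch (toy data) of an error-rate audit for a risk tool:
# compare false positive rates (labelled high risk but did not reoffend)
# between two groups.

def false_positive_rate(y_true, y_pred):
    fp = sum(1 for y, p in zip(y_true, y_pred) if p == 1 and y == 0)
    negatives = sum(1 for y in y_true if y == 0)
    return fp / negatives if negatives else 0.0

# (true reoffense, predicted high risk) for two groups -- hypothetical
group_1 = ([0, 0, 0, 1, 1, 0], [1, 1, 0, 1, 1, 0])
group_2 = ([0, 0, 0, 1, 1, 0], [0, 1, 0, 1, 0, 0])

print(false_positive_rate(*group_1))  # 0.50
print(false_positive_rate(*group_2))  # 0.25: the tool burdens group 1 more
```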

Health insurance[edit]

In the health sector, especially in the US where much of it is private, distorted incentives for insurer selection can emerge from inaccurate estimates of patients' conditions. A model called Hierarchical Condition Category (HCC), designed by the Centers for Medicare & Medicaid Services, is in wide use for risk adjustment of insurers.[13] The model predicts a patient's expected cost based on diagnosis information. Although the model's predictions are relatively accurate for most groups, it has been shown to be group-unaware for some of them, and these biases can be exploited in risk selection.[14] When such a bias has no relation to the actual health condition of the patient, the exploitation persists largely because the widely used HCC tool effectively holds a monopoly on risk assessment; as a result, some unlucky groups receive bad terms from many health vendors at once, without a clear explanation for their discrimination. The issue of explainability of AI and machine learning algorithms therefore also plays an important role in addressing these concerns.
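The following sketch, with made-up numbers, illustrates a group-level residual check on a cost-prediction model of this kind; systematic under-prediction for one group is exactly the selection incentive described above.

```python
# A minimal sketch (made-up numbers) of a group-level residual check for
# a cost-prediction model: systematic under-prediction for one group
# creates an incentive to avoid enrolling its members.

from statistics import mean

patients = [  # (group, predicted_cost, actual_cost) -- hypothetical
    ("g1", 5000, 5200), ("g1", 7000, 6800), ("g1", 3000, 3100),
    ("g2", 4000, 5500), ("g2", 6000, 7400), ("g2", 2500, 3600),
]

for g in ("g1", "g2"):
    residuals = [actual - pred for grp, pred, actual in patients if grp == g]
    print(f"{g}: mean under-prediction {mean(residuals):.0f}")
# A large positive residual for g2 means its members cost more than the
# model pays for, which is the selection incentive described above.
```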

Determining creditworthiness[edit]

Many bank decisions, such as whether to award a home or business loan and at what interest rate, have recently been automated in the FinTech industry. For example, banks use algorithms to cope with the liquidity constraints they face by automating the process of determining an individual's creditworthiness accurately and in under a second. Machine learning algorithms accelerate the pipeline of loans and other products that bank customers ask for.[15] A corollary of these automated pipelines is the concern that low-income groups will be overcharged. It is important to remember that such biases also exist in human judgment, so returning to manual decisions is not the solution; rather, automating these decisions makes it possible to analyze the problems numerically and opens up opportunities to address them directly by incorporating fairness mechanisms.
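A minimal sketch, on toy data, of an "equal opportunity" style check for an automated credit pipeline: among applicants who in fact repay, approval rates are compared across groups. The groups and outcomes are hypothetical, not drawn from the cited report.

```python
# A minimal sketch (toy data) of an "equal opportunity" check for an
# automated credit pipeline: among applicants who would in fact repay,
# compare approval rates across groups.

def approval_rate_among_repayers(repaid, approved):
    qualified = [a for r, a in zip(repaid, approved) if r == 1]
    return sum(qualified) / len(qualified) if qualified else 0.0

# (actually repaid, approved) per applicant for two groups -- hypothetical
group_a = ([1, 1, 1, 0, 1], [1, 1, 1, 0, 1])
group_b = ([1, 1, 1, 0, 1], [1, 0, 1, 0, 0])

print(approval_rate_among_repayers(*group_a))  # 1.00
print(approval_rate_among_repayers(*group_b))  # 0.50: creditworthy applicants in group B are denied more often
```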

Social Networks[edit]

Social networks are everywhere in modern life, from Facebook and LinkedIn to dating apps and TikTok, and they have come under critical examination for their influence on social fairness and discrimination.[16] Inequality in their algorithms can have a dramatic effect on people's social connections, through clustering users into groups, shaping opinion diffusion, and recommending new social connections. There is therefore an urgent need to address fairness in these platforms, and to regulate and enforce fairness in such systems, since they influence society in an unprecedented manner.
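The sketch below, on a synthetic set of link recommendations, shows one simple way to quantify the clustering effect mentioned above: the share of recommended connections that cross group boundaries. The groups and recommendations are hypothetical.

```python
# A minimal sketch (synthetic data) of measuring how a "people you may
# know" style recommender distributes cross-group connections: a low
# cross-group share can reinforce clustering.

recommendations = [  # (user_group, recommended_user_group) -- hypothetical
    ("A", "A"), ("A", "A"), ("A", "B"), ("A", "A"),
    ("B", "B"), ("B", "B"), ("B", "B"), ("B", "A"),
]

cross = sum(1 for u, v in recommendations if u != v)
print(f"cross-group share of recommendations: {cross / len(recommendations):.2f}")
# 0.25 here: most suggested connections stay inside each group, which can
# amplify segregation in the network over time.
```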

References[edit]

  1. https://arxiv.org/pdf/2112.09975.pdf
  2. https://arxiv.org/abs/2010.05434
  3. https://arxiv.org/pdf/2112.09975.pdf
  4. https://www.wired.com/story/facebook-advertising-discrimination-settlement
  5. https://www.propublica.org/article/hud-sues-facebook-housing-discrimination-advertising-algorithms
  6. Potential for discrimination in online targeted advertising. In Proceedings of the 2018 ACM Conference on Fairness, Accountability, and Transparency, volume 81, pages 1–15, 2018.
  7. Chandler Nicholle Spinks. Contemporary Housing Discrimination: Facebook, Targeted Advertising, and the Fair Housing Act. Hous. L. Rev., 57:925, 2019.
  8. https://www.wired.com/story/facebook-advertising-discrimination-settlement
  9. https://www.vice.com/en/article/nzee5d/behind-the-color-blind-college-admissions-diversity-algorithm
  10. Marianne Bertrand and Sendhil Mullainathan. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4):991–1013, 2004.
  11. Sid Basu, Ruthie Berman, Adam Bloomston, John Campbell, Anne Diaz, Nanako Era, Benjamin Evans, Sukhada Palkar, and Skyler Wharton. Measuring discrepancies in airbnb guest acceptance rates using anonymized demographic data. Technical report, Airbnb, 2020.
  12. Anikó Hannák, Claudia Wagner, David Garcia, Alan Mislove, Markus Strohmaier, and Christo Wilson. Bias in online freelance marketplaces: Evidence from TaskRabbit and Fiverr. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pages 1914–1933, 2017.
  13. https://www.cms.gov/Medicare/Health-Plans/MedicareAdvtgSpecRateStats/Risk-Adjustors
  14. Michael Geruso, Timothy Layton, and Daniel Prinz. Screening in contract design: Evidence from the ACA health insurance exchanges. American Economic Journal: Economic Policy, 11(2):64–107, 2019.
  15. Lauren Saunders. FinTech and Consumer Protection: A Snapshot. 2019.
  16. Antoni Calvo-Armengol and Matthew O Jackson. The effects of social networks on employment and inequality. American economic review, 94(3):426–454, 2004.

This article "Fairness in AI-based systems" is from Wikipedia. The list of its authors can be seen in its historical and/or the page Edithistory:Fairness in AI-based systems. Articles copied from Draft Namespace on Wikipedia could be seen on the Draft Namespace of Wikipedia and not main one.