Academic Publication

Fairness in Machine Learning: A Survey

Citations: 350
Published: July 31, 2024

Research Abstract & Technology Focus

When Machine Learning technologies are used in contexts that affect citizens, companies as well as researchers need to be confident that there will not be any unexpected social implications, such as bias towards gender, ethnicity, and/or people with disabilities. There is significant literature on approaches to mitigate bias and promote fairness, yet the area is complex and hard to penetrate for newcomers to the domain. This article seeks to provide an overview of the different schools of thought and approaches that aim to increase the fairness of Machine Learning. It organizes approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorizing into a further 11 method areas. Although much of the literature emphasizes binary classification, a discussion of fairness in regression, recommender systems, and unsupervised learning is also provided along with a selection of currently available open source libraries. The article concludes by summarizing open challenges articulated as five dilemmas for fairness research.
Tags: fairness, machine learning, survey
Read the Full Paper
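To make the survey's pre-/in-/post-processing taxonomy concrete, here is a minimal, hypothetical sketch of a group fairness metric (demographic parity) and a post-processing intervention that applies per-group decision thresholds. All data, names, and thresholds below are invented for illustration and are not from the paper.

```python
# Toy illustration of a group fairness metric and a post-processing fix.
# All scores, group labels, and thresholds are invented for this sketch.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rate between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

def post_process(scores, groups, thresholds):
    """Post-processing: binarize raw scores with a group-specific threshold."""
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

# Hypothetical model scores for two groups, "a" and "b".
scores = [0.9, 0.8, 0.4, 0.3, 0.45, 0.4, 0.2, 0.1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# A single threshold of 0.5 favors group "a" (gap of 0.5).
unfair = post_process(scores, groups, {"a": 0.5, "b": 0.5})
print(demographic_parity_gap(unfair, groups))  # 0.5

# Lowering group "b"'s threshold equalizes positive rates (gap of 0.0).
fairer = post_process(scores, groups, {"a": 0.5, "b": 0.4})
print(demographic_parity_gap(fairer, groups))  # 0.0
```

Pre-processing methods would instead transform the training data (e.g. reweighing examples), and in-processing methods would add a fairness term to the training objective; the post-processing route shown here is often the simplest because it treats the model as a black box.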

Commercial Realization

Startups and open-source tools associated with the concepts explored in this paper.

  • GitHub
    THU-MAIC/OpenMAIC
    Open Multi-Agent Interactive Classroom — Get an immersive, multi-ag...
  • Product Hunt
    Superset
    Run an army of Claude Code, Codex, etc. on your machine

Associated Media Narrative