Web: http://arxiv.org/abs/1908.09635

Jan. 26, 2022, 2:11 a.m. | Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan

cs.LG updates on arXiv.org

With the widespread use of AI systems and applications in our everyday lives,
it is important to account for fairness when designing and engineering such
systems. These systems are used in many sensitive settings to make important,
life-changing decisions; it is therefore crucial to ensure that those decisions
do not reflect discriminatory behavior toward certain groups or populations. We
have recently seen work in machine learning, natural language processing, and
deep learning that addresses …
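Surveys in this area, including this one, discuss group-fairness notions such as demographic (statistical) parity, which asks that a model's positive-prediction rate be similar across groups defined by a sensitive attribute. The sketch below is a minimal illustration of that idea, not code from the paper; the function and variable names (`demographic_parity_difference`, `y_pred`, `sensitive`) are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative, not from the survey) of a demographic-parity check:
# compare the rate of positive predictions between two groups.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two groups (0 = parity)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # P(y_hat = 1 | group A)
    rate_b = y_pred[sensitive == 1].mean()  # P(y_hat = 1 | group B)
    return abs(rate_a - rate_b)

# Example: binary predictions for 8 individuals, two groups of 4.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5 -> large disparity
```

A value near 0 indicates the two groups receive positive predictions at similar rates; larger values flag a potential disparity worth auditing with the richer definitions the survey covers.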

Tags: arxiv, bias, fairness, machine learning, survey
