Aug. 11, 2022, 1:10 a.m. | Bhavya Ghai, Klaus Mueller

cs.LG updates on arXiv.org

With the rise of AI, algorithms have become better at learning underlying
patterns from training data, including ingrained social biases based on
gender, race, etc. Deploying such algorithms in domains such as hiring,
healthcare, and law enforcement has raised serious concerns about fairness,
accountability, trust, and interpretability in machine learning. To
alleviate this problem, we propose D-BIAS, a visual interactive tool that
embodies a human-in-the-loop AI approach for auditing and mitigating social
biases in tabular datasets. It uses …
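To make the auditing step concrete: a minimal sketch of one common fairness metric, the demographic parity difference, that a tool like this might surface for a tabular dataset. This is illustrative only, not the paper's method (D-BIAS is a causality-based visual tool, and the abstract is truncated here); the column names and toy data are hypothetical.

```python
def demographic_parity_difference(rows, group_key, outcome_key, favorable=1):
    """Absolute gap in favorable-outcome rates across groups.

    A value of 0 means every group receives the favorable outcome at the
    same rate; larger values indicate a larger disparity.
    """
    counts = {}  # group -> (favorable_count, total_count)
    for row in rows:
        fav, total = counts.get(row[group_key], (0, 0))
        counts[row[group_key]] = (fav + (row[outcome_key] == favorable),
                                  total + 1)
    rates = [fav / total for fav, total in counts.values()]
    return max(rates) - min(rates)

# Toy hiring data: 'group' is a sensitive attribute, 'hired' the outcome.
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "A", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
# Group A is hired at rate 3/4, group B at 1/4, so the gap is 0.5.
print(demographic_parity_difference(data, "group", "hired"))  # 0.5
```

A human-in-the-loop tool would pair a metric like this with interactive controls, letting the auditor inspect where the disparity arises and adjust the data or model accordingly.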

