Aug. 11, 2022, 1:10 a.m. | Bhavya Ghai, Klaus Mueller

cs.LG updates on arXiv.org

With the rise of AI, algorithms have become better at learning underlying
patterns from training data, including ingrained social biases based on
gender, race, etc. The deployment of such algorithms in domains such as
hiring, healthcare, and law enforcement has raised serious concerns about
fairness, accountability, trust, and interpretability in machine learning.
To alleviate this problem, we propose D-BIAS, a visual interactive tool that
embodies a human-in-the-loop AI approach for auditing and mitigating social
biases in tabular datasets. It uses …

