April 30, 2024, 4:42 a.m. | Charmaine Barker, Daniel Bethell, Dimitar Kazakov

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.18134v1 Announce Type: new
Abstract: Mitigating bias in automated decision-making systems, specifically deep learning models, is a critical challenge in achieving fairness. This complexity stems from factors such as nuanced definitions of fairness, unique biases in each dataset, and the trade-off between fairness and model accuracy. To address such issues, we introduce FairVIC, an innovative approach designed to enhance fairness in neural networks by addressing inherent biases at the training stage. FairVIC differs from traditional approaches that typically address biases …

