May 15, 2023, 12:43 a.m. | Shaina Raza, Parisa Osivand Pour, Syed Raza Bashir

cs.LG updates on arXiv.org

As machine learning is increasingly adopted in healthcare, it offers growing potential to improve outcomes and efficiency. However, it also risks perpetuating biases in data and model design that can harm protected groups defined by factors such as age, gender, and race. This study proposes an artificial intelligence framework, grounded in software engineering principles, for identifying and mitigating biases in data and models while ensuring fairness in healthcare settings. A case study is presented …
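The abstract does not specify which fairness metrics the framework uses, but a common starting point for detecting bias in model predictions is demographic parity. The sketch below is purely illustrative (the data and function are hypothetical, not from the paper): it measures the gap in positive-prediction rates between two groups.

```python
# Illustrative sketch: demographic parity difference, one common
# fairness metric for detecting bias in model predictions.
# All data below is synthetic and hypothetical.

def demographic_parity_difference(predictions, groups):
    """Return the absolute gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g. "A"/"B"), aligned with predictions
    """
    rates = {}
    for pred, grp in zip(predictions, groups):
        total, positives = rates.get(grp, (0, 0))
        rates[grp] = (total + 1, positives + pred)
    positive_rates = [p / t for t, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Synthetic example: group "A" receives positive predictions far more often.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
grps  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, grps)
print(round(gap, 2))  # 0.8 rate for A vs 0.2 for B -> gap of 0.6
```

A gap near zero suggests the two groups receive positive predictions at similar rates; a large gap flags a disparity worth auditing, which is the kind of signal a bias-identification pipeline would act on.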

