April 16, 2024, 4:42 a.m. | Biswajit Rout, Ananya B. Sai, Arun Rajkumar

cs.LG updates on arXiv.org

arXiv:2404.09664v1 Announce Type: new
Abstract: The rapid developments of various machine learning models and their deployments in several applications has led to discussions around the importance of looking beyond the accuracies of these models. Fairness of such models is one such aspect that is deservedly gaining more attention. In this work, we analyse the natural language representations of documents and sentences (i.e., encodings) for any embedding-level bias that could potentially also affect the fairness of the downstream tasks that rely …
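The truncated abstract describes probing document and sentence encodings for embedding-level bias. One common way such probes are done in the literature is a WEAT-style association test over word or sentence vectors; the sketch below is a minimal, hypothetical illustration of that computation using random toy vectors (all word lists and values are assumptions for demonstration, not the paper's actual method or data).

```python
import numpy as np

# Hypothetical toy vectors standing in for real pretrained embeddings
# (e.g. from a sentence encoder); random values, illustrative only.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8)
       for w in ["doctor", "nurse", "engineer", "he", "him", "she", "her"]}

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, group_a, group_b):
    """WEAT-style association: mean cosine similarity of `word` to
    attribute group A minus its mean similarity to group B."""
    a = np.mean([cos(emb[word], emb[g]) for g in group_a])
    b = np.mean([cos(emb[word], emb[g]) for g in group_b])
    return float(a - b)

male, female = ["he", "him"], ["she", "her"]
for w in ["doctor", "nurse", "engineer"]:
    print(w, round(association(w, male, female), 3))
```

A large positive or negative score for an occupation word would suggest an embedding-level association with one attribute group, which is the kind of signal such analyses look for before it propagates to downstream tasks.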
