April 16, 2024, 4:42 a.m. | Biswajit Rout, Ananya B. Sai, Arun Rajkumar

cs.LG updates on arXiv.org

arXiv:2404.09664v1 Announce Type: new
Abstract: The rapid developments of various machine learning models and their deployments in several applications has led to discussions around the importance of looking beyond the accuracies of these models. Fairness of such models is one such aspect that is deservedly gaining more attention. In this work, we analyse the natural language representations of documents and sentences (i.e., encodings) for any embedding-level bias that could potentially also affect the fairness of the downstream tasks that rely …

