Sept. 27, 2022, 1:14 a.m. | Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, Vinodkumar Prabhakaran

cs.CL updates on arXiv.org

Recent research has revealed undesirable biases in NLP data and models.
However, these efforts focus on social disparities in the West, and are not
directly portable to other geo-cultural contexts. In this paper, we focus on
NLP fairness in the context of India. We start with a brief account of
prominent axes of social disparities in India. We build resources for fairness
evaluation in the Indian context and use them to demonstrate prediction biases
along some of the axes. We then …
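The abstract is truncated above, but the core evaluation idea it describes — checking whether a model's predictions shift when identity terms along an Indian social axis (e.g., region) are swapped into otherwise identical sentences — can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the term list, templates, and `score_model` stub are all placeholder assumptions standing in for the paper's curated resources and classifier.

```python
from statistics import mean

# Hypothetical identity terms along one Indian social axis (region);
# the paper builds curated resources, these are illustrative only.
REGION_TERMS = ["Bihari", "Tamilian", "Punjabi", "Bengali"]

# Perturbation templates; a real evaluation set would be much larger.
TEMPLATES = [
    "My neighbour is {term}.",
    "The {term} engineer joined our team.",
]

def score_model(sentence: str) -> float:
    """Placeholder for a real classifier call (e.g., toxicity or
    sentiment); replace with your model. Returns a score in [0, 1]."""
    return 0.0  # stub so the sketch runs end to end

def bias_by_term() -> dict[str, float]:
    """Mean model score per identity term. Large gaps between terms
    on the same social axis suggest prediction bias along that axis."""
    return {
        term: mean(score_model(t.format(term=term)) for t in TEMPLATES)
        for term in REGION_TERMS
    }

if __name__ == "__main__":
    for term, avg in bias_by_term().items():
        print(f"{term}: {avg:.3f}")
```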
