Sept. 29, 2022, 1:15 a.m. | Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, Vinodkumar Prabhakaran

cs.CL updates on arXiv.org arxiv.org

Recent research has revealed undesirable biases in NLP data and models.
However, these efforts largely focus on social disparities in the West and are
not directly portable to other geo-cultural contexts. In this paper, we focus
on NLP fairness in the context of India. We start with a brief account of
prominent axes of social disparity in India. We build resources for fairness
evaluation in the Indian context and use them to demonstrate prediction biases
along some of the axes. We then …

