Nov. 22, 2022, 2:14 a.m. | Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, Vinodkumar Prabhakaran

cs.CL updates on arXiv.org arxiv.org

Recent research has revealed undesirable biases in NLP data and models.
However, these efforts focus on social disparities in the West and are not
directly portable to other geo-cultural contexts. In this paper, we focus on
NLP fairness in the context of India. We start with a brief account of the
prominent axes of social disparities in India. We build resources for fairness
evaluation in the Indian context and use them to demonstrate prediction biases
along some of the axes. We …

