Oct. 18, 2022, 1:13 a.m. | Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, Vinodkumar Prabhakaran

cs.CL updates on arXiv.org arxiv.org

Recent research has revealed undesirable biases in NLP data and models.
However, these efforts focus on social disparities in the West and are not
directly portable to other geo-cultural contexts. In this paper, we focus on
NLP fairness in the context of India. We start with a brief account of the
prominent axes of social disparities in India. We build resources for fairness
evaluation in the Indian context and use them to demonstrate prediction biases
along some of the axes. We …

arxiv case fairness india nlp