Oct. 13, 2022, 1:18 a.m. | Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, Vinodkumar Prabhakaran

cs.CL updates on arXiv.org

Recent research has revealed undesirable biases in NLP data and models. However, these efforts focus on social disparities in the West, and are not directly portable to other geo-cultural contexts. In this paper, we focus on NLP fairness in the context of India. We start with a brief account of the prominent axes of social disparities in India. We build resources for fairness evaluation in the Indian context and use them to demonstrate prediction biases along some of the axes. We then delve deeper into social stereotypes for …

