March 22, 2024, 4:48 a.m. | Jennifer Chien, Kevin R. McKee, Jackie Kay, William Isaac

cs.CL updates on arXiv.org

arXiv:2403.14467v1 Announce Type: cross
Abstract: Researchers and developers increasingly rely on toxicity scoring to moderate generative language model outputs, in settings such as customer service, information retrieval, and content generation. However, toxicity scoring may render pertinent information inaccessible, rigidify or "value-lock" cultural norms, and prevent language reclamation processes, particularly for marginalized people. In this work, we extend the concept of algorithmic recourse to generative language models: we provide users a novel mechanism to achieve their desired prediction by dynamically setting …
