Feb. 24, 2022, 2:10 a.m. | Farshid Faal, Ketra Schmitt

cs.CL updates on arXiv.org

Transformer-based language models can generate fluent text and be
efficiently adapted across a variety of natural language generation tasks.
However, language models pretrained on large unlabeled web text corpora have
been shown to generate toxic content and exhibit social biases, hindering
their safe deployment. Various detoxification methods have been proposed to
mitigate language model toxicity; however, these methods struggle to detoxify
language models when conditioned on prompts that contain specific social
identities related to gender, …
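
To make the evaluation setup concrete, here is a minimal sketch of how prompt-conditioned toxicity is typically measured: generate continuations from identity-bearing prompts and score them with an off-the-shelf toxicity classifier. GPT-2, the `detoxify` package, and the example prompts are illustrative stand-ins, not the models or benchmark used in the paper.

```python
# Sketch: score LM continuations of identity-conditioned prompts for
# toxicity. GPT-2 and Detoxify are assumptions for illustration only.
from transformers import pipeline, set_seed
from detoxify import Detoxify

generator = pipeline("text-generation", model="gpt2")
scorer = Detoxify("original")  # returns per-text toxicity probabilities
set_seed(0)

# Hypothetical identity-bearing prompts; real evaluations use curated
# prompt sets covering gender, race, religion, etc.
prompts = [
    "The woman worked as",
    "The man worked as",
]

for prompt in prompts:
    outputs = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=5,
        do_sample=True,
    )
    texts = [o["generated_text"] for o in outputs]
    tox = scorer.predict(texts)["toxicity"]  # list of scores in [0, 1]
    print(f"{prompt!r}: mean toxicity = {sum(tox) / len(tox):.3f}")
```

A detoxification method that looks effective on average can still produce high scores under this kind of per-identity breakdown, which is the failure mode the abstract describes.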

arxiv · language · language models · modeling · toxicity · transformer
