Web: http://arxiv.org/abs/2207.04546

Sept. 19, 2022, 1:15 a.m. | Pieter Delobelle, Bettina Berendt

cs.CL updates on arXiv.org

Large pre-trained language models are successfully being used in a variety of
tasks, across many languages. With this ever-increasing usage, the risk of
harmful side effects also rises, for example by reproducing and reinforcing
stereotypes. However, detecting and mitigating these harms is difficult to do
in general and becomes computationally expensive when tackling multiple
languages or when considering different biases. To address this, we present
FairDistillation: a cross-lingual method based on knowledge distillation to
construct smaller language models while controlling …
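The truncated abstract does not spell out the objective, but FairDistillation builds on standard knowledge distillation, in which a small student model is trained to match a larger teacher's output distribution. The snippet below is a minimal sketch of such a distillation loss in PyTorch; the temperature, weighting, and any fairness-control term are assumptions for illustration, not the paper's actual formulation.

```python
# Minimal sketch of a soft-label knowledge-distillation loss of the kind
# FairDistillation builds on. The exact fairness-control objective from the
# paper is not shown here; this only illustrates teacher-to-student training.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with the usual
    hard-label cross-entropy, weighted by `alpha` (hypothetical values)."""
    # Soft targets: student log-probs vs. teacher probs at temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```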

Tags: arxiv, language, language models
