July 7, 2022, 1:11 a.m. | Przemyslaw Joniak, Akiko Aizawa

cs.CL updates on arXiv.org

Language model debiasing has emerged as an important field of study in the
NLP community. Numerous debiasing techniques have been proposed, but bias ablation
remains an unaddressed issue. We demonstrate a novel framework for inspecting
bias in pre-trained transformer-based language models via movement pruning.
Given a model and a debiasing objective, our framework finds a subset of the
model containing less bias than the original model. We implement our framework
by pruning the model while fine-tuning it on the debiasing objective. …
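The core mechanism described here, freezing the pretrained weights and optimizing only pruning scores that act as gates while fine-tuning on a debiasing objective, is movement pruning. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the names TopKBinarizer and MaskedLinear, the keep ratio, and the stand-in loss are assumptions made for the example.

```python
# Minimal sketch of movement pruning: only the pruning scores are trained,
# and they gate frozen pretrained weights. Hypothetical names throughout.
import torch
import torch.nn as nn


class TopKBinarizer(torch.autograd.Function):
    """Keep the top-k fraction of scores; straight-through gradient."""

    @staticmethod
    def forward(ctx, scores, keep_ratio):
        k = max(1, int(keep_ratio * scores.numel()))
        # Threshold at the k-th largest score, i.e. the (n - k + 1)-th smallest.
        threshold = scores.flatten().kthvalue(scores.numel() - k + 1).values
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients to the scores unchanged.
        return grad_output, None


class MaskedLinear(nn.Module):
    """Linear layer with frozen pretrained weights gated by learned scores."""

    def __init__(self, linear: nn.Linear, keep_ratio: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.detach(), requires_grad=False)
        # Pruning scores: the only trainable parameters, one per weight.
        self.scores = nn.Parameter(torch.zeros_like(self.weight))
        self.keep_ratio = keep_ratio

    def forward(self, x):
        mask = TopKBinarizer.apply(self.scores, self.keep_ratio)
        return nn.functional.linear(x, self.weight * mask, self.bias)


if __name__ == "__main__":
    layer = MaskedLinear(nn.Linear(16, 16), keep_ratio=0.5)
    optimizer = torch.optim.Adam([layer.scores], lr=1e-2)
    x = torch.randn(8, 16)
    # Placeholder loss; a real run would use the debiasing objective instead.
    loss = layer(x).pow(2).mean()
    loss.backward()
    optimizer.step()  # only the pruning scores are updated
```

The loss above is only a placeholder; in the framework summarized by the abstract, the debiasing objective would take its place and the gated substructures would be parts of the pretrained transformer rather than a toy linear layer.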

Tags: arxiv, bias, biases, gender, gender bias, language, language models, pruning, transformer
