Debiasing Algorithm through Model Adaptation
March 18, 2024, 4:43 a.m. | Tomasz Limisiewicz, David Mareček, Tomáš Musil
stat.ML updates on arXiv.org
Abstract: Large language models are becoming the go-to solution for the ever-growing number of tasks. However, with growing capacity, models are prone to rely on spurious correlations stemming from biases and stereotypes present in the training data. This work proposes a novel method for detecting and mitigating gender bias in language models. We perform causal analysis to identify problematic model components and discover that mid-upper feed-forward layers are most prone to convey bias. Based on the …
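The abstract is cut off mid-sentence, but it describes locating feed-forward layers that convey gender bias and then mitigating that bias. A minimal sketch of one common mitigation in this family, projecting an estimated bias direction out of a layer's output weights (the setup and variable names here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

# Illustrative setup (hypothetical, not the paper's code):
# W is a feed-forward layer's output weight matrix, and v is an
# estimated "gender direction" in the layer's output space.
rng = np.random.default_rng(0)
d_in, d_out = 8, 6
W = rng.normal(size=(d_out, d_in))  # feed-forward output weights
v = rng.normal(size=d_out)          # assumed bias direction

# Orthogonal projection that removes the component along v:
# P = I - v v^T / (v^T v)
P = np.eye(d_out) - np.outer(v, v) / (v @ v)

# Editing the weights once removes the biased component from
# every output the layer will ever produce.
W_debiased = P @ W

# Check: for any input x, the edited layer's output has (numerically)
# zero component along the bias direction v.
x = rng.normal(size=d_in)
print(abs(v @ (W_debiased @ x)))
```

Because the projection is folded directly into the weights, this kind of edit adds no inference-time cost, which is one reason weight-editing approaches are attractive compared with prompt-based or decoding-time debiasing.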