March 27, 2024, 4:42 a.m. | Leonidas Gee, Andrea Zugarini, Novi Quadrianto

cs.LG updates on arXiv.org

arXiv:2403.17811v1 Announce Type: new
Abstract: To reduce the inference cost of large language models, model compression is increasingly used to create smaller scalable models. However, little is known about their robustness to minority subgroups defined by the labels and attributes of a dataset. In this paper, we investigate the effects of 18 different compression methods and settings on the subgroup robustness of BERT language models. We show that worst-group performance does not depend on model size alone, but also on …

