What Happens When Small Is Made Smaller? Exploring the Impact of Compression on Small Data Pretrained Language Models
April 9, 2024, 4:43 a.m. | Busayo Awobade, Mardiyyah Oduwole, Steven Kolawole
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Compression techniques have been crucial in advancing machine learning by enabling efficient training and deployment of large-scale language models. However, these techniques have received limited attention in the context of low-resource language models, which are trained on even smaller amounts of data and under computational constraints, a scenario known as the "low-resource double-bind." This paper investigates the effectiveness of pruning, knowledge distillation, and quantization on an exclusively low-resourced, small-data language model, AfriBERTa. Through a battery …
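The abstract names three compression techniques applied to AfriBERTa. As a concrete illustration of one of them, here is a minimal sketch of post-training dynamic quantization in PyTorch. The checkpoint name "castorini/afriberta_small" is an assumption for illustration only; the paper does not specify its exact setup, and this sketch is not the authors' experimental pipeline.

```python
# Minimal sketch: post-training dynamic quantization of an AfriBERTa-style model.
# The checkpoint "castorini/afriberta_small" is assumed for illustration.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("castorini/afriberta_small")

# Replace Linear layers with int8 dynamically quantized equivalents:
# weights are stored in int8, activations are quantized on the fly at inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

print(quantized_model)  # Linear layers now appear as DynamicQuantizedLinear modules
```

Dynamic quantization is the lightest-weight of the three techniques to try, since it needs no retraining data; pruning and knowledge distillation both require further training passes.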