March 8, 2024, 5:48 a.m. | Souvika Sarkar, Mohammad Fakhruddin Babar, Md Mahadi Hassan, Monowar Hasan, Shubhra Kanti Karmaker Santu

cs.CL updates on arXiv.org

arXiv:2304.11520v4 Announce Type: replace
Abstract: This paper presents a performance study of transformer language models under different hardware configurations and accuracy requirements, deriving empirical observations about the resulting resource/accuracy trade-offs. In particular, we study how the most commonly used BERT-based language models (viz., BERT, RoBERTa, DistilBERT, and TinyBERT) perform on embedded systems. We tested them on four off-the-shelf embedded platforms (Raspberry Pi, Jetson, UP2, and UDOO), each with 2 GB and 4 GB of memory (i.e., eight hardware configurations in total) …
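The core measurement behind the abstract's trade-off study, per-model inference cost on memory-constrained hardware, can be approximated with a simple CPU latency micro-benchmark. The sketch below is illustrative only, not the paper's actual measurement harness: it assumes the Hugging Face transformers library and PyTorch are installed, and the model name, warm-up count, and run count are hypothetical choices.

```python
import time
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical setup: swap in "bert-base-uncased", "roberta-base", etc.
# to compare the model families named in the abstract.
MODEL_NAME = "distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

inputs = tokenizer("An example sentence for latency measurement.",
                   return_tensors="pt")

# Warm-up passes so one-time allocation costs do not skew the timing.
with torch.no_grad():
    for _ in range(5):
        model(**inputs)

# Timed runs: report mean per-inference latency on the CPU.
runs = 50
start = time.perf_counter()
with torch.no_grad():
    for _ in range(runs):
        model(**inputs)
elapsed = time.perf_counter() - start
print(f"{MODEL_NAME}: {1000 * elapsed / runs:.1f} ms per inference (CPU)")
```

Running such a loop for each model on each board (and tracking peak resident memory alongside task accuracy) is one plausible way to populate the resource/accuracy trade-off the study describes.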

