March 6, 2024, 5:47 a.m. | Xin Lu, Yanyan Zhao, Bing Qin

cs.CL updates on arXiv.org arxiv.org

arXiv:2403.02436v1 Announce Type: new
Abstract: Pre-trained language models have been proven to possess strong base capabilities: they not only excel in in-distribution language modeling but also show powerful abilities in out-of-distribution language modeling, transfer learning, and few-shot learning. Unlike existing work that focuses on the influence of scale on base capabilities, our work examines the influence of architecture on them. Specifically, our concern is: how does architecture influence the base capabilities of pre-trained language models? In this work, we attempt to …
