Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. (arXiv:2112.03204v2 [cs.CL] UPDATED)
Web: http://arxiv.org/abs/2112.03204
May 6, 2022, 1:11 a.m. | Belinda Z. Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, Jacob Andreas
cs.CL updates on arXiv.org
When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model? In NLP, systematic features of LM generalization to individual examples are well characterized, but systematic aspects of LM adaptability to new tasks are not nearly as well understood. We present a large-scale empirical study of the features and limits of LM adaptability using a new benchmark, TaskBench500, built from 500 procedurally generated sequence modeling tasks. …
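The abstract describes a benchmark built from procedurally generated sequence modeling tasks. As a hedged illustration only (the function names and composition scheme below are hypothetical, not taken from the paper), one way to generate many such tasks is to compose a small set of atomic sequence transformations into larger composite tasks:

```python
# Hypothetical sketch of procedural task generation. These atomic tasks
# and the compose() helper are illustrative assumptions, not the paper's
# actual task generator.

def reverse_seq(tokens):
    """Atomic task: reverse the token sequence."""
    return tokens[::-1]

def upper_first(tokens):
    """Atomic task: uppercase the first token (no-op on empty input)."""
    return [tokens[0].upper()] + tokens[1:] if tokens else tokens

def compose(*fns):
    """Chain atomic tasks, left to right, into a new composite task."""
    def task(tokens):
        for fn in fns:
            tokens = fn(tokens)
        return tokens
    return task

# Composite task: reverse the sequence, then uppercase its new first token.
task = compose(reverse_seq, upper_first)
print(task(["the", "cat", "sat"]))  # -> ['SAT', 'cat', 'the']
```

Because every composition of atomic functions defines a distinct input-output mapping, a handful of atomic operations can yield hundreds of distinct tasks, which is one plausible reading of "procedurally generated."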