Web: http://arxiv.org/abs/2205.05055

May 13, 2022, 1:11 a.m. | Stephanie C.Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, Felix Hill

cs.LG updates on arXiv.org

Large transformer-based language models are able to perform few-shot learning
(also known as in-context learning), without having been explicitly trained for
it. We hypothesized that specific distributional properties of natural language
might drive this emergent phenomenon, as these characteristics might lead to a
kind of interpolation between few-shot meta-training (designed to elicit rapid
few-shot learning) and standard supervised training (designed to elicit gradual
in-weights learning). We also hypothesized that these distributional properties
could lead to emergent few-shot learning in domains …
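The distinction the abstract draws between few-shot meta-training (labels must be inferred from the in-context exemplars) and standard supervised training (labels are fixed and can be absorbed into the weights) can be made concrete with a small sketch. The snippet below is a toy illustration only, not code from the paper; the function and variable names (make_few_shot_sequence, make_supervised_example, CLASS_TO_LABEL) are hypothetical, and the "exemplars" are just class strings standing in for real inputs.

```python
import random

# Toy set of classes and a fixed global label for each one.
CLASSES = [f"class_{i}" for i in range(16)]
CLASS_TO_LABEL = {c: i for i, c in enumerate(CLASSES)}


def make_few_shot_sequence(n_shots=2, n_way=2, seed=None):
    """One few-shot (in-context) sequence: exemplar/label pairs followed
    by a query. Labels are assigned per sequence, so the answer must be
    read from the context rather than memorized in the weights."""
    rng = random.Random(seed)
    way_classes = rng.sample(CLASSES, n_way)
    # Sequence-local relabeling (0..n_way-1) prevents in-weights memorization.
    local_labels = {c: i for i, c in enumerate(way_classes)}

    context = []
    for c in way_classes:
        context.extend([(c, local_labels[c])] * n_shots)
    rng.shuffle(context)

    query_class = rng.choice(way_classes)
    return context, query_class, local_labels[query_class]


def make_supervised_example(seed=None):
    """One standard supervised example: a single exemplar with a fixed,
    globally consistent label that can be learned into the weights."""
    rng = random.Random(seed)
    c = rng.choice(CLASSES)
    return c, CLASS_TO_LABEL[c]


if __name__ == "__main__":
    ctx, query, answer = make_few_shot_sequence(seed=0)
    print("few-shot context:", ctx)
    print("query:", query, "-> correct label (from context):", answer)
    print("supervised example:", make_supervised_example(seed=0))
```

A training distribution whose sequences mix these two regimes (e.g. mostly fixed-label examples with occasional bursty, relabeled sequences) is the kind of interpolation the abstract hypothesizes about; the exact mixture used in the paper is not reproduced here.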

