April 17, 2024, 4:43 a.m. | Alexander Scarlatos, Andrew Lan

cs.LG updates on arXiv.org

arXiv:2305.14502v2 Announce Type: replace-cross
Abstract: Recent developments in large pre-trained language models have enabled unprecedented performance on a variety of downstream tasks. Achieving the best performance with these models often leverages in-context learning, where a model performs a (possibly new) task given one or more examples. However, recent work has shown that the choice of examples can have a large impact on task performance and that finding an optimal set of examples is non-trivial. While there are many existing methods for …
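
To make the setup concrete, below is a minimal sketch of in-context learning with a simple similarity-based example selector. This is not the paper's method (the feed tags suggest it uses reinforcement learning); it only illustrates the baseline the abstract describes, where a prompt is built from a few (input, output) demonstrations chosen from a pool. The embedding function and toy task data are hypothetical placeholders.

```python
# Sketch: few-shot prompt construction with similarity-based example
# selection. Illustrative only; not the method proposed in the paper.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real sentence encoder (hypothetical placeholder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def select_examples(query: str, pool: list, k: int = 2) -> list:
    """Pick the k pool examples whose inputs are most similar to the query."""
    q = embed(query)
    scores = [float(q @ embed(x)) for x, _ in pool]
    top = np.argsort(scores)[::-1][:k]
    return [pool[i] for i in top]

def build_prompt(query: str, examples: list) -> str:
    """Concatenate selected (input, output) demonstrations with the query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

pool = [("2 + 2", "4"), ("3 * 3", "9"), ("capital of France", "Paris")]
print(build_prompt("5 + 7", select_examples("5 + 7", pool)))
```

As the abstract notes, which demonstrations end up in the prompt can change task performance substantially, which is why methods beyond simple similarity retrieval (such as the learned selection policies this paper studies) are of interest.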

