April 8, 2024, 4:47 a.m. | Arkil Patel, Siva Reddy, Dzmitry Bahdanau, Pradeep Dasigi

cs.CL updates on arXiv.org

arXiv:2311.09635v2 Announce Type: replace
Abstract: Contemporary Large Language Models (LLMs) exhibit strong code generation and comprehension capabilities. A particularly promising area is their ability to interpret code modules from unfamiliar libraries to solve user-instructed tasks. Recent work has shown that large proprietary LLMs can learn novel library usage in-context from demonstrations. These results raise several open questions: whether demonstrations of library usage are required, whether smaller (and more open) models also possess such capabilities, etc. In this …
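To make the setup concrete, here is a minimal sketch (not from the paper) of how "learning library usage in-context from demonstrations" is typically framed: worked examples of an unfamiliar library's functions are prepended to the user's task as a few-shot prompt, and the model is asked to complete code for the new task. The `qtable` library, its methods, and the tasks below are all hypothetical placeholders.

```python
# Hypothetical sketch of an in-context library-learning prompt.
# "qtable" and its API are invented for illustration only.

DEMONSTRATIONS = [
    {
        "task": "Load the table 'sales.csv' and keep rows where amount > 100.",
        "code": 'tbl = qtable.read("sales.csv")\ntbl = tbl.where("amount > 100")',
    },
    {
        "task": "Compute the mean of column 'amount' in tbl.",
        "code": 'result = tbl.col("amount").mean()',
    },
]

def build_prompt(user_task: str) -> str:
    """Format demonstrations of the unfamiliar library, then the new task.

    The model sees the library only through these demonstrations and must
    generalize its usage to the held-out task at the end of the prompt.
    """
    parts = ["You are given examples of using the qtable library:\n"]
    for demo in DEMONSTRATIONS:
        parts.append(f"# Task: {demo['task']}\n{demo['code']}\n")
    parts.append(f"# Task: {user_task}\n")  # the model completes the code here
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Sort tbl by 'amount' in descending order."))
```

The open questions the abstract raises map directly onto this setup: whether the `DEMONSTRATIONS` block is needed at all (e.g., replacing it with documentation alone), and whether smaller open models can generalize from it as well as large proprietary ones.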

