LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error
March 8, 2024, 5:42 a.m. | Boshi Wang, Hao Fang, Jason Eisner, Benjamin Van Durme, Yu Su
cs.LG updates on arXiv.org arxiv.org
Abstract: Tools are essential for large language models (LLMs) to acquire up-to-date information and take consequential actions in external environments. Existing work on tool-augmented LLMs primarily focuses on the broad coverage of tools and the flexibility of adding new tools. However, a critical aspect that has surprisingly been understudied is simply how accurately an LLM uses tools for which it has been trained. We find that existing LLMs, including GPT-4 and open-source LLMs specifically fine-tuned for …