Jan. 28, 2024, 12:55 a.m. | LlamaIndex

LlamaIndex www.youtube.com

LLMs are great at reasoning and taking actions.

But previous frameworks for agentic reasoning (e.g. ReAct) were primarily focused on sequential reasoning, leading to higher latency and cost, and even poorer performance due to the lack of long-term planning.

LLMCompiler is a new framework by Kim et al. that introduces a compiler for multi-function calling. Given a task, the framework plans out a DAG of function calls. This planning not only allows for long-term thinking (which boosts performance) but also the determination of which steps can be …
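A minimal sketch of the core idea (hypothetical tool names, plain asyncio rather than the actual LLMCompiler or LlamaIndex API): once a planner has laid out the DAG, calls with no dependencies between them can be dispatched concurrently, and downstream steps wait only on the results they actually need.

```python
import asyncio

# Hypothetical tools standing in for LLM-issued function calls.
async def search_flights(city: str) -> str:
    await asyncio.sleep(0.1)  # simulate tool/API latency
    return f"flights to {city}"

async def search_hotels(city: str) -> str:
    await asyncio.sleep(0.1)
    return f"hotels in {city}"

async def summarize(flights: str, hotels: str) -> str:
    await asyncio.sleep(0.1)
    return f"Itinerary: {flights}; {hotels}"

async def main() -> None:
    # The planned DAG: two independent search calls that can run in
    # parallel, then a summarize step that depends on both outputs.
    flights, hotels = await asyncio.gather(
        search_flights("Tokyo"),
        search_hotels("Tokyo"),
    )
    print(await summarize(flights, hotels))

asyncio.run(main())
```

Executing the two independent calls together instead of one after the other is where the latency and cost savings over purely sequential ReAct-style loops come from.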

