March 17, 2024, 4:58 p.m. | LlamaIndex

LlamaIndex www.youtube.com

Long-term memory for LLMs is an unsolved problem, and doing naive retrieval from a vector database doesn’t work.

The recent iteration of MemGPT (Packer et al.) takes a big step in this direction. Drawing an analogy between the LLM and an operating system, the authors propose “virtual context management” to manage memory both in the context window and in external storage. Recent advances in function calling allow these agents to read from and write to these data sources, and to modify their own context.
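The idea can be sketched in a few lines: a bounded in-context buffer plus unbounded external storage, with functions the LLM could invoke via function calling to "page" memories in and out. This is a minimal illustrative sketch, not MemGPT's actual API; all names (`MemoryManager`, `archival_search`, `recall_to_context`) are hypothetical, and substring search stands in for vector retrieval.

```python
class MemoryManager:
    """Toy 'virtual context management': bounded context + external storage."""

    def __init__(self, context_limit: int = 4):
        self.context_limit = context_limit   # max messages kept in-context
        self.context: list[str] = []         # analog of the LLM context window
        self.archival: list[str] = []        # analog of external (vector DB) storage

    def append(self, message: str) -> None:
        # New messages enter the context window; overflow is evicted
        # ("paged out") to archival storage, mimicking OS virtual memory.
        self.context.append(message)
        while len(self.context) > self.context_limit:
            self.archival.append(self.context.pop(0))

    # The functions below are the kind an agent would expose via function calling.
    def archival_search(self, query: str) -> list[str]:
        # Naive substring match standing in for vector-database retrieval.
        return [m for m in self.archival if query.lower() in m.lower()]

    def recall_to_context(self, query: str) -> None:
        # "Page in" relevant archival memories, evicting old context as needed.
        for hit in self.archival_search(query):
            self.append(f"[recalled] {hit}")


mm = MemoryManager(context_limit=3)
for msg in ["User likes hiking", "User is named Ada",
            "Asked about weather", "Asked about restaurants"]:
    mm.append(msg)
print(mm.archival)      # oldest message was evicted to external storage
mm.recall_to_context("hiking")
print(mm.context[-1])   # the evicted memory is now back in-context
```

The key point the sketch illustrates is that memory movement is driven by explicit function calls the model itself can issue, rather than by a fixed retrieval pipeline bolted on from outside.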

We're excited to …

