March 17, 2024, 4:58 p.m. | LlamaIndex

LlamaIndex www.youtube.com

Long-term memory for LLMs is an unsolved problem, and doing naive retrieval from a vector database doesn’t work.

The recent iteration of MemGPT (Packer et al.) takes a big step in this direction. Drawing an analogy between the LLM and an operating system, the authors propose “virtual context management” to manage memory both inside the context window and in external storage. Recent advances in function calling allow these agents to read from and write to these data sources, and to modify their own context.
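To make the idea concrete, here is a minimal, self-contained sketch of that memory hierarchy: a bounded in-context buffer, an always-visible core memory the agent can edit, and external archival storage that paged-out messages fall back to. All class and function names below are illustrative stand-ins, not the actual MemGPT or LlamaIndex API, and the keyword search is a placeholder for real vector retrieval.

```python
# Illustrative sketch of MemGPT-style "virtual context management".
# Names are hypothetical; a real system would back archival storage with a vector DB.

from dataclasses import dataclass, field


@dataclass
class VirtualContext:
    max_context_items: int = 8                           # stand-in for the context-window limit
    core_memory: dict = field(default_factory=dict)      # always-in-context facts (persona, user info)
    context_buffer: list = field(default_factory=list)   # recent messages kept in the prompt
    archival_storage: list = field(default_factory=list) # external store, unbounded

    # --- functions the LLM would invoke via function calling ---------------
    def archival_insert(self, text: str) -> None:
        """Write a memory to external storage."""
        self.archival_storage.append(text)

    def archival_search(self, query: str) -> list:
        """Naive keyword retrieval; embeddings would replace this in practice."""
        return [m for m in self.archival_storage if query.lower() in m.lower()]

    def core_memory_replace(self, key: str, value: str) -> None:
        """Let the agent edit its own always-visible memory block."""
        self.core_memory[key] = value

    # --- paging logic run by the memory manager -----------------------------
    def append_message(self, message: str) -> None:
        """Add a message; page the oldest ones out to archival storage on overflow."""
        self.context_buffer.append(message)
        while len(self.context_buffer) > self.max_context_items:
            evicted = self.context_buffer.pop(0)
            self.archival_insert(evicted)     # evicted messages remain retrievable

    def build_prompt(self) -> str:
        """Assemble what the LLM actually sees: core memory plus recent messages."""
        core = "\n".join(f"{k}: {v}" for k, v in self.core_memory.items())
        recent = "\n".join(self.context_buffer)
        return f"[CORE MEMORY]\n{core}\n\n[RECENT MESSAGES]\n{recent}"


if __name__ == "__main__":
    ctx = VirtualContext(max_context_items=3)
    ctx.core_memory_replace("user_name", "Ada")
    for i in range(5):
        ctx.append_message(f"user: message {i}")
    print(ctx.build_prompt())                 # only the 3 most recent messages are in-context
    print(ctx.archival_search("message 0"))   # older, paged-out messages can still be found
```

The point of the sketch is the division of labor: the paging and retrieval functions are exposed as tools, so the model itself decides, via function calls, when to move information between the context window and external storage.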

We're excited to …

