May 4, 2023, 2:39 p.m. | James Briggs


Large Language Models (LLMs) have a data freshness problem. Even some of the most powerful models, like ChatGPT's gpt-3.5-turbo and GPT-4, have no idea about recent events.

The world, according to LLMs, is frozen in time. They only know the world as it appeared through their training data.

So, how do we handle this problem? We can use retrieval augmentation. This technique allows us to retrieve relevant information from an external knowledge base and give that information to our LLM. …
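
Below is a minimal sketch of the retrieval-augmentation idea described above: embed documents in an external knowledge base, retrieve the most relevant ones for a query, and prepend them to the prompt sent to the LLM. The `embed()` and `llm()` functions are hypothetical stand-ins for whichever embedding model and chat/completion API you use; this is an illustration of the pattern, not a specific library's implementation.

```python
# Sketch of retrieval augmentation. `embed()` and `llm()` are hypothetical
# placeholders for a real embedding model and a real LLM call.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for an embedding model; returns a vector for `text`."""
    raise NotImplementedError("plug in your embedding model here")

def llm(prompt: str) -> str:
    """Stand-in for a chat/completion model call."""
    raise NotImplementedError("plug in your LLM call here")

# 1. Build the external knowledge base: embed each document once.
documents = [
    "A document describing a recent event the LLM was not trained on.",
    "Another document with up-to-date domain knowledge.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# 2. Retrieve the documents most similar to the user's query.
def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents ranked by cosine similarity to the query."""
    q = embed(query)
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-10
    )
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

# 3. Give the retrieved context to the LLM alongside the question.
def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    prompt = (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return llm(prompt)
```

In practice the in-memory similarity search above is replaced by a vector database, and the prompt construction and LLM call are often handled by a framework such as LangChain, but the flow stays the same: embed, retrieve, then generate with the retrieved context.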

