Aug. 2, 2023, 7:11 p.m. | MLOps.community (www.youtube.com)

// Abstract
Large Language Models (LLMs) have shown remarkable capabilities in domains such as question-answering and information recall, but every so often, they just make stuff up. In this talk, we'll take a look at “LLM Hallucinations” and explore strategies to keep LLMs grounded and reliable in real-world applications.

We’ll start by walking through an example implementation of an "LLM-powered Support Center" to illustrate the problems caused by hallucinations. Next, I'll demonstrate how leveraging a searchable knowledge base can ensure …
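The abstract doesn't show the talk's actual implementation, but the grounding pattern it describes is commonly built as retrieval-augmented generation: retrieve relevant passages from the knowledge base, then constrain the model to answer from them. Below is a minimal sketch under that assumption; `KNOWLEDGE_BASE`, `search`, and `call_llm` are illustrative names, not from the talk, and `call_llm` is a hypothetical placeholder for whatever model client you use.

```python
# A minimal retrieval-grounded answering sketch, assuming a toy in-memory
# knowledge base. A real support center would use embeddings or a vector
# store instead of keyword overlap.

from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


KNOWLEDGE_BASE = [
    Document("Refunds", "Refunds are issued within 5 business days of approval."),
    Document("Password reset", "Users can reset passwords from the login page."),
]


def search(query: str, top_k: int = 2) -> list[Document]:
    """Naive keyword-overlap retrieval over the knowledge base."""
    terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in your LLM provider's client here.
    raise NotImplementedError


def answer(question: str) -> str:
    docs = search(question)
    context = "\n".join(f"- {d.title}: {d.text}" for d in docs)
    # Instruct the model to answer only from retrieved context, and to
    # decline when the context is insufficient, to curb hallucination.
    prompt = (
        "Answer the support question using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key design choice is the prompt contract: the model may only draw on retrieved passages and must admit when they don't cover the question, which is what keeps its answers grounded rather than invented.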

