Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools (Paper Explained)
June 26, 2024, 5:20 p.m. | Yannic Kilcher
An in-depth look at a recent Stanford paper examining the rate of hallucinations in various LegalTech research tools that incorporate LLMs.
OUTLINE:
0:00 - Intro
1:58 - What are legal research tools, and how do they use large language models?
5:30 - Overview and abstract of the paper
9:29 - What is a hallucination and why do they occur?
15:45 - What is retrieval augmented generation (RAG)?
25:00 - Why LLMs are a bad choice when reasoning …