all AI news
GROKKED LLM vs RAG? (Part 3)
June 8, 2024, noon | code_your_own_AI
Current AI research clearly indicates that established LLMs, such as Gemini Pro 1.5 or GPT-4 Turbo, fail at deep reasoning, even when integrated into complex RAG systems.
A grokking phase transition is essential for an LLM to activate its performance phase, reaching close to 99% accuracy on "unseen" tasks in the …
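The grokking setup the description alludes to can be sketched in code. The sketch below follows the classic algorithmic-task formulation (modular addition with a held-out split); the prime, split fraction, and function names are illustrative assumptions, not details from the video.

```python
# Minimal sketch of a grokking-style experiment setup: a small
# algorithmic task (modular addition) split into a training set and a
# held-out "unseen" set. The prime p and split fraction are assumptions
# for illustration, not values taken from the video.
import random

def modular_addition_dataset(p=97):
    """All (a, b) -> (a + b) mod p pairs: the full task space."""
    return [((a, b), (a + b) % p) for a in range(p) for b in range(p)]

def train_unseen_split(data, train_frac=0.5, seed=0):
    """Hold out a fixed fraction of pairs the model never trains on."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

data = modular_addition_dataset(p=97)
train, unseen = train_unseen_split(data)
# Grokking: long after a small transformer memorizes `train`, its
# accuracy on `unseen` jumps from near-chance to near-100% -- the
# phase transition the video description refers to.
print(len(train), len(unseen))  # 4704 4705
```

The point of the split is that "unseen" accuracy, not training accuracy, is what jumps at the phase transition.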
More from www.youtube.com / code_your_own_AI
Decoding AI's Blind Spots: Solving Causal Reasoning — 2 days, 11 hours ago | www.youtube.com
APPLE: NEW ML AI, Multimodal & Multitask 4M — 3 days, 9 hours ago | www.youtube.com
Financial AI Brilliance: 7 Children at Stanford? 😆 — 5 days, 9 hours ago | www.youtube.com
NEW TextGrad by Stanford: Better than DSPy — 1 week, 2 days ago | www.youtube.com
Inside my Brain: Med AI for my MRI Diagnosis? — 1 week, 4 days ago | www.youtube.com
BEST RAG you can buy: LAW AI (Stanford) — 1 week, 6 days ago | www.youtube.com
RAG explained step-by-step up to GROKKED RAG sys — 2 weeks, 1 day ago | www.youtube.com
Jobs in AI, ML, Big Data
AI Focused Biochemistry Postdoctoral Fellow
@ Lawrence Berkeley National Lab | Berkeley, CA
Senior Data Engineer
@ Displate | Warsaw
Solutions Engineer
@ Stability AI | United States
Lead BizOps Engineer
@ Mastercard | O'Fallon, Missouri (Main Campus)
Senior Solution Architect
@ Cognite | Kuala Lumpur
Senior Front-end Engineer
@ Cognite | Bengaluru