Feb. 14, 2024, 12:58 p.m. | MLOps.community


Omar Khattab, PhD candidate at Stanford, discusses DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting and pipelines with expert-created demonstrations. This episode is brought to us by @WeightsBiases.

Omar delves into the world of language models and the concept of "stacking," also known as chaining networks of programs. He explains the shift in perspective from viewing language models as task-specific transformers to …
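To make the "compiling" idea above concrete, here is a minimal sketch in the style of DSPy's public intro examples from around the time of this episode: a declarative signature, a small module, and a BootstrapFewShot teleprompter that bootstraps its own demonstrations from a few labeled examples. The tiny trainset, the SimpleQA module name, and the exact-match metric are illustrative assumptions, and client classes such as dspy.OpenAI may differ across DSPy versions.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure the language model (GPT-3.5 here; llama2-13b-chat can be wired in
# through DSPy's local/hosted model clients instead).
turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=turbo)

# A declarative signature: what goes in, what comes out.
class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

# A tiny pipeline (module) built from that signature.
class SimpleQA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.generate_answer = dspy.ChainOfThought(BasicQA)

    def forward(self, question):
        return self.generate_answer(question=question)

# A handful of labeled examples; in practice these come from your task's trainset.
trainset = [
    dspy.Example(question="What is the capital of France?",
                 answer="Paris").with_inputs("question"),
    dspy.Example(question="Who wrote Hamlet?",
                 answer="William Shakespeare").with_inputs("question"),
]

# "Compiling": the teleprompter runs the pipeline over the trainset, keeps the
# traces that pass the metric, and reuses them as bootstrapped few-shot demos.
teleprompter = BootstrapFewShot(metric=dspy.evaluate.answer_exact_match)
compiled_qa = teleprompter.compile(SimpleQA(), trainset=trainset)

print(compiled_qa(question="What is the capital of Italy?").answer)
```

The point of the sketch is that the prompts themselves are never hand-written: the compiled program carries its own self-bootstrapped demonstrations, which is what the episode contrasts with standard few-shot prompting and expert-created demonstrations.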

