Feb. 14, 2024, 12:58 p.m. | MLOps.community


Omar Khattab, PhD Candidate at Stanford, presents DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting and pipelines with expert-created demonstrations. Brought to us by @WeightsBiases.

Omar delves into the world of language models and the concept of "stacking," also known as chaining networks into programs. He explains the shift in perspective from viewing language models as task-specific transformers to … A minimal sketch of the kind of program the episode refers to follows below.
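For readers who haven't used DSPy, here is a minimal sketch of a compiled pipeline of the kind described above, assuming the DSPy Python API as it looked around early 2024. The BasicQA signature, the tiny trainset, and the exact-match metric are illustrative placeholders, not taken from the episode.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure the language model (gpt-3.5-turbo stands in for GPT-3.5 here).
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

# Declare what the module should do, rather than hand-writing a prompt.
class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

class SimpleQA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.generate_answer = dspy.ChainOfThought(BasicQA)

    def forward(self, question):
        return self.generate_answer(question=question)

# A tiny illustrative training set; in practice this comes from task data.
trainset = [
    dspy.Example(question="What is the capital of France?",
                 answer="Paris").with_inputs("question"),
    dspy.Example(question="Who wrote Hamlet?",
                 answer="William Shakespeare").with_inputs("question"),
]

# Metric used to accept or reject bootstrapped demonstrations.
def exact_match(example, pred, trace=None):
    return example.answer.lower() == pred.answer.strip().lower()

# "Compiling" lets DSPy bootstrap few-shot demonstrations for the pipeline
# instead of relying on manually crafted prompts.
teleprompter = BootstrapFewShot(metric=exact_match)
compiled_qa = teleprompter.compile(SimpleQA(), trainset=trainset)

print(compiled_qa(question="What is the tallest mountain on Earth?").answer)
```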

