May 10, 2024, 4 p.m. | Weights & Biases


In this episode of Gradient Dissent, Percy Liang, co-founder of Together AI and associate professor at Stanford University, discusses the challenges of benchmarking language models. Percy explains why evaluating these models requires a departure from traditional machine learning evaluation, which focuses on narrow tasks such as question answering or summarization; instead, a holistic approach is needed to capture their broad capabilities and identify their risks. This discussion sheds light on the complexities of developing benchmarks …

