Oct. 23, 2023, 2:45 p.m. | MLOps.community


// Abstract
Making LLMs reliable is hard. You can't debug or unit test them, at least not in the traditional sense. Instead, you'll need to turn to the practice of Observability: instrument your feature to produce rich telemetry and analyze its behavior from that data. Observability can also act as a key source of data for evaluations.
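The talk covers the idea rather than specific code, but a minimal sketch of this kind of instrumentation might look like the following. It uses the OpenTelemetry Python API (the telemetry standard Honeycomb ingests); the span name, attribute keys, and the `call_model` helper are hypothetical stand-ins, not names from the talk.

```python
from opentelemetry import trace

tracer = trace.get_tracer("llm-feature")


def call_model(prompt: str) -> str:
    # Stand-in for a real LLM client call (hypothetical helper).
    return "placeholder response"


def answer_question(prompt: str) -> str:
    # One span per LLM call; attributes capture inputs, outputs, and metadata
    # so behavior can be analyzed (and sampled for evals) later.
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.prompt", prompt)
        span.set_attribute("llm.model", "example-model")  # assumed attribute key
        response = call_model(prompt)
        span.set_attribute("llm.response", response)
        span.set_attribute("llm.response_length", len(response))
        return response
```

Exported to a tracing backend, each span becomes one record of a real user interaction: queryable for debugging misbehavior and reusable as evaluation data.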

// Bio
Phillip is on the product team at Honeycomb, where he leads their AI initiatives and works on a bunch of different …

