Observability for LLMs // Phillip Carter // LLMs in Production Conference Lightning Talk 4
Oct. 23, 2023, 2:45 p.m. | MLOps.community
Making LLMs reliable is hard. You can't debug or unit test them, at least not in the traditional sense. Instead, you need to turn to the practice of observability: instrument your feature to produce rich telemetry and analyze its behavior from that data. Observability can also serve as a key source of data for evaluations.
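To illustrate what "instrumenting your feature to produce rich telemetry" can look like, here is a minimal sketch of wrapping an LLM call so every invocation emits a structured event. All names here (`fake_llm`, `TELEMETRY`, `instrumented`) are hypothetical; in practice you would emit spans through a real tracing SDK such as OpenTelemetry rather than appending dicts to a list.

```python
import time
import json

# Hypothetical in-memory sink; a real system would export spans
# to an observability backend instead.
TELEMETRY = []

def instrumented(model_name):
    """Decorator that records one telemetry event per LLM call."""
    def wrap(fn):
        def inner(prompt, **kwargs):
            start = time.monotonic()
            error = None
            response = None
            try:
                response = fn(prompt, **kwargs)
                return response
            except Exception as exc:
                error = repr(exc)
                raise
            finally:
                # Capture the dimensions you'll want to query later:
                # which model, how big the input/output, how slow, did it fail.
                TELEMETRY.append({
                    "name": "llm.completion",
                    "model": model_name,
                    "prompt_chars": len(prompt),
                    "response_chars": len(response) if response else 0,
                    "duration_ms": (time.monotonic() - start) * 1000,
                    "error": error,
                })
        return inner
    return wrap

@instrumented("example-model")
def fake_llm(prompt):
    # Stand-in for a real model call.
    return prompt.upper()

fake_llm("summarize this ticket")
print(json.dumps(TELEMETRY[0], indent=2))
```

Events like these, aggregated across real traffic, are also the raw material the talk points to for building evaluations: you can slice failures by model, prompt size, or latency after the fact instead of guessing.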
// Bio
Phillip is on the product team at Honeycomb where he leads their AI initiatives and works on a bunch of different …