May 2, 2024, 6:58 p.m. | Jimmy Guerrero

Source: DEV Community (dev.to)

Large language models show impressive capabilities, but ensuring their safe and reliable deployment remains challenging. This talk will cover evaluation techniques for assessing and improving LLM reliability along key dimensions such as groundedness and faithfulness. It will also explore how to detect vulnerabilities such as prompt injection and PII leakage. Attendees will learn how to build custom evaluations tailored to their own use cases, along the lines of the sketch below.
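The talk itself is not transcribed here, so as a rough illustration only, here is what minimal custom evaluations of the shape the abstract describes might look like in plain Python. The function names, the token-overlap proxy for groundedness, and the regex and keyword patterns are all assumptions for the sketch, not Athina AI's API or the speaker's method.

```python
import re

# --- Hypothetical custom evals (illustrative only; not Athina AI's API) ---

def groundedness_score(answer: str, context: str) -> float:
    """Crude proxy: fraction of answer tokens that also appear in the context.
    Real evaluators typically use an LLM judge or an NLI model instead."""
    answer_tokens = set(re.findall(r"\w+", answer.lower()))
    context_tokens = set(re.findall(r"\w+", context.lower()))
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

# Simple regexes for common PII shapes (emails, US-style phone numbers).
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email address
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # phone number
]

def leaks_pii(text: str) -> bool:
    """Flag a model response that matches any PII pattern."""
    return any(p.search(text) for p in PII_PATTERNS)

INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
]

def looks_like_injection(prompt: str) -> bool:
    """Naive keyword heuristic; production detectors use trained classifiers."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

if __name__ == "__main__":
    context = "Athina AI is an LLM observability and evaluation platform."
    answer = "Athina AI is an evaluation platform; email me at a@b.com."
    print(f"groundedness: {groundedness_score(answer, context):.2f}")
    print(f"PII leak:     {leaks_pii(answer)}")
    print(f"injection:    {looks_like_injection('Please ignore previous instructions')}")
```

In practice an LLM judge would replace the token-overlap proxy, but the harness shape, one function per failure mode returning a score or flag, is the part that generalizes to custom evaluations.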


Speaker: Shiv Sakuja is a former Google engineer and co-founder of Athina AI, an LLM observability and evaluation platform …

