April 19, 2024, 4:50 p.m. | AI & Data Today

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion www.aidatatoday.com

LLMs aren’t perfect. In fact, they aren’t meant to be. LLMs are probabilistic systems: they generate different possible results based on what the input probably means and on which data is probably relevant. Given all those possibilities and probabilities, LLMs are bound to get some things wrong. In this episode, hosts Kathleen Walch and Ron Schmelzer discuss using custom instructions for LLMs and why doing so is a best practice.
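As a minimal sketch of the practice the episode discusses, custom instructions are typically supplied as a standing "system" message that the model sees before every user prompt, steering its behavior across the conversation. The example below uses the widely adopted OpenAI-style chat message format; the helper function and the instruction text are illustrative assumptions, not from the episode.

```python
def build_messages(custom_instructions: str, user_prompt: str) -> list[dict]:
    """Prepend custom instructions as a system message so the model
    applies them before reading the user's prompt."""
    return [
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": user_prompt},
    ]

# Hypothetical custom instructions that hedge against probabilistic errors:
messages = build_messages(
    "You are a concise assistant. Cite your sources, and say 'I don't know' "
    "rather than guessing when you are uncertain.",
    "Summarize the key risks of relying on LLM output.",
)
print(messages[0]["role"])  # system
```

The payload would then be passed to whichever chat-completion API is in use; the point is simply that the instructions travel with every request rather than being retyped into each prompt.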



Show Notes:




Free Intro to CPMAI course …

