April 10, 2024, 11:14 p.m.

Simon Willison's Weblog simonwillison.net

Notes on how to use LLMs in your product


A whole bunch of useful observations from Will Larson here. I love his focus on the key characteristic of LLMs that "you cannot know whether a given response is accurate", nor can you calculate a dependable confidence score for a response - and as a result you need to either "accept potential inaccuracies (which makes sense in many cases, humans are wrong sometimes too) or keep a Human-in-the-Loop (HITL) to validate …
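Since there is no dependable confidence score to gate on, the Human-in-the-Loop pattern amounts to routing every model draft through a review step before it is trusted. A minimal sketch of that shape, assuming any text-in/text-out function stands in for the model call (all names here are hypothetical, not from Larson's post):

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ReviewQueue:
    """Holds LLM drafts awaiting human validation (illustrative only)."""
    pending: List[str] = field(default_factory=list)

    def submit(self, response: str) -> None:
        self.pending.append(response)


def answer_with_hitl(question: str,
                     generate: Callable[[str], str],
                     queue: ReviewQueue) -> str:
    """Return the model's draft, but enqueue it for human review rather
    than treating it as ground truth -- we cannot compute a reliable
    confidence score, so a person validates before the answer ships."""
    draft = generate(question)
    queue.submit(draft)  # a reviewer approves or edits before publication
    return draft


# Stand-in for a real LLM call.
fake_llm = lambda q: f"draft answer to: {q}"

queue = ReviewQueue()
draft = answer_with_hitl("What year was Python released?", fake_llm, queue)
```

The alternative branch Larson describes, accepting potential inaccuracies, would simply return the draft without the queue; which branch fits depends on the cost of a wrong answer in your product.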

