Dec. 9, 2023, 6:08 a.m.

Simon Willison's Weblog (simonwillison.net)

I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.

We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.

It's only when the dreams go into territory deemed factually incorrect that we label it a "hallucination". It looks like a bug, …
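To make the "prompt starts the dream" mechanics concrete, here is a minimal toy sketch: a bigram sampler that learns which token tends to follow which from a tiny stand-in corpus, then continues whatever prompt it is given. This is not how any real LLM is implemented (the corpus, function names, and bigram model are purely illustrative), but the shape is the same: the prompt conditions the continuation, and everything after that is sampling guided by a hazy memory of the training text.

```python
# Toy sketch only: a "dream machine" that, given a prompt, repeatedly samples
# the next token from statistics absorbed from its (tiny, hypothetical) corpus.
import random
from collections import defaultdict

# Hypothetical stand-in for the model's training documents.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram counts: a hazy "recollection" of which token tends to follow which.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    followers = counts.get(token)
    if not followers:                      # nothing remembered: dream freely
        return random.choice(corpus)
    tokens, weights = zip(*followers.items())
    return random.choices(tokens, weights=weights)[0]

def dream(prompt: list[str], n_tokens: int = 8) -> list[str]:
    """The prompt starts the dream; sampling carries it someplace (maybe) useful."""
    out = list(prompt)
    for _ in range(n_tokens):
        out.append(sample_next(out[-1]))
    return out

print(" ".join(dream(["the", "cat"])))
# e.g. "the cat sat on the rug . the dog sat"
```

Whether the continuation lands on "the mat" or wanders off to "the rug" is decided by the same sampling step either way, which is the point of the analogy: a "hallucination" is not a separate failure mode in the machinery, just a dream we judged to be wrong.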


Data Scientist (m/f/x/d)

@ Symanto Research GmbH & Co. KG | Spain, Germany

Head of Data Governance - Vice President

@ iCapital | New York City, United States

Analytics Engineer / Data Analyst (Intermediate/Senior)

@ Employment Hero | Ho Chi Minh City, Vietnam - Remote

Senior Customer Data Strategy Manager (Remote, San Francisco)

@ Dynatrace | San Francisco, CA, United States

Software Developer - AI/Machine Learning

@ ICF | Nationwide Remote Office (US99)

Senior Data Science Manager - Logistics, Rider (all genders)

@ Delivery Hero | Berlin, Germany