Long context prompting for Claude 2.1
Dec. 6, 2023, 11:44 p.m. | Simon Willison's Weblog (simonwillison.net)
Claude 2.1 has a 200,000 token context, enough for around 500 pages of text. Convincing it to answer a question based on a single sentence buried deep within that content can be difficult, but Anthropic found that adding "Assistant: Here is the most relevant sentence in the context:" to the end of the prompt was enough to raise Claude 2.1’s score from 27% to 98% on their evaluation.
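The technique amounts to pre-filling the start of the Assistant's turn so the model begins its reply by locating the relevant sentence. A minimal sketch of how such a prompt might be assembled for Claude 2.1's completion-style Human/Assistant format (the document and question shown are placeholders, and the actual API call is omitted):

```python
ASSISTANT_PREFILL = "Here is the most relevant sentence in the context:"

def build_prompt(document: str, question: str) -> str:
    """Assemble a Claude 2.1 completion-style prompt whose Assistant
    turn is pre-filled with Anthropic's suggested lead-in sentence."""
    return (
        f"\n\nHuman: {document}\n\n"
        f"{question}\n\n"
        f"Assistant: {ASSISTANT_PREFILL}"
    )

# Placeholder inputs; in practice `document` would be up to ~200,000
# tokens of text with the answer buried somewhere inside it.
prompt = build_prompt(
    document="<hundreds of pages of text...>",
    question="What is the best thing to do in San Francisco?",
)
```

Because the prompt already ends mid-Assistant-turn, the model's completion continues from that sentence, which is what pushed the retrieval score from 27% to 98% in Anthropic's evaluation.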