Long context prompting for Claude 2.1
Dec. 6, 2023, 11:44 p.m. | Simon Willison's Weblog (simonwillison.net)
Claude 2.1 has a 200,000 token context, enough for around 500 pages of text. Convincing it to answer a question based on a single sentence buried deep within that content can be difficult, but Anthropic found that adding "Assistant: Here is the most relevant sentence in the context:" to the end of the prompt was enough to raise Claude 2.1’s score from 27% to 98% on their evaluation.
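The trick is to pre-fill the start of the Assistant turn so the model begins by quoting the relevant sentence before answering. A minimal sketch of how that might look with the text-completions prompt format Claude 2.1 used; the helper name and the example question are illustrative, not from the post:

```python
def build_prompt(context: str, question: str) -> str:
    """Build a Claude 2.1-style prompt that pre-fills the Assistant turn,
    nudging the model to locate the relevant sentence before answering."""
    return (
        f"\n\nHuman: {context}\n\n"
        f"{question}\n\n"
        # The pre-filled Assistant turn from Anthropic's write-up:
        "Assistant: Here is the most relevant sentence in the context:"
    )

prompt = build_prompt(
    context="<up to ~200,000 tokens of documents>",
    question="What is the best thing to do in San Francisco?",
)
print(prompt.endswith("Here is the most relevant sentence in the context:"))  # True
```

The resulting string would be passed as the `prompt` to the completions API; because the Assistant turn already begins with that sentence, the model's continuation starts by quoting from the context rather than hedging about not knowing the answer.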