Long context prompting for Claude 2.1
Dec. 6, 2023, 11:44 p.m. | Simon Willison's Weblog (simonwillison.net)
Claude 2.1 has a 200,000 token context, enough for around 500 pages of text. Convincing it to answer a question based on a single sentence buried deep within that content can be difficult, but Anthropic found that adding "Assistant: Here is the most relevant sentence in the context:" to the end of the prompt was enough to raise Claude 2.1’s score from 27% to 98% on their evaluation.
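The trick amounts to prefilling the start of Claude's reply. A minimal sketch, assuming Anthropic's completion-style prompt format from that era ("\n\nHuman:" / "\n\nAssistant:" turns); the context and question strings here are placeholders, not from Anthropic's evaluation:

```python
# Sketch of the prompt-prefill technique: append the beginning of
# Claude's answer so the model is steered to locate the relevant
# sentence before answering.

def build_prompt(context: str, question: str) -> str:
    """Build a completion-style prompt ending with the Assistant prefill."""
    return (
        f"\n\nHuman: {context}\n\n{question}"
        "\n\nAssistant: Here is the most relevant sentence in the context:"
    )

prompt = build_prompt(
    context="(imagine ~200,000 tokens of documents here)",
    question="What is the best thing to do in San Francisco?",
)
# The prompt now ends mid-Assistant-turn, so the model's completion
# begins by quoting the sentence it found in the context.
print(prompt.endswith("Here is the most relevant sentence in the context:"))
```

Because the model continues from the prefilled text, its first output is the quoted sentence itself, which Anthropic reported was enough to push recall from 27% to 98%.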