OpenAI cookbook: How to get token usage data for streamed chat completion response
May 7, 2024, 2:46 a.m. | Simon Willison's Weblog (simonwillison.net)
New feature in the OpenAI streaming API that I've been wanting for a long time: you can now set stream_options={"include_usage": True} to get back a "usage" block at the end of the stream showing how many input and output tokens were used.
This means you can now accurately account for the total cost of each streaming API call. Previously this information was only available for non-streaming …
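A minimal sketch of the consumption pattern this enables: content chunks arrive as usual, and a final chunk (with an empty `choices` list) carries the `usage` block. To keep it runnable without an API key, `SimpleNamespace` objects stand in for the SDK's chunk objects here; the real call would be `client.chat.completions.create(..., stream=True, stream_options={"include_usage": True})` with the `openai` Python SDK. The per-token prices in `estimate_cost` are placeholders, not real model pricing.

```python
from types import SimpleNamespace


def consume_stream(stream):
    """Collect streamed text plus the trailing usage block.

    Ordinary chunks have a non-empty `choices` list; when
    stream_options={"include_usage": True} is set, the final
    chunk has empty `choices` and a populated `usage`.
    """
    text, usage = [], None
    for chunk in stream:
        if chunk.choices:
            text.append(chunk.choices[0].delta.content or "")
        if chunk.usage is not None:
            usage = chunk.usage
    return "".join(text), usage


def estimate_cost(prompt_tokens, completion_tokens,
                  input_price_per_m=5.0, output_price_per_m=15.0):
    """Dollar cost of a call; per-million-token prices are placeholders."""
    return (prompt_tokens * input_price_per_m
            + completion_tokens * output_price_per_m) / 1_000_000


# Stand-in chunks mimicking the shape of the SDK's streamed response:
fake_stream = [
    SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content="Hello"))],
        usage=None),
    SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=" world"))],
        usage=None),
    SimpleNamespace(  # final usage-only chunk
        choices=[],
        usage=SimpleNamespace(prompt_tokens=9, completion_tokens=2)),
]

text, usage = consume_stream(fake_stream)
```

With the real SDK the loop body is identical; only the source of the chunks changes.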