Oct. 7, 2023, 11:48 a.m. | AI Jason

AI Jason www.youtube.com

It's hard to get an LLM to generate large amounts of content and take in large inputs. To solve this, meet StreamingLLM: it extends Llama-2 and Falcon to handle up to 4 million tokens, with up to 22x faster inference than a standard LLM ⚡️

Now you can even generate a whole book with an LLM!
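The core idea behind StreamingLLM is an "attention sink" KV-cache policy: keep the first few tokens (the sinks) permanently, plus a rolling window of the most recent tokens, and evict everything in between. Here's a minimal sketch of that eviction policy in plain Python — a hypothetical simplified model for illustration, not the actual mit-han-lab/streaming-llm API, and the `n_sink`/`window` sizes are made-up defaults:

```python
from collections import deque

class SinkKVCache:
    """Toy model of StreamingLLM-style cache eviction: the first
    n_sink token positions are never evicted; all later positions
    live in a fixed-size rolling window (hypothetical sketch)."""

    def __init__(self, n_sink=4, window=8):
        self.n_sink = n_sink
        self.sinks = []                      # attention-sink tokens, kept forever
        self.recent = deque(maxlen=window)   # rolling window; deque auto-evicts oldest

    def append(self, token):
        if len(self.sinks) < self.n_sink:
            self.sinks.append(token)         # still filling the sink slots
        else:
            self.recent.append(token)        # oldest window entry is dropped when full

    def kept(self):
        return self.sinks + list(self.recent)

# Feed 20 token ids through the cache
cache = SinkKVCache(n_sink=4, window=8)
for t in range(20):
    cache.append(t)
print(cache.kept())  # -> [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```

Because the kept set never grows beyond `n_sink + window` entries, attention cost stays constant no matter how long generation runs — that's what allows streaming past the model's trained context length.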

🔗 Links
- Follow me on twitter: https://twitter.com/jasonzhou1993
- Join my AI email list: https://www.ai-jason.com/
- My discord: https://discord.gg/eZXprSaCDE
- StreamingLLM Github: https://github.com/mit-han-lab/streaming-llm

👋🏻 About Me
My name is Jason Zhou, a product …

