Google announces multi-modal Gemini 1.5 with million-token context length
InfoQ - AI, ML & Data Engineering www.infoq.com
One week after announcing Gemini 1.0 Ultra, Google announced additional details about its next-generation model, Gemini 1.5. The new iteration comes with an expanded context window of up to one million tokens and the adoption of a "Mixture of Experts" (MoE) architecture, promising to make the model both faster and more efficient. The new model also includes expanded multimodal capabilities.
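The efficiency claim behind MoE is that a gating network selects only a few experts per token, so compute per token stays roughly constant even as total parameters grow. Below is a minimal, illustrative sketch of top-k expert routing in NumPy; all names, sizes, and the routing scheme are assumptions for illustration, not Gemini 1.5's actual (unpublished) implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts in the layer (illustrative value)
TOP_K = 2         # experts actually evaluated per token
DIM = 16          # hidden dimension (illustrative value)

# Each "expert" is modeled here as a small weight matrix.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through only the top-k experts."""
    logits = x @ gate_w                # gating scores, shape (NUM_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]  # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the remaining experts
    # are never evaluated, which is the source of the efficiency win.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_layer(token)
print(out.shape)  # (16,)
```

Only 2 of the 8 expert matrices are multiplied for each token, so the layer's active compute is a fraction of its total parameter count, which is the trade-off MoE architectures exploit.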
By Andrew Hoblitzell