June 27, 2024, 3:38 p.m. | Siddharth Jindal

Analytics India Magazine analyticsindiamag.com

The 27B model can perform inference on a single NVIDIA H100 Tensor Core GPU or TPU host, reducing deployment costs.
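To illustrate the single-accelerator claim, below is a minimal sketch of loading the 27B model for inference on one GPU using the Hugging Face transformers library. The model ID google/gemma-2-27b-it, the bfloat16 setting, and the memory estimate are assumptions for illustration, not details stated in the article.

```python
# Minimal sketch (assumptions noted above): load Gemma 2 27B in half precision
# so the weights (~54 GB in bf16) fit on a single 80 GB H100, then generate text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-27b-it"  # assumed instruction-tuned checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit on one device
    device_map="auto",           # requires `accelerate`; maps the model to the GPU
)

prompt = "Summarize why single-GPU inference lowers deployment costs."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```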


The post Google Rolls Out Gemma 2, Leaves Llama 3 Behind appeared first on Analytics India Magazine.

