March 16, 2024, 4:25 p.m. | /u/-x-Knight

Machine Learning www.reddit.com

Hi guys, I have made some modifications to the Llama2 repository to utilize TPU v3-8 hardware, so it can perform Llama2 7B (and even 13B) chat-completion inference without graph recompilation. It is still slower than an Nvidia P100 when generating text at batch size 1, so it is not suitable for real-time inference, but (TPU being TPU) it shines with batched text generation. I used it to generate a large amount of text for research purposes. Hope it benefits the community.

Here's the …
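The post doesn't include code, but the "no graph recompilation" claim usually comes down to keeping input shapes static: pad every prompt to a fixed bucket length so XLA traces and compiles the forward pass exactly once, then reuses that graph for all requests. Below is a minimal JAX sketch of that idea; `MAX_SEQ_LEN`, `PAD_ID`, and the toy `forward` function are illustrative placeholders, not taken from the author's repository.

```python
import jax
import jax.numpy as jnp

MAX_SEQ_LEN = 512   # static bucket size (assumption, not from the post)
PAD_ID = 0          # hypothetical pad token id

@jax.jit
def forward(tokens):
    """Stand-in for one transformer forward pass.

    Because `tokens` always has the same (batch, MAX_SEQ_LEN) shape,
    XLA compiles this function once and reuses the compiled graph for
    every prompt, which is what avoids recompilation on TPU.
    """
    # Toy computation in place of the real Llama2 layers.
    return jnp.sum(tokens.astype(jnp.float32), axis=-1)

def pad_batch(prompts):
    """Right-pad variable-length prompts to the static bucket size."""
    batch = jnp.full((len(prompts), MAX_SEQ_LEN), PAD_ID, dtype=jnp.int32)
    for i, p in enumerate(prompts):
        batch = batch.at[i, : len(p)].set(jnp.asarray(p, dtype=jnp.int32))
    return batch

# Two differently sized prompts hit the same compiled graph.
out1 = forward(pad_batch([[1, 2, 3]]))
out2 = forward(pad_batch([[4, 5, 6, 7, 8, 9]]))
```

Padding wastes some compute on the pad positions, which fits the trade-off described in the post: per-request latency is worse than a GPU at batch size 1, but throughput is strong once generation is batched.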

Tags: 13b, batch-size, chat, graph, hardware, inference, kaggle, llama2, machinelearning, nvidia, text, tpu
