March 8, 2024, 10:47 a.m.

Simon Willison's Weblog (simonwillison.net)

You can now train a 70b language model at home


Jeremy Howard and team: "Today, we’re releasing Answer.AI’s first project: a fully open source system that, for the first time, can efficiently train a 70b large language model on a regular desktop computer with two or more standard gaming GPUs (RTX 3090 or 4090)."


This is about fine-tuning an existing model, not training one from scratch.


There are two tricks at play here. The first is QLoRA, which can …
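For a sense of what QLoRA-style fine-tuning looks like in practice, here is a minimal sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. This is an illustration of the general technique, not the Answer.AI FSDP/QLoRA code; the model name, LoRA rank, and target modules are assumptions.

```python
# Minimal QLoRA fine-tuning setup sketch (illustrative, not the Answer.AI system).
# Assumes transformers, peft, and bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-70b-hf"  # assumed model; any causal LM works

# Load the base model weights quantized to 4-bit NF4 -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach small trainable LoRA adapters; only these adapter weights are updated
# during fine-tuning, while the quantized base model stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The combination matters because the 4-bit base weights shrink the memory footprint enough that a 70b model can be sharded across a pair of consumer GPUs, while the LoRA adapters keep the number of trainable parameters small.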

