Oct. 24, 2023, 1 p.m. | Anthony Alford

InfoQ - AI, ML & Data Engineering www.infoq.com

A team from the University of Washington and Google Research recently open-sourced Distilling Step-by-Step, a technique for fine-tuning smaller language models. Distilling Step-by-Step requires less training data than standard fine-tuning and produces smaller models that can outperform few-shot prompted large language models (LLMs) with 700x as many parameters.
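At a high level, the technique fine-tunes a small sequence-to-sequence model on two tasks at once: predicting the task label and generating a rationale that was previously extracted from an LLM via chain-of-thought prompting. The sketch below illustrates that multi-task objective with a small HuggingFace T5 model; the task prefixes, example data, and rationale_weight are illustrative assumptions, not the team's released code.

```python
# Minimal sketch of the Distilling Step-by-Step multi-task objective:
# the small model is trained both to predict the label and to generate
# an LLM-extracted rationale for the same input. Field names, prefixes,
# and the loss weight below are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
rationale_weight = 0.5  # assumed weighting between the two tasks

# Each example carries an input, a gold label, and a rationale that was
# extracted beforehand from a large LLM via chain-of-thought prompting.
examples = [
    {
        "input": "premise: A dog runs. hypothesis: An animal moves.",
        "label": "entailment",
        "rationale": "A dog is an animal and running is a form of moving.",
    },
]

def example_loss(example):
    # Task 1: predict the label from the prefixed input.
    enc = tokenizer("[label] " + example["input"], return_tensors="pt")
    lab = tokenizer(example["label"], return_tensors="pt").input_ids
    label_loss = model(**enc, labels=lab).loss

    # Task 2: generate the rationale from the same input.
    enc = tokenizer("[rationale] " + example["input"], return_tensors="pt")
    rat = tokenizer(example["rationale"], return_tensors="pt").input_ids
    rationale_loss = model(**enc, labels=rat).loss

    return label_loss + rationale_weight * rationale_loss

model.train()
for example in examples:
    loss = example_loss(example)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At inference time only the label task is used, so the rationale generation adds no runtime cost to the deployed model.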
