Google Open-Sources AI Fine-Tuning Method Distilling Step-by-Step
InfoQ - AI, ML & Data Engineering (www.infoq.com)
A team from the University of Washington and Google Research recently open-sourced Distilling Step-by-Step, a technique for fine-tuning smaller language models. Distilling Step-by-Step requires less training data than standard fine-tuning and produces smaller models that can outperform few-shot prompted large language models (LLMs) with 700x as many parameters.
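At a high level, the technique extracts chain-of-thought rationales from a large teacher LLM and then trains the small model on a multi-task objective: predict the task label and reproduce the rationale, with the two losses summed. The sketch below illustrates that combined loss in PyTorch with Hugging Face Transformers; it is a minimal illustration, not the authors' released code, and the model checkpoint, task prefixes, and `rationale_weight` value are assumptions chosen for the example.

```python
# Minimal sketch of the Distilling Step-by-Step multi-task objective:
# the small model is trained to produce both the task label and the
# LLM-extracted rationale, and the two cross-entropy losses are summed.
# Checkpoint, prefixes, and weighting are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def distill_step_by_step_loss(question, label, rationale, rationale_weight=1.0):
    """Combined loss: predict the label and reproduce the teacher's rationale."""
    # Task prefixes tell the model which output is expected for the same input.
    label_inputs = tokenizer("[label] " + question, return_tensors="pt")
    rationale_inputs = tokenizer("[rationale] " + question, return_tensors="pt")
    label_targets = tokenizer(label, return_tensors="pt").input_ids
    rationale_targets = tokenizer(rationale, return_tensors="pt").input_ids

    # Passing labels to a seq2seq model returns its cross-entropy loss.
    label_loss = model(**label_inputs, labels=label_targets).loss
    rationale_loss = model(**rationale_inputs, labels=rationale_targets).loss
    return label_loss + rationale_weight * rationale_loss

# Example training signal for one NLI-style instance; the rationale would
# come from chain-of-thought prompting of a large teacher model.
loss = distill_step_by_step_loss(
    question="premise: A man plays guitar. hypothesis: A person makes music.",
    label="entailment",
    rationale="Playing guitar is a way of making music, so the hypothesis follows.",
)
loss.backward()  # gradients for a single optimizer step
```

Because the rationale-generation task is only used during training, the deployed student model can answer with labels alone, keeping inference cost at the small-model level.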
By Anthony Alford