June 28, 2022, 8 a.m. | Anthony Alford


Researchers at Stanford University have open-sourced Diffusion-LM, a non-autoregressive generative language model that allows fine-grained control over its output text. When evaluated on controlled text-generation tasks, Diffusion-LM outperforms existing methods.
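As a rough illustration of the idea behind this approach: Diffusion-LM runs a diffusion process over continuous word embeddings and steers generation by applying classifier gradients to the intermediate latents during denoising, then "rounds" the final latents back to discrete tokens. The sketch below is a minimal, simplified version of that loop; the names (denoise_model, control_classifier, vocab_embeddings) are hypothetical placeholders rather than the released code's API, and details such as the noise schedule and per-step noise injection are omitted.

import torch

def controlled_sample(denoise_model, control_classifier, vocab_embeddings,
                      seq_len=64, emb_dim=128, num_steps=200, guidance_scale=2.0):
    # Start from pure Gaussian noise in the continuous embedding space.
    x = torch.randn(1, seq_len, emb_dim)
    for t in reversed(range(num_steps)):
        t_batch = torch.full((1,), t, dtype=torch.long)
        # Predict the denoised latent for the current diffusion step.
        with torch.no_grad():
            x_pred = denoise_model(x, t_batch)
        # Nudge the latent toward satisfying the control objective
        # (e.g. a target syntax or attribute) using classifier gradients.
        x_guided = x_pred.detach().requires_grad_(True)
        control_score = control_classifier(x_guided, t_batch)  # higher = better match
        grad = torch.autograd.grad(control_score.sum(), x_guided)[0]
        x = x_pred + guidance_scale * grad
    # "Round" the final latents to the nearest word embeddings to recover tokens.
    dists = torch.cdist(x.squeeze(0), vocab_embeddings)  # (seq_len, vocab_size)
    return dists.argmin(dim=-1)

Because the denoising happens over the whole sequence at once rather than token by token, the classifier can influence global properties of the text (such as its parse structure or length) instead of only the next word, which is what enables the fine-grained control described above.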

