Feb. 15, 2024, 5:46 a.m. | Xiaochuang Han, Sachin Kumar, Yulia Tsvetkov, Marjan Ghazvininejad

cs.CL updates on arXiv.org

arXiv:2305.14771v2 Announce Type: replace
Title: David helps Goliath: Inference-time Collaboration Between Small Specialized and Large General Diffusion LMs
Abstract: Diffusion-based language models are emerging as a promising alternative to autoregressive LMs: they approach the competence of autoregressive LMs while offering nuanced controllability at inference time. While autoregressive LMs have benefited immensely from scaling and instruction-based learning, existing studies of diffusion LMs have been conducted at a smaller scale. Starting with a recently proposed diffusion model, SSD-LM, in this work we first explore methods to scale it from 0.4B to 13B parameters, proposing techniques to …
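The controllability claim comes from how diffusion LMs decode: instead of emitting tokens strictly left to right, they iteratively denoise a continuous representation of the output (SSD-LM works in logit space over the vocabulary), so external signals can be folded in at every step. The sketch below is a minimal illustration of such a denoising loop in PyTorch, not the paper's actual implementation; the `denoiser` and optional `guidance` callables are hypothetical stand-ins.

```python
import torch

def diffusion_decode(denoiser, vocab_size, seq_len, num_steps=100, guidance=None):
    # Start from pure noise in logit space over the vocabulary
    # (an SSD-LM-style simplex/logit-based diffusion view).
    logits = torch.randn(1, seq_len, vocab_size)
    for t in reversed(range(1, num_steps + 1)):
        # The denoiser predicts clean token logits from the noisy ones.
        pred = denoiser(logits, torch.tensor([t]))
        if guidance is not None:
            # Inference-time control: nudge the prediction with an
            # external signal (e.g., logits from a second model).
            pred = pred + guidance(pred)
        # Re-noise toward the prediction, with less noise each step;
        # the final step (t == 1) adds no noise.
        noise_scale = (t - 1) / num_steps
        logits = pred + noise_scale * torch.randn_like(pred)
    return logits.argmax(dim=-1)

# Toy run: a random linear "denoiser" just to exercise the loop.
vocab_size, seq_len = 32, 8
proj = torch.nn.Linear(vocab_size, vocab_size)
tokens = diffusion_decode(lambda x, t: proj(x), vocab_size, seq_len, num_steps=20)
print(tokens.shape)  # torch.Size([1, 8])
```

Because the guidance hook runs inside every denoising step, a second (e.g., small specialized) model can steer generation without retraining, which is the kind of inference-time collaboration the abstract gestures at.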
