Oct. 27, 2022, 1:12 a.m. | Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong

cs.LG updates on arXiv.org arxiv.org

Recently, diffusion models have emerged as a new paradigm for generative
models. Despite the success in domains using continuous signals such as vision
and audio, adapting diffusion models to natural language is difficult due to
the discrete nature of text. We tackle this challenge by proposing DiffuSeq: a
diffusion model designed for sequence-to-sequence (Seq2Seq) text generation
tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find
that DiffuSeq achieves performance comparable to, or better than, six
established baselines, …
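The core difficulty the abstract names is that standard diffusion operates on continuous signals, while text is discrete; a common workaround is to run the noising process on continuous token embeddings instead of tokens. The sketch below illustrates that idea with a plain DDPM-style forward step and a linear beta schedule. The function name, schedule, and dimensions are illustrative assumptions, not DiffuSeq's actual formulation (which uses its own partial-noising scheme).

```python
import numpy as np

def forward_diffuse(x0, t, num_steps=1000, seed=0):
    """Noise continuous token embeddings x0 up to diffusion step t.

    Illustrative DDPM-style forward process with a linear beta schedule;
    DiffuSeq's real noising scheme differs. x0 has shape (seq_len, emb_dim).
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, num_steps)       # linear noise schedule
    alpha_bar = np.cumprod(1.0 - betas)[t]           # cumulative signal fraction
    noise = rng.standard_normal(x0.shape)
    # Closed-form sample of x_t given x_0: sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

# Toy usage: a "sentence" of 5 tokens embedded in 16 dimensions.
emb = np.zeros((5, 16))
xt, eps = forward_diffuse(emb, t=999)  # near the end, xt is almost pure noise
```

A learned model would then be trained to reverse this process, denoising back toward clean embeddings that are finally rounded to discrete tokens.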

Tags: arxiv, diffusion models, sequence-to-sequence, text generation
