Nov. 24, 2022, 7:18 a.m. | Jinyi Hu, Xiaoyuan Yi, Wenhao Li, Maosong Sun, Xing Xie

cs.CL updates on arXiv.org

The Variational Auto-Encoder (VAE) has been widely adopted in text generation.
Among its many variants, the recurrent VAE learns token-wise latent variables,
each conditioned on the preceding ones, a design that captured sequential
variability well in the era of RNNs. However, it is unclear how to incorporate
such recurrent dynamics into the recently dominant Transformer, given its
parallel computation. In this work, we propose TRACE, a Transformer-based
recurrent VAE structure. TRACE imposes recurrence on segment-wise latent
variables over arbitrarily separated text segments and constructs …
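The abstract is truncated, but the core mechanism it describes, latent variables that are recurrent across text segments rather than across tokens, can be sketched. Below is a minimal, hypothetical PyTorch sketch of such a recurrent prior over segment-level latents; it is not the authors' implementation, and the class name `RecurrentSegmentPrior`, the GRU-based transition, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecurrentSegmentPrior(nn.Module):
    """Hypothetical autoregressive prior over segment-level latents:
    p(z_t | z_{<t}) = N(mu_t, sigma_t^2), with (mu_t, sigma_t) computed
    from a recurrent state that summarizes the previous latents."""

    def __init__(self, latent_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.rnn_cell = nn.GRUCell(latent_dim, hidden_dim)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_dim = latent_dim
        self.hidden_dim = hidden_dim

    def forward(self, num_segments: int, batch_size: int) -> torch.Tensor:
        """Sample a chain of segment latents z_1..z_T, each conditioned on
        the preceding ones through the recurrent state."""
        h = torch.zeros(batch_size, self.hidden_dim)
        z = torch.zeros(batch_size, self.latent_dim)
        latents = []
        for _ in range(num_segments):
            h = self.rnn_cell(z, h)                 # update state from previous z
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            # Reparameterization trick: z = mu + sigma * eps
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            latents.append(z)
        return torch.stack(latents, dim=1)          # (batch, num_segments, latent_dim)

# Usage sketch: draw latents for 4 segments; each z_t could then condition
# a (parallel) Transformer decoder on the tokens of its own segment.
prior = RecurrentSegmentPrior(latent_dim=32)
z_seq = prior(num_segments=4, batch_size=2)
print(z_seq.shape)  # torch.Size([2, 4, 32])
```

The point of the segment-wise design, as the abstract frames it, is that recurrence only runs over a short chain of segment latents rather than over every token, so the Transformer's parallel processing within each segment is preserved.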

Tags: arxiv, autoencoder, diversity, text generation, transformer
