Web: http://arxiv.org/abs/2205.05943

May 13, 2022, 1:11 a.m. | Ghazi Felhi, Joseph Le Roux, Djamé Seddah

cs.LG updates on arXiv.org

We propose a generative model for text that exhibits disentangled latent
representations of syntax and semantics. Contrary to previous work, this model
does not need syntactic information such as constituency parses, or semantic
information such as paraphrase pairs. Our model relies solely on the inductive
bias found in attention-based architectures such as Transformers.


In the attention mechanism of Transformers, keys handle information selection
while values specify what information is conveyed. Our model, dubbed QKVAE,
uses attention in its decoder to …

Tags: arxiv, bias, semantics, transformers, unsupervised
