May 6, 2022, 1:10 a.m. | Sharan Narasimhan, Suvodip Dey, Maunendra Sankar Desarkar

cs.CL updates on arXiv.org

Recent studies show that auto-encoder based approaches successfully perform
language generation, smooth sentence interpolation, and style transfer over
unseen attributes using unlabelled datasets in a zero-shot manner. The latent
space geometry of such models is organised well enough to perform well on
datasets where the style is "coarse-grained", i.e. only a small fraction of
the words in a sentence is enough to determine the overall style label. A
recent study uses a discrete token-based perturbation approach to map
"similar" sentences ("similar" defined …

Tags: arxiv, style transfer, text, text style transfer, unsupervised
