Symbolic Autoencoding for Self-Supervised Sequence Learning
Feb. 19, 2024, 5:42 a.m. | Mohammad Hossein Amani, Nicolas Mario Baldwin, Amin Mansouri, Martin Josifoski, Maxime Peyrard, Robert West
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Traditional language models, adept at next-token prediction in text sequences, often struggle with transduction tasks between distinct symbolic systems, particularly when parallel data is scarce. To address this, we introduce symbolic autoencoding (ΣAE), a self-supervised framework that harnesses abundant non-parallel data alongside limited parallel data. ΣAE connects two generative models via a discrete bottleneck layer and is optimized end-to-end by minimizing reconstruction loss (together with a supervised loss on the parallel data), such …
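The abstract describes the core setup: two sequence models joined by a discrete bottleneck, trained end-to-end on a reconstruction loss over non-parallel data plus a supervised loss where parallel pairs exist. Below is a minimal PyTorch sketch of that idea; the SigmaAE and SeqModel classes are hypothetical, and the straight-through Gumbel-softmax used to pass gradients through the discrete bottleneck is an assumed stand-in, since the paper develops its own gradient-based methods for that layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqModel(nn.Module):
    """Toy GRU sequence model: input embeddings -> output-vocab logits.
    A hypothetical stand-in for either generative model in the SigmaAE pair."""
    def __init__(self, d, vocab_out):
        super().__init__()
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, vocab_out)

    def forward(self, x_emb):
        h, _ = self.rnn(x_emb)
        return self.head(h)  # (batch, seq, vocab_out) logits

class SigmaAE(nn.Module):
    """Two sequence models connected by a discrete bottleneck over system Z."""
    def __init__(self, vocab_x, vocab_z, d=64, tau=1.0):
        super().__init__()
        self.x_emb = nn.Embedding(vocab_x, d)
        self.z_emb = nn.Embedding(vocab_z, d)
        self.x_to_z = SeqModel(d, vocab_z)  # symbolic system X -> Z
        self.z_to_x = SeqModel(d, vocab_x)  # symbolic system Z -> X
        self.tau = tau

    def forward(self, x):
        logits_z = self.x_to_z(self.x_emb(x))
        # Discrete bottleneck: hard one-hot codes in the forward pass, soft
        # gradients in the backward pass (straight-through Gumbel-softmax;
        # an assumption here, not the paper's specific technique).
        z_onehot = F.gumbel_softmax(logits_z, tau=self.tau, hard=True)
        z_emb = z_onehot @ self.z_emb.weight  # embed the discrete codes
        logits_x = self.z_to_x(z_emb)
        return logits_z, logits_x

def sigma_ae_loss(model, x_unpaired, x_paired=None, z_paired=None):
    """Reconstruction loss on non-parallel x, plus a supervised loss on the
    bottleneck codes whenever a parallel (x, z) pair is available."""
    _, logits_x = model(x_unpaired)
    loss = F.cross_entropy(logits_x.flatten(0, 1), x_unpaired.flatten())
    if x_paired is not None:
        sup_logits_z, _ = model(x_paired)
        loss = loss + F.cross_entropy(sup_logits_z.flatten(0, 1),
                                      z_paired.flatten())
    return loss

# Usage on random toy data (shapes only; not the paper's benchmarks):
model = SigmaAE(vocab_x=50, vocab_z=30)
x_u = torch.randint(0, 50, (8, 12))   # abundant non-parallel sequences
x_p = torch.randint(0, 50, (2, 12))   # small parallel set
z_p = torch.randint(0, 30, (2, 12))
loss = sigma_ae_loss(model, x_u, x_p, z_p)
loss.backward()
```

Because the bottleneck codes are read out as the transduced sequences, the supervised term and the reconstruction term train the same discrete layer, which is what lets limited parallel data steer the abundant self-supervised signal.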
More from arxiv.org / cs.LG updates on arXiv.org
Sliced Wasserstein with Random-Path Projecting Directions
2 days, 5 hours ago | arxiv.org
Learning Extrinsic Dexterity with Parameterized Manipulation Primitives
2 days, 5 hours ago | arxiv.org
The Un-Kidnappable Robot: Acoustic Localization of Sneaking People
2 days, 5 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Data Engineer
@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania
Artificial Intelligence – Bioinformatic Expert
@ University of Texas Medical Branch | Galveston, TX
Lead Developer (AI)
@ Cere Network | San Francisco, US
Research Engineer
@ Allora Labs | Remote
Ecosystem Manager
@ Allora Labs | Remote
Founding AI Engineer, Agents
@ Occam AI | New York