Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation
May 3, 2024, 4:15 a.m. | Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara
cs.CL updates on arXiv.org
Abstract: Non-autoregressive (NAR) language models are known for their low latency in neural machine translation (NMT). However, a performance gap remains between NAR and autoregressive models because of the large decoding space and the difficulty of accurately capturing dependencies between target words. Compounding this, preparing appropriate training data for NAR models is non-trivial and often exacerbates exposure bias. To address these challenges, we apply reinforcement learning (RL) to the Levenshtein Transformer, a representative edit-based NAR model, demonstrating …
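The truncated abstract does not spell out the training objective, but a common way to apply RL to sequence models is a REINFORCE-style policy gradient with a sentence-level reward. The sketch below illustrates one such update for a hypothetical per-position edit policy; `reinforce_step`, `reward_fn`, and the action layout are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a REINFORCE update for an edit-based NAR decoder.
# `logits` stands in for the model's per-position action scores (e.g.
# insertion/deletion decisions in a Levenshtein-style model); `reward_fn`
# is a placeholder for a sentence-level metric such as BLEU. None of these
# names come from the paper; the abstract above is truncated.

def reinforce_step(logits, reward_fn, baseline=0.0):
    # Sample one edit action per position from the current policy.
    probs = F.softmax(logits, dim=-1)                 # (batch, positions, actions)
    dist = torch.distributions.Categorical(probs=probs)
    actions = dist.sample()                           # (batch, positions)

    # Sentence-level reward for the sampled edit sequence; subtracting a
    # baseline reduces the variance of the gradient estimate.
    rewards = reward_fn(actions)                      # (batch,)
    advantage = rewards - baseline

    # Policy-gradient loss: -E[advantage * log pi(actions)].
    log_probs = dist.log_prob(actions).sum(dim=-1)    # (batch,)
    loss = -(advantage.detach() * log_probs).mean()
    return loss

# Toy usage with random logits and a dummy stand-in reward.
if __name__ == "__main__":
    logits = torch.randn(2, 5, 3, requires_grad=True)       # 3 edit actions
    dummy_reward = lambda a: (a == 1).float().mean(dim=-1)  # stand-in for BLEU
    loss = reinforce_step(logits, dummy_reward)
    loss.backward()
    print(loss.item())
```

In practice such methods typically sample several candidate outputs per sentence and use their mean reward as the baseline, but the specific choices here are assumptions for illustration only.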