Web: http://arxiv.org/abs/2111.11707

June 20, 2022, 1:12 a.m. | Ru Peng, Nankai Lin, Yi Fang, Shengyi Jiang, Tianyong Hao, Boyu Chen, Junbo Zhao

cs.CL updates on arXiv.org

Syntactic knowledge is a powerful asset in neural machine translation (NMT).
Early NMT models assumed that syntactic details could be learned automatically
from large amounts of text via attention networks. However, subsequent research
pointed out that, limited by the uncontrolled nature of attention computation,
such models require external syntax to capture deep syntactic awareness.
Although recent syntax-aware NMT methods have borne great fruit in
incorporating syntax, the additional workloads they introduce render the models
heavy and slow. Particularly, …
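To make the "uncontrolled nature of attention computation" concrete, here is a minimal sketch of scaled dot-product self-attention, the mechanism the abstract refers to. This is a generic illustration, not the paper's method; the function name, dimensions, and random projections are all illustrative assumptions.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one token sequence.

    x: (seq_len, d_model) token representations
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices (random here)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (seq_len, seq_len) similarities
    # Row-wise softmax: each row is a distribution over key positions, but
    # nothing constrains *which* tokens a row attends to -- the attention
    # pattern is learned freely, with no explicit syntactic structure.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # (seq_len, d_k) context-mixed representations

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
x = rng.normal(size=(seq_len, d_model))
projections = [rng.normal(size=(d_model, d_k)) for _ in range(3)]
out = self_attention(x, *projections)
print(out.shape)  # (5, 4)
```

Syntax-aware NMT methods typically constrain or augment these attention weights with parse-tree information, which is where the extra workload the abstract mentions comes from.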

