Web: http://arxiv.org/abs/2205.02724

May 6, 2022, 1:11 a.m. | Xiaobing Sun, Wei Lu

cs.CL updates on arXiv.org arxiv.org

Although self-attention-based models such as Transformers have achieved
remarkable successes on natural language processing (NLP) tasks, recent studies
reveal that they have limitations in modeling sequential transformations (Hahn,
2020), which may prompt re-examinations of recurrent neural networks (RNNs),
which have demonstrated impressive results on handling sequential data. Despite many
prior attempts to interpret RNNs, their internal mechanisms are still not fully
understood, and the question of how exactly they capture sequential features
remains largely unclear. In this work, we present …
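
For context, a generic Elman-style recurrence (an illustration only, not the architecture or analysis from the paper) shows what "capturing sequential features" means in practice: each hidden state is computed from the current input and the previous state, so information about earlier tokens propagates step by step. A minimal NumPy sketch, with all weight names and dimensions chosen arbitrarily:

import numpy as np

# Minimal Elman-style RNN cell (illustrative only): the hidden state h_t
# summarizes the sequence seen so far.
def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

def rnn_forward(xs, W_xh, W_hh, b_h):
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in xs:                       # process tokens strictly in order
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        states.append(h)
    return states                        # one hidden state per time step

# Example with random weights: 5 tokens, input dim 8, hidden dim 16.
rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 8))
W_xh = rng.normal(size=(8, 16)) * 0.1
W_hh = rng.normal(size=(16, 16)) * 0.1
b_h = np.zeros(16)
print(len(rnn_forward(xs, W_xh, W_hh, b_h)))  # -> 5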
