Leveraging Locality in Abstractive Text Summarization. (arXiv:2205.12476v1 [cs.CL])
May 26, 2022, 1:11 a.m. | Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev
cs.CL updates on arXiv.org
Despite the successes of neural attention models for natural language generation tasks, the quadratic memory complexity of the self-attention module with respect to the input length hinders their application to long-text summarization. Instead of designing more efficient attention modules, we approach this problem by investigating whether models with a restricted context can achieve performance competitive with memory-efficient attention models that maintain a global context by treating the input as an entire sequence. Our model is applied to …
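The trade-off the abstract describes, a full self-attention score matrix that is quadratic in the input length versus a locality-restricted alternative, can be made concrete with a small sketch. The PyTorch snippet below is illustrative only and is not the paper's model: `block_local_attention`, its `block_size` parameter, and all tensor shapes are assumptions chosen to show how attending within fixed-size blocks shrinks score-matrix memory from O(n²) to O(n·w) for block size w.

```python
# Minimal sketch (not the authors' implementation) contrasting full
# self-attention with a block-local, restricted-context variant.
import torch
import torch.nn.functional as F

def full_attention(q, k, v):
    # q, k, v: (n, d). The score matrix is (n, n): quadratic memory in n.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

def block_local_attention(q, k, v, block_size):
    # Restrict each token's context to its own block of `block_size`
    # tokens: (n / w) score matrices of shape (w, w) each, i.e. linear
    # memory in n for a fixed block size w.
    n, d = q.shape
    pad = (-n) % block_size
    if pad:  # right-pad so the length divides evenly into blocks
        q, k, v = (F.pad(t, (0, 0, 0, pad)) for t in (q, k, v))
    qb, kb, vb = (t.reshape(-1, block_size, d) for t in (q, k, v))
    scores = qb @ kb.transpose(-2, -1) / (d ** 0.5)
    out = F.softmax(scores, dim=-1) @ vb
    return out.reshape(-1, d)[:n]  # drop padding rows

if __name__ == "__main__":
    n, d = 1024, 64
    q, k, v = (torch.randn(n, d) for _ in range(3))
    print(full_attention(q, k, v).shape)              # torch.Size([1024, 64])
    print(block_local_attention(q, k, v, 128).shape)  # torch.Size([1024, 64])
```

For n = 1024 and w = 128, full attention materializes one 1024×1024 score matrix, while the block-local variant materializes eight 128×128 blocks, one eighth as many entries; the paper's question is whether such restricted-context models remain competitive on summarization quality.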