Web: http://arxiv.org/abs/2206.06178

June 20, 2022, 1:12 a.m. | Anand Subramoney, Khaleelulla Khan Nazeer, Mark Schöne, Christian Mayr, David Kappel

cs.LG updates on arXiv.org

The scalability of recurrent neural networks (RNNs) is hindered by the
sequential dependence of each time step's computation on the previous time
step's output. Therefore, one way to speed up and scale RNNs is to reduce the
computation required at each time step independent of model size and task. In
this paper, we propose a model that reformulates Gated Recurrent Units (GRU) as
an event-based activity-sparse model that we call the Event-based GRU (EGRU),
where units compute updates only on …
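To make the idea concrete, below is a minimal sketch of a GRU-style cell with an event-based, activity-sparse recurrent step. It assumes a simple fixed-threshold event mask on the hidden state (only units above the threshold broadcast to the recurrent computation); the class name, shapes, and thresholding rule are illustrative stand-ins, not the paper's exact EGRU equations.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class EventGRUCellSketch:
        """Illustrative GRU-like cell with a thresholded (event-based) recurrent signal."""

        def __init__(self, input_size, hidden_size, threshold=0.5, seed=0):
            rng = np.random.default_rng(seed)
            # Standard GRU parameters: update gate z, reset gate r, candidate n.
            self.W_z = rng.standard_normal((hidden_size, input_size)) * 0.1
            self.U_z = rng.standard_normal((hidden_size, hidden_size)) * 0.1
            self.W_r = rng.standard_normal((hidden_size, input_size)) * 0.1
            self.U_r = rng.standard_normal((hidden_size, hidden_size)) * 0.1
            self.W_n = rng.standard_normal((hidden_size, input_size)) * 0.1
            self.U_n = rng.standard_normal((hidden_size, hidden_size)) * 0.1
            self.threshold = threshold

        def step(self, x, h):
            # Assumed event rule: only units whose state exceeds the threshold
            # emit an "event"; all other units contribute zero to the recurrent
            # products, so those products only need the currently active units.
            active = h > self.threshold            # boolean event mask
            h_event = np.where(active, h, 0.0)     # sparse recurrent signal

            z = sigmoid(self.W_z @ x + self.U_z @ h_event)        # update gate
            r = sigmoid(self.W_r @ x + self.U_r @ h_event)        # reset gate
            n = np.tanh(self.W_n @ x + self.U_n @ (r * h_event))  # candidate state
            h_new = (1.0 - z) * n + z * h          # internal state stays dense
            return h_new, active

    # Usage: the fraction of active units bounds the recurrent compute per step,
    # independent of the overall model size.
    cell = EventGRUCellSketch(input_size=8, hidden_size=16)
    h = np.zeros(16)
    for t in range(5):
        x = np.random.default_rng(t).standard_normal(8)
        h, events = cell.step(x, h)
        print(f"step {t}: {events.sum()} of {events.size} units emitted events")

The design point this sketch tries to capture is that the per-step cost of the recurrent matrix-vector products scales with the number of active (event-emitting) units rather than with the full hidden dimension.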
