April 1, 2024, 4:41 a.m. | Ali Behrouz, Michele Santacatterina, Ramin Zabih

cs.LG updates on arXiv.org

arXiv:2403.19888v1 Announce Type: new
Abstract: Recent advances in deep learning have mainly relied on Transformers due to their data dependency and ability to learn at scale. The attention module in these architectures, however, exhibits quadratic time and space complexity in the input size, limiting their scalability for long-sequence modeling. Despite recent attempts to design efficient and effective architecture backbones for multi-dimensional data, such as images and multivariate time series, existing models are either data-independent or fail to allow inter- and intra-dimension …
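For readers unfamiliar with the quadratic bottleneck the abstract refers to, here is a minimal sketch (not from the paper; the function name `naive_self_attention` and the shapes are illustrative assumptions) showing where the O(n²) cost of standard self-attention comes from: the query-key score matrix has one entry per pair of tokens, so both its memory footprint and the work to compute it grow with the square of the sequence length n.

```python
# Minimal sketch: standard self-attention materializes an (n, n) score
# matrix, so time and memory scale quadratically in sequence length n.
import numpy as np

def naive_self_attention(x: np.ndarray, w_q: np.ndarray,
                         w_k: np.ndarray, w_v: np.ndarray) -> np.ndarray:
    """x: (n, d) token embeddings; w_q/w_k/w_v: (d, d) projection weights."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # each (n, d)
    scores = q @ k.T / np.sqrt(x.shape[1])           # (n, n) -- the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax over keys
    return weights @ v                               # (n, d)

# Doubling n quadruples the number of attention scores:
rng = np.random.default_rng(0)
d = 64
for n in (1024, 2048):
    x = rng.standard_normal((n, d))
    w = [rng.standard_normal((d, d)) for _ in range(3)]
    _ = naive_self_attention(x, *w)
    print(n, "tokens ->", n * n, "attention scores")
```

This quadratic score matrix is exactly what sub-quadratic alternatives such as state space models avoid by processing the sequence recurrently or with structured long convolutions instead of all-pairs comparisons.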

