Fourier-Mixed Window Attention: Accelerating Informer for Long Sequence Time-Series Forecasting
Feb. 16, 2024, 5:44 a.m. | Nhat Thanh Tran, Jack Xin
cs.LG updates on arXiv.org
Abstract: We study a fast local-global window-based attention method to accelerate Informer for long sequence time-series forecasting. While window attention is local and yields considerable computational savings, it cannot capture global token information; this is compensated for by a subsequent Fourier transform block. Our method, named FWin, does not rely on the query sparsity hypothesis or the empirical approximation underlying Informer's ProbSparse attention. Through experiments on univariate and multivariate datasets, we show that …
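The abstract describes a two-stage mixing scheme: attention restricted to local windows, followed by a Fourier transform block that restores global token interaction. The sketch below is a minimal, hypothetical illustration of that idea in NumPy, not the authors' FWin implementation; the window size, normalization, and the exact form of the Fourier block (here, simply taking the real part of an FFT over the sequence axis) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, window):
    """Self-attention restricted to non-overlapping windows (local mixing).

    x: (seq_len, d) array; tokens only attend within their own window,
    so the cost is linear in seq_len for a fixed window size.
    """
    seq_len, d = x.shape
    out = np.empty_like(x)
    for start in range(0, seq_len, window):
        blk = x[start:start + window]              # (w, d)
        scores = blk @ blk.T / np.sqrt(d)          # scores within the window only
        out[start:start + window] = softmax(scores, axis=-1) @ blk
    return out

def fourier_mix(x):
    """Global token mixing via an FFT over the sequence axis (illustrative)."""
    return np.fft.fft(x, axis=0).real

def fwin_block(x, window=4):
    # hypothetical FWin-style block: local window attention,
    # then a Fourier transform block for global information
    return fourier_mix(window_attention(x, window))
```

For example, `fwin_block` applied to a `(16, 8)` sequence returns a `(16, 8)` array in which every output token depends on all input tokens, even though the attention itself never looked beyond a 4-token window.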