Bidirectional Generative Pre-training for Improving Time Series Representation Learning
Feb. 16, 2024, 5:43 a.m. | Ziyang Song, Qincheng Lu, He Zhu, Yue Li
cs.LG updates on arXiv.org arxiv.org
Abstract: Learning time-series representations for discriminative tasks has been a long-standing challenge. Current pre-training methods are limited to either unidirectional next-token prediction or randomly masked token prediction. We propose a novel architecture called Bidirectional Timely Generative Pre-trained Transformer (BiTimelyGPT), which pre-trains on time-series data by both next-token and previous-token predictions in alternating transformer layers. This pre-training task preserves the original distribution and data shapes of the time series. Additionally, the full-rank forward and backward attention matrices exhibit more …
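The abstract describes alternating next-token and previous-token prediction across transformer layers. One plausible reading of that design is that even-indexed layers use a forward (lower-triangular) causal attention mask while odd-indexed layers use a backward (upper-triangular) one. The sketch below illustrates that alternating-mask idea only; the names and layer-alternation rule are assumptions, not the paper's actual implementation.

```python
import numpy as np

def causal_mask(seq_len: int, direction: str) -> np.ndarray:
    """Boolean attention mask: True means position j is visible to position i."""
    if direction == "forward":
        # Token i may attend to positions <= i (next-token prediction).
        return np.tril(np.ones((seq_len, seq_len), dtype=bool))
    # Token i may attend to positions >= i (previous-token prediction).
    return np.triu(np.ones((seq_len, seq_len), dtype=bool))

def alternating_masks(num_layers: int, seq_len: int) -> list[np.ndarray]:
    # Hypothetical alternation: forward on even layers, backward on odd layers.
    return [
        causal_mask(seq_len, "forward" if layer % 2 == 0 else "backward")
        for layer in range(num_layers)
    ]

masks = alternating_masks(num_layers=4, seq_len=5)
```

Under this reading, stacking both directions lets every position eventually aggregate context from its full past and future without the random masking used by BERT-style objectives.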