Feb. 6, 2024, 5:43 a.m. | Jiaxiang Dong, Haixu Wu, Yuxuan Wang, Yunzhong Qiu, Li Zhang, Jianmin Wang, Mingsheng Long

cs.LG updates on arXiv.org

Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks. Prior methods are mainly based on pre-training techniques well-established in vision or language, such as masked modeling and contrastive learning. However, randomly masking time series or calculating series-wise similarity will distort or neglect the inherent temporal correlations crucial in time series data. To emphasize temporal correlation modeling, this paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time …
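As a rough, hypothetical sketch of the contrast the abstract draws between random masking and temporally correlated pair construction, the snippet below samples a past/current subseries pair from one series and masks only the current view; the window length, lag range, masking ratio, and helper names are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_siamese_pair(series, subseries_len, max_lag):
    """Sample a past/current subseries pair from a single time series.

    The 'past' window starts up to `max_lag` steps before the 'current' one,
    so the pair keeps an explicit temporal offset instead of treating
    segments as independent samples.
    """
    t_cur = rng.integers(max_lag, len(series) - subseries_len + 1)
    lag = rng.integers(1, max_lag + 1)
    t_past = t_cur - lag
    past = series[t_past:t_past + subseries_len]
    current = series[t_cur:t_cur + subseries_len]
    return past, current, lag

def random_mask(x, mask_ratio=0.25):
    """Zero out a random fraction of time steps (plain masked modeling)."""
    mask = rng.random(len(x)) < mask_ratio
    return np.where(mask, 0.0, x), mask

# Toy usage: one noisy univariate series, one Siamese pair with a masked current view.
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
past, current, lag = sample_siamese_pair(series, subseries_len=96, max_lag=500)
masked_current, mask = random_mask(current)
# A Siamese encoder would embed `past` and reconstruct the masked steps of
# `current`, so the objective must model correlation across the sampled lag
# rather than only within one randomly masked window.
print(past.shape, masked_current.shape, lag, int(mask.sum()))
```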

