Feb. 26, 2024, 5:42 a.m. | Maurice Kraus, Felix Divo, David Steinmann, Devendra Singh Dhami, Kristian Kersting

cs.LG updates on arXiv.org

arXiv:2402.15404v1 Announce Type: new
Abstract: In natural language processing and vision, pretraining is utilized to learn effective representations. Unfortunately, the success of pretraining does not easily carry over to time series due to a potential mismatch between sources and the target. Actually, the common belief is that multi-dataset pretraining does not work for time series! Au contraire, we introduce a new self-supervised contrastive pretraining approach to learn one encoding from many unlabeled and diverse time series datasets, so that the single learned representation …
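To make the idea of contrastive pretraining on pooled, unlabeled time series more concrete, here is a minimal sketch in PyTorch. It is not the authors' method or code: the encoder architecture, augmentations (jitter and scaling), NT-Xent loss, and the pooled random data are all illustrative assumptions standing in for the many diverse source datasets described in the abstract.

# Minimal sketch (not the paper's implementation): SimCLR-style contrastive
# pretraining of a single 1D-CNN encoder on windows pooled from several
# unlabeled time series datasets. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """Small 1D-CNN mapping a (batch, channels, length) window to an embedding."""
    def __init__(self, in_channels=1, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def augment(x):
    """Two cheap stochastic views of each window: Gaussian jitter and random scaling."""
    jitter = x + 0.05 * torch.randn_like(x)
    scale = x * (1.0 + 0.1 * torch.randn(x.size(0), 1, 1, device=x.device))
    return jitter, scale

def nt_xent(z1, z2, temperature=0.2):
    """NT-Xent loss: two views of the same window are positives, all others negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Pretraining loop over windows pooled from several (here: synthetic) datasets.
encoder = TSEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
pooled_windows = torch.randn(256, 1, 128)  # stand-in for many unlabeled source datasets
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(pooled_windows), batch_size=64, shuffle=True
)
for epoch in range(5):
    for (x,) in loader:
        v1, v2 = augment(x)
        loss = nt_xent(encoder(v1), encoder(v2))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

After pretraining in this style, the single encoder would be frozen or fine-tuned and reused across target domains, e.g. by training a lightweight classifier on top of its embeddings.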

