Feb. 26, 2024, 5:42 a.m. | Maurice Kraus, Felix Divo, David Steinmann, Devendra Singh Dhami, Kristian Kersting

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.15404v1 Announce Type: new
Abstract: In natural language processing and vision, pretraining is utilized to learn effective representations. Unfortunately, the success of pretraining does not easily carry over to time series due to a potential mismatch between source and target. In fact, the common belief is that multi-dataset pretraining does not work for time series! Au contraire, we introduce a new self-supervised contrastive pretraining approach to learn one encoding from many unlabeled and diverse time series datasets, so that the single learned representation …
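The truncated abstract only names the general idea (one self-supervised contrastive encoder pretrained jointly on many unlabeled time series datasets). As a rough illustration of that idea, and not the authors' actual method, a minimal SimCLR-style sketch in PyTorch might look like the following; the encoder architecture, augmentations, temperature, and the assumed `mixed_loader` sampling batches across datasets are all hypothetical:

```python
# Hypothetical sketch of multi-dataset, self-supervised contrastive pretraining
# for time series (SimCLR-style NT-Xent loss). This is NOT the method of
# arXiv:2402.15404; encoder, augmentations, and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """Small 1D-CNN encoder mapping (batch, channels, length) -> (batch, dim)."""
    def __init__(self, in_channels: int = 1, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(128, dim)

    def forward(self, x):
        h = self.net(x).squeeze(-1)                  # (batch, 128)
        return F.normalize(self.proj(h), dim=-1)     # unit-norm embeddings

def augment(x):
    """Cheap stochastic views: additive jitter plus random scaling (assumed)."""
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1, device=x.device)
    return (x + noise) * scale

def nt_xent(z1, z2, temperature: float = 0.2):
    """Normalized temperature-scaled cross-entropy over two augmented views."""
    batch = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                   # (2B, dim)
    sim = z @ z.t() / temperature                    # cosine sims (z is unit-norm)
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))       # exclude self-pairs
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One shared encoder for all pretraining datasets: batches are drawn from a
# mixture of unlabeled time series so a single representation is learned jointly.
encoder = TSEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def pretrain_step(batch):
    """`batch` is a (batch, channels, length) tensor from an assumed
    `mixed_loader` that samples across all pretraining datasets."""
    z1, z2 = encoder(augment(batch)), encoder(augment(batch))
    loss = nt_xent(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After pretraining along these lines, the frozen encoder (or a fine-tuned copy) would typically be reused as a feature extractor for downstream time series tasks; how the paper itself transfers the representation is not stated in the truncated abstract.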
