Aug. 16, 2022, 1:10 a.m. | Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Xiaoli Li, Cuntai Guan

cs.LG updates on arXiv.org

Learning time-series representations when only unlabeled data or a few labeled
samples are available can be challenging. Recently, contrastive
self-supervised learning has proven effective at extracting useful
representations from unlabeled data by contrasting different augmented views
of the data. In this work, we propose a novel Time-Series representation learning
framework via Temporal and Contextual Contrasting (TS-TCC) that learns
representations from unlabeled data with contrastive learning. Specifically, we
propose time-series-specific weak and strong augmentations and use their views
to …
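The abstract describes the weak and strong augmentations only at a high level. As a rough, non-authoritative illustration of what such time-series augmentations can look like (TS-TCC is commonly described as using jitter-and-scale for the weak view and permutation-and-jitter for the strong view; the function names and hyper-parameters below are illustrative assumptions, not the paper's exact settings), here is a minimal NumPy sketch:

```python
import numpy as np

def weak_augment(x, scale_sigma=0.1, jitter_sigma=0.05):
    """Jitter-and-scale: scale each channel by a random factor, then add noise.

    x: array of shape (channels, timesteps). Hyper-parameters are
    illustrative defaults, not the paper's reported values.
    """
    scale = np.random.normal(1.0, scale_sigma, size=(x.shape[0], 1))
    noise = np.random.normal(0.0, jitter_sigma, size=x.shape)
    return x * scale + noise

def strong_augment(x, num_segments=5, jitter_sigma=0.05):
    """Permutation-and-jitter: split the series into segments, shuffle their
    order along time, then add noise, yielding a harder view of the sample."""
    channels, timesteps = x.shape
    # Random split points; permute the resulting segments along the time axis.
    split_points = np.sort(
        np.random.choice(np.arange(1, timesteps), num_segments - 1, replace=False)
    )
    segments = np.split(x, split_points, axis=1)
    order = np.random.permutation(len(segments))
    permuted = np.concatenate([segments[i] for i in order], axis=1)
    noise = np.random.normal(0.0, jitter_sigma, size=permuted.shape)
    return permuted + noise

# Example: two augmented views of the same unlabeled sample, as would be
# contrasted against each other during self-supervised training.
x = np.random.randn(3, 128)  # 3 channels, 128 timesteps
view_weak, view_strong = weak_augment(x), strong_augment(x)
```

The point of pairing a mild and an aggressive transformation is that the encoder must map both views of the same sample to nearby representations, which is what the temporal and contextual contrasting objectives in the framework exploit.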

arxiv, classification, learning, lg, representation, representation learning, semi-supervised, series, time
