An Experimental Comparison of Multi-view Self-supervised Methods for Music Tagging

April 16, 2024, 4:43 a.m. | Gabriel Meseguer-Brocal, Dorian Desblancs, Romain Hennequin

cs.LG updates on arXiv.org

arXiv:2404.09177v1 Announce Type: cross
Abstract: Self-supervised learning has emerged as a powerful way to pre-train generalizable machine learning models on large amounts of unlabeled data. It is particularly compelling in the music domain, where obtaining labeled data is time-consuming, error-prone, and ambiguous. During the self-supervised process, models are trained on pretext tasks, with the primary objective of acquiring robust and informative features that can later be fine-tuned for specific downstream tasks. The choice of the pretext task is critical as …
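For readers unfamiliar with pretext tasks, the sketch below shows one common choice: a SimCLR-style contrastive objective on two augmented "views" of the same audio clip, where the model learns features by pulling views of the same clip together and pushing other clips apart, all without labels. The encoder, augmentations, and hyperparameters here are illustrative assumptions for exposition, not the specific configurations compared in the paper.

```python
# Minimal sketch of a contrastive pretext task for audio self-supervision.
# Everything here (encoder, augmentation, temperature) is illustrative;
# it is NOT the particular pretext task studied in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Toy CNN mapping a mel-spectrogram to a unit-norm embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):                 # x: (batch, 1, mels, time)
        h = self.conv(x).flatten(1)       # (batch, 64)
        return F.normalize(self.proj(h), dim=-1)

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent: views of the same clip are positives, all others negatives."""
    B = z1.size(0)
    z = torch.cat([z1, z2], dim=0)        # (2B, dim), already unit-norm
    sim = z @ z.t() / temperature         # scaled cosine similarities
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, -1e9)     # exclude self-similarity
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = AudioEncoder()
spec = torch.randn(8, 1, 64, 256)             # fake batch of mel-spectrograms
view1 = spec + 0.1 * torch.randn_like(spec)   # stand-in for real augmentations
view2 = spec + 0.1 * torch.randn_like(spec)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                               # features learned without labels
```

Fine-tuning for a downstream task such as music tagging would then replace the projection head with a task-specific classifier and train on a much smaller labeled set, which is the pre-train/fine-tune split the abstract describes.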

