EnCodecMAE: Leveraging neural codecs for universal audio representation learning
May 22, 2024, 4:43 a.m. | Leonardo Pepino, Pablo Riera, Luciana Ferrer
cs.LG updates on arXiv.org
Abstract: The goal of universal audio representation learning is to obtain foundational models that can be used for a variety of downstream tasks involving speech, music and environmental sounds. To approach this problem, methods inspired by works on self-supervised learning for NLP, like BERT, or computer vision, like masked autoencoders (MAE), are often adapted to the audio domain. In this work, we propose masking representations of the audio signal, and training a MAE to reconstruct the …
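The masked-autoencoder idea the abstract describes can be illustrated with a minimal numpy sketch: randomly mask a large fraction of frame-level audio representations, substitute a mask embedding, and compute the reconstruction loss only on the masked positions. This is a generic MAE-style sketch under stated assumptions, not the paper's implementation; the frame embeddings, mask ratio, and the identity "model" below are all hypothetical stand-ins (a real setup would use codec-derived targets and a transformer).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a sequence of audio frame embeddings (T frames, D dims).
T, D = 100, 16
frames = rng.normal(size=(T, D))

# Randomly mask a fraction of frames, as in masked-autoencoder pretraining.
mask_ratio = 0.75
masked_idx = rng.choice(T, size=int(T * mask_ratio), replace=False)
mask = np.zeros(T, dtype=bool)
mask[masked_idx] = True

# Replace masked frames with a mask embedding (fixed here; a real model
# would learn this vector and feed the sequence through a transformer).
mask_token = np.zeros(D)
inputs = frames.copy()
inputs[mask] = mask_token

# Placeholder "reconstruction" (identity here); the training signal is the
# error measured only at the masked positions.
recon = inputs
loss = float(np.mean((recon[mask] - frames[mask]) ** 2))
print(loss > 0.0)  # masked frames were destroyed, so the loss is nonzero
```

Restricting the loss to masked positions is what forces the encoder to infer missing content from the visible context rather than copy its input.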