Oct. 17, 2022, 1:16 a.m. | Yuan Gong, Andrew Rouditchenko, Alexander H. Liu, David Harwath, Leonid Karlinsky, Hilde Kuehne, James Glass

cs.CV updates on arXiv.org

In this paper, we first extend the recent Masked Auto-Encoder (MAE) model
from a single modality to the audio-visual multi-modal setting. Subsequently, we
propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining
contrastive learning and masked data modeling, two major self-supervised
learning frameworks, to learn a joint and coordinated audio-visual
representation. Our experiments show that the contrastive audio-visual
correspondence learning objective not only enables the model to perform
audio-visual retrieval tasks, but also helps the model learn a better joint
representation. As …
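To make the combination of the two objectives concrete, below is a minimal PyTorch-style sketch of a joint training loss that pairs an InfoNCE-style audio-visual contrastive term with an MAE-style masked-reconstruction term. The contrastive term pulls paired audio and visual clips together within a batch (which is what supports retrieval), while the reconstruction term is computed only on masked patches, as in the original MAE. The function name, tensor shapes, loss weight, and temperature are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the CAV-MAE authors' code): combining a
# contrastive audio-visual loss with a masked-reconstruction (MAE) loss.
import torch
import torch.nn.functional as F


def joint_av_loss(audio_emb, video_emb, recon, target, mask,
                  lam=0.01, temp=0.05):
    """audio_emb, video_emb: (B, D) pooled modality embeddings (assumed shapes).
    recon, target: (B, N, P) decoder outputs and ground-truth patches.
    mask: (B, N) binary mask, 1 where a patch was masked out.
    lam, temp: assumed loss weight and temperature."""
    # Contrastive audio-visual correspondence: paired clips are positives,
    # all other clips in the batch are negatives (InfoNCE-style, symmetric).
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temp                      # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    loss_c = 0.5 * (F.cross_entropy(logits, labels) +
                    F.cross_entropy(logits.t(), labels))

    # Masked reconstruction: mean squared error on masked patches only,
    # following the MAE objective.
    per_patch = ((recon - target) ** 2).mean(dim=-1)   # (B, N)
    loss_r = (per_patch * mask).sum() / mask.sum().clamp(min=1)

    return loss_r + lam * loss_c
```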

arxiv audio autoencoder masked autoencoder
