May 25, 2022, 2:30 p.m. | Synced (syncedreview.com)

In the new paper Masked Autoencoders As Spatiotemporal Learners, a Meta AI research team extends masked autoencoders (MAE) to spatiotemporal representation learning on video. The approach introduces negligible inductive biases on space-time, achieves strong empirical results with vanilla Vision Transformers (ViTs), and outperforms supervised pretraining by large margins.
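The core idea behind extending MAE to video is to randomly mask a very high fraction of spacetime patches and feed only the visible remainder to the encoder. Below is a minimal sketch of that random spacetime masking step; the function name, patch/tubelet sizes, and 90% masking ratio are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def random_spacetime_mask(num_frames, height, width,
                          patch=16, tubelet=2, mask_ratio=0.9, seed=0):
    """Sketch of MAE-style random masking over spacetime patches.

    A video is split into tubelets of `tubelet` frames, each tiled into
    `patch` x `patch` spatial patches; a random `mask_ratio` fraction of
    the resulting tokens is hidden. (All sizes here are assumptions.)
    """
    rng = np.random.default_rng(seed)
    # Total number of spacetime tokens.
    n_tokens = (num_frames // tubelet) * (height // patch) * (width // patch)
    n_keep = int(n_tokens * (1 - mask_ratio))
    perm = rng.permutation(n_tokens)
    visible_idx = np.sort(perm[:n_keep])  # tokens the encoder sees
    masked_idx = np.sort(perm[n_keep:])   # tokens the decoder reconstructs
    return visible_idx, masked_idx

# Example: a 16-frame, 224x224 clip; only ~10% of tokens stay visible.
vis, masked = random_spacetime_mask(16, 224, 224)
print(len(vis), len(masked))
```

Because the encoder processes only the small visible subset, pretraining cost drops roughly in proportion to the masking ratio, which is what makes such high ratios practical for video.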


The post Meta AI Extends MAEs to Video for Self-Supervised Representation Learning With Minimal Domain Knowledge first appeared on Synced.

