Feb. 5, 2024, 3:44 p.m. | Arjun Majumdar Karmesh Yadav Sergio Arnaud Yecheng Jason Ma Claire Chen Sneha Silwal Aryan Jain

cs.LG updates on arXiv.org

We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual 'foundation models' for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none are universally dominant. To study the effect of pre-training data size and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources (over 4.3M images) and ImageNet to train different-sized …
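The finding that no PVR is "universally dominant" follows from ranking each model per task and checking whether any single model comes out on top everywhere. A minimal sketch of that comparison logic is below; the model names and success rates are invented placeholders, not results from the paper.

```python
# Hedged sketch (not the paper's code): score each pre-trained visual
# representation (PVR) on every benchmark task, rank them per task, and
# check whether any single model ranks first on all tasks.

def rank_models(scores_by_task):
    """Return per-task rankings (best first) from {task: {model: score}}."""
    return {
        task: sorted(model_scores, key=model_scores.get, reverse=True)
        for task, model_scores in scores_by_task.items()
    }

def universally_dominant(scores_by_task):
    """Models that rank first on every task (empty if none dominates)."""
    rankings = rank_models(scores_by_task)
    models = next(iter(scores_by_task.values())).keys()
    return [m for m in models if all(r[0] == m for r in rankings.values())]

# Placeholder success rates for three hypothetical PVRs on three task suites.
scores = {
    "locomotion":   {"pvr_a": 0.81, "pvr_b": 0.77, "pvr_c": 0.69},
    "navigation":   {"pvr_a": 0.55, "pvr_b": 0.62, "pvr_c": 0.58},
    "manipulation": {"pvr_a": 0.40, "pvr_b": 0.38, "pvr_c": 0.47},
}

print(universally_dominant(scores))  # → [] : no single model wins every task
```

Each task suite here has a different winner, so the dominance check returns an empty list, mirroring the paper's qualitative conclusion.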

