Oct. 7, 2022, 1:16 a.m. | Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelovic, João Carreira, Olivier Hénaff

cs.CV updates on arXiv.org

Self-supervised methods have achieved remarkable success in transfer
learning, often matching or surpassing the accuracy of supervised
pre-training. Most prior work has done so at the cost of increased
pre-training computation, by adding complex data augmentation, multiple
views, or lengthy training schedules. In this work, we investigate a related
but orthogonal question: given a fixed FLOP budget, what are the best
datasets, models, and (self-)supervised training methods for obtaining high
accuracy on representative visual tasks? Given the availability of large datasets, this …
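To make the "fixed FLOP budget" framing concrete, here is a minimal sketch (not from the paper) of how one might equalize compute across pre-training configurations: fix a total FLOP budget and derive how many training steps each configuration is allowed. All configuration names and per-example FLOP figures below are hypothetical placeholders.

```python
# Minimal sketch: compare pre-training setups under one fixed FLOP budget.
# All per-example FLOP counts and batch sizes are hypothetical placeholders,
# not numbers from the paper.

FLOP_BUDGET = 1e19  # total pre-training FLOPs allowed for every method

# Hypothetical per-example forward+backward costs; self-supervised methods
# with multiple views typically cost more per example than supervised training.
configs = {
    "self_supervised_resnet50": {"flops_per_example": 12.0e9, "batch_size": 4096},
    "supervised_resnet50":      {"flops_per_example": 8.0e9,  "batch_size": 4096},
    "self_supervised_vit_b":    {"flops_per_example": 35.0e9, "batch_size": 1024},
}

for name, cfg in configs.items():
    flops_per_step = cfg["flops_per_example"] * cfg["batch_size"]
    max_steps = int(FLOP_BUDGET // flops_per_step)
    examples_seen = max_steps * cfg["batch_size"]
    print(f"{name}: {max_steps} steps, ~{examples_seen / 1e6:.1f}M examples "
          f"under a {FLOP_BUDGET:.0e} FLOP budget")
```

Holding total FLOPs constant like this, rather than epochs or wall-clock time, is what lets the datasets, models, and training methods in the paper's question be compared on an equal compute footing.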

Tags: arxiv, efficiency, pre-training, training
