Web: http://arxiv.org/abs/2209.15589

Oct. 7, 2022, 1:16 a.m. | Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelovic, João Carreira, Olivier Hénaff

cs.CV updates on arXiv.org

Self-supervised methods have achieved remarkable success in transfer
learning, often matching or exceeding the accuracy of supervised
pre-training. Most prior work has done so by increasing pre-training
computation by adding complex data augmentation, multiple views, or lengthy
training schedules. In this work, we investigate a related, but orthogonal
question: given a fixed FLOP budget, what are the best datasets, models, and
(self-)supervised training methods for obtaining high accuracy on
representative visual tasks? Given the availability of large datasets, this …
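The fixed-FLOP-budget comparison posed in the abstract can be made concrete with a back-of-the-envelope calculation. The sketch below is not from the paper; it assumes the common rule of thumb that one training step costs roughly three forward passes (one forward plus a backward pass of about twice the forward cost), and all model sizes, batch sizes, and budgets are illustrative. It shows why a two-view method, of the kind the abstract mentions, affords fewer optimizer steps than single-view training under the same budget.

# Illustrative only: equalize pre-training FLOP budgets across methods.
# Assumes the common rule of thumb that one training step costs about
# 3x the forward-pass FLOPs (1x forward + ~2x backward).

def steps_within_budget(budget_flops, forward_flops_per_image, batch_size,
                        views_per_image=1):
    """Number of optimizer steps affordable under a fixed FLOP budget."""
    flops_per_step = 3 * forward_flops_per_image * batch_size * views_per_image
    return int(budget_flops // flops_per_step)

BUDGET = 1e18  # hypothetical total pre-training budget, in FLOPs

# Hypothetical per-image forward cost (a ResNet-50 at 224px is ~4e9 FLOPs).
supervised = steps_within_budget(BUDGET, forward_flops_per_image=4e9,
                                 batch_size=256, views_per_image=1)
contrastive = steps_within_budget(BUDGET, forward_flops_per_image=4e9,
                                  batch_size=256, views_per_image=2)

print(f"supervised steps:  {supervised:,}")   # 325,520
print(f"contrastive steps: {contrastive:,}")  # 162,760 (half: two views/step)

Under this accounting, any extra per-step cost (augmentation views, longer schedules) must be traded against step count, which is exactly the trade-off the paper's fixed-budget question targets.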

arxiv efficiency pre-training training
