Jan. 17, 2022, 3:01 p.m. | /u/Yuqing7

Artificial Intelligence www.reddit.com

A DeepMind research team proposes ReLICv2, which demonstrates for the first time that representations learned without labels can consistently outperform a strong supervised baseline on ImageNet, and even achieve results comparable to state-of-the-art self-supervised vision transformers (ViTs).
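ReLICv2 builds on the ReLIC family of objectives, which combine a contrastive term with an explicit invariance penalty across augmented views. The sketch below is illustrative only, not DeepMind's implementation: it pairs an InfoNCE-style contrastive loss with a KL penalty that encourages the two views to induce the same similarity distribution. The function name, the `temperature` and `alpha` parameters, and all numeric choices are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relic_style_loss(z1, z2, temperature=0.1, alpha=1.0):
    """Illustrative ReLIC-style objective: InfoNCE contrastive term plus a
    KL invariance penalty between the similarity distributions of two
    augmented views. The actual ReLICv2 loss differs in its details."""
    # L2-normalise the embeddings of both views
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits12 = z1 @ z2.T / temperature  # similarities, view 1 vs. view 2
    logits21 = z2 @ z1.T / temperature
    n = z1.shape[0]
    idx = np.arange(n)
    p12, p21 = softmax(logits12), softmax(logits21)
    # InfoNCE: the i-th sample in each view is the positive for the i-th
    # sample in the other view
    contrastive = (-np.mean(np.log(p12[idx, idx] + 1e-12))
                   - np.mean(np.log(p21[idx, idx] + 1e-12)))
    # Invariance: the two views should yield the same similarity distribution
    kl = np.sum(p12 * (np.log(p12 + 1e-12) - np.log(p21 + 1e-12)),
                axis=1).mean()
    return contrastive + alpha * kl
```

As a sanity check, the loss should be lower when the two views are correctly paired than when the pairing is scrambled, since the positives then sit on the diagonal of the similarity matrix.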

Here is a quick read: Pushing the Limits of Self-Supervised ResNets: DeepMind’s ReLICv2 Beats Strong Supervised Baselines on ImageNet.

The paper Pushing the Limits of Self-supervised ResNets: Can We Outperform Supervised Learning Without Labels on ImageNet? is on arXiv.


