Aug. 17, 2022, 1:10 a.m. | Renhao Wang, Hang Zhao, Yang Gao

cs.LG updates on arXiv.org

Many recent approaches in contrastive learning have worked to close the gap
between pretraining on iconic images like ImageNet and pretraining on complex
scenes like COCO. This gap exists largely because commonly used random crop
augmentations obtain semantically inconsistent content in crowded scene images
of diverse objects. Previous works use preprocessing pipelines to localize
salient objects for improved cropping, but an end-to-end solution is still
elusive. In this work, we propose a framework which accomplishes this goal via
joint learning …
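For context, the sketch below (not from the paper) shows the standard two-view random-crop augmentation used in contrastive pretraining that the abstract critiques: on a crowded scene image, the two independently sampled crops may land on entirely different objects, producing the semantically inconsistent positive pairs described above. All names, crop parameters, and the SimCLR-style setup are illustrative assumptions, not the authors' proposed framework.

```python
# Minimal sketch of two-view random-crop augmentation for contrastive
# pretraining (SimCLR-style). Crop scale and size are illustrative choices.
import torch
from torchvision import transforms
from PIL import Image

two_crop = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),  # random region, resized to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def make_views(img: Image.Image) -> tuple[torch.Tensor, torch.Tensor]:
    """Return two independently cropped views treated as a positive pair.

    On iconic images (single centered object) both crops usually cover the
    same object; on complex scenes they may cover different objects, which
    is the source of the semantic inconsistency the paper targets.
    """
    return two_crop(img), two_crop(img)
```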

arxiv, bootstrapping, cv, segmentation
