May 13, 2022, 1:11 a.m. | Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, Yin Cui

cs.LG updates on arXiv.org arxiv.org

We aim to advance open-vocabulary object detection, which detects objects
described by arbitrary text inputs. The fundamental challenge is the
availability of training data: it is costly to further scale up the number of
classes contained in existing object detection datasets. To overcome this
challenge, we propose ViLD, a training method via Vision and Language knowledge
Distillation. Our method distills the knowledge from a pretrained
open-vocabulary image classification model (teacher) into a two-stage detector
(student). Specifically, we use the teacher …

arxiv cv detection distillation knowledge language vision
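
Below is a minimal PyTorch sketch of the distillation idea described in the abstract, assuming a CLIP-like teacher. The names `region_head`, `teacher_image_encoder`, and `text_embeddings` are hypothetical placeholders, not the authors' actual code; loss weights and details differ in the paper.

```python
# Hedged sketch of ViLD-style distillation (assumptions noted above).
import torch
import torch.nn.functional as F

def vild_losses(region_feats, proposal_crops, labels,
                region_head, teacher_image_encoder, text_embeddings,
                temperature=0.01):
    """region_feats: [N, D_roi] RoI features from the two-stage detector (student).
    proposal_crops: [N, 3, H, W] image crops of the same proposals for the teacher.
    labels: [N] base-class indices for matched proposals (-1 = unmatched).
    text_embeddings: [C, D] frozen text embeddings of base-class names.
    """
    # Student projects RoI features into the teacher's embedding space.
    student_emb = F.normalize(region_head(region_feats), dim=-1)          # [N, D]

    # Teacher embeds the cropped proposals; it stays frozen (no gradients).
    with torch.no_grad():
        teacher_emb = F.normalize(teacher_image_encoder(proposal_crops), dim=-1)

    # Distillation term: pull student region embeddings toward the teacher's.
    distill_loss = F.l1_loss(student_emb, teacher_emb)

    # Classification term: score regions against frozen text embeddings.
    logits = student_emb @ F.normalize(text_embeddings, dim=-1).t() / temperature
    fg = labels >= 0
    cls_loss = F.cross_entropy(logits[fg], labels[fg]) if fg.any() else logits.sum() * 0

    return distill_loss, cls_loss
```

Because the student is trained to live in the teacher's embedding space, at test time the base-class text embeddings can simply be swapped for embeddings of arbitrary class names, which is what enables open-vocabulary detection.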
