Oct. 19, 2022, 1:16 a.m. | Alexander Naumann, Felix Hertlein, Benchun Zhou, Laura Dörr, Kai Furmans

cs.CV updates on arXiv.org

State-of-the-art approaches in computer vision heavily rely on sufficiently
large training datasets. For real-world applications, obtaining such a dataset
is usually a tedious task. In this paper, we present a fully automated pipeline
to generate a synthetic dataset for instance segmentation in four steps. In
contrast to existing work, our pipeline covers every step from data acquisition
to the final dataset. We first scrape images for the objects of interest from
popular image search engines and since we rely only …

arxiv, dataset, dataset generation, learn, logistics
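Below is a minimal, hypothetical sketch of the kind of cut-and-paste compositing such a synthetic-dataset pipeline typically performs after scraping object images: an RGBA object cut-out is pasted onto a background at a random scale and position, and an instance mask is derived from its alpha channel. The file names, size limits, and single-object setup are illustrative assumptions, not the authors' code.

```python
import random
import numpy as np
from PIL import Image

def compose_example(foreground_path: str, background_path: str,
                    out_size=(512, 512), seed=None):
    """Paste one RGBA object cut-out onto a background; return image and instance mask."""
    rng = random.Random(seed)
    bg = Image.open(background_path).convert("RGB").resize(out_size)
    fg = Image.open(foreground_path).convert("RGBA")

    # Scale the object so it occupies a random fraction of the background.
    max_frac = rng.uniform(0.3, 0.6)
    scale = min(out_size[0] * max_frac / fg.width,
                out_size[1] * max_frac / fg.height)
    fg = fg.resize((max(int(fg.width * scale), 1), max(int(fg.height * scale), 1)))

    # Pick a random top-left position that keeps the object inside the frame.
    x = rng.randint(0, out_size[0] - fg.width)
    y = rng.randint(0, out_size[1] - fg.height)

    # Composite via the alpha channel, then build a binary instance mask.
    bg.paste(fg, (x, y), fg)
    mask = np.zeros(out_size[::-1], dtype=np.uint8)
    alpha = np.array(fg)[:, :, 3] > 0
    mask[y:y + fg.height, x:x + fg.width][alpha] = 1  # instance id 1
    return bg, mask

if __name__ == "__main__":
    image, mask = compose_example("object_cutout.png", "background.jpg", seed=0)
    image.save("synthetic_000.png")
    Image.fromarray(mask * 255).save("synthetic_000_mask.png")
```

In practice such a pipeline would repeat this for many scraped cut-outs per image and export the masks in a standard annotation format (e.g. COCO) for instance-segmentation training.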
