TDT: Teaching Detectors to Track without Fully Annotated Videos. (arXiv:2205.05583v1 [cs.CV])
Web: http://arxiv.org/abs/2205.05583
May 12, 2022, 1:10 a.m. | Shuzhi Yu, Guanhang Wu, Chunhui Gu, Mohammed E. Fathy
cs.CV updates on arXiv.org
Recently, one-stage trackers that use a joint model to predict both
detections and appearance embeddings in one forward pass have received much
attention and achieved state-of-the-art results on the Multi-Object Tracking
(MOT) benchmarks. However, their success depends on the availability of videos
that are fully annotated with tracking data, which is expensive and hard to
obtain. This can limit model generalization. In comparison, the two-stage
approach, which performs detection and embedding separately, is slower but
easier to train, as their …
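To make the one-stage vs. two-stage distinction concrete, here is a minimal, hypothetical sketch (not the paper's TDT method): a single head on shared backbone features that emits detection scores, box offsets, and appearance embeddings in one forward pass; a two-stage pipeline would instead run a detector first and then embed each detected crop with a separate network. The class name, channel counts, and feature-map size below are illustrative assumptions.

```python
# Illustrative sketch of a one-stage "joint" tracking head (assumed design,
# not the TDT architecture): one forward pass over shared backbone features
# yields detections and appearance embeddings together.
import torch
import torch.nn as nn


class JointDetEmbedHead(nn.Module):
    """Shared features -> class scores, box offsets, and per-location embeddings."""

    def __init__(self, in_ch: int = 256, num_classes: int = 1, embed_dim: int = 128):
        super().__init__()
        self.cls_head = nn.Conv2d(in_ch, num_classes, kernel_size=1)  # detection scores
        self.box_head = nn.Conv2d(in_ch, 4, kernel_size=1)            # box regression
        self.emb_head = nn.Conv2d(in_ch, embed_dim, kernel_size=1)    # appearance embeddings

    def forward(self, feats: torch.Tensor):
        # Everything needed for tracking-by-detection comes from one pass.
        return self.cls_head(feats), self.box_head(feats), self.emb_head(feats)


if __name__ == "__main__":
    feats = torch.randn(1, 256, 76, 136)  # hypothetical backbone feature map
    scores, boxes, embeds = JointDetEmbedHead()(feats)
    print(scores.shape, boxes.shape, embeds.shape)
```

Training such a joint head end to end is what requires videos with full tracking annotations (identity labels across frames), whereas the separate detector and embedding models of a two-stage pipeline can be trained from cheaper, independently labeled data.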