Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting. (arXiv:2205.03307v1 [cs.CV])
Web: http://arxiv.org/abs/2205.03307
May 9, 2022, 1:10 a.m. | Jiaqi Gao, Jingqi Li, Hongming Shan, Yanyun Qu, James Z. Wang, Junping Zhang
cs.CV updates on arXiv.org
Crowd counting has important applications in public safety and pandemic
control. A robust and practical crowd counting system must be capable of
continually learning from newly arriving domain data in real-world scenarios,
rather than fitting a single domain. Off-the-shelf methods have several
drawbacks when handling multiple domains: 1) models achieve limited
performance on old domains (and may even degrade dramatically) after training
on images from new domains, owing to discrepancies in the intrinsic data
distributions across domains, which is …
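The self-distillation idea behind lifelong crowd counting can be sketched in a few lines: when training on a new domain, add a penalty that keeps the model's density-map predictions close to those of a frozen copy trained on earlier domains. This is a minimal illustrative sketch, not the paper's actual method; the function names, the plain MSE losses, and the weighting factor `lam` are all assumptions for illustration.

```python
# Hypothetical sketch of a self-distillation objective for
# domain-incremental (lifelong) crowd counting.
# All names here are illustrative, not taken from the paper.

def mse(a, b):
    """Mean squared error between two flattened density maps."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def lifelong_loss(pred_new, gt_density, pred_old, lam=0.5):
    """Combine the usual density-regression loss on the new domain with
    a distillation term toward the frozen previous model's output,
    so fitting new data does not erase old-domain behavior."""
    counting_loss = mse(pred_new, gt_density)   # fit the new domain
    distill_loss = mse(pred_new, pred_old)      # stay close to the old model
    return counting_loss + lam * distill_loss

# Toy usage with 3-pixel "density maps":
pred_new = [0.2, 0.5, 0.3]   # current model's prediction
gt = [0.1, 0.6, 0.3]         # new-domain ground truth
pred_old = [0.3, 0.4, 0.3]   # frozen previous model's prediction
loss = lifelong_loss(pred_new, gt, pred_old, lam=0.5)
```

Setting `lam=0` recovers plain single-domain training; larger values trade new-domain accuracy for retention of old-domain behavior.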