Impact of a DCT-driven Loss in Attention-based Knowledge-Distillation for Scene Recognition. (arXiv:2205.01997v1 [cs.CV])
Web: http://arxiv.org/abs/2205.01997
May 5, 2022, 1:10 a.m. | Alejandro López-Cifuentes, Marcos Escudero-Viñolo, Jesús Bescós, Juan C. SanMiguel
cs.CV updates on arXiv.org
Knowledge Distillation (KD) is a strategy for defining a set of transferability gangways to improve the efficiency of Convolutional Neural Networks. Feature-based Knowledge Distillation is a subfield of KD that relies on intermediate network representations, either unaltered or depth-reduced via maximum activation maps, as the source knowledge. In this paper, we propose and analyse the use of a 2D frequency transform of the activation maps before transferring them. We pose that, by using global image cues rather than …
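To make the idea concrete, the following is a minimal sketch of a frequency-domain feature-distillation loss: depth-reduce teacher and student activation maps, apply a 2D type-II DCT, and compare the spectra. The function name, the max-based depth reduction, and the plain L2 comparison are illustrative assumptions, not the exact loss formulated in the paper.

```python
# Hedged sketch of a DCT-based feature-distillation loss (not the paper's exact loss).
import numpy as np
from scipy.fft import dctn


def dct_feature_loss(student_maps: np.ndarray, teacher_maps: np.ndarray) -> float:
    """Compare two activation tensors of shape (C, H, W) in the 2D-DCT domain."""
    # Depth-reduce each tensor to a single (H, W) map via the maximum over
    # channels (one of the reductions mentioned in the abstract).
    s_map = student_maps.max(axis=0)
    t_map = teacher_maps.max(axis=0)

    # 2D frequency transform of each map: orthonormal type-II DCT over H and W.
    s_freq = dctn(s_map, type=2, norm="ortho")
    t_freq = dctn(t_map, type=2, norm="ortho")

    # Penalise the discrepancy between the two spectra (a plain L2 choice here;
    # the paper may weight or select coefficients differently).
    return float(np.mean((s_freq - t_freq) ** 2))


# Toy usage with random stand-ins for teacher and student activation maps.
rng = np.random.default_rng(0)
student = rng.standard_normal((64, 14, 14))
teacher = rng.standard_normal((64, 14, 14))
print(dct_feature_loss(student, teacher))
```

The intuition, under these assumptions, is that low-frequency DCT coefficients capture global image cues, so matching spectra emphasises global structure rather than purely local activations.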