Impact of a DCT-driven Loss in Attention-based Knowledge-Distillation for Scene Recognition. (arXiv:2205.01997v1 [cs.CV])
May 5, 2022, 1:10 a.m. | Alejandro López-Cifuentes, Marcos Escudero-Viñolo, Jesús Bescós, Juan C. SanMiguel
cs.CV updates on arXiv.org (arxiv.org)
Knowledge Distillation (KD) is a strategy for the definition of a set of
transferability gangways to improve the efficiency of Convolutional Neural
Networks. Feature-based Knowledge Distillation is a subfield of KD that relies
on intermediate network representations, either unaltered or depth-reduced via
maximum activation maps, as the source knowledge. In this paper, we propose and
analyse the use of a 2D frequency transform of the activation maps before
transferring them. We pose that, by using global image cues rather
than …
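A minimal sketch of the idea the abstract describes: comparing teacher and student activation maps in the 2D frequency (DCT) domain rather than the raw spatial domain during feature-based distillation. This is not the authors' code; the channel-averaged attention map, the type-II DCT, and the L2 loss are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn


def attention_map(features: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) feature tensor into a single (H, W) activation map
    by averaging absolute activations over channels (one common choice)."""
    return np.abs(features).mean(axis=0)


def dct_distillation_loss(teacher_feats: np.ndarray,
                          student_feats: np.ndarray) -> float:
    """L2 distance between the 2D DCTs of teacher and student attention maps."""
    t_map = attention_map(teacher_feats)
    s_map = attention_map(student_feats)

    # 2D frequency transform applied before the transfer, so the loss
    # emphasises global image cues rather than purely local detail.
    t_freq = dctn(t_map, type=2, norm="ortho")
    s_freq = dctn(s_map, type=2, norm="ortho")

    return float(np.mean((t_freq - s_freq) ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.standard_normal((256, 14, 14))  # (channels, H, W)
    student = rng.standard_normal((256, 14, 14))
    print(dct_distillation_loss(teacher, student))
```

In practice this loss would be added to the student's task loss during training; the paper analyses how such a DCT-driven term behaves for scene recognition.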
Tags: arxiv, attention, cv, distillation, impact, knowledge, knowledge-distillation, loss