Robust Representation Learning with Self-Distillation for Domain Generalization
April 15, 2024, 4:45 a.m. | Ankur Singh, Senthilnath Jayavelu
cs.CV updates on arXiv.org
Abstract: Despite the recent success of deep neural networks, there remains a need for effective methods to enhance domain generalization using vision transformers. In this paper, we propose a novel domain generalization technique called Robust Representation Learning with Self-Distillation (RRLD) comprising i) intermediate-block self-distillation and ii) augmentation-guided self-distillation to improve the generalization capabilities of transformer-based models on unseen domains. This approach enables the network to learn robust and general features that are invariant to different augmentations …
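The abstract only names the two self-distillation objectives, so as a rough illustration here is a minimal PyTorch sketch of what they could look like. The KL-divergence losses, the temperature, and the auxiliary classification heads on intermediate transformer blocks are assumptions made for this sketch, not the paper's actual formulation.

```python
# Illustrative sketch of the two RRLD objectives named in the abstract,
# assuming a ViT-style classifier. Loss choices and auxiliary heads are
# assumptions, not the paper's specification.
import torch
import torch.nn.functional as F


def intermediate_block_self_distillation(block_logits, final_logits, temperature=4.0):
    """Distill the final head's predictions into earlier blocks.

    block_logits: list of logits from auxiliary heads attached to
        intermediate transformer blocks (an assumed design choice).
    final_logits: logits from the network's final head, used as the teacher.
    """
    teacher = F.softmax(final_logits.detach() / temperature, dim=-1)
    loss = 0.0
    for logits in block_logits:
        student = F.log_softmax(logits / temperature, dim=-1)
        loss = loss + F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
    return loss / len(block_logits)


def augmentation_guided_self_distillation(model, clean_images, augmented_images, temperature=4.0):
    """Encourage augmentation-invariant predictions: the clean view acts as
    teacher for the augmented view of the same batch."""
    with torch.no_grad():
        teacher = F.softmax(model(clean_images) / temperature, dim=-1)
    student = F.log_softmax(model(augmented_images) / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
```

In training, both terms would typically be added to the standard cross-entropy loss on the source domains; the intention, per the abstract, is that features stay stable across both network depth and input augmentations, which is what should transfer to unseen domains.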