April 15, 2024, 4:45 a.m. | Ankur Singh, Senthilnath Jayavelu

cs.CV updates on arXiv.org

arXiv:2302.06874v2 Announce Type: replace
Abstract: Despite the recent success of deep neural networks, there remains a need for effective methods to enhance domain generalization using vision transformers. In this paper, we propose a novel domain generalization technique called Robust Representation Learning with Self-Distillation (RRLD) comprising i) intermediate-block self-distillation and ii) augmentation-guided self-distillation to improve the generalization capabilities of transformer-based models on unseen domains. This approach enables the network to learn robust and general features that are invariant to different augmentations …
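
The excerpt names RRLD's two components but not their exact formulation. Below is a minimal PyTorch sketch of how such self-distillation terms are commonly built: the class name `RRLDLosses`, the `tap_block` choice, the MSE/KL objectives, and the temperature are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RRLDLosses(nn.Module):
    """Hypothetical sketch of RRLD's two self-distillation terms."""

    def __init__(self, dim: int = 384, tap_block: int = 6, temp: float = 4.0):
        super().__init__()
        self.tap_block = tap_block   # which intermediate block acts as student (assumed)
        self.temp = temp             # softmax temperature for the KL term (assumed)
        self.proj = nn.Linear(dim, dim)  # auxiliary head on intermediate features

    def intermediate_distill(self, block_feats: list[torch.Tensor]) -> torch.Tensor:
        # block_feats: per-block [B, D] CLS features from the transformer.
        # The final block is the stop-gradient teacher; an earlier block,
        # passed through a projection head, is the student.
        student = self.proj(block_feats[self.tap_block])
        teacher = block_feats[-1].detach()
        return F.mse_loss(student, teacher)

    def augmentation_distill(self, logits_weak: torch.Tensor,
                             logits_strong: torch.Tensor) -> torch.Tensor:
        # Consistency between views: predictions on the weakly augmented
        # image (detached) guide those on the strongly augmented one.
        t = F.softmax(logits_weak.detach() / self.temp, dim=-1)
        s = F.log_softmax(logits_strong / self.temp, dim=-1)
        return F.kl_div(s, t, reduction="batchmean") * self.temp ** 2


# Toy usage with random tensors standing in for a 12-block ViT-S:
losses = RRLDLosses(dim=384)
feats = [torch.randn(8, 384) for _ in range(12)]
total = (losses.intermediate_distill(feats)
         + losses.augmentation_distill(torch.randn(8, 10), torch.randn(8, 10)))
```

Both terms use a stop-gradient teacher, the standard way to keep self-distillation from collapsing; the paper's actual objectives and block choices may differ.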

