Self-Distilled Vision Transformer for Domain Generalization. (arXiv:2207.12392v2 [cs.CV] UPDATED)
Aug. 15, 2022, 1:12 a.m. | Maryam Sultana, Muzammal Naseer, Muhammad Haris Khan, Salman Khan, Fahad Shahbaz Khan
cs.CV updates on arXiv.org
In the recent past, several domain generalization (DG) methods have been proposed with encouraging performance; however, almost all of them build on convolutional neural networks (CNNs). There has been little to no progress in studying the DG performance of vision transformers (ViTs), which are challenging the supremacy of CNNs on standard benchmarks that are often built on the i.i.d. assumption. This leaves the real-world deployment of ViTs in doubt. In this paper, we attempt to explore ViTs towards addressing the DG problem. Similar to CNNs, ViTs …
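The truncated excerpt does not describe the paper's self-distillation mechanism. As an illustration only, a common form of self-distillation uses the network's own final soft predictions as teacher targets for an intermediate classifier head; the sketch below shows that generic idea in NumPy, and all names (`softmax`, `self_distillation_loss`, the temperature `t`) are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(final_logits, intermediate_logits, t=2.0):
    # Generic self-distillation sketch: the final classifier's softened
    # predictions act as the teacher; an intermediate-block classifier is
    # the student. Loss is the mean KL(teacher || student) over the batch.
    teacher = softmax(final_logits, t)
    student = softmax(intermediate_logits, t)
    kl = (teacher * (np.log(teacher) - np.log(student))).sum(axis=-1)
    return float(kl.mean())
```

When teacher and student logits coincide the loss is zero, and it grows as the intermediate head's predictions diverge from the final ones, which is the usual training signal in self-distillation setups.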