Improving Interpretation Faithfulness for Vision Transformers
May 6, 2024, 4:43 a.m. | Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
cs.LG updates on arXiv.org
Abstract: Vision Transformers (ViTs) have achieved state-of-the-art performance for various vision tasks. One reason behind the success lies in their ability to provide plausible innate explanations for the behavior of neural architectures. However, ViTs suffer from issues with explanation faithfulness, as their focal points are fragile to adversarial attacks and can be easily changed with even slight perturbations on the input image. In this paper, we propose a rigorous approach to mitigate these issues by introducing …
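The "innate explanations" the abstract refers to are typically attention-based saliency maps, e.g. attention rollout (Abnar & Zuidema, 2020), which propagates attention through the layers while accounting for residual connections. The sketch below is an illustrative numpy toy, not the paper's proposed method: it computes a rollout over random attention matrices and shows how a small perturbation of the attention logits (standing in for an input perturbation) can shift the CLS token's focal point.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_rollout(attn_layers):
    """Attention rollout: multiply per-layer attention matrices,
    mixing in the identity to model the residual (skip) connection."""
    n = attn_layers[0].shape[-1]
    rollout = np.eye(n)
    for A in attn_layers:
        A_res = 0.5 * (A + np.eye(n))                    # add residual path
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = A_res @ rollout
    return rollout

rng = np.random.default_rng(0)
n_tokens, n_layers = 10, 4  # token 0 plays the role of the CLS token

# Clean explanation: rollout over the unperturbed attention maps.
logits = [rng.normal(size=(n_tokens, n_tokens)) for _ in range(n_layers)]
clean = attention_rollout([softmax(l) for l in logits])

# Perturbed explanation: the same maps under small logit noise.
perturbed = attention_rollout(
    [softmax(l + 0.5 * rng.normal(size=l.shape)) for l in logits]
)

# Compare which patch the CLS token attends to most in each case.
print("clean focal patch:    ", np.argmax(clean[0, 1:]))
print("perturbed focal patch:", np.argmax(perturbed[0, 1:]))
```

In a real ViT the perturbation would be applied to the input image and the attention maps recomputed by the model; the paper's contribution is a rigorous way to keep such focal points stable, whereas this toy only illustrates why instability is easy to induce.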