March 18, 2024, 4:42 a.m. | Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang

cs.LG updates on arXiv.org

arXiv:2403.10476v1 Announce Type: cross
Abstract: Enhancing the robustness of deep learning models, particularly in the realm of vision transformers (ViTs), is crucial for their real-world deployment. In this work, we provide a finetuning approach to enhance the robustness of vision transformers inspired by the concept of nullspace from linear algebra. Our investigation centers on whether a vision transformer can exhibit resilience to input variations akin to the nullspace property in linear mappings, implying that perturbations sampled from this nullspace do …
