Oct. 31, 2022, 1:15 a.m. | Shunsuke Kitada, Hitoshi Iyatomi

cs.CL updates on arXiv.org arxiv.org

Adversarial training (AT) for attention mechanisms has successfully reduced the drawbacks of attention, namely its vulnerability to perturbations that can degrade prediction performance and model interpretability, by considering adversarial perturbations. However, this technique requires label information, and thus its use is limited to supervised settings. In this study, we explore incorporating virtual AT (VAT) into attention mechanisms, through which adversarial perturbations can be computed even from unlabeled data. To realize this approach, we propose two general training techniques, namely VAT for attention mechanisms (Attention VAT) and "interpretable" VAT for attention mechanisms …
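As a rough illustration of the idea described in the abstract, the sketch below applies generic virtual adversarial training to the attention scores of a toy classifier: a worst-case perturbation of the attention logits is found from the model's own predictions (no labels), and a KL-divergence regularizer penalizes sensitivity to it. This is a minimal sketch, not the authors' exact Attention VAT or iVAT; the model, function names, and hyperparameters (`ToyAttnClassifier`, `vat_attention_loss`, `xi`, `epsilon`) are assumptions made here for illustration.

```python
# Illustrative sketch of VAT applied to attention scores (not the paper's code).
import torch
import torch.nn.functional as F


class ToyAttnClassifier(torch.nn.Module):
    """Minimal attention-based classifier: embed -> additive attention -> linear."""

    def __init__(self, vocab_size=1000, emb_dim=64, num_classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, emb_dim)
        self.attn_scorer = torch.nn.Linear(emb_dim, 1)
        self.out = torch.nn.Linear(emb_dim, num_classes)

    def forward(self, tokens, attn_perturbation=None):
        h = self.emb(tokens)                           # (B, T, D)
        scores = self.attn_scorer(h).squeeze(-1)       # (B, T) attention logits
        if attn_perturbation is not None:
            scores = scores + attn_perturbation        # perturb the attention scores
        alpha = torch.softmax(scores, dim=-1)          # attention weights
        ctx = torch.einsum("bt,btd->bd", alpha, h)     # attention-weighted context
        return self.out(ctx)                           # class logits


def vat_attention_loss(model, tokens, xi=1e-6, epsilon=1.0, n_power=1):
    """Unsupervised VAT regularizer on attention scores (no labels required)."""
    with torch.no_grad():
        clean_probs = F.softmax(model(tokens), dim=-1)

    # Start from a random direction and refine it with power-iteration step(s).
    d = F.normalize(torch.randn(tokens.shape, device=tokens.device), dim=-1)
    for _ in range(n_power):
        d.requires_grad_(True)
        perturbed_logits = model(tokens, attn_perturbation=xi * d)
        dist = F.kl_div(F.log_softmax(perturbed_logits, dim=-1), clean_probs,
                        reduction="batchmean")
        grad = torch.autograd.grad(dist, d)[0]
        d = F.normalize(grad.detach(), dim=-1)

    # Penalize how much predictions change under the adversarial attention
    # perturbation of size epsilon; add this term to the main training loss.
    adv_logits = model(tokens, attn_perturbation=epsilon * d)
    return F.kl_div(F.log_softmax(adv_logits, dim=-1), clean_probs,
                    reduction="batchmean")


model = ToyAttnClassifier()
tokens = torch.randint(0, 1000, (8, 20))   # dummy batch of token ids
loss = vat_attention_loss(model, tokens)   # weight and add to the supervised loss
loss.backward()
```

Because the regularizer is computed from the model's own output distribution rather than gold labels, the same term can be evaluated on unlabeled batches, which is the property the abstract highlights for semi-supervised use.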

Tags: arxiv, attention, attention mechanisms, making, training, virtual
