Aug. 11, 2023, 6:44 a.m. | Zhengzhi Lu, He Wang, Ziyi Chang, Guoan Yang, Hubert P. H. Shum

cs.LG updates on arXiv.org

Recently, methods for skeleton-based human activity recognition have been shown to be vulnerable to adversarial attacks. However, these attack methods require either full knowledge of the victim model (i.e., white-box attacks), access to training data (i.e., transfer-based attacks), or frequent model queries (i.e., black-box attacks). All of these requirements are highly restrictive, raising the question of how detrimental the vulnerability is in practice. In this paper, we show that the vulnerability indeed exists. To this end, we consider a new attack task: the …
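The taxonomy in the abstract maps directly onto how much access an attacker's code needs. As a minimal sketch of the most demanding case, the white-box setting, the example below applies a one-step FGSM perturbation to a skeleton sequence using the victim model's gradients. The toy classifier, tensor shapes, and epsilon value are illustrative assumptions for this sketch, not the paper's proposed attack.

```python
# Minimal white-box attack sketch (FGSM) on a toy skeleton classifier.
# Illustrative only: the model architecture, input shape, and eps are
# assumptions, not the method proposed in the paper.
import torch
import torch.nn as nn

class ToySkeletonClassifier(nn.Module):
    """Hypothetical classifier over a (frames x joints x 3) skeleton sequence."""
    def __init__(self, frames=30, joints=25, num_classes=60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(frames * joints * 3, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, y, eps=0.01):
    """One-step FGSM: move x along the sign of the loss gradient.
    Computing this gradient requires full (white-box) model access."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = ToySkeletonClassifier()
x = torch.randn(1, 30, 25, 3)   # one skeleton sequence (frames, joints, xyz)
y = torch.tensor([0])           # its assumed true action label
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))
```

Transfer-based and black-box attacks relax this requirement by substituting a surrogate model's gradients or repeated query feedback, respectively; the paper's "new attack task" asks what remains possible when even those resources are unavailable.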
