March 14, 2024, 4:43 a.m. | Yanyun Wang, Dehui Du, Haibo Hu, Zi Liang, Yuanhao Liu

cs.LG updates on arXiv.org

arXiv:2209.06388v3 Announce Type: replace
Abstract: Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC). However, neural networks (NNs) are vulnerable to adversarial samples, which cause real-life adversarial attacks that undermine the robustness of AI models. To date, most existing attacks target at feed-forward NNs and image recognition tasks, but they cannot perform well on RNN-based TSC. This is due to the cyclical computation of RNN, which prevents direct model differentiation. In addition, …
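To make the abstract's point concrete, below is a minimal sketch (not from the paper) of a standard FGSM-style gradient attack applied to a toy LSTM time-series classifier. This is the kind of attack designed for feed-forward NNs and image tasks that, per the abstract, transfers poorly to RNN-based TSC; all class names and parameters here are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): FGSM on a toy LSTM
# time-series classifier using PyTorch autodiff through the unrolled RNN.
import torch
import torch.nn as nn


class LSTMClassifier(nn.Module):
    """Toy RNN classifier: one LSTM layer followed by a linear head."""

    def __init__(self, n_features=1, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last hidden state


def fgsm_attack(model, x, y, eps=0.1):
    """One-step FGSM: perturb the input in the direction of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # gradients obtained by backpropagation through time
    return (x_adv + eps * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    model = LSTMClassifier()
    x = torch.randn(8, 50, 1)          # 8 univariate series of length 50
    y = torch.randint(0, 2, (8,))
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())     # perturbation bounded by eps
```

Note that autodiff frameworks can still unroll the recurrence and compute gradients; the abstract's argument is that such direct gradient-based perturbation is ill-suited to the cyclical computation and temporal structure of RNN-based TSC, which motivates the paper's alternative (multi-objective) attack.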

