March 14, 2024, 4:43 a.m. | Yanyun Wang, Dehui Du, Haibo Hu, Zi Liang, Yuanhao Liu

cs.LG updates on arXiv.org

arXiv:2209.06388v3 Announce Type: replace
Abstract: Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC). However, neural networks (NNs) are vulnerable to adversarial samples, which enable real-life adversarial attacks that undermine the robustness of AI models. To date, most existing attacks target feed-forward NNs and image recognition tasks, and they perform poorly on RNN-based TSC. This is due to the cyclical computation of RNNs, which prevents direct model differentiation. In addition, …
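For context on why direct model differentiation matters here, below is a minimal sketch (not the paper's method) of a standard gradient-based attack, FGSM, applied to an LSTM time-series classifier in PyTorch. The model architecture, data shapes, and epsilon are illustrative assumptions; the point is that the input gradient must be obtained by backpropagation through time across the recurrence, which is the step the abstract identifies as problematic for attacks designed against feed-forward NNs.

```python
# Illustrative baseline only, not the attack proposed in the paper.
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    """Hypothetical LSTM classifier: last hidden state -> class logits."""
    def __init__(self, n_features=1, hidden=64, n_classes=5):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])    # classify from final time step

def fgsm_attack(model, x, y, eps=0.05):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()                        # BPTT through the recurrent cell
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = RNNClassifier()
x = torch.randn(8, 100, 1)                 # 8 series, 100 time steps each
y = torch.randint(0, 5, (8,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())             # perturbation bounded by eps
```

Note that such a single-step, locally optimized perturbation tends to be visually conspicuous on time series, which is one motivation the paper gives for a multi-objective attack instead.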

