Web: http://arxiv.org/abs/2209.06388

Sept. 15, 2022, 1:11 a.m. | Yanyun Wang, Dehui Du, Yuanhao Liu

cs.LG updates on arXiv.org arxiv.org

Deep neural network (DNN) classifiers are vulnerable to adversarial attacks.
Although existing gradient-based attacks perform well on feed-forward models
and image recognition tasks, extending them to time series classification with
recurrent neural networks (RNNs) remains difficult: the cyclical structure of
RNNs hinders direct model differentiation, and the visual sensitivity of time
series data to perturbations challenges the traditional local optimization
objective of minimizing perturbation. In this paper, an efficient and widely
applicable approach called …
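To make the premise concrete, the gradient-based attacks the abstract refers to perturb an input in the direction that increases the classifier's loss. Below is a minimal FGSM-style sketch on a toy logistic classifier (the weights, inputs, and `fgsm_perturb` helper are illustrative assumptions, not the paper's proposed method), showing why such attacks are straightforward when the model is directly differentiable:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM-style step: move x along the sign of the loss gradient.

    For a logistic model p = sigmoid(w.x + b) with cross-entropy loss,
    the gradient of the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a fixed linear classifier (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, -0.2])   # clean input with true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)

def loss(x_):
    # Cross-entropy loss for the true label y = 1.
    return -np.log(sigmoid(x_ @ w + b))
```

Here the clean input is classified correctly (p ≈ 0.69), while the bounded sign perturbation pushes it across the decision boundary (p ≈ 0.33) and raises the loss. For an RNN, the same recipe requires back-propagating through the unrolled recurrence, which is the obstacle the abstract highlights.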

