July 13, 2022, 1:10 a.m. | Jia Liu, Ran Cheng, Yaochu Jin

cs.LG updates on arXiv.org arxiv.org

Deep neural networks have been found vulnerable to adversarial attacks, thus
raising potential concerns in security-sensitive contexts. To address this
problem, recent research has investigated the adversarial robustness of deep
neural networks from the architectural point of view. However, searching for
architectures of deep neural networks is computationally expensive,
particularly when coupled with an adversarial training process. To meet the
above challenge, this paper proposes a bi-fidelity multiobjective neural
architecture search approach. First, we formulate the NAS problem for enhancing adversarial …
