July 13, 2022, 1:10 a.m. | Jia Liu, Ran Cheng, Yaochu Jin

cs.LG updates on arXiv.org

Deep neural networks have been found to be vulnerable to adversarial attacks,
raising potential concerns in security-sensitive contexts. To address this
problem, recent research has investigated the adversarial robustness of deep
neural networks from the architectural point of view. However, searching for
architectures of deep neural networks is computationally expensive,
particularly when coupled with an adversarial training process. To meet this
challenge, this paper proposes a bi-fidelity multiobjective neural architecture
search (NAS) approach. First, we formulate the NAS problem for enhancing adversarial …
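To make the bi-fidelity idea concrete, below is a minimal sketch (not the paper's implementation) of a two-stage multiobjective NAS loop: every candidate is first screened with a cheap, low-fidelity estimate, and only the non-dominated survivors are re-evaluated at high fidelity with full adversarial training. The architecture encoding, evaluation functions, and objective values here are hypothetical stand-ins for illustration only.

```python
import random

def low_fidelity_eval(arch):
    # Hypothetical cheap proxy: short training plus a weak attack.
    random.seed(hash(arch) % (2**32))
    clean_acc = random.uniform(0.6, 0.9)
    robust_acc = random.uniform(0.2, 0.5)
    return clean_acc, robust_acc

def high_fidelity_eval(arch):
    # Hypothetical expensive evaluation: full adversarial training plus a strong attack.
    clean, robust = low_fidelity_eval(arch)
    return clean - 0.05, robust + 0.05  # stand-in for the refined estimate

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored):
    # Keep only candidates that no other candidate dominates.
    return [(arch, obj) for arch, obj in scored
            if not any(dominates(other, obj) for _, other in scored)]

# Toy search space: architectures encoded as (depth, width) tuples.
population = [(d, w) for d in (8, 16, 32) for w in (32, 64, 128)]

# Stage 1: low-fidelity screening of the whole population.
low_scores = [(arch, low_fidelity_eval(arch)) for arch in population]
survivors = pareto_front(low_scores)

# Stage 2: high-fidelity adversarial evaluation only for the survivors.
high_scores = [(arch, high_fidelity_eval(arch)) for arch in survivors]
for arch, (clean, robust) in pareto_front(high_scores):
    print(f"arch={arch}  clean_acc={clean:.3f}  robust_acc={robust:.3f}")
```

The design point of the two stages is that the expensive objective (robust accuracy under adversarial training) is computed only for the small non-dominated subset, which is what makes the search tractable.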

