March 12, 2024, 4:44 a.m. | Wenxin Ding, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng

cs.LG updates on arXiv.org

arXiv:2401.09574v2 Announce Type: replace
Abstract: As the deployment of deep learning models continues to expand across industries, the threat of malicious incursions aimed at gaining access to these deployed models is on the rise. Should an attacker gain access to a deployed model, whether through server breaches, insider attacks, or model inversion techniques, they can then construct white-box adversarial attacks to manipulate the model's classification outcomes, thereby posing significant risks to organizations that rely on these models for critical tasks. …
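For context on the threat the abstract describes, the sketch below shows one of the simplest white-box adversarial attacks, the Fast Gradient Sign Method (FGSM): once an attacker has full access to a model's weights, they can backpropagate through it and perturb an input to flip its classification. This is a generic, minimal illustration of white-box attack construction, not the specific attack or defense studied in the paper; model, x, y, and epsilon are placeholder names.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # FGSM: perturb the input in the direction of the sign of the loss
    # gradient, which requires white-box access to the model's weights.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take a single epsilon-sized step and clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Stronger iterative variants (e.g., PGD) follow the same pattern with repeated small steps, which is why possession of the deployed weights is such a serious escalation for the attacker.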

