April 9, 2024, 4:42 a.m. | Qun Li, Yuan Meng, Chen Tang, Jiacheng Jiang, Zhi Wang

cs.LG updates on arXiv.org

arXiv:2404.05639v1 Announce Type: new
Abstract: Quantization is a promising technique for reducing the bit-width of deep models to improve their runtime performance and storage efficiency, and thus becomes a fundamental step for deployment. In real-world scenarios, quantized models are often faced with adversarial attacks which cause the model to make incorrect inferences by introducing slight perturbations. However, recent studies have paid less attention to the impact of quantization on the model robustness. More surprisingly, existing studies on this topic even …

