Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization
March 19, 2024, 4:48 a.m. | Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang
cs.CV updates on arXiv.org
Abstract: The task of No-Reference Image Quality Assessment (NR-IQA) is to estimate the quality score of an input image without additional information. NR-IQA models play a crucial role in the media industry, aiding in performance evaluation and optimization guidance. However, these models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images, resulting in significant changes in predicted scores. In this paper, we propose a defense method to improve the stability …
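The defense the abstract describes penalizes the model's sensitivity to small input perturbations by regularizing the norm of the score's gradient with respect to the input image. A minimal NumPy sketch of this idea on a toy one-hidden-layer scorer (the network, loss weights, and names here are illustrative assumptions, not the paper's actual model or loss):

```python
import numpy as np

def score(x, W, v):
    """Toy stand-in for an NR-IQA scorer: one-hidden-layer tanh network."""
    return float(v @ np.tanh(W @ x))

def input_gradient(x, W, v):
    """Analytic gradient of the score with respect to the input x."""
    h = np.tanh(W @ x)
    return W.T @ (v * (1.0 - h ** 2))

def regularized_loss(x, y, W, v, lam=0.1):
    """Fidelity term (squared error against a target score y) plus a
    gradient-norm penalty; lam trades accuracy for robustness."""
    g = input_gradient(x, W, v)
    return (score(x, W, v) - y) ** 2 + lam * float(g @ g)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.5
v = rng.normal(size=4)
x = rng.normal(size=8)

# Sanity check: analytic input gradient matches central finite differences.
g = input_gradient(x, W, v)
eps = 1e-6
g_fd = np.array([
    (score(x + eps * np.eye(8)[i], W, v)
     - score(x - eps * np.eye(8)[i], W, v)) / (2 * eps)
    for i in range(8)
])
print(np.allclose(g, g_fd, atol=1e-5))
```

Training on `regularized_loss` drives down the input-gradient norm, so an attacker's imperceptible perturbation moves the predicted quality score less; the finite-difference check above only verifies that the penalty is computed from the correct gradient.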