April 23, 2024, 4:47 a.m. | Chenxi Yang, Yujia Liu, Dingquan Li, Yan Zhong, Tingting Jiang

cs.CV updates on arXiv.org

arXiv:2404.13277v1 Announce Type: cross
Abstract: Deep neural networks have demonstrated impressive success in No-Reference Image Quality Assessment (NR-IQA). However, recent research highlights the vulnerability of NR-IQA models to subtle adversarial perturbations, leading to inconsistencies between model predictions and subjective ratings. Current adversarial attacks focus on perturbing the predicted scores of individual images, neglecting the crucial inter-score correlation relationships within an entire image set. Meanwhile, it is important to note that the correlation, like ranking correlation, plays a significant …
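To make the abstract's point concrete, here is a minimal sketch (with hypothetical scores, not data from the paper) of the ranking correlation it refers to: the Spearman rank-order correlation (SROCC) between subjective ratings and model predictions over an image set. It illustrates how small per-image score changes can leave each prediction individually plausible while destroying the set-level ranking.

```python
# Hedged sketch: all scores below are invented for illustration.

def ranks(xs):
    """Return the 1-based rank of each value; assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def srocc(a, b):
    """Spearman rank-order correlation via the d^2 formula (no ties)."""
    n = len(a)
    ra, rb = ranks(a), ranks(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical subjective ratings (MOS) for five images.
mos = [2.1, 3.4, 4.0, 4.6, 5.0]

# Clean model predictions: monotone with MOS, so SROCC is 1.0.
clean = [30.0, 45.0, 52.0, 60.0, 71.0]

# Perturbed predictions: each score is still in a sensible range,
# but the predicted ordering is reversed, so SROCC drops to -1.0.
attacked = [38.0, 37.0, 36.0, 35.0, 34.0]

print(srocc(mos, clean))     # -> 1.0
print(srocc(mos, attacked))  # -> -1.0
```

This is why an attack that only nudges individual scores can still be devastating when evaluation depends on correlation across the whole set, which is the gap the abstract says prior attacks neglect.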

