April 23, 2024, 4:47 a.m. | Chenxi Yang, Yujia Liu, Dingquan Li, Yan Zhong, Tingting Jiang

cs.CV updates on arXiv.org

arXiv:2404.13277v1 Announce Type: cross
Abstract: Deep neural networks have demonstrated impressive success in No-Reference Image Quality Assessment (NR-IQA). However, recent research highlights the vulnerability of NR-IQA models to subtle adversarial perturbations, which lead to inconsistencies between model predictions and subjective ratings. Current adversarial attacks focus on perturbing the predicted scores of individual images, neglecting the crucial aspect of inter-score correlation within an entire image set. Meanwhile, it is important to note that the correlation, like ranking correlation, plays a significant …
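To make the notion of "ranking correlation" concrete, the minimal sketch below (not taken from the paper; all scores are hypothetical) computes Spearman's rank-order correlation coefficient (SROCC), a standard set-level metric in IQA, between subjective ratings and NR-IQA predictions before and after a perturbation:

```python
# Illustrative sketch only: SROCC measures how well predicted quality scores
# preserve the ranking of subjective ratings (MOS) over an image set.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores for a small image set (not from the paper).
mos = np.array([72.1, 55.4, 88.0, 63.2, 40.7])           # subjective ratings
pred_clean = np.array([70.5, 57.0, 85.3, 61.8, 43.1])     # NR-IQA predictions
pred_attacked = np.array([60.2, 66.8, 58.9, 71.4, 62.5])  # after perturbation

srocc_clean, _ = spearmanr(mos, pred_clean)
srocc_attacked, _ = spearmanr(mos, pred_attacked)
print(f"SROCC clean:    {srocc_clean:.3f}")    # close to 1: ranking preserved
print(f"SROCC attacked: {srocc_attacked:.3f}") # lower: ranking disrupted
```

An attack that only shifts individual scores may leave this set-level ranking largely intact, which is why correlation-oriented evaluation is a distinct concern.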

