May 7, 2024, 4:43 a.m. | Peiyu Yang, Naveed Akhtar, Jiantong Jiang, Ajmal Mian

cs.LG updates on arXiv.org arxiv.org

arXiv:2405.02344v1 Announce Type: cross
Abstract: Attribution methods compute importance scores for input features to explain the output predictions of deep models. However, accurate assessment of attribution methods is challenged by the lack of benchmark fidelity for attributing model predictions. Moreover, other confounding factors in attribution estimation, including the setup choices of post-processing techniques and explained model predictions, further compromise the reliability of the evaluation. In this work, we first identify a set of fidelity criteria that reliable benchmarks for attribution …
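The abstract's opening sentence describes attribution methods as computing per-feature importance scores for a model's prediction. As a minimal illustration of the idea (not the paper's own method), here is a sketch of one common attribution technique, Gradient × Input, applied to a hypothetical logistic model; all weights and inputs are made-up toy values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(w, b, x):
    """Gradient x Input attribution for a logistic model p = sigmoid(w.x + b).

    The importance score of feature i is (dp/dx_i) * x_i: the local
    sensitivity of the prediction to the feature, scaled by its value.
    """
    z = w @ x + b
    grad = sigmoid(z) * (1.0 - sigmoid(z)) * w  # dp/dx for the logistic model
    return grad * x

# Toy model and input (illustrative values only)
w = np.array([2.0, -1.0, 0.0])   # third feature is ignored by the model
b = 0.0
x = np.array([1.0, 1.0, 5.0])

scores = gradient_x_input(w, b, x)
```

In this toy setup a feature the model ignores (zero weight) receives zero attribution, while the others receive signed scores; the paper's point is that judging whether such scores are *faithful* to the model requires benchmarks whose own fidelity is established.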

