May 8, 2023, 12:47 a.m. | Guoyang Liu, Jindi Zhang, Antoni B. Chan, Janet H. Hsiao

cs.CV updates on arXiv.org

We examined whether embedding human attention knowledge into saliency-based
explainable AI (XAI) methods for computer vision models could enhance their
plausibility and faithfulness. We first developed new gradient-based XAI
methods for object detection models to generate object-specific explanations by
extending existing methods for image classification models. Interestingly,
while these gradient-based methods worked well for explaining image
classification models, when used to explain object detection models,
the resulting saliency maps generally had lower faithfulness than human
attention maps when …
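
The gradient-based saliency approach the abstract builds on can be illustrated with a minimal sketch. Below is a generic vanilla-gradient saliency map for an image classification model in PyTorch; this is not the authors' object-detection extension, and the ResNet model and random input are placeholders chosen for illustration.

```python
# Minimal sketch of vanilla gradient saliency for an image classifier,
# the family of methods the paper extends to object detection.
# NOTE: generic illustration only; the model and input are placeholders,
# not the authors' implementation.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()             # placeholder classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # dummy RGB input

logits = model(image)                  # forward pass
target = logits.argmax(dim=1).item()   # explain the top-scoring class

# Backpropagate the target-class score to the input pixels.
logits[0, target].backward()

# Saliency map: max absolute gradient over the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape)
```

Extending such maps to object detection requires attributing each detection's score separately, which is the object-specific explanation problem the abstract describes.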
