May 8, 2023, 12:47 a.m. | Guoyang Liu, Jindi Zhang, Antoni B. Chan, Janet H. Hsiao

cs.CV updates on arXiv.org

We examined whether embedding human attention knowledge into saliency-based
explainable AI (XAI) methods for computer vision models could enhance their
plausibility and faithfulness. We first developed new gradient-based XAI
methods for object detection models that generate object-specific explanations
by extending current methods for image classification models. Interestingly,
while these gradient-based methods worked well for explaining image
classification models, when used to explain object detection models, the
resulting saliency maps generally had lower faithfulness than human
attention maps when …
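The gradient-based saliency methods the abstract refers to attribute a model's class score back to input pixels via the gradient of that score with respect to the input. The paper's object-detection extension is not specified here, so the following is only a minimal sketch of the underlying classification-style technique (vanilla gradient saliency), using a hypothetical toy linear classifier rather than any model from the paper:

```python
import numpy as np

def gradient_saliency(W, target_class):
    """Vanilla gradient saliency for a toy linear classifier.

    For a linear model, score_c(x) = W[c] @ x, so the gradient of the
    target-class score with respect to the input x is exactly W[c].
    The saliency map is the absolute value of that gradient: one
    importance value per input pixel.
    """
    grad = W[target_class]      # d score_c / d x, constant for a linear model
    return np.abs(grad)

# Toy 2-class classifier over a flattened 4x4 "image" (16 pixels).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))

saliency = gradient_saliency(W, target_class=1).reshape(4, 4)
print(saliency.shape)
```

For a deep network the gradient is input-dependent and is obtained by backpropagation rather than read off the weights, but the interpretation is the same: pixels with larger gradient magnitude are deemed more influential for the class score.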

