Feb. 27, 2024, 5:43 a.m. | Yasmine Mustafa, Tie Luo

cs.LG updates on arXiv.org

arXiv:2402.16008v1 Announce Type: cross
Abstract: The evolution of deep learning and artificial intelligence has significantly reshaped technological landscapes. However, their effective application in crucial sectors such as medicine demands not just superior performance but trustworthiness as well. While interpretability plays a pivotal role, existing explainable AI (XAI) approaches often do not reveal "Clever Hans" behavior, where a model makes (ungeneralizable) correct predictions by exploiting spurious correlations or biases in the data. Likewise, current post-hoc XAI methods are susceptible to generating …
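
For readers unfamiliar with the term, here is a minimal sketch of Clever Hans behavior (not from the paper; the synthetic data, feature names, and model choice are purely illustrative): a classifier scores well in training by leaning on a spurious "watermark" feature that happens to match the label, then degrades once that shortcut no longer correlates with the label at test time.

```python
# Hypothetical illustration of Clever Hans / shortcut learning.
# A weak "true signal" plus a spurious "watermark" feature that matches the
# label 98% of the time in training but is uninformative at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, spurious_corr):
    """Return features X = [true_signal, watermark] and labels y."""
    y = rng.integers(0, 2, size=n)
    true_signal = y + rng.normal(0.0, 2.0, size=n)            # weak, noisy signal
    watermark = np.where(rng.random(n) < spurious_corr, y, 1 - y)  # shortcut feature
    X = np.column_stack([true_signal, watermark])
    return X, y

X_train, y_train = make_split(2000, spurious_corr=0.98)  # shortcut present
X_test,  y_test  = make_split(2000, spurious_corr=0.50)  # shortcut removed

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # high: shortcut exploited
print("test accuracy: ", clf.score(X_test, y_test))    # drops without the shortcut
print("weights (signal, watermark):", clf.coef_)       # watermark weight dominates
```

An XAI method that attributes the model's decisions mostly to the watermark feature would expose this shortcut, which is the kind of failure mode the abstract argues many existing approaches miss.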

