July 22, 2022, 1:13 a.m. | Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

cs.CV updates on arXiv.org arxiv.org

As AI technology is increasingly applied to high-impact, high-risk domains, a
number of new methods have been proposed to make AI models more interpretable
to humans. Despite the recent growth of interpretability work, proposed
techniques lack systematic evaluation. In this work, we introduce HIVE (Human
Interpretability of Visual Explanations), a novel human evaluation framework
that assesses the utility of explanations to human users in AI-assisted
decision-making scenarios and enables falsifiable hypothesis testing,
cross-method comparison, …

