Feb. 27, 2023, 11:53 p.m. | Allen Institute for AI

Abstract: Deep neural networks have achieved remarkable results on many computer vision tasks, but their increasing complexity poses challenges for interpretability. In this talk, I will present my research on explainability in deep learning models, from convolutional neural networks (CNNs) to multi-modal transformers, across tasks ranging from static image analysis to active perception, and demonstrate how it can make these models more human-like. I will focus on how interpretability can establish user trust, identify failure modes, provide targeted human …
