Feb. 27, 2023, 11:53 p.m. | Allen Institute for AI

www.youtube.com

Abstract: Deep neural networks have achieved remarkable results in several computer vision tasks, but their increasing complexity poses challenges for interpretability. In this talk, I will present my research on explainability in deep learning models, from convolutional neural networks (CNNs) to multi-modal transformers, applied to tasks spanning static image analysis to active perception, and demonstrate how explainability can make these models more human-like. I will focus on how interpretability can establish user trust, identify failure modes, provide targeted human …
