Saliency Suppressed, Semantics Surfaced: Visual Transformations in Neural Networks and the Brain
April 30, 2024, 4:47 a.m. | Gustaw Opiełka, Jessica Loke, Steven Scholte
cs.CV updates on arXiv.org arxiv.org
Abstract: Deep learning algorithms lack human-interpretable accounts of how they transform raw visual input into a robust semantic understanding, which impedes comparisons between different architectures, training objectives, and the human brain. In this work, we take inspiration from neuroscience and employ representational approaches to shed light on how neural networks encode information at low (visual saliency) and high (semantic similarity) levels of abstraction. Moreover, we introduce a custom image dataset where we systematically manipulate salient and …
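The representational approaches the abstract mentions are commonly built on representational similarity analysis (RSA): compute a representational dissimilarity matrix (RDM) over a stimulus set for each layer or brain region, then correlate RDMs to compare systems. A minimal sketch, assuming random stand-in activations and plain NumPy — the function names, shapes, and metric choices below are illustrative assumptions, not the authors' actual pipeline:

```python
# Hypothetical RSA sketch: build RDMs from activation matrices and
# compare them. Shapes and names are illustrative, not from the paper.
import numpy as np

def rdm(activations: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activation patterns. activations: (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(activations)

def rdm_similarity(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    """Compare two RDMs via Spearman correlation of their
    upper-triangular entries (the standard RSA comparison)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    # Spearman correlation = Pearson correlation of the rank vectors
    ranks_a = np.argsort(np.argsort(a)).astype(float)
    ranks_b = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ranks_a, ranks_b)[0, 1])

# Stand-in activations for 10 stimuli from two hypothetical layers
rng = np.random.default_rng(0)
early_layer = rng.standard_normal((10, 512))  # e.g. low-level / saliency
late_layer = rng.standard_normal((10, 512))   # e.g. high-level / semantic
score = rdm_similarity(rdm(early_layer), rdm(late_layer))
```

In this framing, a layer whose RDM correlates more with a saliency-based RDM than with a semantic one would be said to encode low-level information, which is the kind of comparison across abstraction levels the abstract describes.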