April 24, 2024, 4:45 a.m. | Felipe Torres, Hanwei Zhang, Ronan Sicre, Stéphane Ayache, Yannis Avrithis

cs.CV updates on arXiv.org

arXiv:2404.14996v1 Announce Type: new
Abstract: Explanations obtained from transformer-based architectures in the form of raw attention can be seen as a class-agnostic saliency map. Additionally, attention-based pooling serves as a form of masking in feature space. Motivated by this observation, we design an attention-based pooling mechanism intended to replace Global Average Pooling (GAP) at inference. This mechanism, called Cross-Attention Stream (CA-Stream), comprises a stream of cross-attention blocks interacting with features at different network depths. CA-Stream enhances interpretability in …
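
To make the idea concrete, here is a minimal sketch of cross-attention pooling used as a drop-in replacement for GAP. This is an illustration of the general mechanism, not the authors' CA-Stream implementation: the module name CrossAttentionPool, the single learned query token, and all dimensions are assumptions for the sketch.

```python
# Minimal cross-attention pooling: a learned query token attends over the
# backbone's feature tokens, replacing a plain spatial average (GAP).
# Sketch only; not the paper's CA-Stream code.
import torch
import torch.nn as nn

class CrossAttentionPool(nn.Module):
    """Pool (B, N, dim) feature tokens into a (B, dim) vector via cross-attention."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learned query token
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim), e.g. a flattened CNN feature map or ViT patch tokens
        q = self.cls.expand(feats.size(0), -1, -1)       # one query per sample
        kv = self.norm(feats)
        pooled, _ = self.attn(q, kv, kv)                 # query attends to all tokens
        return pooled.squeeze(1)                         # (B, dim), vs. feats.mean(dim=1)

# Usage: pool backbone features instead of averaging them.
feats = torch.randn(4, 196, 768)      # hypothetical 14x14 token grid, dim 768
pool = CrossAttentionPool(dim=768)
print(pool(feats).shape)              # torch.Size([4, 768])
```

The attention weights produced by such a block are class-agnostic saliency over the input tokens, which is the interpretability angle the abstract points to; CA-Stream itself stacks such cross-attention blocks across several network depths rather than using a single one.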

