June 20, 2022, 1:13 a.m. | Daniel Reisenbüchler, Sophia J. Wagner, Melanie Boxberg, Tingying Peng

cs.CV updates on arXiv.org

Classical multiple instance learning (MIL) methods often rest on the assumption
that instances are independent and identically distributed (i.i.d.), hence
neglecting the potentially rich contextual information beyond individual
entities. On the other hand, Transformers with global self-attention modules
have been proposed to model the interdependencies among all instances. However,
in this paper we ask: is global relation modeling via self-attention
necessary, or can self-attention calculations be appropriately restricted to
local regimes in large-scale whole slide images (WSIs)? We propose a
general-purpose …
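
To make the local-attention idea concrete, here is a minimal sketch (not the paper's implementation; the function name, the k-nearest-neighbour construction, and all shapes are assumptions for illustration): each WSI patch embedding attends only to its k nearest spatial neighbours on the slide, instead of attending to all patches globally as in standard self-attention.

```python
# Hypothetical sketch of locally restricted self-attention over WSI patches.
# Not the authors' code: neighbourhoods are built here with a simple k-NN over
# patch coordinates, which is one possible way to define "local regimes".
import torch
import torch.nn.functional as F

def local_self_attention(feats, coords, k=8):
    """feats: (N, D) patch embeddings; coords: (N, 2) patch positions on the slide.
    Each patch attends only to its k nearest spatial neighbours (including itself)."""
    n, d = feats.shape
    # Pairwise distances between patch positions -> k-NN neighbourhood per patch.
    dist = torch.cdist(coords, coords)                 # (N, N)
    knn = dist.topk(k, largest=False).indices          # (N, k) neighbour indices
    q = feats                                          # one query per patch
    k_feats = feats[knn]                               # (N, k, D) neighbour keys
    v_feats = feats[knn]                               # (N, k, D) neighbour values
    # Scaled dot-product attention, restricted to the local neighbourhood.
    attn = torch.einsum('nd,nkd->nk', q, k_feats) / d ** 0.5
    attn = F.softmax(attn, dim=-1)                     # (N, k)
    return torch.einsum('nk,nkd->nd', attn, v_feats)   # (N, D) updated embeddings

# Toy usage: 1000 patches with 384-dim embeddings and (x, y) slide coordinates,
# followed by simple mean pooling as a stand-in for the MIL aggregation step.
feats = torch.randn(1000, 384)
coords = torch.rand(1000, 2) * 100
slide_repr = local_self_attention(feats, coords).mean(dim=0)
```

Compared with global self-attention, whose cost grows quadratically in the number of patches, restricting each query to a fixed-size neighbourhood keeps the attention cost linear in N, which is what makes the computation tractable on large-scale WSIs.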

arxiv attention cv graph graph-based local attention prediction transformer
