Web: http://arxiv.org/abs/2205.06672

June 20, 2022, 1:11 a.m. | Daniel Reisenbüchler, Sophia J. Wagner, Melanie Boxberg, Tingying Peng

cs.LG updates on arXiv.org arxiv.org

Classical multiple instance learning (MIL) methods often assume that instances are identically and independently distributed (i.i.d.), hence neglecting the potentially rich contextual information beyond individual entities. On the other hand, Transformers with global self-attention modules have been proposed to model the interdependencies among all instances. However, in this paper we ask: is global relation modeling using self-attention necessary, or can we appropriately restrict self-attention calculations to local regimes in large-scale whole slide images (WSIs)? We propose a general-purpose …
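The question the abstract raises, restricting self-attention from all patch pairs to local spatial neighborhoods of a WSI, can be illustrated with a short sketch. The snippet below is a hypothetical, single-head example of locality-masked attention over patch embeddings; the function name, the k-nearest-neighbor neighborhood construction, and the identity query/key/value projections are assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn.functional as F

def local_self_attention(feats, coords, k=8):
    """Self-attention restricted to each patch's k nearest spatial neighbors.

    feats:  (N, D) patch embeddings extracted from a WSI
    coords: (N, 2) patch positions on the slide
    Hypothetical single-head sketch; projections are omitted for brevity.
    """
    N, D = feats.shape
    # Pairwise distances between patch positions define the "local regime".
    dist = torch.cdist(coords, coords)                 # (N, N)
    knn = dist.topk(k + 1, largest=False).indices      # (N, k+1), includes self
    # Mask out everything except each patch's local neighborhood.
    mask = torch.full((N, N), float("-inf"))
    mask.scatter_(1, knn, 0.0)

    scores = (feats @ feats.T) / D ** 0.5 + mask       # masked attention logits
    attn = F.softmax(scores, dim=-1)
    return attn @ feats                                # (N, D) locally contextualized features
```

Compared with global self-attention, this keeps the quadratic score matrix but zeroes out long-range interactions; a production implementation would typically compute only the k neighbor scores per patch to reduce memory on large WSIs.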

arxiv attention cv graph graph-based local attention prediction transformer
