April 8, 2024, 4:44 a.m. | Manjin Kim, Paul Hongsuck Seo, Cordelia Schmid, Minsu Cho

cs.CV updates on arXiv.org

arXiv:2404.03924v1 Announce Type: new
Abstract: We introduce a new attention mechanism, dubbed structural self-attention (StructSA), that leverages rich correlation patterns naturally emerging in key-query interactions of attention. StructSA generates attention maps by recognizing space-time structures of key-query correlations via convolution and uses them to dynamically aggregate local contexts of value features. This effectively leverages rich structural patterns in images and videos such as scene layouts, object motion, and inter-object relations. Using StructSA as a main building block, we develop the …

