Web: http://arxiv.org/abs/2205.02151

May 5, 2022, 1:12 a.m. | Haowei Zhu, Wenjing Ke, Dong Li, Ji Liu, Lu Tian, Yi Shan

cs.LG updates on arXiv.org

Self-attention mechanisms have recently shown impressive performance on
various NLP and CV tasks, as they help capture sequential characteristics and
derive global information. In this work, we explore how to extend
self-attention modules to better learn subtle feature embeddings for
recognizing fine-grained objects, e.g., different bird species or person
identities. To this end, we propose a dual cross-attention learning (DCAL)
algorithm to coordinate with self-attention learning. First, we propose
global-local cross-attention (GLCA) to enhance the interactions between global
images and …
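The excerpt cuts off before the GLCA details, but the mechanism it extends, cross-attention, differs from self-attention only in where the queries come from: Q is projected from one token set while K and V come from another. The sketch below illustrates single-head cross-attention in PyTorch; the module name, the choice of local tokens as queries and global tokens as keys/values, and all shapes are illustrative assumptions, not the paper's exact GLCA formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    """Single-head cross-attention sketch: queries from one token set,
    keys/values from another. (Generic mechanism only; DCAL's GLCA
    details are not reproduced from the truncated abstract.)"""

    def __init__(self, dim):
        super().__init__()
        self.wq = nn.Linear(dim, dim)
        self.wk = nn.Linear(dim, dim)
        self.wv = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x_q, x_kv):
        # x_q:  (B, N_q, dim)  e.g., selected local-region tokens (assumption)
        # x_kv: (B, N_kv, dim) e.g., all tokens of the global image (assumption)
        q = self.wq(x_q)
        k = self.wk(x_kv)
        v = self.wv(x_kv)
        # In self-attention, Q, K, V share one sequence; here Q attends
        # across a different sequence than K/V.
        attn = F.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
        return attn @ v  # (B, N_q, dim)

# Usage with hypothetical shapes (ViT-S-like token dims assumed):
B, N_q, N_kv, D = 2, 12, 197, 384
local_tokens = torch.randn(B, N_q, D)
global_tokens = torch.randn(B, N_kv, D)
out = CrossAttention(D)(local_tokens, global_tokens)
```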

