Sept. 16, 2022, 1:15 a.m. | Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman

cs.CV updates on arXiv.org

While transformers have begun to dominate many tasks in vision, applying them to large images is still computationally difficult. A large reason for this is that self-attention scales quadratically with the number of tokens, which in turn scales quadratically with the image size. On larger images (e.g., 1080p), over 60% of the total computation in the network is spent solely on creating and applying attention matrices. We take a step toward solving this issue by introducing Hydra Attention, an extremely …
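To make the quadratic-scaling claim concrete, here is a minimal back-of-the-envelope sketch (not from the paper; the ViT-style patch size of 16 and embedding dimension of 768 are assumptions) showing how the cost of building and applying the attention matrix grows as the input image gets larger:

```python
# Rough cost model for the attention matrix in a ViT-style layer.
# Patch size, embedding dim, and the FLOP accounting are assumptions for
# illustration only; the paper's exact measurements may differ.

def attention_matrix_cost(height: int, width: int, dim: int = 768, patch: int = 16):
    """Return (num_tokens, approx FLOPs for Q @ K^T and attn @ V) for one layer."""
    tokens = (height // patch) * (width // patch)
    # Q @ K^T:  (tokens x dim) @ (dim x tokens)  -> ~2 * tokens^2 * dim FLOPs
    # attn @ V: (tokens x tokens) @ (tokens x dim) -> ~2 * tokens^2 * dim FLOPs
    flops = 4 * tokens * tokens * dim
    return tokens, flops

for h, w in [(224, 224), (1080, 1920)]:
    tokens, flops = attention_matrix_cost(h, w)
    print(f"{h}x{w}: {tokens} tokens, ~{flops / 1e9:.1f} GFLOPs per attention layer")
```

Under these assumptions, a 224x224 image yields 196 tokens and roughly 0.1 GFLOPs per attention layer, while a 1080p frame yields about 8,000 tokens and roughly 200 GFLOPs, which is the kind of blow-up the abstract attributes to creating and applying attention matrices.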

arxiv attention hydra
