Progressive Token Length Scaling in Transformer Encoders for Efficient Universal Segmentation
April 24, 2024, 4:44 a.m. | Abhishek Aich, Yumin Suh, Samuel Schulter, Manmohan Chandraker
cs.CV updates on arXiv.org (arxiv.org)
Abstract: A powerful architecture for universal segmentation relies on transformers that encode multi-scale image features and decode object queries into mask predictions. With efficiency being a high priority for scaling such models, we observed that the state-of-the-art method Mask2Former uses ~50% of its compute only on the transformer encoder. This is due to the retention of a full-length token-level representation of all backbone feature scales at each encoder layer. With this observation, we propose a strategy …
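The abstract is truncated above, but the title indicates that the encoder's token length is varied progressively across layers rather than keeping all backbone scales at full length everywhere. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the paper's implementation: early encoder layers attend over only the coarser backbone scales, and finer scales are appended as depth increases, so the expensive full-length multi-scale sequence is processed only by the later layers. The class name, token counts, and per-layer schedule are illustrative assumptions.

import torch
import torch.nn as nn

class ProgressiveTokenEncoder(nn.Module):
    # Illustrative sketch (not the paper's code): each layer processes only the
    # scales introduced so far; one additional scale is appended after each layer.
    def __init__(self, dim=256, num_heads=8, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                       dim_feedforward=4 * dim, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, scale_tokens):
        # scale_tokens: list of (B, N_i, dim) tensors, ordered coarse -> fine.
        tokens = scale_tokens[0]          # start from the shortest (coarsest) sequence
        included = 1
        for layer in self.layers:
            tokens = layer(tokens)        # self-attention cost grows with sequence length
            if included < len(scale_tokens):
                # Progressively lengthen the token sequence: append the next finer
                # scale, so only deeper layers pay for the full multi-scale representation.
                tokens = torch.cat([tokens, scale_tokens[included]], dim=1)
                included += 1
        return tokens

# Hypothetical usage with three flattened backbone scales (token counts made up).
scales = [torch.randn(2, n, 256) for n in (100, 400, 1600)]
out = ProgressiveTokenEncoder()(scales)
print(out.shape)  # torch.Size([2, 2100, 256])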