April 9, 2024, 4:44 a.m. | Hongjie Wang, Bhishma Dedhia, Niraj K. Jha

cs.LG updates on arXiv.org

arXiv:2305.17328v3 Announce Type: replace-cross
Abstract: Deployment of Transformer models on edge devices is becoming increasingly challenging due to the exponentially growing inference cost that scales quadratically with the number of tokens in the input sequence. Token pruning is an emerging solution to address this challenge due to its ease of deployment on various Transformer backbones. However, most token pruning methods require computationally expensive fine-tuning, which is undesirable in many edge deployment cases. In this work, we propose Zero-TPrune, the first …
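To make the idea concrete: token pruning drops low-importance tokens so that the quadratic attention cost is paid over a shorter sequence. The sketch below is not the paper's Zero-TPrune algorithm; it is a minimal, generic illustration that scores each token by the average attention it receives and keeps only the top fraction. All function names and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def attention_importance(Q, K):
    """Mean attention received by each token (generic heuristic, not Zero-TPrune)."""
    # Scaled dot-product attention matrix: O(n^2) in sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)
    # Importance of token j = average attention column j receives.
    return A.mean(axis=0)

def prune_tokens(X, Q, K, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of tokens by importance, in order."""
    imp = attention_importance(Q, K)
    k = max(1, int(len(imp) * keep_ratio))
    keep = np.sort(np.argsort(imp)[-k:])  # top-k indices, original order preserved
    return X[keep], keep

# Toy example: 8 tokens of dimension 16, pruned to 4.
rng = np.random.default_rng(0)
n, d = 8, 16
X = rng.standard_normal((n, d))
Q = X @ rng.standard_normal((d, d))
K = X @ rng.standard_normal((d, d))
X_pruned, kept = prune_tokens(X, Q, K, keep_ratio=0.5)
print(X_pruned.shape)  # (4, 16)
```

Because the surviving sequence is half as long, each subsequent attention layer computes a 4x4 rather than an 8x8 attention matrix, which is the quadratic saving token pruning targets.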

