Jan. 4, 2022, 9:10 p.m. | Zejiang Hou, Sun-Yuan Kung

cs.CV updates on arXiv.org arxiv.org

Vision transformers (ViT) have recently attracted considerable attention, but their huge computational cost remains an obstacle to practical deployment. Previous ViT pruning methods tend to prune the model along a single dimension only, which can cause excessive reduction in that dimension and lead to sub-optimal model quality. In contrast, we advocate a multi-dimensional ViT compression paradigm and propose to reduce redundancy jointly across the attention-head, neuron, and sequence dimensions. We first propose a statistical-dependence-based pruning criterion that is generalizable …

arxiv compression cv transformer
