Jan. 20, 2022, 2:10 a.m. | Zhexin Li, Tong Yang, Peisong Wang, Jian Cheng

cs.CV updates on arXiv.org

In this paper, we propose a fully differentiable quantization method for
the vision transformer (ViT), named Q-ViT, in which both the quantization
scales and the bit-widths are learnable parameters. Specifically, based on our
observation that heads in ViT display different quantization robustness, we
leverage head-wise bit-widths to squeeze the size of Q-ViT while preserving
performance. In addition, we propose a novel technique named switchable scale
to resolve the convergence problem in the joint training of quantization scales
and bit-widths. In …
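The core idea, making both the quantization scale and the bit-width trainable, can be illustrated with a short PyTorch sketch. This is not the authors' released code: the class name LearnableQuantizer and the straight-through-estimator (STE) relaxation of the integer bit-width are assumptions made here for illustration only.

# A minimal sketch (not the Q-ViT implementation) of a uniform quantizer
# whose scale and bit-width are both learnable parameters. Rounding is
# non-differentiable, so both the bit-width and the quantized values pass
# gradients through a straight-through estimator.
import torch
import torch.nn as nn

class LearnableQuantizer(nn.Module):  # hypothetical name
    """Symmetric uniform quantizer with learnable scale and bit-width."""

    def __init__(self, init_scale: float = 0.1, init_bits: float = 8.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(init_scale))
        self.bits = nn.Parameter(torch.tensor(init_bits))  # relaxed to a float

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # STE: forward uses the rounded (integer) bit-width,
        # backward treats rounding as the identity.
        bits = self.bits + (torch.round(self.bits) - self.bits).detach()
        qmax = 2.0 ** (bits - 1) - 1  # symmetric signed range

        # Scale, clamp to the representable range, round with an STE,
        # then de-quantize back to the original scale.
        q = torch.clamp(x / self.scale, -qmax, qmax)
        q = q + (torch.round(q) - q).detach()
        return q * self.scale

# Usage (hypothetical): instantiating one quantizer per attention head would
# let each head learn its own bit-width, in the spirit of the paper's
# head-wise scheme.
quant = LearnableQuantizer()
w = torch.randn(64, 64)
w_q = quant(w)

Since both self.scale and self.bits are nn.Parameter objects, the optimizer updates them jointly with the network weights; the paper's switchable-scale technique addresses the convergence issues that arise in exactly this joint training, but its details are not reproduced here.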

arxiv cv transformer vision
