Mokey: Enabling Narrow Fixed-Point Inference for Out-of-the-Box Floating-Point Transformer Models. (arXiv:2203.12758v1 [cs.LG])
March 25, 2022, 1:10 a.m. | Ali Hadi Zadeh, Mostafa Mahmoud, Ameer Abdelhadi, Andreas Moshovos
cs.LG updates on arXiv.org
Increasingly large and capable Transformer models keep advancing state-of-the-art accuracy for Natural Language Processing applications, but these models demand ever more computational power, storage, and energy. Mokey reduces the footprint of state-of-the-art 32-bit or 16-bit floating-point transformer models by quantizing all values to 4-bit indexes into dictionaries of representative 16-bit fixed-point centroids. Mokey does not require fine-tuning, an essential feature, since the training resources or datasets are often unavailable. Exploiting the range of values that naturally occur …
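The abstract describes a dictionary (codebook) quantization scheme: each value is replaced by a 4-bit index into a 16-entry table of representative 16-bit fixed-point centroids, with no fine-tuning. The sketch below illustrates that general idea on a single tensor. It is not the paper's algorithm: Mokey's centroid construction exploits the value distribution of transformer models, whereas this sketch assumes plain 1-D k-means, and the `frac_bits` fixed-point format and all function names are illustrative assumptions.

```python
# Minimal sketch of dictionary-based quantization (NOT Mokey's method):
# map a float tensor to 4-bit indices into a 16-entry codebook of
# 16-bit fixed-point centroids. Centroid selection here is 1-D k-means.
import numpy as np

def build_codebook(values: np.ndarray, n_centroids: int = 16,
                   n_iters: int = 25) -> np.ndarray:
    """1-D k-means over the tensor's values; returns sorted centroids."""
    flat = values.ravel().astype(np.float64)
    # Initialize centroids at evenly spaced quantiles of the data.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, n_centroids))
    for _ in range(n_iters):
        # Assign every value to its nearest centroid.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_centroids):
            members = flat[idx == k]
            if members.size:
                centroids[k] = members.mean()
    return np.sort(centroids)

def to_fixed_point(centroids: np.ndarray, frac_bits: int = 12) -> np.ndarray:
    """Round centroids to 16-bit fixed point with frac_bits fractional bits
    (frac_bits is an assumed format, not taken from the paper)."""
    scaled = np.round(centroids * (1 << frac_bits))
    return np.clip(scaled, -32768, 32767).astype(np.int16)

def quantize(values, centroids_fx, frac_bits=12):
    """Return 4-bit indices into the codebook (stored here in uint8)."""
    centroids = centroids_fx.astype(np.float64) / (1 << frac_bits)
    return np.abs(values[..., None] - centroids).argmin(axis=-1).astype(np.uint8)

def dequantize(indices, centroids_fx, frac_bits=12):
    """Look up centroids and convert back to float for comparison."""
    return centroids_fx[indices].astype(np.float32) / (1 << frac_bits)

# Example: quantize a random "weight matrix" without any fine-tuning.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
cb = to_fixed_point(build_codebook(w))
idx = quantize(w, cb)
w_hat = dequantize(idx, cb)
print("max abs error:", np.abs(w - w_hat).max())
```

At 4 bits per value plus one small 16-entry table per dictionary, storage drops roughly 8x relative to 32-bit and 4x relative to 16-bit floating point, which matches the footprint reduction the abstract claims.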