March 25, 2022, 1:10 a.m. | Ali Hadi Zadeh, Mostafa Mahmoud, Ameer Abdelhadi, Andreas Moshovos

cs.LG updates on arXiv.org

Ever larger and more capable Transformer models keep advancing
state-of-the-art accuracy and capability for Natural Language Processing
applications. These models demand more computational power, storage, and
energy. Mokey reduces the footprint of state-of-the-art 32-bit or 16-bit
floating-point transformer models by quantizing all values to 4-bit indexes
into dictionaries of representative 16-bit fixed-point centroids. Mokey requires
no fine-tuning, an essential feature, since the training resources or datasets
needed for fine-tuning are often unavailable. Exploiting the range of values that
naturally occur …
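
The excerpt describes dictionary-based quantization: each floating-point value is replaced by a 4-bit index into a 16-entry codebook of 16-bit fixed-point centroids, and values are recovered by lookup. The sketch below illustrates that general scheme in NumPy; the k-means-style centroid fitting, the Q3.12 fixed-point format, and all function names are illustrative assumptions, since the truncated abstract does not say how Mokey actually constructs its dictionaries.

```python
import numpy as np

def build_codebook(values, num_centroids=16, frac_bits=12, iters=25):
    """Fit a dictionary of representative centroids to a tensor's values.
    The k-means loop is an illustrative stand-in, not Mokey's method.
    Centroids are rounded to signed 16-bit fixed point (Q3.12 here)."""
    flat = values.ravel()
    # Initialize centroids evenly across the observed value range.
    centroids = np.linspace(flat.min(), flat.max(), num_centroids)
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recenter.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(num_centroids):
            members = flat[idx == k]
            if members.size:
                centroids[k] = members.mean()
    # Quantize centroids to 16-bit fixed point with `frac_bits` fraction bits.
    scale = 1 << frac_bits
    return np.clip(np.round(centroids * scale), -32768, 32767) / scale

def quantize(values, centroids):
    """Map every value to the 4-bit index of its nearest centroid."""
    idx = np.abs(values[..., None] - centroids).argmin(axis=-1)
    return idx.astype(np.uint8)  # only the low 4 bits are used per index

def dequantize(indexes, centroids):
    """Recover approximate values by dictionary lookup."""
    return centroids[indexes]

# Example: quantize a synthetic weight tensor and measure the error.
w = np.random.randn(256, 256).astype(np.float32)
cb = build_codebook(w)
q = quantize(w, cb)
print("mean abs error:", np.abs(dequantize(q, cb) - w).mean())
```

The storage saving falls out of the index width: each 32-bit or 16-bit value shrinks to a 4-bit index, plus a small per-tensor dictionary of sixteen 16-bit centroids.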
