Adaptive Block Floating-Point for Analog Deep Learning Hardware. (arXiv:2205.06287v1 [cs.LG])
May 16, 2022, 1:11 a.m. | Ayon Basumallik, Darius Bunandar, Nicholas Dronen, Nicholas Harris, Ludmila Levkova, Calvin McCarter, Lakshmi Nair, David Walter, David Widemann
cs.LG updates on arXiv.org (arxiv.org)
Analog mixed-signal (AMS) devices promise faster, more energy-efficient deep
neural network (DNN) inference than their digital counterparts. However, recent
studies show that DNNs on AMS devices with fixed-point numbers can incur an
accuracy penalty because of precision loss. To mitigate this penalty, we
present a novel AMS-compatible adaptive block floating-point (ABFP) number
representation. We also introduce amplification (or gain) as a method for
increasing the accuracy of the number representation without increasing the bit
precision of the output. We evaluate …
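The block floating-point idea underlying ABFP can be sketched in a few lines: each block of values shares a single exponent, chosen from the block's largest magnitude, while individual elements keep only a narrow integer mantissa. The sketch below is a generic BFP quantizer for illustration only, not the paper's ABFP method or its amplification scheme; the function name and parameters are hypothetical.

```python
import math

def bfp_quantize(block, mantissa_bits=8):
    """Illustrative block floating-point quantizer (assumed, generic BFP):
    one shared exponent per block, signed integer mantissas per element."""
    max_abs = max(abs(x) for x in block)
    if max_abs == 0.0:
        return [0.0] * len(block)
    # Shared exponent: smallest power of two bounding the block's max magnitude.
    shared_exp = math.floor(math.log2(max_abs)) + 1
    # Step size so that 2**shared_exp maps to the top of the mantissa range.
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    qmax = 2 ** (mantissa_bits - 1) - 1
    # Round each element to its mantissa, clamp, and rescale.
    return [max(-qmax, min(qmax, round(x / scale))) * scale for x in block]

print(bfp_quantize([0.12, -3.5, 0.007, 1.25]))  # → [0.125, -3.5, 0.0, 1.25]
```

Note how the smallest element (0.007) collapses to 0.0 because the shared exponent is set by the block's largest value (-3.5): this is the kind of precision loss the abstract refers to, which per-block adaptation and gain aim to mitigate.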