June 9, 2022, 1:12 a.m. | David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele

cs.CV updates on arXiv.org

Deep neural network (DNN) accelerators have received considerable attention in
recent years due to their potential to save energy compared to mainstream
hardware. Low-voltage operation of DNN accelerators allows energy consumption
to be reduced further, but it causes bit-level failures in the memory storing
the quantized weights. Furthermore, DNN accelerators are vulnerable to
adversarial attacks on voltage controllers or individual bits. In this paper,
we show that a combination of robust fixed-point quantization, weight clipping,
and random bit error training (RandBET) …
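The abstract only names the ingredients, so a minimal sketch of the core idea, injecting random bit errors into fixed-point quantized weights during training, may help. Everything below is an illustrative assumption, not the paper's implementation: the function names (`quantize`, `inject_random_bit_errors`), the clipping range `w_max=0.1`, and the error rate `p=0.01` are all hypothetical.

```python
import numpy as np

def quantize(weights, w_max=0.1, bits=8):
    """Symmetric fixed-point quantization with weight clipping to [-w_max, w_max].
    (w_max=0.1 is an illustrative value, not from the paper.)"""
    clipped = np.clip(weights, -w_max, w_max)
    scale = w_max / (2 ** (bits - 1) - 1)
    return np.round(clipped / scale).astype(np.int8), scale

def inject_random_bit_errors(q_weights, p=0.01, bits=8, rng=None):
    """Flip each stored bit independently with probability p,
    modeling bit-level failures of low-voltage accelerator memory."""
    rng = rng or np.random.default_rng()
    raw = q_weights.view(np.uint8)          # reinterpret two's-complement bits
    flip_mask = np.zeros_like(raw)
    for b in range(bits):
        flips = rng.random(raw.shape) < p   # which weights lose bit b
        flip_mask |= flips.astype(np.uint8) << b
    return (raw ^ flip_mask).view(np.int8)

# A RandBET-style training step would evaluate the loss on weights
# perturbed this way, so the network learns to tolerate such errors.
w = np.random.randn(4, 4).astype(np.float32) * 0.05
q, scale = quantize(w)
q_noisy = inject_random_bit_errors(q, p=0.01)
w_noisy = q_noisy.astype(np.float32) * scale
```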

