Oct. 13, 2022, 1:13 a.m. | Dimitrios Danopoulos, Georgios Zervakis, Kostas Siozios, Dimitrios Soudris, Jörg Henkel

cs.LG updates on arXiv.org

Current state-of-the-art accelerators employ approximate multipliers to address the
sharply increased power demands of DNN inference. However, evaluating the
accuracy of approximate DNNs is cumbersome due to the lack of adequate support
for approximate arithmetic in DNN frameworks. We address this inefficiency by
presenting AdaPT, a fast emulation framework that extends PyTorch to support
approximate inference as well as approximation-aware retraining. AdaPT can be
seamlessly deployed and is compatible with most DNNs. We evaluate the
framework on several DNN models …
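The abstract does not show AdaPT's interface, but the general mechanism it describes, emulating an approximate multiplier inside PyTorch layers, can be sketched as follows. Everything in this snippet (the names build_approx_lut and ApproxLinear, the truncation-based approximation, the per-tensor 8-bit fake quantization) is a hypothetical illustration of lookup-table-based emulation, not AdaPT's actual API.

```python
import torch
import torch.nn as nn


def build_approx_lut(bits: int = 8) -> torch.Tensor:
    """Precompute a (2^bits x 2^bits) table of approximate products for
    unsigned operands. The 'approximation' here simply truncates the four
    least-significant bits of the exact product; a real flow would load
    the behavioural table of a specific approximate multiplier circuit."""
    n = 1 << bits
    a = torch.arange(n, dtype=torch.int64).view(-1, 1)
    b = torch.arange(n, dtype=torch.int64).view(1, -1)
    exact = a * b
    return (exact // 16) * 16  # hypothetical truncation-based approximation


class ApproxLinear(nn.Module):
    """Linear layer whose multiplications are routed through the LUT.
    Inputs and weights are fake-quantized to 8-bit magnitudes, signed
    products are rebuilt from the LUT lookups, and accumulation stays exact."""

    def __init__(self, in_features: int, out_features: int, lut: torch.Tensor):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.lut = lut

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-tensor scales mapping magnitudes onto the 0..255 LUT index range.
        x_scale = x.abs().max().clamp(min=1e-8) / 255.0
        w_scale = self.weight.abs().max().clamp(min=1e-8) / 255.0
        xq = (x.abs() / x_scale).round().clamp(0, 255).long()           # (B, in)
        wq = (self.weight.abs() / w_scale).round().clamp(0, 255).long()  # (out, in)

        # Approximate |x|*|w| for every (input, weight) pair via the LUT,
        # then reapply signs and scales and accumulate with exact addition.
        prod = self.lut[xq.unsqueeze(1), wq.unsqueeze(0)].float()  # (B, out, in)
        sign = x.sign().unsqueeze(1) * self.weight.sign().unsqueeze(0)
        return (prod * sign).sum(dim=2) * x_scale * w_scale


lut = build_approx_lut()
layer = ApproxLinear(16, 4, lut)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 4])
```

Approximation-aware retraining, which the abstract says AdaPT also supports, would additionally require gradients through the quantization and lookup steps (e.g. a straight-through estimator); this sketch covers only the inference-emulation path.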

arxiv, dnn, dnn accelerators, pytorch
