May 19, 2022, 1:11 a.m. | Ahmet Inci, Siri Garudanagiri Virupaksha, Aman Jain, Venkata Vivek Thallam, Ruizhou Ding, Diana Marculescu

cs.LG updates on arXiv.org

As the machine learning and systems community strives to achieve higher
energy efficiency through custom DNN accelerators and model compression
techniques, there is a need for a design space exploration framework that
incorporates quantization-aware processing elements into the accelerator design
space while providing accurate and fast power, performance, and area models. In
this work, we present QAPPA, a highly parameterized quantization-aware power,
performance, and area modeling framework for DNN accelerators. Our framework
can facilitate future research on design space exploration …
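The abstract does not describe QAPPA's implementation, but the general idea of a parameterized, quantization-aware power, performance, and area model can be illustrated with a small sketch. All names, the API shape, and the scaling constants below are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a quantization-aware PPA estimate for a DNN
# accelerator processing element. Constants and scaling laws are illustrative
# first-order assumptions, not QAPPA's actual models.

from dataclasses import dataclass


@dataclass
class PEConfig:
    weight_bits: int      # weight precision of the MAC unit
    activation_bits: int  # activation precision of the MAC unit
    frequency_mhz: float  # operating clock frequency


# Made-up reference numbers for a baseline 8x8-bit MAC unit.
BASELINE_ENERGY_PJ = 0.2   # energy per MAC operation (pJ)
BASELINE_AREA_UM2 = 300.0  # silicon area of one MAC unit (um^2)


def mac_energy_pj(cfg: PEConfig) -> float:
    """Multiplier energy is often approximated as proportional to the
    product of operand bit-widths (a common first-order assumption)."""
    return BASELINE_ENERGY_PJ * (cfg.weight_bits * cfg.activation_bits) / (8 * 8)


def mac_area_um2(cfg: PEConfig) -> float:
    """Array-multiplier area also grows roughly with the bit-width product."""
    return BASELINE_AREA_UM2 * (cfg.weight_bits * cfg.activation_bits) / (8 * 8)


def layer_energy_uj(macs: int, cfg: PEConfig) -> float:
    """Total MAC energy for one layer, ignoring memory traffic for brevity."""
    return macs * mac_energy_pj(cfg) * 1e-6  # pJ -> uJ


if __name__ == "__main__":
    int8 = PEConfig(weight_bits=8, activation_bits=8, frequency_mhz=500)
    int4 = PEConfig(weight_bits=4, activation_bits=4, frequency_mhz=500)
    macs = 3 * 3 * 64 * 128 * 56 * 56  # MACs in an example 3x3 conv layer
    print(f"INT8 layer energy: {layer_energy_uj(macs, int8):.1f} uJ")
    print(f"INT4 layer energy: {layer_energy_uj(macs, int4):.1f} uJ")
```

A real framework would calibrate such per-precision models against synthesis or measurement data rather than relying on first-order bit-width scaling, but the sketch captures why quantization-aware processing-element models matter for design space exploration.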

Tags: arXiv, DNN, DNN accelerators, modeling, performance, power, quantization
