ApproxDARTS: Differentiable Neural Architecture Search with Approximate Multipliers
April 15, 2024, 4:41 a.m. | Michal Pinos, Lukas Sekanina, Vojtech Mrazek
cs.LG updates on arXiv.org arxiv.org
Abstract: Integrating the principles of approximate computing into the design of hardware-aware deep neural networks (DNNs) has led to DNN implementations that combine good output quality with highly optimized hardware parameters such as low latency or low inference energy. In this work, we present ApproxDARTS, a neural architecture search (NAS) method that enables the popular differentiable architecture search method DARTS to exploit approximate multipliers and thus reduce the power consumption of the generated neural networks. We showed on …
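The core idea DARTS contributes, which ApproxDARTS builds on, is a continuous relaxation: instead of picking one operation per edge of the network cell, the search optimizes a softmax-weighted mixture of all candidate operations. The sketch below illustrates this relaxation with a candidate set that, in the spirit of ApproxDARTS, includes approximate-multiplier variants. It is a minimal illustration, not the authors' implementation: the operation names and the scaling factors standing in for approximate multipliers are hypothetical.

```python
import math

def softmax(alphas):
    # Numerically stable softmax over the architecture parameters.
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical candidate operations: an exact multiply and two
# approximate-multiplier variants, modeled here as scaled multiplies
# (real approximate multipliers introduce input-dependent error).
def exact_mul(x, w):
    return x * w

def approx_mul_a(x, w):
    return x * w * 0.98  # toy stand-in for a mildly approximate multiplier

def approx_mul_b(x, w):
    return x * w * 0.95  # toy stand-in for a cheaper, less accurate one

def mixed_op(x, w, alphas, ops):
    # DARTS continuous relaxation: the output is the softmax-weighted
    # sum of all candidate operations, so the architecture parameters
    # (alphas) can be optimized by gradient descent alongside weights.
    weights = softmax(alphas)
    return sum(wt * op(x, w) for wt, op in zip(weights, ops))

ops = [exact_mul, approx_mul_a, approx_mul_b]
alphas = [0.0, 0.0, 0.0]  # uniform mixture before the search begins
y = mixed_op(2.0, 3.0, alphas, ops)  # average of 6.0, 5.88 and 5.7
```

After the search converges, the discrete architecture is recovered by keeping, on each edge, the operation with the largest architecture parameter; in the ApproxDARTS setting that choice trades off accuracy against the power saved by the approximate multipliers.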