Post-Training Quantization for Energy Efficient Realization of Deep Neural Networks. (arXiv:2210.07906v1 [cs.LG])
Oct. 17, 2022, 1:12 a.m. | Cecilia Latotzke, Batuhan Balim, Tobias Gemmeke
cs.LG updates on arXiv.org arxiv.org
The biggest challenge for deploying Deep Neural Networks (DNNs) on edge devices, close to where the data is generated, is their size, i.e., their memory footprint and computational complexity. Both are significantly reduced by quantization. With the resulting lower word-length, the energy efficiency of DNNs increases proportionally. However, a lower word-length typically causes accuracy degradation. To counteract this effect, the quantized DNN is retrained. Unfortunately, training costs up to 5000x more energy than inference with the quantized DNN. To address this …
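The core idea the abstract describes, mapping weights to a lower word-length without retraining, can be sketched as symmetric per-tensor int8 quantization. This is a minimal illustration of the general technique, not the paper's specific method; the function names and the NumPy setup are my own.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8.

    The scale maps the maximum-magnitude weight onto the int8
    range, so rounding error is bounded by scale / 2.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for a DNN layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.max(np.abs(w - w_hat))  # bounded by roughly scale / 2
```

Storing `q` instead of `w` cuts the memory footprint 4x versus float32, and integer arithmetic at the lower word-length is what drives the energy savings the abstract refers to.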