Oct. 17, 2022, 1:12 a.m. | Cecilia Latotzke, Batuhan Balim, Tobias Gemmeke

cs.LG updates on arXiv.org arxiv.org

The biggest challenge for deploying Deep Neural Networks (DNNs) on edge devices, close to where the data are generated, is their size, i.e., their memory footprint and computational complexity. Both are significantly reduced by quantization. With the resulting lower word-length, the energy efficiency of DNNs increases proportionally. However, a lower word-length typically causes accuracy degradation. To counteract this effect, the quantized DNN is retrained. Unfortunately, training costs up to 5000x more energy than inference of the quantized DNN. To address this …
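The word-length reduction the abstract describes can be illustrated with plain post-training uniform quantization. The sketch below is my own minimal example, not the paper's method: it assumes a symmetric per-tensor int8 scheme, and the function names are hypothetical.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Symmetric uniform quantization of a weight tensor to n_bits.

    Illustrative only: production frameworks also handle per-channel
    scales, zero-points, and fused quantized kernels.
    """
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 127 for int8
    scale = np.max(np.abs(w)) / qmax       # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in weights
q, scale = quantize_uniform(w, n_bits=8)

# Memory shrinks 4x (float32 -> int8); the reconstruction error is the
# accuracy cost that retraining (quantization-aware training) recovers.
print(w.nbytes, q.nbytes)   # 16384 4096
err = np.mean(np.abs(w - dequantize(q, scale)))
```

The rounding error `err` is small per weight, but it compounds across layers of a deep network, which is why the quantized model is typically retrained despite the high energy cost of training.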

arxiv energy energy efficient networks neural networks quantization training
