all AI news
A Converting Autoencoder Toward Low-latency and Energy-efficient DNN Inference at the Edge
March 13, 2024, 4:41 a.m. | Hasanul Mahmud, Peng Kang, Kevin Desai, Palden Lama, Sushil Prasad
cs.LG updates on arXiv.org
Abstract: Reducing inference time and energy usage while maintaining prediction accuracy has become a significant concern for deep neural network (DNN) inference on resource-constrained edge devices. To address this problem, we propose a novel approach based on a "converting" autoencoder and lightweight DNNs. This improves upon recent work such as early-exiting frameworks and DNN partitioning. Early-exiting frameworks spend different amounts of computation on different inputs depending on their complexity. However, they can be inefficient in …
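For context, an early-exiting framework attaches a shallow classifier partway through the network and lets "easy" inputs exit with that prediction when it is confident enough, so only hard inputs pay for the full model. A minimal confidence-threshold sketch (the function names, stand-in models, and threshold below are illustrative assumptions, not from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_infer(x, shallow_head, full_model, threshold=0.9):
    """Return (label, exited_early) using a confidence-based early exit.

    `shallow_head` and `full_model` map an input vector to class logits;
    if the shallow head's top softmax probability clears `threshold`,
    the more expensive full model is skipped entirely.
    """
    probs = softmax(shallow_head(x))
    if probs.max() >= threshold:
        return int(probs.argmax()), True
    probs = softmax(full_model(x))
    return int(probs.argmax()), False

# Illustrative stand-ins: the shallow head is confident only on "easy" inputs.
shallow = lambda x: np.array([5.0, 0.0]) if x[0] > 0 else np.array([0.1, 0.0])
full = lambda x: np.array([0.0, 3.0])

print(early_exit_infer(np.array([1.0]), shallow, full))   # easy input: exits early
print(early_exit_infer(np.array([-1.0]), shallow, full))  # hard input: runs full model
```

The inefficiency the abstract alludes to is that every hard input still pays for the shallow exit's computation on top of the full model's; the proposed converting-autoencoder approach is positioned as an alternative to that per-input branching.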