Beyond Inference: Performance Analysis of DNN Server Overheads for Computer Vision
March 21, 2024, 4:42 a.m. | Ahmed F. AbouElhamayed, Susanne Balle, Deshanand Singh, Mohamed S. Abdelfattah
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Deep neural network (DNN) inference has become an important part of many data-center workloads. This has prompted focused efforts to design ever-faster deep learning accelerators such as GPUs and TPUs. However, an end-to-end DNN-based vision application contains more than just DNN inference, including input decompression, resizing, sampling, normalization, and data transfer. In this paper, we perform a thorough evaluation of computer vision inference requests performed on a throughput-optimized serving system. We quantify the performance impact …
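The abstract notes that an end-to-end vision request involves more than the DNN forward pass: decompression, resizing, normalization, and data transfer all add overhead. A minimal sketch of such a staged serving path, with stand-in stage bodies (this is illustrative only, not the paper's serving system), makes the point that per-stage costs can be measured separately from inference itself:

```python
# Minimal sketch (not the paper's code) of an end-to-end vision inference
# request: decode, resize, normalize, and "inference" are separate stages
# whose wall-clock costs add to the DNN time. All stage bodies are
# simplified stand-ins for the real operations.
import time

def decode(raw):
    # Stand-in for JPEG decompression: scale raw bytes to [0, 1].
    return [[p / 255.0 for p in row] for row in raw]

def resize(img, h, w):
    # Stand-in for nearest-neighbor resize to (h, w).
    H, W = len(img), len(img[0])
    return [[img[i * H // h][j * W // w] for j in range(w)] for i in range(h)]

def normalize(img, mean=0.5, std=0.5):
    # Per-pixel standardization, as done before a DNN forward pass.
    return [[(p - mean) / std for p in row] for row in img]

def infer(img):
    # Stand-in for the DNN forward pass.
    return sum(sum(row) for row in img)

def serve(raw, timings):
    """Run one full request, recording per-stage wall-clock time."""
    x = raw
    for name, fn in [("decode", decode),
                     ("resize", lambda im: resize(im, 4, 4)),
                     ("normalize", normalize),
                     ("inference", infer)]:
        t0 = time.perf_counter()
        x = fn(x)
        timings[name] = time.perf_counter() - t0
    return x
```

Profiling a real serving system would time hardware decode, DMA transfers, and batched GPU inference the same way: per stage, so pre-/post-processing overhead is visible separately from the accelerator's compute time.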
More from arxiv.org / cs.LG updates on arXiv.org:
- Efficient Data-Driven MPC for Demand Response of Commercial Buildings (arxiv.org)
- Testing the Segment Anything Model on radiology data (arxiv.org)
- Calorimeter shower superresolution (arxiv.org)
Jobs in AI, ML, Big Data:
- Software Engineer for AI Training Data (School Specific) @ G2i Inc | Remote
- Software Engineer for AI Training Data (Python) @ G2i Inc | Remote
- Software Engineer for AI Training Data (Tier 2) @ G2i Inc | Remote
- Data Engineer @ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania
- Artificial Intelligence – Bioinformatic Expert @ University of Texas Medical Branch | Galveston, TX
- Lead Developer (AI) @ Cere Network | San Francisco, US