March 21, 2024, 4:42 a.m. | Timur Ibrayev, Isha Garg, Indranil Chakraborty, Kaushik Roy

cs.LG updates on arXiv.org

arXiv:2403.13082v1 Announce Type: cross
Abstract: Deep learning has proven successful in many applications but suffers from high computational demands and requires custom accelerators for deployment. Crossbar-based analog in-memory architectures are attractive for accelerating deep neural networks (DNNs) because they combine storage and computation in memory, enabling high data reuse and high efficiency. However, they require analog-to-digital converters (ADCs) to communicate crossbar outputs. ADCs consume a significant portion of the energy and area of every crossbar processing unit, thus …
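The trade-off the abstract describes can be illustrated with a small numerical sketch: a crossbar tile computes a matrix-vector product as an analog multiply-accumulate, and an ADC then digitizes each column sum, introducing quantization error that depends on the ADC's bit width. The model below is a simplified illustration (uniform quantization, a single tile, hypothetical function names), not the architecture or pruning method of the paper itself.

```python
import numpy as np

def adc_quantize(x, n_bits, x_min, x_max):
    """Model an n-bit ADC: clip the analog value to the converter's
    input range and snap it to one of 2**n_bits uniform levels."""
    levels = 2 ** n_bits
    x = np.clip(x, x_min, x_max)
    step = (x_max - x_min) / (levels - 1)
    return np.round((x - x_min) / step) * step + x_min

def crossbar_matvec(weights, inputs, adc_bits=6):
    """Analog matrix-vector product on one crossbar tile: each column
    accumulates input-times-conductance currents in memory, and an ADC
    digitizes the column sum before it leaves the tile."""
    analog_out = inputs @ weights  # in-memory multiply-accumulate
    # Worst-case magnitude of any column sum bounds the ADC input range.
    bound = np.abs(weights).sum(axis=0).max() * np.abs(inputs).max()
    return adc_quantize(analog_out, adc_bits, -bound, bound)

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(64, 16))  # conductance matrix (one tile)
x = rng.uniform(0, 1, size=64)         # input voltages
y_digital = crossbar_matvec(W, x, adc_bits=6)
y_ideal = x @ W
print("max quantization error:", np.abs(y_digital - y_ideal).max())
```

Lowering `adc_bits` shrinks the converter's energy and area cost but doubles the quantization step with every bit removed, which is why ADC precision dominates the efficiency of each crossbar processing unit.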

