[R] Stella Nera: Achieving 161 TOp/s/W with Multiplier-free DNN Acceleration based on Approximate Matrix Multiplication
Jan. 2, 2024, 11:34 a.m. | /u/APaperADay
Machine Learning www.reddit.com
**Code**: [https://github.com/joennlae/halutmatmul](https://github.com/joennlae/halutmatmul)
**Abstract**:
>From classical HPC to deep learning, MatMul is at the heart of today's computing. The recent Maddness method approximates MatMul without multiplications by using a hash-based version of product quantization (PQ) that indexes into a look-up table (LUT). **Stella Nera** is the first Maddness accelerator; it achieves 15x higher area efficiency (GMAC/s/mm^2) and more than 25x higher energy efficiency (TMAC/s/W) than direct MatMul accelerators implemented in the same technology. The hash function …
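The PQ-into-LUT idea from the abstract can be sketched in a few lines of NumPy. This is a hedged illustration, not the Maddness algorithm itself: it uses exact nearest-prototype encoding and sampled prototypes where Maddness uses a learned hash function and trained centroids. The function name `pq_matmul` and all parameters are illustrative.

```python
import numpy as np

def pq_matmul(A, B, n_subspaces=4, n_prototypes=16, seed=0):
    """Approximate A @ B via product quantization:
    split A's columns into subspaces, quantize each sub-row to its
    nearest prototype, and replace the inner product with table
    lookups and additions -- no multiplies in the query-time loop."""
    rng = np.random.default_rng(seed)
    N, D = A.shape
    sub = D // n_subspaces                        # columns per subspace
    out = np.zeros((N, B.shape[1]))
    for s in range(n_subspaces):
        As = A[:, s * sub:(s + 1) * sub]          # (N, sub) slice of A
        Bs = B[s * sub:(s + 1) * sub, :]          # (sub, M) slice of B
        # Prototypes: sampled rows here; Maddness learns them offline.
        k = min(n_prototypes, N)
        protos = As[rng.choice(N, size=k, replace=False)]
        # Encode each sub-row by its nearest prototype
        # (Maddness replaces this step with a cheap learned hash).
        d2 = ((As[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        codes = d2.argmin(1)                      # (N,) prototype indices
        # Precompute prototype-times-B dot products once ...
        lut = protos @ Bs                         # (k, M)
        # ... then the "MatMul" is just lookup + accumulate.
        out += lut[codes]
    return out
```

When `n_prototypes >= N`, every sub-row becomes its own prototype and the result equals the exact product, which makes the approximation/accuracy trade-off easy to probe by shrinking `n_prototypes`.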