Jan. 2, 2024, 11:34 a.m. | /u/APaperADay

Machine Learning | www.reddit.com

**Paper**: [https://arxiv.org/abs/2311.10207](https://arxiv.org/abs/2311.10207)

**Code**: [https://github.com/joennlae/halutmatmul](https://github.com/joennlae/halutmatmul)

**Abstract**:

>From classical HPC to deep learning, MatMul is at the heart of today's computing. The recent Maddness method approximates MatMul without the need for multiplication by using a hash-based version of product quantization (PQ) indexing into a look-up table (LUT). **Stella Nera** is the first Maddness accelerator and it achieves 15x higher area efficiency (GMAC/s/mm²) and more than 25x higher energy efficiency (TMAC/s/W) than direct MatMul accelerators implemented in the same technology. The hash function …

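For intuition, here is a minimal NumPy sketch of the general PQ/LUT idea the abstract describes: rows of the input are mapped to prototype indices per subspace, partial dot products between the prototypes and the known weight matrix are precomputed into LUTs, and the product is then approximated by summing table lookups. This is an illustration only, not the paper's Maddness/Stella Nera pipeline (which uses a learned decision-tree hash encoder and fixed-point LUTs rather than the k-means encoder used here); all function names and parameter choices below are illustrative assumptions.

```python
import numpy as np

def train_prototypes(A, n_subspaces=4, n_prototypes=16, iters=10, seed=0):
    """Learn n_prototypes per subspace over the rows of A with plain k-means
    (a stand-in for the paper's learned hash-based encoder)."""
    rng = np.random.default_rng(seed)
    N, D = A.shape
    d = D // n_subspaces
    protos = np.empty((n_subspaces, n_prototypes, d))
    for s in range(n_subspaces):
        X = A[:, s * d:(s + 1) * d]
        C = X[rng.choice(N, n_prototypes, replace=False)].copy()
        for _ in range(iters):
            idx = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(n_prototypes):
                if np.any(idx == k):
                    C[k] = X[idx == k].mean(axis=0)
        protos[s] = C
    return protos

def encode(A, protos):
    """Replace each row of A by one prototype index per subspace."""
    n_subspaces, n_prototypes, d = protos.shape
    codes = np.empty((A.shape[0], n_subspaces), dtype=np.intp)
    for s in range(n_subspaces):
        X = A[:, s * d:(s + 1) * d]
        codes[:, s] = np.argmin(((X[:, None, :] - protos[s][None]) ** 2).sum(-1), axis=1)
    return codes

def build_luts(B, protos):
    """Precompute the LUTs: luts[s, k, j] = protos[s, k] . B[s*d:(s+1)*d, j]."""
    n_subspaces, n_prototypes, d = protos.shape
    return np.stack([protos[s] @ B[s * d:(s + 1) * d] for s in range(n_subspaces)])

def approx_matmul(codes, luts):
    """Approximate A @ B by summing looked-up partial products (no MatMul here)."""
    return sum(luts[s, codes[:, s]] for s in range(luts.shape[0]))

# Usage: compare the LUT approximation against the exact product.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(256, 64)), rng.normal(size=(64, 32))
protos = train_prototypes(A)
approx = approx_matmul(encode(A, protos), build_luts(B, protos))
print("mean abs error:", np.abs(approx - A @ B).mean())
```

Once B (e.g. a weight matrix) is fixed, the LUTs are built once offline; at inference time only the encoding and the table-lookup additions remain, which is what the accelerator exploits.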
