April 26, 2024, 4:42 a.m. | Gabriel Kulp, Andrew Ensinger, Lizhong Chen

cs.LG updates on arXiv.org

arXiv:2404.16317v1 Announce Type: cross
Abstract: Tensors play a vital role in machine learning (ML) and often exhibit properties best explored while maintaining their high order. Efficiently performing ML computations requires taking advantage of sparsity, but generalized hardware support is challenging. This paper introduces FLAASH, a flexible and modular accelerator design for sparse tensor contraction that achieves over 25x speedup for a deep learning workload. Our architecture performs sparse high-order tensor contraction by distributing sparse dot products, or portions thereof, to numerous Sparse …
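The core idea the abstract describes, decomposing a high-order sparse tensor contraction into many independent sparse dot products that can be distributed across parallel engines, can be illustrated with a minimal software sketch. The dictionary-based COO-style layout and the specific contraction (a 3rd-order tensor against a vector over one mode) are assumptions chosen for illustration, not the paper's actual hardware design.

```python
# Illustrative sketch only: a software analogue of the sparse dot products
# that a design like FLAASH could distribute across parallel engines.
# The data layout and contraction mode here are assumptions, not the paper's design.
from collections import defaultdict


def sparse_dot(a: dict[int, float], b: dict[int, float]) -> float:
    """Dot product of sparse vectors stored as {index: value} maps.
    Only indices nonzero in both operands contribute, so work scales
    with the number of matching nonzeros rather than the dense length."""
    if len(a) > len(b):
        a, b = b, a  # iterate over the smaller operand
    return sum(v * b[i] for i, v in a.items() if i in b)


def contract_mode_k(T: dict[tuple[int, int, int], float],
                    v: dict[int, float]) -> dict[tuple[int, int], float]:
    """Contract sparse T[i,j,k] with sparse v[k] over k: R[i,j] = sum_k T[i,j,k] * v[k].
    Each output entry is one sparse dot product of a tensor fiber T[i,j,:] with v."""
    fibers: dict[tuple[int, int], dict[int, float]] = defaultdict(dict)
    for (i, j, k), val in T.items():
        fibers[(i, j)][k] = val  # group nonzeros into fibers T[i,j,:]
    result = {}
    for ij, fiber in fibers.items():
        r = sparse_dot(fiber, v)  # each dot product is independent,
        if r != 0.0:              # hence straightforward to distribute
            result[ij] = r
    return result


# Tiny usage example
T = {(0, 0, 2): 1.5, (0, 1, 0): 2.0, (1, 0, 2): -1.0}
v = {2: 4.0, 3: 1.0}
print(contract_mode_k(T, v))  # {(0, 0): 6.0, (1, 0): -4.0}
```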

