Aug. 29, 2022, 1:11 a.m. | Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, Luis Ceze

cs.LG updates on arXiv.org

Sparse tensors are rapidly becoming critical components of modern deep
learning workloads. However, developing high-performance sparse operators can
be difficult and tedious, and existing vendor libraries cannot satisfy the
escalating demands from new operators. Sparse tensor compilers simplify the
development of operators, but efficient sparse compilation for deep learning
remains challenging because a single sparse format cannot maximize hardware
efficiency, and single-shot compilers cannot keep up with the latest hardware
and system advances. We show that the key to addressing both …
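The claim that no single sparse format maximizes hardware efficiency is easy to reproduce with off-the-shelf tools. Below is a minimal sketch, not taken from the paper, that runs the same sparse-dense matrix multiplication (SpMM) under two SciPy storage formats; the matrix sizes, density, and random pattern are illustrative assumptions, and which format wins depends on the sparsity structure and the hardware.

```python
# Sketch: the same SpMM under two sparse formats (CSR vs. BSR).
# CSR favors row-wise access; BSR compresses block structure that
# vectorized kernels can exploit. All parameters are illustrative.
import time

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# A random sparse matrix standing in for, e.g., a pruned weight matrix.
A_csr = sp.random(4096, 4096, density=0.01, format="csr", random_state=rng)
A_bsr = A_csr.tobsr(blocksize=(8, 8))  # same values, block-compressed layout
X = rng.standard_normal((4096, 64))    # dense right-hand side

for name, A in [("CSR", A_csr), ("BSR", A_bsr)]:
    t0 = time.perf_counter()
    for _ in range(10):
        Y = A @ X  # sparse-dense matrix multiplication (SpMM)
    print(f"{name}: {(time.perf_counter() - t0) / 10 * 1e3:.2f} ms per SpMM")
```

On a matrix with genuine block structure the BSR run can be markedly faster, while on scattered nonzeros CSR typically wins; that format-dependence is the motivation for composable formats in a sparse compiler.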
