June 16, 2022, 2:28 p.m. | Machine Learning Street Talk


Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Vitaliy Chiley is a Machine Learning Research Engineer at Cerebras Systems, a next-generation computing hardware company. We spoke about how deep learning workloads, including sparse workloads, can run faster on Cerebras hardware.

Pod: https://anchor.fm/machinelearningstreettalk/episodes/77---Vitaliy-Chiley-Cerebras-e1k1hvu

[00:00:00] Housekeeping
[00:01:08] Preamble
[00:01:50] Vitaliy Chiley Introduction
[00:03:11] Cerebras architecture
[00:08:12] Memory management and FLOP utilisation
[00:18:01] Centralised vs decentralised compute architecture
[00:21:12] Sparsity
[00:22:35] Does Sparse NN imply Heterogeneous compute?
[00:28:09] Cost of distributed memory stores?
[00:29:48] Activation vs weight sparsity …

cerebras
