March 2, 2024, 10:42 a.m. | /u/tunggad

Machine Learning www.reddit.com

Link to paper: [https://arxiv.org/pdf/2402.17764.pdf](https://arxiv.org/pdf/2402.17764.pdf)

Is this real? It sounds too good to be true, right? If it holds up, it not only reduces the VRAM capacity and bandwidth required to train and run LLMs, it also suggests simpler hardware implementations: with ternary weights there is no need for matmul, only the + operation.

Isn't that a threat to Nvidia (the stock) and to AMD as well?
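To see why the + operation suffices, here's a minimal sketch (my own illustration, not the paper's implementation): when weights are constrained to {-1, 0, +1} as in BitNet b1.58, each output element of a matrix-vector product is just a sum of some inputs minus a sum of others, so no multiplications are needed.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))   # ternary weight matrix, values in {-1, 0, +1}
x = rng.standard_normal(8)             # activation vector

# Standard matrix-vector product (uses multiplies)
y_matmul = W @ x

# Multiplication-free equivalent: add inputs where w = +1,
# subtract inputs where w = -1, skip inputs where w = 0
y_addonly = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_matmul, y_addonly)
```

Of course, activations in the paper are still quantized to 8 bits rather than ternary, so real kernels are more involved than this, but the core multiply-free idea is the same.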
