ConSmax: Hardware-Friendly Alternative Softmax with Learnable Parameters
Feb. 20, 2024, 5:42 a.m. | Shiwei Liu, Guanchen Tao, Yifei Zou, Derek Chow, Zichen Fan, Kauna Lei, Dennis Sylvester, Gregory Kielian, Mehdi Saligane
Source: cs.LG updates on arXiv.org
Abstract: The self-attention mechanism sets transformer-based large language models (LLMs) apart from convolutional and recurrent neural networks. Despite the resulting performance improvements, achieving real-time LLM inference on silicon is challenging because of the Softmax operation used extensively in self-attention. Beyond its non-linearity, Softmax's low arithmetic intensity sharply limits processing parallelism, and it becomes the bottleneck especially when dealing with longer contexts. To address this challenge, we propose Constant Softmax (ConSmax), a software-hardware co-design as an …
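The truncated abstract does not spell out the ConSmax formulation. For intuition only, here is a minimal sketch of the general idea, assuming two learnable scalars (called beta and gamma below, both hypothetical names) replace the two data-dependent reductions that make standard Softmax hard to parallelize: the row-wise maximum used for numerical stability and the row-wise sum used for normalization. The paper's actual parameterization and training details may differ.

```python
import torch
import torch.nn as nn


class ConSmaxSketch(nn.Module):
    """Illustrative sketch of a ConSmax-style Softmax replacement.

    Standard Softmax requires two sequential reductions over each row of
    attention scores (max, then sum), which serializes hardware pipelines.
    This sketch replaces both with learnable constants, so every score is
    normalized independently and fully in parallel. `beta` and `gamma` are
    assumed names for illustration, not confirmed from the paper.
    """

    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1))   # learnable shift, stands in for max(x)
        self.gamma = nn.Parameter(torch.ones(1))   # learnable scale, stands in for sum(exp(x))

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # No row-wise max or sum: each element is computed independently,
        # which is what makes the operation hardware-friendly.
        return torch.exp(scores - self.beta) / self.gamma


# Usage: drop-in over raw attention scores of shape (batch, heads, seq, seq).
scores = torch.randn(2, 4, 8, 8)
weights = ConSmaxSketch()(scores)
```

Note that, unlike true Softmax, the outputs of such a scheme are not guaranteed to sum to one per row; the learnable constants are trained so the network compensates, trading exact normalization for parallelism.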