ConSmax: Hardware-Friendly Alternative Softmax with Learnable Parameters
Feb. 20, 2024, 5:42 a.m. | Shiwei Liu, Guanchen Tao, Yifei Zou, Derek Chow, Zichen Fan, Kauna Lei, Dennis Sylvester, Gregory Kielian, Mehdi Saligane
cs.LG updates on arXiv.org
Abstract: The self-attention mechanism sets transformer-based large language models (LLMs) apart from convolutional and recurrent neural networks. Despite the performance improvement it brings, achieving real-time LLM inference on silicon is challenging because of the extensive use of Softmax in self-attention. Beyond its non-linearity, Softmax's low arithmetic intensity greatly reduces processing parallelism, which becomes the bottleneck especially with longer contexts. To address this challenge, we propose Constant Softmax (ConSmax), a software-hardware co-design as an …
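To see why standard Softmax is hardware-unfriendly, consider a minimal NumPy sketch of the numerically stable formulation used in self-attention. The two row-wise reductions (max for stability, sum for normalization) must each scan the entire context before any output can be produced, which is the low-arithmetic-intensity, serial step the abstract identifies. This is a generic illustration of the baseline, not the ConSmax method itself.

```python
import numpy as np

def softmax_rows(scores):
    """Numerically stable softmax over each row of an attention-score matrix.

    Both the row-wise max and the row-wise sum are full reductions over the
    context length: every score must arrive before normalization can begin,
    so longer contexts stretch out this serial dependency.
    """
    m = scores.max(axis=-1, keepdims=True)    # reduction 1: max, for numerical stability
    e = np.exp(scores - m)                    # elementwise non-linearity
    return e / e.sum(axis=-1, keepdims=True)  # reduction 2: normalizing sum

# Example: one query attending over a context of three keys.
scores = np.array([[2.0, 1.0, 0.1]])
probs = softmax_rows(scores)
# each row of probs sums to 1
```

ConSmax's stated approach is to replace these per-row reductions with learnable parameters, removing the serial dependency; the details are in the paper body, which this listing truncates.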