April 1, 2024, 4:42 a.m. | Hanting Chen, Zhicheng Liu, Xutao Wang, Yuchuan Tian, Yunhe Wang

cs.LG updates on arXiv.org

arXiv:2403.19928v1 Announce Type: cross
Abstract: In an effort to reduce the computational load of Transformers, research on linear attention has gained significant momentum. However, improvement strategies for attention mechanisms typically necessitate extensive retraining, which is impractical for large language models with a vast number of parameters. In this paper, we present DiJiang, a novel Frequency Domain Kernelization approach that enables the transformation of a pre-trained vanilla Transformer into a linear-complexity model at little training cost. By employing a …
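The truncated abstract refers to kernelized linear attention in general terms. As a rough illustration of the underlying idea only (not the paper's frequency-domain kernelization), the sketch below shows how a positive feature map phi lets attention be computed as phi(Q)(phi(K)^T V), reducing cost from quadratic to linear in sequence length; the elu-based feature map is an assumption borrowed from prior linear-attention work, not from DiJiang.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Minimal kernelized linear attention sketch.

    q, k: (batch, seq_len, dim); v: (batch, seq_len, dim_v).
    Uses phi(x) = elu(x) + 1 as a positive feature map (an assumption,
    not the kernel used in the DiJiang paper).
    """
    phi_q = F.elu(q) + 1.0
    phi_k = F.elu(k) + 1.0

    # Associativity: compute phi(K)^T V first, which is O(n * d * d_v)
    # instead of the O(n^2 * d) cost of materializing the attention matrix.
    kv = torch.einsum("bnd,bne->bde", phi_k, v)          # (batch, dim, dim_v)

    # Normalizer plays the role of the softmax denominator.
    z = 1.0 / (torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + 1e-6)

    return torch.einsum("bnd,bde,bn->bne", phi_q, kv, z)  # (batch, seq_len, dim_v)

# Example: 2 sequences of length 1024 with 64-dim heads.
q = torch.randn(2, 1024, 64)
k = torch.randn(2, 1024, 64)
v = torch.randn(2, 1024, 64)
out = linear_attention(q, k, v)  # (2, 1024, 64)
```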

arxiv cs.cl cs.lg large language models
