March 25, 2024, 4:41 a.m. | Yoshihide Sawada, Ryuji Saiin, Kazuma Suetake

cs.LG updates on arXiv.org

arXiv:2403.14999v1 Announce Type: new
Abstract: Recently, the number of parameters in DNNs has explosively increased, as exemplified by LLMs (Large Language Models), making inference on small-scale computers more difficult. Model compression technology is, therefore, essential for integration into products. In this paper, we propose a method of quantization-aware training. We introduce a novel normalization (Layer-Batch Normalization) that is independent of the mini-batch size and does not require any additional computation cost during inference. Then, we quantize the weights by the …
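The abstract is truncated before the quantization details, so the following is only a minimal sketch of generic quantization-aware training with a straight-through estimator, a standard technique that the paper builds on; it is not the authors' specific method, and `FakeQuantLinear` and its parameters are hypothetical names used here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuantLinear(nn.Module):
    """Linear layer whose weights are fake-quantized during training (generic QAT sketch)."""

    def __init__(self, in_features, out_features, num_bits=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.num_bits = num_bits

    def forward(self, x):
        # Symmetric uniform quantization of the weights.
        qmax = 2 ** (self.num_bits - 1) - 1
        scale = self.weight.abs().max().clamp(min=1e-8) / qmax
        w_q = torch.clamp(torch.round(self.weight / scale), -qmax, qmax) * scale
        # Straight-through estimator: the forward pass uses quantized weights,
        # while gradients flow to the full-precision weights.
        w_ste = self.weight + (w_q - self.weight).detach()
        return F.linear(x, w_ste, self.bias)


# Toy training step: the optimizer updates full-precision weights, but the
# forward pass always sees their quantized counterparts.
layer = FakeQuantLinear(16, 4, num_bits=4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, target = torch.randn(32, 16), torch.randn(32, 4)
loss = F.mse_loss(layer(x), target)
loss.backward()
opt.step()
```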

