March 19, 2024, 4:53 a.m. | Haoyun Xu, Runzhe Zhan, Derek F. Wong, Lidia S. Chao

cs.CL updates on arXiv.org

arXiv:2403.11621v1 Announce Type: new
Abstract: Large Language Models (LLMs) are composed of neurons that exhibit various behaviors and roles, which become increasingly diversified as models scale. Recent studies have revealed that not all neurons are active across different datasets, and this sparsity correlates positively with the task-specific ability, leading to advancements in model pruning and training efficiency. Traditional fine-tuning methods engage all parameters of LLMs, which is computationally expensive and may not be necessary. In contrast, Parameter-Efficient Fine-Tuning (PEFT) approaches …
