March 11, 2024, 11:44 a.m. | /u/SunsetOneSix

**Source**: Natural Language Processing (www.reddit.com)

**Paper**: [https://arxiv.org/abs/2403.03853](https://arxiv.org/abs/2403.03853)

**Abstract**:

>As Large Language Models (LLMs) continue to advance in performance, their size has escalated significantly, with current LLMs containing billions or even trillions of parameters. However, in this study, we discovered that many layers of LLMs exhibit high similarity, and some layers play a negligible role in network functionality. Based on this observation, we define a metric called **Block Influence** (**BI**) to gauge the significance of each layer in LLMs. We then propose a straightforward pruning approach: …
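The excerpt cuts off before spelling out either the metric or the pruning step. In the full paper, Block Influence measures how much a layer transforms its hidden states: BI for layer *i* is one minus the average cosine similarity between the hidden states entering and leaving that layer, so a layer whose output is nearly identical to its input scores near zero and becomes a pruning candidate. Below is a minimal sketch of that scoring plus the straightforward remove-the-lowest-layers step; the tensor shapes, layer count, and random stand-in activations are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def block_influence(hidden_in: torch.Tensor, hidden_out: torch.Tensor) -> float:
    # BI = 1 - mean cosine similarity between a layer's input and output
    # hidden states; a layer that barely transforms its input scores near 0.
    cos = F.cosine_similarity(hidden_in, hidden_out, dim=-1)  # per-token similarity
    return float(1.0 - cos.mean())

# Toy stand-in for hidden states collected during a calibration forward
# pass (the shapes and 8-layer depth here are illustrative assumptions).
num_layers, tokens, dim = 8, 128, 64
hidden = [torch.randn(tokens, dim)]
for _ in range(num_layers):
    # Each "layer" is a small perturbation of its input, mimicking the
    # high inter-layer similarity the abstract describes.
    hidden.append(hidden[-1] + 0.1 * torch.randn(tokens, dim))

# Score every layer, then prune the k layers with the lowest BI.
scores = [block_influence(hidden[i], hidden[i + 1]) for i in range(num_layers)]
k = 2
prune = sorted(range(num_layers), key=lambda i: scores[i])[:k]
print("BI per layer:", [round(s, 4) for s in scores])
print("layers to prune:", prune)
```

In practice the hidden states would come from hooking a real model's layers during a forward pass over calibration data, rather than the synthetic activations used here.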
