Redundancy and Concept Analysis for Code-trained Language Models
Feb. 19, 2024, 5:43 a.m. | Arushi Sharma, Zefu Hu, Christopher Quinn, Ali Jannesari
cs.LG updates on arXiv.org
Abstract: Code-trained language models have proven to be highly effective for various code intelligence tasks. However, they can be challenging to train and deploy for many software engineering applications due to computational bottlenecks and memory constraints. Implementing effective strategies to address these issues requires a better understanding of these 'black box' models. In this paper, we perform the first neuron-level analysis for source code models to identify 'important' neurons within latent representations. We achieve this by …
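The abstract is truncated before the method is described, so the following is only a minimal sketch of what a generic neuron-level redundancy analysis can look like, not the authors' technique. It assumes a HuggingFace code model (microsoft/codebert-base is used as a stand-in), collects per-token activations over a few code snippets, and flags hidden dimensions ("neurons") whose activations are nearly perfectly correlated with another neuron's, a simple redundancy signal. The 0.95 threshold and the correlation heuristic are illustrative assumptions.

```python
# Hedged sketch, NOT the paper's method: a generic neuron-level redundancy
# check via pairwise activation correlation in a code-trained model.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "microsoft/codebert-base"  # assumed stand-in for a code-trained model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

snippets = [
    "def add(a, b): return a + b",
    "for i in range(10): print(i)",
    "class Stack:\n    def __init__(self): self.items = []",
]

acts = []
with torch.no_grad():
    for code in snippets:
        inputs = tok(code, return_tensors="pt", truncation=True)
        hidden = model(**inputs).hidden_states[-1]  # (1, seq_len, d_model)
        acts.append(hidden.squeeze(0))              # per-token activations

X = torch.cat(acts, dim=0)                 # (total_tokens, d_model)
X = (X - X.mean(0)) / (X.std(0) + 1e-8)    # standardize each neuron
corr = (X.T @ X) / X.shape[0]              # (d_model, d_model) neuron correlations

# Neurons whose activations near-duplicate another neuron's are candidates
# for pruning or merging under a redundancy analysis.
corr.fill_diagonal_(0.0)
redundant = (corr.abs() > 0.95).any(dim=1)  # 0.95 is an illustrative cutoff
print(f"{int(redundant.sum())} of {X.shape[1]} neurons highly correlated with another")
```

In practice one would run this over a much larger corpus of code, and the paper's notion of 'important' neurons may rest on entirely different criteria (e.g., probing or concept attribution) than raw correlation.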