April 5, 2024, 4:47 a.m. | William Macke, Michael Doyle

cs.CL updates on arXiv.org

arXiv:2404.03114v1 Announce Type: cross
Abstract: Large Language Models (LLMs) have demonstrated impressive abilities in recent years with regard to code generation and understanding. However, little work has investigated how documentation and other code properties affect an LLM's ability to understand and generate code or documentation. We present an empirical analysis of how underlying properties of code or documentation can affect an LLM's capabilities. We show that providing an LLM with "incorrect" documentation can greatly hinder code understanding, while incomplete or …
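To make the abstract's setup concrete, here is a minimal sketch (not the authors' actual protocol) of how one might probe the effect of incorrect documentation: give a model the same function twice, once with a correct docstring and once with an intentionally mismatched one, and compare its answers about the function's behavior. The query_llm function is a hypothetical stand-in for whatever chat-completion client you use.

    # Hedged sketch: compare an LLM's code-understanding answers under
    # correct vs. deliberately incorrect documentation.

    def query_llm(prompt: str) -> str:
        """Hypothetical LLM call; replace with a real client of your choice."""
        raise NotImplementedError("wire up your model here")

    FUNCTION_BODY = '''\
    def f(xs):
        return sorted(set(xs), reverse=True)
    '''

    CORRECT_DOC = "Return the unique elements of xs in descending order."
    INCORRECT_DOC = "Return the elements of xs in ascending order, keeping duplicates."

    def build_prompt(doc: str) -> str:
        # Pair the (possibly wrong) documentation with the unchanged code,
        # then ask a behavioral question whose answer the docstring may mislead.
        return (
            "Given this function and its documentation, what does "
            "f([3, 1, 3]) return?\n\n"
            f'"""{doc}"""\n{FUNCTION_BODY}'
        )

    if __name__ == "__main__":
        for label, doc in [("correct", CORRECT_DOC), ("incorrect", INCORRECT_DOC)]:
            print(label, "->", build_prompt(doc))
            # With a real client: print(label, "->", query_llm(build_prompt(doc)))

Scoring how often the model's answer matches the code's true output (here [3, 1] for the example input) rather than the docstring's claim is one simple way to quantify the "incorrect documentation hinders understanding" effect the abstract reports.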

