April 17, 2024, 4:42 a.m. | Mohammed Latif Siddiq, Simantika Dristi, Joy Saha, Joanna C. S. Santos

cs.LG updates on arXiv.org

arXiv:2404.10155v1 Announce Type: cross
Abstract: Large Language Models (LLMs) are gaining popularity among software engineers. A crucial aspect of developing effective code-generation LLMs is to evaluate these models using a robust benchmark. Evaluation benchmarks with quality issues can provide a false sense of performance. In this work, we conduct the first-of-its-kind study of the quality of prompts within benchmarks used to compare the performance of different code generation models. To conduct this study, we analyzed 3,566 prompts from 9 code …

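To make the idea of "prompt quality in a benchmark" concrete, here is a minimal, hypothetical sketch of auditing benchmark prompts for surface-level issues (very short prompts, missing docstrings, leftover TODO markers, encoding noise). The heuristics, the `audit_prompt` function, and the toy prompts are assumptions for illustration only and are not the paper's methodology.

```python
# Illustrative prompt-quality audit over a code-generation benchmark.
# Checks are simple heuristics, not the checks used in the paper.

import re
from dataclasses import dataclass, field


@dataclass
class PromptIssues:
    task_id: str
    issues: list[str] = field(default_factory=list)


def audit_prompt(task_id: str, prompt: str) -> PromptIssues:
    """Flag common surface-level quality problems in one benchmark prompt."""
    report = PromptIssues(task_id)
    if len(prompt.strip()) < 30:
        report.issues.append("prompt is suspiciously short")
    if '"""' not in prompt and "'''" not in prompt:
        report.issues.append("no docstring describing the task")
    if re.search(r"\bTODO\b|\bFIXME\b", prompt):
        report.issues.append("leftover TODO/FIXME marker")
    if any(ord(ch) > 127 for ch in prompt):
        report.issues.append("non-ASCII characters (possible encoding noise)")
    return report


# Hypothetical usage: in practice `prompts` would be loaded from a benchmark
# such as HumanEval; toy examples are used here for illustration only.
prompts = {
    "Toy/0": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "Toy/1": "def f(x):\n    # TODO finish\n",
}

for tid, text in prompts.items():
    result = audit_prompt(tid, text)
    if result.issues:
        print(tid, "->", "; ".join(result.issues))
```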
