Feb. 5, 2024, 6:44 a.m. | Manav Singhal, Tushar Aggarwal, Abhijeet Awasthi, Nagarajan Natarajan, Aditya Kanade

cs.LG updates on arXiv.org

Existing evaluation benchmarks for language models of code (code LMs) focus almost exclusively on whether the LMs can generate functionally correct code. In real-world software engineering, developers think beyond functional correctness: they have requirements on "how" a functionality should be implemented to meet overall system design objectives such as efficiency, security, and maintainability. They would also trust code LMs more if the LMs demonstrated a robust understanding of requirements and code semantics.
We propose a new benchmark, NoFunEval, to evaluate code LMs …
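As a hypothetical illustration of the distinction at stake (a sketch, not an example drawn from NoFunEval itself), consider two functionally equivalent Python functions that differ only in a non-functional property such as efficiency:

# Hypothetical sketch (not taken from NoFunEval): both functions are
# functionally correct, but only the second meets a non-functional
# requirement such as "scale linearly with input size".

def has_duplicates_quadratic(items):
    # Functionally correct, but O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # Functionally equivalent, but O(n): remembers seen elements in a set.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    data = list(range(100_000)) + [0]  # large input containing one duplicate
    assert has_duplicates_linear(data)                   # fast on large inputs
    assert has_duplicates_quadratic(data[:1000] + [0])   # correct, but scales poorly

A correctness-only benchmark scores both implementations identically; evaluating against a non-functional requirement like efficiency distinguishes them.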

Categories: cs.AI, cs.CL, cs.LG, cs.SE
