March 5, 2024, 2:52 p.m. | Wenpin Hou, Zhicheng Ji

cs.CL updates on arXiv.org

arXiv:2403.00894v1 Announce Type: cross
Abstract: We systematically evaluated the performance of seven large language models in generating programming code using various prompt strategies, programming languages, and task difficulties. GPT-4 substantially outperforms other large language models, including Gemini Ultra and Claude 2. The coding performance of GPT-4 varies considerably with different prompt strategies. In most LeetCode and GeeksforGeeks coding contests evaluated in this study, GPT-4 employing the optimal prompt strategy outperforms 85 percent of human participants. Additionally, GPT-4 demonstrates strong capabilities …
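The evaluation described above (models generating code for LeetCode- and GeeksforGeeks-style tasks, scored on whether solutions pass) can be sketched as a minimal pass/fail harness. This is an illustrative assumption, not the paper's actual pipeline: the function names, entry point, and test-case format here are all hypothetical.

```python
# Minimal sketch of a pass/fail harness for model-generated code.
# All names are illustrative; the abstract does not specify the paper's pipeline.

def passes_tests(generated_code: str,
                 test_cases: list[tuple[tuple, object]],
                 entry_point: str = "solution") -> bool:
    """Execute generated code in an isolated namespace, then check each
    (args, expected) pair against the named entry-point function."""
    namespace: dict = {}
    try:
        exec(generated_code, namespace)          # run the model's code
        func = namespace[entry_point]            # look up the required function
        return all(func(*args) == expected for args, expected in test_cases)
    except Exception:
        return False                             # any error counts as a failed submission

# Hypothetical model output for a "sum of two numbers" task:
model_output = "def solution(a, b):\n    return a + b"
print(passes_tests(model_output, [((1, 2), 3), ((-1, 1), 0)]))  # True
```

A per-model score is then simply the fraction of tasks whose generated solution passes all test cases, which is how prompt strategies and models can be compared head to head.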

