Nov. 22, 2023, 5:07 p.m. | /u/RealAGIFan

Machine Learning www.reddit.com

I've been pondering something recently. Have you noticed that scoring over 70% pass@1 on the well-known HumanEval benchmark no longer makes major headlines? Models like WizardCoderV2, Phind, Deepseek, and XwinCoder have all surpassed the 67% reported in the GPT-4 technical report, and some are closely tailing the roughly 82% of the GPT-4 API. So, are these models really performing that well?
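
As a quick refresher on the metric itself: pass@k comes from the original HumanEval (Codex) paper. You sample n completions per problem, count the c that pass the unit tests, and estimate 1 - C(n-c, k)/C(n, k), averaged over problems. A minimal sketch of that estimator (my own function name, numpy assumed):

```python
import numpy as np

def estimate_pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for a single problem.

    n = total samples generated for the problem,
    c = number of those samples that passed the unit tests.
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# With one sample per problem, pass@1 is just the plain pass rate:
# estimate_pass_at_k(1, 1, 1) == 1.0, estimate_pass_at_k(1, 0, 1) == 0.0
```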
Here's something intriguing: I found this image in the latest release of XwinCoder’s repo: [Xwin-LM/Xwin-Coder at main · Xwin-LM/Xwin-LM (github.com)](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder)



[Results …

