May 9, 2024, 10:48 a.m. | /u/EternalBlueFriday

r/MachineLearning | www.reddit.com

**Paper**: [https://arxiv.org/abs/2405.03553](https://arxiv.org/abs/2405.03553)

**Code**: [https://github.com/MARIO-Math-Reasoning/Super\_MARIO](https://github.com/MARIO-Math-Reasoning/Super_MARIO)

**Model**: [AlphaMath-7B](https://huggingface.co/MARIO-Math-Reasoning/AlaphaMath-7B)

**Abstract**:

>Recent advancements in large language models (LLMs) have substantially enhanced their mathematical reasoning abilities. However, these models still struggle with complex problems that require multiple reasoning steps, frequently leading to logical or numerical errors. While numerical mistakes can largely be addressed by integrating a code interpreter, identifying logical errors within intermediate steps is more challenging. Moreover, manually annotating these steps for training is not only expensive but also demands specialized expertise. In this …
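
The abstract's claim that "numerical mistakes can largely be addressed by integrating a code interpreter" amounts to having the interpreter, not the LLM, do the arithmetic. Below is a minimal sketch of that idea, not the paper's actual tool-use pipeline: `MODEL_STEP` stands in for a hypothetical expression emitted by the model during one reasoning step, and `safe_eval` is an illustrative restricted evaluator.

```python
import ast
import operator

# Hypothetical example of an arithmetic expression an LLM might emit
# for a single numerical reasoning step.
MODEL_STEP = "(3 * 17 + 5) / 4"

# Whitelist of allowed arithmetic operations, so untrusted model output
# cannot execute arbitrary code.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a purely arithmetic expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed construct in model output")
    return walk(ast.parse(expr, mode="eval"))

if __name__ == "__main__":
    # The interpreter, not the model, produces the number:
    # (3 * 17 + 5) / 4 = 56 / 4 = 14.0
    print(safe_eval(MODEL_STEP))
```

Restricting evaluation to an AST whitelist is one common way to execute untrusted model output without handing it full `exec` access; as the abstract notes, this kind of check catches numerical slips but says nothing about whether an intermediate step is logically sound.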
