May 7, 2024, 4:50 a.m. | Guoxin Chen, Minpeng Liao, Chengxi Li, Kai Fan

cs.CL updates on arXiv.org

arXiv:2405.03553v1 Announce Type: new
Abstract: Recent advancements in large language models (LLMs) have substantially enhanced their mathematical reasoning abilities. However, these models still struggle with complex problems that require multiple reasoning steps, frequently leading to logical or numerical errors. While numerical mistakes can largely be addressed by integrating a code interpreter, identifying logical errors within intermediate steps is more challenging. Moreover, manually annotating these steps for training is not only expensive but also demands specialized expertise. In this study, we …
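The abstract's claim that a code interpreter can largely handle numerical mistakes is easy to picture with a small sketch: execute the arithmetic the model wrote in an intermediate step and compare it against the value the model claimed. The snippet below is only an illustration of that idea, not the authors' pipeline; the function names (safe_eval, check_numerical_step) and the tolerance parameter are assumptions for this example.

```python
# Illustrative sketch only: verify a model's numerical claim by executing the
# corresponding expression with a code interpreter and comparing results.
import ast
import operator

# Supported operators for plain arithmetic expressions in a reasoning step.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a simple arithmetic expression without calling eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return _eval(ast.parse(expr, mode="eval"))

def check_numerical_step(expression: str, claimed_result: float,
                         tol: float = 1e-6) -> bool:
    """Return True if executing the expression reproduces the claimed value."""
    return abs(safe_eval(expression) - claimed_result) <= tol

# Example: the model writes "3 * (17 + 5) = 66" as an intermediate step.
print(check_numerical_step("3 * (17 + 5)", 66))  # True  -> numerically consistent
print(check_numerical_step("3 * (17 + 5)", 56))  # False -> flag the step
```

Note that a check like this only catches arithmetic slips; as the abstract points out, identifying logical errors in intermediate steps is harder and typically needs some form of process supervision rather than expression re-execution.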

