Jan. 31, 2024, 4:41 p.m. | Haoxiong Liu, Yifan Zhang, Yifan Luo, Andrew Chi-Chih Yao

cs.CL updates on arXiv.org

Despite the advancements in large language models (LLMs) for mathematical
reasoning, solving competition-level math problems remains a significant
challenge, especially for open-source LLMs without external tools. We introduce
the MMIQC dataset, comprising a mixture of processed web data and synthetic
question-response pairs, aimed at enhancing the mathematical reasoning
capabilities of base language models. Models fine-tuned on MMIQC consistently
surpass their counterparts on the MATH benchmark across various model sizes.
Notably, Qwen-72B-MMIQC achieves 45.0% accuracy, exceeding the
previous …
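To make the described setup concrete, below is a minimal sketch of the kind of supervised fine-tuning the abstract refers to: training a base causal language model on question-response pairs. The dataset file, its field names ("question", "response"), and the base model id are illustrative placeholders, not claims about the paper's actual pipeline or the MMIQC schema.

```python
# Minimal sketch: supervised fine-tuning of a base causal LM on
# question-response pairs. All names below are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # placeholder; the paper fine-tunes much larger open LLMs
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical local dump of question-response pairs, one JSON object per line.
pairs = load_dataset("json", data_files="mmiqc_pairs.jsonl", split="train")

def encode(example):
    # One training string per pair: the question followed by its worked response.
    text = example["question"] + "\n" + example["response"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

train_ds = pairs.map(encode, remove_columns=pairs.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM labels, pads batches

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mmiqc-sft",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=train_ds,
    data_collator=collator,
)
trainer.train()
```

The fine-tuned model would then be evaluated on held-out competition-style problems (e.g. the MATH benchmark mentioned above); the evaluation harness itself is out of scope for this sketch.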
