Feb. 12, 2024, 5:46 a.m. | Huaiyuan Ying Shuo Zhang Linyang Li Zhejian Zhou Yunfan Shao Zhaoye Fei Yichuan Ma Jiawei Hong

cs.CL updates on arXiv.org

The math abilities of large language models reflect their abstract reasoning ability. In this paper, we introduce and open-source our math reasoning LLMs InternLM-Math, which are continually pre-trained from InternLM2. We unify chain-of-thought reasoning, reward modeling, formal reasoning, data augmentation, and code interpretation in a seq2seq format and supervise our model to be a versatile math reasoner, verifier, prover, and augmenter. These abilities can be used to develop the next generation of math LLMs or for self-iteration. InternLM-Math obtains open-sourced state-of-the-art …
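The abstract describes casting several distinct abilities (reasoning, verifying, proving, augmenting) into one seq2seq format. A minimal sketch of what such task unification could look like is below; the task names and prompt templates are illustrative assumptions, not InternLM-Math's actual format.

```python
# Hypothetical sketch: unify several math abilities into one seq2seq
# interface, where every task is rendered as a flat (prompt -> completion)
# pair for a single model. Task tags and templates are invented for
# illustration and do not reflect InternLM-Math's real training format.

def to_seq2seq(task: str, problem: str, extra: str = "") -> str:
    """Render a task as one flat prompt string for a seq2seq math model."""
    templates = {
        # chain-of-thought reasoning
        "cot": "Solve step by step:\n{p}",
        # reward modeling / verification of a candidate solution
        "verify": "Is the following solution correct?\nProblem: {p}\nSolution: {e}",
        # formal reasoning (e.g. translating to a proof assistant)
        "prove": "Translate to a formal proof:\n{p}",
        # data augmentation (rewriting problems)
        "augment": "Rewrite this problem with different numbers:\n{p}",
    }
    return templates[task].format(p=problem, e=extra)

prompt = to_seq2seq("verify", "2 + 2 = ?", "4")
print(prompt)
```

Because every ability shares one input/output format, a single supervised model can serve as reasoner, verifier, prover, and augmenter depending on the prompt.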
