Jan. 24, 2024, 10:29 a.m. | /u/OpenMMLab


**Shanghai AI Laboratory introduces new SOTA math LLMs, open-sourced in 7B and 20B sizes.**

Github: [https://github.com/InternLM/InternLM-Math](https://github.com/InternLM/InternLM-Math)

Huggingface: [https://huggingface.co/internlm/internlm2-math-7b](https://huggingface.co/internlm/internlm2-math-7b)

Demo: [https://huggingface.co/spaces/internlm/internlm2-math-7b](https://huggingface.co/spaces/internlm/internlm2-math-7b)
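
For reference, here is a minimal loading sketch assuming the standard Hugging Face `transformers` flow; the dtype, device placement, and generation settings are illustrative assumptions, and the model card should be consulted for the recommended chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm2-math-7b"

# InternLM2 ships custom modeling code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()

prompt = "Solve the equation x^2 - 5x + 6 = 0."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```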



![InternLM2-Math results](https://preview.redd.it/4emyeapn7dec1.png?width=1224&format=png&auto=webp&s=6a79ba3e4b98f48befed91eded1cf286b9fca137)

# Features:

* **7B and 20B bilingual (Chinese and English) math LMs with better-than-ChatGPT performance.** InternLM2-Math models are continue-pretrained from InternLM2-Base on \~100B high-quality math-related tokens and then supervised fine-tuned (SFT) on \~2M bilingual math instruction examples. MinHash and exact number matching are applied to decontaminate possible test-set leakage (see the decontamination sketch after this list).
* **Add Lean as a supported language for math …
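
The decontamination step described above is a standard near-duplicate filter. Below is a minimal sketch (not the authors' pipeline) of MinHash-based test-set decontamination using the `datasketch` library; the shingle size and Jaccard threshold are assumptions, and the exact-number-match pass is only noted in a comment:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # number of hash permutations (assumed)

def minhash_of(text: str, ngram: int = 5) -> MinHash:
    """Build a MinHash over character shingles of the text."""
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(1, len(text) - ngram + 1)):
        m.update(text[i : i + ngram].encode("utf-8"))
    return m

# Index every test-set problem so training documents can be queried against it.
lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)  # 0.8 Jaccard: an assumption
test_set = ["What is 2 + 2?", "Solve x^2 - 5x + 6 = 0."]
for i, problem in enumerate(test_set):
    lsh.insert(f"test-{i}", minhash_of(problem))

def is_contaminated(train_doc: str) -> bool:
    # Flag a training document that near-duplicates any test item; the post
    # also mentions exact number matching as a second filter (omitted here).
    return bool(lsh.query(minhash_of(train_doc)))
```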

