NuminaMath is a series of language models trained to solve math problems using tool-integrated reasoning (TIR). NuminaMath 7B TIR won the first progress prize of the AI Mathematical Olympiad (AIMO), with a score of 29/50 on the public and private test sets.
Stage 1: fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate reasoning.
Stage 2: fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here we followed Microsoft's ToRA paper and prompted GPT-4 to produce solutions in the ToRA format with code execution feedback. Fine-tuning on this data produces a reasoning agent that can solve mathematical problems via a mix of natural language reasoning and use of the Python REPL to compute intermediate results.
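To make the interleaved format concrete, here is a hypothetical, simplified example of what a TIR-style solution looks like (illustrative only; the exact training template is not reproduced here):

Suppose the roots of $x^{2}+kx+36$ are $p$ and $q$; then $pq = 36$ and $p + q = -k$, so each factor pair with $p \neq q$ yields one value of $k$.
```python
pairs = [(d, 36 // d) for d in range(1, 37) if 36 % d == 0]
distinct = [(p, q) for p, q in pairs if p != q]
# both roots may be negative, which flips the sign of k
ks = {-(p + q) for p, q in distinct} | {p + q for p, q in distinct}
print(len(ks))
```
```output
8
```
There are $\boxed{8}$ such values of $k$.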
Model description
Model type:
A 7B parameter math LLM fine-tuned in two stages of supervised fine-tuning, first on a dataset with math problem-solution pairs and then on a synthetic dataset with examples of multi-step generations using tool-integrated reasoning.
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
import re
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="AI-MO/NuminaMath-7B-TIR", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
gen_config = {
    "max_new_tokens": 1024,
    "do_sample": False,
    "stop_strings": ["```output"],  # Generate until the Python code block is complete
    "tokenizer": pipe.tokenizer,    # Required when using stop_strings
}
outputs = pipe(prompt, **gen_config)
text = outputs[0]["generated_text"]
print(text)
# WARNING: This code will execute the Python code in the string. We show this for educational purposes only.
# Please refer to our full pipeline for a safer way to execute code.
python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
exec(python_code)
The above executes a single step of Python code - for more complex problems, you will want to run the logic for several steps to obtain the final solution.
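A minimal sketch of that multi-step loop is shown below. The helper name `run_tir` and the `generate` callable are assumptions for illustration: `generate` stands in for any wrapper around the pipeline call above (configured to stop at the "```output" marker), and a production version would sandbox the `exec()` call rather than run model-generated code directly.

```python
import contextlib
import io
import re


def run_tir(generate, prompt, max_steps=8):
    """Alternate between model generation and Python execution until the
    model stops emitting code blocks, then return the full transcript.

    `generate` is any callable mapping the transcript so far to the model's
    next completion. This is a minimal sketch; in practice you should
    sandbox the exec() call and add timeouts.
    """
    text = prompt
    for _ in range(max_steps):
        completion = generate(text)
        text += completion
        code_blocks = re.findall(r"```python(.*?)```", completion, re.DOTALL)
        if not code_blocks:
            # No code block means the model has written its final answer.
            return text
        # WARNING: exec() runs model-generated code; do not use unsandboxed.
        stdout = io.StringIO()
        with contextlib.redirect_stdout(stdout):
            exec(code_blocks[-1], {})
        # Append the execution result after the "```output" marker so the
        # model can condition on it in the next step.
        text += f"\n{stdout.getvalue().strip()}\n```\n"
    return text
```

In the real setup, `generate` would call `pipe(text, **gen_config)` and return only the newly generated suffix; the returned transcript then contains the full interleaved rationale, code, and outputs.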
Bias, Risks, and Limitations
NuminaMath 7B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of AMC 12, but it often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
learning_rate: 2e-05
train_batch_size: 4
eval_batch_size: 8
seed: 42
distributed_type: multi-GPU
num_devices: 8
total_train_batch_size: 32
total_eval_batch_size: 64
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
lr_scheduler_type: cosine
lr_scheduler_warmup_ratio: 0.1
num_epochs: 4.0
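For reference, the list above maps onto 🤗 Transformers `TrainingArguments` roughly as follows. This is a hypothetical reconstruction, not the actual training script: `output_dir` is a placeholder, and `bf16=True` is an assumption based on the bfloat16 inference example above.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="numina-math-7b-tir",  # placeholder path (assumption)
    learning_rate=2e-5,
    per_device_train_batch_size=4,    # "train_batch_size" above
    per_device_eval_batch_size=8,     # "eval_batch_size" above
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=4.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                        # assumption, matching the inference dtype
)
# With 8 devices, the effective batch sizes are 4 * 8 = 32 (train)
# and 8 * 8 = 64 (eval), matching the totals listed above.
```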
Framework versions
Transformers 4.40.1
Pytorch 2.3.1
Datasets 2.18.0
Tokenizers 0.19.1
Citation
If you find NuminaMath 7B TIR is useful in your work, please cite it with:
@misc{numina_math_7b,
author = {Edward Beeching and Shengyi Costa Huang and Albert Jiang and Jia Li and Benjamin Lipkin and Zihan Qina and Kashif Rasul and Ziju Shen and Roman Soletskyi and Lewis Tunstall},
title = {NuminaMath 7B TIR},
year = {2024},
publisher = {Numina & Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-7B-TIR}}
}