unsloth / Qwen2.5-32B-bnb-4bit

huggingface.co
Total runs: 65.2K
24-hour runs: 0
7-day runs: 43.0K
30-day runs: 43.4K
Last Updated: September 24, 2024
text-generation

Introduction to Qwen2.5-32B-bnb-4bit

Model Details of Qwen2.5-32B-bnb-4bit

Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Qwen 2.5 (all model sizes), and also a Qwen 2.5 conversational-style notebook.

✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF or vLLM, or uploaded to Hugging Face. A minimal Unsloth loading sketch follows the table below.

Unsloth supports    Free Notebooks       Performance    Memory use
Llama-3.1 8b        ▶️ Start on Colab    2.4x faster    58% less
Phi-3.5 (mini)      ▶️ Start on Colab    2x faster      50% less
Gemma-2 9b          ▶️ Start on Colab    2.4x faster    58% less
Mistral 7b          ▶️ Start on Colab    2.2x faster    62% less
TinyLlama           ▶️ Start on Colab    3.9x faster    74% less
DPO - Zephyr        ▶️ Start on Colab    1.9x faster    19% less
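
For this repo specifically, preparing the checkpoint for LoRA finetuning with Unsloth might look like the sketch below; the r, lora_alpha, and max_seq_length values are illustrative assumptions, not settings from this card.

```python
# A hedged sketch of loading this 4-bit checkpoint for LoRA finetuning
# with Unsloth; hyperparameter values here are illustrative choices.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # the checkpoint is already bitsandbytes 4-bit
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```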

Qwen2.5-32B

Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

  • Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
  • Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
  • Long-context support up to 128K tokens, with generation of up to 8K tokens.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the base 32B Qwen2.5 model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 32.5B
  • Number of Parameters (Non-Embedding): 31.0B
  • Number of Layers: 64
  • Number of Attention Heads (GQA): 40 for Q and 8 for KV (see the memory sketch after this list)
  • Context Length: 131,072 tokens
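
One practical consequence of the GQA layout above: the KV cache only stores the 8 KV heads, not all 40 query heads. A back-of-envelope sketch (not from the original card; head_dim = 128 is an assumption based on the published Qwen2.5-32B config):

```python
# Rough fp16 KV-cache size at full context; head_dim = 128 is an assumption.
layers, kv_heads, head_dim = 64, 8, 128
seq_len, bytes_per_val = 131_072, 2            # full context, fp16 values

# K and V caches: 2 * layers * kv_heads * head_dim values per token.
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_val
print(f"{kv_bytes / 2**30:.0f} GiB")           # 32 GiB

# With full multi-head attention (40 KV heads) this would be 5x larger.
print(f"{kv_bytes * 40 // 8 / 2**30:.0f} GiB") # 160 GiB
```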

We do not recommend using base language models for conversations. Instead, you can apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model.

For more details, please refer to our blog, GitHub, and Documentation.

Requirements

The code for Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.37.0, you will encounter the following error:

KeyError: 'qwen2'
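
With a recent transformers, loading this pre-quantized checkpoint is a standard from_pretrained call. The sketch below is illustrative (not from the original card) and assumes accelerate and bitsandbytes are installed; the prompt and generation settings are placeholders.

```python
# Minimal loading sketch, assuming transformers>=4.37.0.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-32B-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit quantization config ships with the checkpoint, so no extra
# BitsAndBytesConfig is needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# This is a base model: plain text completion, not chat.
inputs = tokenizer("Qwen2.5 is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
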
Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.

For requirements on GPU memory and the respective throughput, see the results here.

Citation

If you find our work helpful, feel free to cite us.

@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title = {Qwen2 Technical Report},
    author = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal = {arXiv preprint arXiv:2407.10671},
    year = {2024}
}

Runs of unsloth Qwen2.5-32B-bnb-4bit on huggingface.co

Total runs: 65.2K
24-hour runs: 0
3-day runs: 39.5K
7-day runs: 43.0K
30-day runs: 43.4K

More Information About the Qwen2.5-32B-bnb-4bit Model on huggingface.co

Qwen2.5-32B-bnb-4bit is released under the Apache 2.0 license. For the full license text, visit:

https://choosealicense.com/licenses/apache-2.0

Qwen2.5-32B-bnb-4bit huggingface.co

Qwen2.5-32B-bnb-4bit is an AI model hosted on huggingface.co, where it can be used directly. huggingface.co supports a free trial of the Qwen2.5-32B-bnb-4bit model and also provides paid usage. The model can be called through an API from Node.js, Python, or plain HTTP.
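
As a sketch of the Python API route, one might use the huggingface_hub client; whether a hosted inference endpoint actually serves this exact repo is an assumption here, not something stated on the original page.

```python
# Illustrative API call; hosted availability of this repo is assumed.
from huggingface_hub import InferenceClient

client = InferenceClient(model="unsloth/Qwen2.5-32B-bnb-4bit")
print(client.text_generation("Qwen2.5 is", max_new_tokens=32))
```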

Qwen2.5-32B-bnb-4bit huggingface.co Url

https://huggingface.co/unsloth/Qwen2.5-32B-bnb-4bit

unsloth Qwen2.5-32B-bnb-4bit online free

huggingface.co provides an online trial and API platform for Qwen2.5-32B-bnb-4bit, integrating its model effects and API services. You can try Qwen2.5-32B-bnb-4bit online for free by clicking the link below.

unsloth Qwen2.5-32B-bnb-4bit online free url in huggingface.co:

https://huggingface.co/unsloth/Qwen2.5-32B-bnb-4bit

Qwen2.5-32B-bnb-4bit install

Qwen2.5-32B-bnb-4bit is an open-source model whose code is available on GitHub, and any user can find it there to install for free. At the same time, huggingface.co hosts the model, so users can debug and try the installed model directly on huggingface.co; free access through the API is supported as well.

Qwen2.5-32B-bnb-4bit install url in huggingface.co:

https://huggingface.co/unsloth/Qwen2.5-32B-bnb-4bit

Provider of Qwen2.5-32B-bnb-4bit huggingface.co

unsloth
