cuuupid / cogvideox-5b

Generate high quality videos from a prompt

replicate.com
Total runs: 1.7K
24-hour runs: 0
7-day runs: 100
30-day runs: 100
GitHub
Model last updated: August 27, 2024

Model Details of cogvideox-5b

Readme

CogVideoX is an open-source version of the video generation model originating from QingYing. The table below lists the video generation models we currently offer, along with their basic information.

| Model Name | CogVideoX-2B | CogVideoX-5B |
|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. |
| Inference Precision | FP16* (recommended), BF16, FP32, FP8*, INT8; no support for INT4 | BF16 (recommended), FP16, FP32, FP8*, INT8; no support for INT4 |
| Single GPU VRAM Consumption | FP16: 18 GB using SAT / 12.5 GB* using diffusers; INT8: 7.8 GB* using diffusers | BF16: 26 GB using SAT / 20.7 GB* using diffusers; INT8: 11.4 GB* using diffusers |
| Multi-GPU Inference VRAM Consumption | FP16: 10 GB* using diffusers | BF16: 15 GB* using diffusers |
| Inference Speed (Step = 50, FP/BF16) | Single A100: ~90 seconds; single H100: ~45 seconds | Single A100: ~180 seconds; single H100: ~90 seconds |
| Fine-tuning Precision | FP16 | BF16 |
| Fine-tuning VRAM Consumption (per GPU) | 47 GB (bs=1, LoRA); 61 GB (bs=2, LoRA); 62 GB (bs=1, SFT) | 63 GB (bs=1, LoRA); 80 GB (bs=2, LoRA); 75 GB (bs=1, SFT) |
| Prompt Language | English* | English* |
| Prompt Length Limit | 226 tokens | 226 tokens |
| Video Length | 6 seconds | 6 seconds |
| Frame Rate | 8 frames per second | 8 frames per second |
| Video Resolution | 720 x 480; no support for other resolutions (including fine-tuning) | 720 x 480; no support for other resolutions (including fine-tuning) |
| Positional Encoding | 3d_sincos_pos_embed | 3d_rope_pos_embed |
| Download Page (Diffusers) | 🤗 HuggingFace / 🤖 ModelScope / 🟣 WiseModel | 🤗 HuggingFace / 🤖 ModelScope / 🟣 WiseModel |
| Download Page (SAT) | SAT | SAT |

Data Explanation

  • When testing with the diffusers library, the enable_model_cpu_offload() option and the pipe.vae.enable_tiling() optimization were enabled (see the inference sketch after this list). This configuration has not been tested for actual VRAM/memory usage on devices other than the NVIDIA A100/H100; in general it can be adapted to all devices with the NVIDIA Ampere architecture or newer. If the optimization is disabled, VRAM usage increases significantly, with peak VRAM roughly 3 times the value in the table.
  • When performing multi-GPU inference, the enable_model_cpu_offload() optimization needs to be disabled.
  • Using an INT8 model reduces inference speed. This trade-off accommodates GPUs with lower VRAM, allowing inference to run properly with minimal loss in video quality, at the cost of a significant slowdown.
  • The 2B model is trained in FP16 precision, while the 5B model is trained in BF16 precision. It is recommended to run inference in the precision used for training.
  • FP8 precision must be used on NVIDIA H100 or newer devices and requires installing the torch, torchao, diffusers, and accelerate Python packages from source. CUDA 12.4 is recommended.
  • Inference speed testing also used the VRAM optimization scheme above; without it, inference is about 10% faster. Only the diffusers versions of the models support quantization.
  • The model only supports English input; prompts in other languages can be translated into English during prompt refinement with a large language model.
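
The snippet below is a minimal diffusers inference sketch with the two VRAM optimizations above enabled. It assumes the THUDM/CogVideoX-5b weights on Hugging Face and the sampling settings from the upstream example (50 steps, 49 frames at 8 fps); the prompt and seed are placeholders, and exact argument defaults may differ across diffusers versions.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load the 5B model in BF16, the precision it was trained with.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)

# The VRAM figures in the table above assume both of these optimizations are enabled.
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

video = pipe(
    prompt="A panda playing guitar by a stream in a bamboo forest",
    num_inference_steps=50,  # matches the "Step = 50" timing row
    num_frames=49,           # roughly 6 seconds at 8 frames per second
    guidance_scale=6.0,
    generator=torch.Generator().manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```
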
Citation

🌟 If you find our work helpful, please leave us a star and cite our paper.

@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
@article{hong2022cogvideo,
  title={CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers},
  author={Hong, Wenyi and Ding, Ming and Zheng, Wendi and Liu, Xinghan and Tang, Jie},
  journal={arXiv preprint arXiv:2205.15868},
  year={2022}
}

Runs of cuuupid cogvideox-5b on replicate.com

Total runs: 1.7K
24-hour runs: 0
3-day runs: 100
7-day runs: 100
30-day runs: 100

More Information About cogvideox-5b replicate.com Model

cogvideox-5b replicate.com

cogvideox-5b is an AI model hosted on replicate.com that generates high-quality videos from a prompt and can be used instantly through the cuuupid/cogvideox-5b listing. replicate.com offers a free trial of the cogvideox-5b model as well as paid usage, and the model can be called through an API from Node.js, Python, or plain HTTP, as sketched below.
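
For illustration, here is a minimal sketch of calling this model through Replicate's Python client. The input field name (prompt) and the shape of the output are assumptions; check the model page for the exact schema, and you may need to pin a specific model version.

```python
import replicate  # requires the REPLICATE_API_TOKEN environment variable to be set

# Hypothetical input schema; see https://replicate.com/cuuupid/cogvideox-5b for the real fields.
output = replicate.run(
    "cuuupid/cogvideox-5b",
    input={"prompt": "A panda playing guitar by a stream in a bamboo forest"},
)
print(output)  # typically a URL (or list of URLs) pointing at the generated video
```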

cogvideox-5b replicate.com Url

https://replicate.com/cuuupid/cogvideox-5b

cuuupid cogvideox-5b online free

replicate.com is an online platform for trying models and calling them through an API. It hosts cogvideox-5b, including an API service, and provides a free online trial; you can try cogvideox-5b online for free by clicking the link below.

cuuupid cogvideox-5b online free url in replicate.com:

https://replicate.com/cuuupid/cogvideox-5b

cogvideox-5b install

cogvideox-5b is an open-source model whose code is available on GitHub, and any user can install it from there for free. At the same time, replicate.com hosts a ready-to-run deployment of cogvideox-5b, so users can debug and try the installed model directly on replicate.com, or call it through the API, without installing anything themselves.

cogvideox-5b install url in replicate.com:

https://replicate.com/cuuupid/cogvideox-5b

cogvideox-5b install url in github:

https://github.com/cuuupid/cog-cogvideox

Provider of cogvideox-5b: replicate.com

Other API from cuuupid

All of the following models from cuuupid are also hosted on replicate.com:

  • Best-in-class clothing virtual try on in the wild (non-commercial use only)
    Total runs: 581.3K | Run Growth: 65.2K | Growth Rate: 11.26% | Updated: August 24, 2024
  • Embed text with Qwen2-7b-Instruct
    Total runs: 337.6K | Run Growth: 155.8K | Growth Rate: 46.48% | Updated: August 6, 2024
  • GLM-4V is a multimodal model released by Tsinghua University that is competitive with GPT-4o and establishes a new SOTA on several benchmarks, including OCR.
    Total runs: 76.9K | Run Growth: 2.9K | Growth Rate: 3.77% | Updated: July 2, 2024
  • Microsoft's tool to convert Office documents, PDFs, images, audio, and more to LLM-ready markdown.
    Total runs: 3.8K | Run Growth: 3.1K | Growth Rate: 85.83% | Updated: January 17, 2025
  • Convert scanned or electronic documents to markdown, very very very fast
    Total runs: 2.3K | Run Growth: 0 | Growth Rate: 0.00% | Updated: December 7, 2023
  • Flux finetuned for black and white line art.
    Total runs: 1.4K | Run Growth: 100 | Growth Rate: 7.14% | Updated: August 23, 2024
  • SDXL finetuned on line art
    Total runs: 1.1K | Run Growth: 0 | Growth Rate: 0.00% | Updated: June 5, 2024
  • Translate audio while keeping the original style, pronunciation and tone of your original audio.
    Total runs: 767 | Run Growth: 70 | Growth Rate: 9.13% | Updated: December 6, 2023
  • SOTA open-source model for chatting with videos and the newest model in the Qwen family
    Total runs: 448 | Run Growth: 21 | Growth Rate: 4.69% | Updated: August 31, 2024
  • F5-TTS, a new state-of-the-art in open source voice cloning
    Total runs: 171 | Run Growth: 0 | Growth Rate: 0.00% | Updated: October 14, 2024
  • Zonos-v0.1 beta, a SOTA text-to-speech Transformer model with extraordinary expressive range, built by Zyphra.
    Total runs: 164 | Run Growth: 93 | Growth Rate: 56.71% | Updated: February 11, 2025
  • Finetuned E5 embeddings for instruct based on Mistral.
    Total runs: 131 | Run Growth: 0 | Growth Rate: 0.00% | Updated: February 3, 2024
  • MiniCPM LLama3-V 2.5, a new SOTA open-source VLM that surpasses GPT-4V-1106 and Phi-128k on a number of benchmarks.
    Total runs: 127 | Run Growth: 0 | Growth Rate: 0.00% | Updated: June 4, 2024
  • Llama-3-8B finetuned with ReFT to hyperfocus on New Jersey, the Garden State, the best state, the only state!
    Total runs: 105 | Run Growth: 0 | Growth Rate: 0.00% | Updated: June 3, 2024
  • make meow emojis!
    Total runs: 68 | Run Growth: 0 | Growth Rate: 0.00% | Updated: January 11, 2024
  • An example using Garden State Llama to ReFT on the Golden Gate bridge.
    Total runs: 30 | Run Growth: 0 | Growth Rate: 0.00% | Updated: June 3, 2024