VGen is an open-source video synthesis codebase developed by the Tongyi Lab of Alibaba Group, featuring state-of-the-art video generative models. The repository includes implementations of methods such as I2VGen-XL, ModelScope T2V, HiGen, DreamVideo, and VideoLCM.
VGen can produce high-quality videos from input text, images, desired motion, desired subjects, and even provided feedback signals. It also offers a variety of commonly used video generation tools such as visualization, sampling, training, inference, joint training on images and videos, acceleration, and more.
🔥News!!!
[2023.12] We release the high-efficiency video generation method VideoLCM.
[2023.12] We release the code and models of I2VGen-XL and the ModelScope T2V.
[2023.12] We release the T2V method HiGen and the customized T2V method DreamVideo.
We have provided a demo dataset that includes images and videos, along with their lists, in data. Please note that the demo images used here are for testing purposes and were not included in the training.
Clone the codebase
git clone https://github.com/damo-vilab/i2vgen-xl.git
cd i2vgen-xl
Getting Started with VGen
(1) Train your text-to-video model
Enabling distributed training is as easy as executing the following command.
python train_net.py --cfg configs/t2v_train.yaml
In the t2v_train.yaml configuration file, you can specify the data, adjust the video-to-image ratio using frame_lens, validate your ideas with different diffusion settings, and so on.
Before training, you can download any of our open-source models for initialization. Our codebase supports custom initialization and grad_scale settings, all of which are included in the Pretrain item in the yaml file.
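As a quick sanity check before launching a run, you can load the configuration and inspect the fields mentioned above. The snippet below is only a sketch and is not part of the codebase; the exact key names and structure of t2v_train.yaml are assumptions, so adapt it to whatever the file actually contains.
import yaml  # requires PyYAML (pip install pyyaml)

# Load the training config and peek at the fields discussed above.
# The key names "frame_lens" and "Pretrain" follow the text; treat them as assumptions.
with open("configs/t2v_train.yaml") as f:
    cfg = yaml.safe_load(f)

print("video-to-image ratio (frame_lens):", cfg.get("frame_lens"))
print("initialization settings (Pretrain):", cfg.get("Pretrain"))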
During the training, you can view the saved models and intermediate inference results in the
workspace/experiments/t2v_train
directory.
After the training is completed, you can perform inference on the model using the following command.
python inference.py --cfg configs/t2v_infer.yaml
Then you can find the videos you generated in the
workspace/experiments/test_img_01
directory. For specific configurations such as data, models, seed, etc., please refer to the
t2v_infer.yaml
file.
In a few minutes, you can retrieve the high-definition video you wish to create from the
workspace/experiments/test_img_01
directory. At present, we find that the current model performs inadequately on anime images and images with a black background due to the lack of relevant training data, and we are continuing to optimize it.
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
Our codebase essentially supports all the commonly used components in video generation. You can manage your experiments flexibly by adding the corresponding registration classes, including ENGINE, MODEL, DATASETS, EMBEDDER, AUTO_ENCODER, DISTRIBUTION, VISUAL, DIFFUSION, and PRETRAIN, and it remains compatible with all of our open-source algorithms, which you can adapt to your own needs. If you have any questions, feel free to give us your feedback at any time.
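For illustration, a new component is typically hooked into one of these registries with a small registration class. The sketch below is hypothetical: the import path utils.registry_class and the register_class decorator mirror common registry patterns, so verify the exact names in the codebase before relying on them.
import torch.nn as nn

# Hypothetical import path and decorator name; check the repository for the real ones.
from utils.registry_class import MODEL

@MODEL.register_class()
class MyToyModel(nn.Module):
    # A placeholder model registered under the MODEL registry.
    def __init__(self, in_dim=4, out_dim=4):
        super().__init__()
        self.proj = nn.Conv3d(in_dim, out_dim, kernel_size=1)

    def forward(self, x, **kwargs):
        return self.proj(x)

Once registered, the class can then be selected by name from the yaml configuration (the exact config convention is, again, an assumption).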
Integration of I2VGenXL with 🧨 diffusers
I2VGenXL is supported in the 🧨 diffusers library. Here's how to use it:
import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import load_image, export_to_gif
repo_id = "ali-vilab/i2vgen-xl"
pipeline = I2VGenXLPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?download=true"
image = load_image(image_url).convert("RGB")
prompt = "Papers were floating in the air on a table in the library"
generator = torch.manual_seed(8888)
frames = pipeline(
    prompt=prompt,
    image=image,
    generator=generator
).frames[0]
print(export_to_gif(frames))
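If you prefer an MP4 over a GIF, diffusers also ships export_to_video, which can be applied to the same frames; the output filename below is an arbitrary choice.
from diffusers.utils import export_to_video

# Save the generated frames as an MP4 instead of a GIF.
export_to_video(frames, "i2vgen_xl_sample.mp4")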
If this repo is useful to you, please cite our corresponding technical papers.
@article{2023i2vgenxl,
title={I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models},
author={Zhang, Shiwei and Wang, Jiayu and Zhang, Yingya and Zhao, Kang and Yuan, Hangjie and Qing, Zhiwu and Wang, Xiang and Zhao, Deli and Zhou, Jingren},
journal={arXiv preprint arXiv:2311.04145},
year={2023}
}
@article{2023videocomposer,
title={VideoComposer: Compositional Video Synthesis with Motion Controllability},
author={Wang, Xiang and Yuan, Hangjie and Zhang, Shiwei and Chen, Dayou and Wang, Jiuniu and Zhang, Yingya and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
journal={arXiv preprint arXiv:2306.02018},
year={2023}
}
@article{wang2023modelscope,
title={Modelscope text-to-video technical report},
author={Wang, Jiuniu and Yuan, Hangjie and Chen, Dayou and Zhang, Yingya and Wang, Xiang and Zhang, Shiwei},
journal={arXiv preprint arXiv:2308.06571},
year={2023}
}
@article{dreamvideo,
title={DreamVideo: Composing Your Dream Videos with Customized Subject and Motion},
author={Wei, Yujie and Zhang, Shiwei and Qing, Zhiwu and Yuan, Hangjie and Liu, Zhiheng and Liu, Yu and Zhang, Yingya and Zhou, Jingren and Shan, Hongming},
journal={arXiv preprint arXiv:2312.04433},
year={2023}
}
@article{qing2023higen,
title={Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation},
author={Qing, Zhiwu and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Wei, Yujie and Zhang, Yingya and Gao, Changxin and Sang, Nong},
journal={arXiv preprint arXiv:2312.04483},
year={2023}
}
@article{wang2023videolcm,
title={VideoLCM: Video Latent Consistency Model},
author={Wang, Xiang and Zhang, Shiwei and Zhang, Han and Liu, Yu and Zhang, Yingya and Gao, Changxin and Sang, Nong},
journal={arXiv preprint arXiv:2312.09109},
year={2023}
}
Disclaimer
This open-source model is trained on the WebVid-10M and LAION-400M datasets and is intended for RESEARCH/NON-COMMERCIAL USE ONLY.