Learn How to Create Stunning Animated Videos with Stable Diffusion

Table of Contents

  1. Introduction
  2. Installing Stable Diffusion
  3. Choosing the Right Model
  4. Exporting Frames from a Video
  5. Stylizing Images with Stable Diffusion
  6. Adjusting Settings for Consistency
  7. Processing Frames in Batch
  8. Stitching Frames into a Video
  9. Improving Consistency with the Flicker Plugin
  10. Enhancing the Animation with Color Grading
  11. Upscaling the Video with Topaz Video AI
  12. Conclusion

Introduction

In this article, we will explore how to use AI to turn a real video into an animation, focusing on maximizing consistency in the output with Stable Diffusion and a couple of supporting tools. We will cover installing Stable Diffusion, choosing the right model for your desired animation style, exporting frames from a video, stylizing images, adjusting settings for consistency, processing frames in batch, stitching frames back into a video, improving consistency with the Flicker plugin, enhancing the animation with color grading, and upscaling the result with Topaz Video AI. By the end of this article, you will have a clear understanding of how to use AI to create stunning animations from real videos. So let's dive in!

Installing Stable Diffusion

Before we can start creating animations, we need to install Stable Diffusion on our computer. Download the latest version of the Stable Diffusion web UI from its official page and run the setup file, choosing an installation location. After the installation is complete, open the new web UI launcher file; it will update its settings and load the required files. You will be prompted to download the Stable Diffusion base model, which is essential for generating a variety of images. Once the download is finished, the Stable Diffusion interface will automatically open in your default browser.
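The web UI relies on PyTorch finding your GPU, so a quick sanity check is useful after installing. This is a minimal sketch, assuming you run it inside the Python environment the web UI created (for example, its virtual environment):

```python
# Sanity check: confirm PyTorch can see a CUDA-capable GPU.
# Run inside the web UI's Python environment (e.g. its venv).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```

If CUDA is not available, image generation will fall back to the CPU and be dramatically slower.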

Choosing the Right Model

Stable Diffusion offers various diffusion models that cater to different animation styles. The default base model works for most use cases, but if you want specific styles and aesthetics, specialized models are available. In this article, we will be using the Arcane Diffusion model, inspired by the popular Arcane series. To download it, open the model's page and go to the Files and Versions section. Look for the Arcane Diffusion V3 checkpoint, right-click the download arrow, and choose Save link as. Save the checkpoint file in the Stable Diffusion models folder. Refresh the checkpoint list in the Stable Diffusion UI and switch to the Arcane model. Now we are ready to start working in our desired animation style.
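If you prefer to fetch the checkpoint from a script rather than the browser, the huggingface_hub library can download it directly. This is only a sketch: the repository name, checkpoint filename, and target folder below are assumptions based on the public model page, so verify them before running:

```python
# Sketch: download the Arcane Diffusion v3 checkpoint with huggingface_hub.
# Repo ID, filename, and local_dir are assumptions -- double-check them
# against the model page and your own web UI install path.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="nitrosocke/Arcane-Diffusion",   # assumed model repository
    filename="arcane-diffusion-v3.ckpt",     # assumed v3 checkpoint file name
    local_dir="stable-diffusion-webui/models/Stable-diffusion",  # adjust to your install
)
print("Saved to:", checkpoint_path)
```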

Exporting Frames from a Video

To create an animation from a real video, we first need to export individual frames. You can use any video editing software for this, such as Adobe Media Encoder. Import the video you want to stylize and set the export format to PNG. You can also lower the frame rate to save processing time and give the result a hand-animated feel. Enable rendering at maximum depth and maximum render quality, select an output destination, and hit Render. The frames will start exporting to the chosen folder.
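If you don't have Adobe Media Encoder, the same step can be scripted. Here is a minimal sketch using OpenCV (my choice of library, not the tool used above); the paths and frame-skip value are placeholders:

```python
# Sketch: export every Nth frame of a video as a PNG using OpenCV.
# Paths and frame_step are placeholders -- adjust to your project.
import cv2
import os

video_path = "input.mp4"   # source video (placeholder)
output_dir = "frames"      # destination folder for PNG frames
frame_step = 2             # keep every 2nd frame (halves the frame rate)

os.makedirs(output_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)

index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % frame_step == 0:
        cv2.imwrite(os.path.join(output_dir, f"frame_{saved:05d}.png"), frame)
        saved += 1
    index += 1

cap.release()
print(f"Exported {saved} frames to {output_dir}")
```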

Stylizing Images with Stable Diffusion

Once the frames are exported, we can start stylizing them with Stable Diffusion. In the Stable Diffusion UI, go to the Image to Image (img2img) tab and upload one of the exported frames. Choose a frame that is sharp, not blurry, and shows most of the elements in the scene. Click Interrogate CLIP to get a text description of the image content; if needed, add more details to the description. Set the width and height to match your image's aspect ratio, keeping in mind that higher dimensions take longer to process. Leave the rest of the settings at their defaults for now and apply the Arcane style to the image by clicking Generate.
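The article uses the web UI, but the same img2img step can be sketched in code with the diffusers library if you prefer a script. The model ID, prompt, and resolution below are illustrative assumptions, not values taken from the article:

```python
# Sketch: stylize a single frame with img2img via the diffusers library.
# Mirrors the web UI workflow; model ID, prompt, and size are illustrative.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "nitrosocke/Arcane-Diffusion",    # assumed Hugging Face model ID
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("frames/frame_00000.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="arcane style, a person walking down a city street",  # e.g. from Interrogate CLIP, edited by hand
    image=init_image,
    strength=0.4,         # denoising strength: how far the AI may drift from the original
    guidance_scale=7.5,   # CFG scale: how strongly the prompt is followed
).images[0]

result.save("stylized_test.png")
```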

Adjusting Settings for Consistency

To achieve consistency in the animation output, we need to adjust the settings in Stable Diffusion. Experiment with the denoising strength and CFG scale settings to find the desired look. Higher denoising strength gives the AI more creative freedom, while lower values stick closer to the original image; the CFG scale controls how strongly the output follows the prompt. Try different values for both until you get the desired balance of consistency and style. Also consider changing the sampling method and sampling steps, and adjusting the sigma-related noise settings for further refinement. Keep in mind that each input may require different settings, so be patient and willing to experiment during this process.
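To compare settings side by side, a small sweep over denoising strength and CFG scale can save a grid of candidates for one frame. This sketch continues the previous diffusers snippet (it reuses `pipe` and `init_image`), and the value ranges are just examples:

```python
# Sketch: sweep denoising strength and CFG scale on one frame to compare looks.
# Reuses `pipe` and `init_image` from the previous snippet; values are examples.
import torch

prompt = "arcane style, a person walking down a city street"

for strength in (0.3, 0.4, 0.5):
    for cfg in (6.0, 7.5, 9.0):
        # Re-seed for every run so only the settings change, not the noise.
        generator = torch.Generator("cuda").manual_seed(42)
        image = pipe(
            prompt=prompt,
            image=init_image,
            strength=strength,
            guidance_scale=cfg,
            generator=generator,
        ).images[0]
        image.save(f"test_strength{strength}_cfg{cfg}.png")
```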

Processing Frames in Batch

Once we have finalized the settings on an individual frame, we can process the rest of the frames in batch. In the Batch tab of Stable Diffusion, paste the path of the folder containing the original frames as the input directory and the path where the stylized frames should be saved as the output directory. Click Generate to start processing. The frames will be stylized with the chosen settings and exported to the specified folder. Preview the output and make any necessary adjustments.
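The web UI's Batch tab handles this automatically. As a scripted equivalent, continuing the earlier diffusers sketch (folder names, prompt, and settings are placeholders), the loop below applies identical settings and a fixed seed to every frame:

```python
# Sketch: batch-stylize every exported frame with identical settings.
# Continues the earlier diffusers sketch (reuses `pipe`); paths are placeholders.
import os
import torch
from PIL import Image

input_dir = "frames"            # original PNG frames
output_dir = "frames_stylized"  # where stylized frames are written
os.makedirs(output_dir, exist_ok=True)

prompt = "arcane style, a person walking down a city street"

for name in sorted(os.listdir(input_dir)):
    if not name.endswith(".png"):
        continue
    frame = Image.open(os.path.join(input_dir, name)).convert("RGB").resize((768, 512))
    generator = torch.Generator("cuda").manual_seed(42)  # same seed per frame for consistency
    stylized = pipe(
        prompt=prompt,
        image=frame,
        strength=0.4,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    stylized.save(os.path.join(output_dir, name))
```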

Stitching Frames into a Video

After processing the frames, we need to stitch them together into a cohesive video. Use video editing software such as After Effects to import the exported frames as a PNG sequence, setting the frame rate to match the original sequence. Create a composition with the imported sequence and preview the animation. To improve consistency, use the Flicker plugin, which reduces frame-to-frame inconsistencies: apply it to the clip, adjust its settings, and compare the difference it makes in the overall look of the animation.
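After Effects handles this interactively. If you want a scripted fallback for the stitching step (an alternative I'm adding, not the article's method), OpenCV can write the PNG sequence straight to a video file; frame rate and paths are placeholders:

```python
# Sketch: stitch a folder of stylized PNG frames into a video with OpenCV.
# Frame rate and paths are placeholders -- match them to your original export.
import cv2
import os

frames_dir = "frames_stylized"
output_path = "animation.mp4"
fps = 12  # match the frame rate you exported the frames at

names = sorted(n for n in os.listdir(frames_dir) if n.endswith(".png"))
first = cv2.imread(os.path.join(frames_dir, names[0]))
height, width = first.shape[:2]

writer = cv2.VideoWriter(
    output_path,
    cv2.VideoWriter_fourcc(*"mp4v"),  # basic MP4 codec
    fps,
    (width, height),
)
for name in names:
    writer.write(cv2.imread(os.path.join(frames_dir, name)))
writer.release()
print("Wrote", output_path)
```

Note that this simple stitch does not replace the Flicker plugin; deflickering still needs to happen in your editor.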

Enhancing the Animation with Color Grading

Color grading can significantly enhance the look and feel of the animation. Play around with contrast, shadows, and hues to polish the animation further. Experiment with different adjustments to achieve the desired aesthetic. Color grading helps to create a unified and visually appealing final animation.
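Grading is normally done in the editor, but as a rough illustration of the kind of adjustments involved, Pillow's ImageEnhance module can push contrast and saturation on a single frame; the factors below are illustrative only:

```python
# Sketch: simple programmatic contrast/saturation adjustment with Pillow.
# A rough stand-in for editor-based color grading; factors are illustrative.
from PIL import Image, ImageEnhance

frame = Image.open("frames_stylized/frame_00000.png")
frame = ImageEnhance.Contrast(frame).enhance(1.15)  # +15% contrast
frame = ImageEnhance.Color(frame).enhance(1.10)     # +10% saturation
frame.save("frame_graded.png")
```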

Upscaling the Video with Topaz Video AI

To increase the resolution and sharpness of the animation, use Topaz Video AI. Import the video into the software, choose the desired upscale ratio, enable frame interpolation if necessary, and adjust the output settings. Export the upscaled video, and you will notice a clear improvement in sharpness and detail. This step is especially useful if the frames were processed at a lower resolution initially.

Conclusion

In this article, we have explored how to use AI to generate animations from real videos. We have covered the installation process of Stable Diffusion, choosing the right model, exporting frames from a video, stylizing images with Stable Diffusion, adjusting settings for consistency, processing frames in batch, stitching frames into a video, enhancing consistency with the Flicker plugin, enhancing the animation with color grading, and upscaling the video with Topaz Video AI. By following these steps, you can create stunning animations with AI-powered tools. Remember to experiment, be patient, and have fun throughout the process. Enjoy creating your own animations and exploring the creative possibilities of AI technology.
