Creating Stunning AI Portraits in TouchDesigner
Table of Contents
- Introduction
- Setting up the AI Portrait Project
- Preparing for the Project
- Background Removal
- Using a Compute Render Tool
- Setting up the Trigger Area
- Tracking the Silhouette with Blob Tracking
- Triggering the Image Generation
- Customizing the Prompt and Changing the Image
- Adding Additional Features
- Conclusion
Introduction
Welcome to another video from the Interactive and Immersive HQ! In this video tutorial, we will walk you through the process of creating an AI portrait project that generates a new portrait whenever someone enters the frame. We will also demonstrate how you can update the prompt and customize the project to suit your needs.
Setting up the AI Portrait Project
In this article, we will guide you step by step through building the project: preparing the tools, removing the background, setting up the trigger system, and customizing the prompt for the portrait.
Preparing for the Project
Before we dive into TouchDesigner, there are a few things you need to prepare. In this project, we will be using the NVIDIA Broadcast app for background removal. You can also use a Kinect, but the app is a more accessible option since it only requires a webcam. Make sure you have a GeForce RTX graphics card, as it is a system requirement for the app. Once the app is open, go to the camera tab and enable the background removal effect so you have a clean silhouette for the project.
Background Removal
To achieve the background removal effect, we will add a Video Device In TOP and select NVIDIA Broadcast as the device. This gives us a clean silhouette of ourselves. After adding the Video Device In TOP, we will apply a Threshold TOP; adjust the threshold until you have a white silhouette on a black background. To smooth the edges of the silhouette, we will also add a blur effect.
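If you prefer to set up these operators from a script, here is a minimal sketch of the same chain driven from a Text DAT. The operator names ('videodevin1', 'thresh1', 'blur1') and the exact parameter names are assumptions and may differ in your network or TouchDesigner build, so check each operator's parameter page.

```python
# Minimal sketch (not from the original tutorial): configuring the
# background-removal chain from a Text DAT. Operator and parameter names
# below are assumptions -- verify them against your own network.

cam = op('videodevin1')                       # Video Device In TOP
cam.par.device = 'Camera (NVIDIA Broadcast)'  # device name varies per system

thresh = op('thresh1')                        # Threshold TOP
thresh.par.threshold = 0.2                    # raise until the silhouette is solid white

blur = op('blur1')                            # Blur TOP
blur.par.size = 4                             # soften the silhouette edges
```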
Using a Compute Render Tool
In this project, we will be using a compute render tool created by Torin Blankensmith. Make sure you download the file that includes the compute render tool and its dependencies, such as TDAsyncIO. You will also need to obtain your API key from the website and enter it into the compute render tool. Test that the compute render tool is working correctly by generating an image with a random prompt.
Setting up the Trigger Area
To trigger the image generation, we will add a Crop TOP to specify the area that acts as the trigger. Whenever someone enters that area and fills up the crop, the image generation fires. We will then add an Analyze TOP set to minimum to detect whether the cropped area is fully filled, convert the result to channel data with a TOP to CHOP (reading the red channel), round the value with a Math CHOP, and finally hold the resulting trigger value in a Null CHOP.
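One way to act on that trigger value is a CHOP Execute DAT watching the Null CHOP. The sketch below assumes the render component is named 'compute_render' and exposes a 'Generate' pulse parameter; both names are placeholders, so substitute the actual names from the tool you downloaded.

```python
# CHOP Execute DAT callback (sketch). Assumes the Null CHOP holding the
# rounded trigger value is referenced by this DAT; 'compute_render' and
# 'Generate' are assumed names, not from the original tutorial.

def onValueChange(channel, sampleIndex, val, prev):
    # val goes to 1 when the cropped area is completely filled by the silhouette
    if val >= 1 and prev < 1:
        op('compute_render').par.Generate.pulse()
    return
```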
Tracking the Silhouette with Blob Tracking
To track the movement of the silhouette, we will use blob tracking. The centroid of the blob lets us trigger the image generation only when the silhouette reaches the center of the frame. In the project setup, we will reference the compute render tool and build a system that fires the portrait generation once the silhouette is centered.
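As a rough illustration of the center check, the snippet below assumes the blob centroid has already been brought into a Null CHOP named 'centroid_null' with channels 'u' and 'v' in normalized 0-1 space; match these names to whatever your blob tracking setup actually outputs.

```python
# Sketch of the "is the silhouette centered?" check. All operator and
# channel names here are placeholders.

def is_centered(tolerance=0.1):
    cent = op('centroid_null')        # Null CHOP holding the blob centroid
    u = cent['u'].eval()              # horizontal position, 0-1
    v = cent['v'].eval()              # vertical position, 0-1
    return abs(u - 0.5) < tolerance and abs(v - 0.5) < tolerance
```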
Triggering the Image Generation
Once the setup is complete, we will connect the video device output to the compute render tool. We will also enable the "Use Image To Image" option and adjust parameters such as iterations and guidance to achieve the desired portrait effect. A portrait-related prompt is recommended for the best results.
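For reference, the connection and parameter changes could also be scripted. Every name in this sketch ('compute_render', 'null_silhouette', 'Useimagetoimage', 'Iterations', 'Guidance') is a guess based on the labels mentioned above, so confirm the real Python parameter names on the tool's parameter page before using it.

```python
# Sketch: wire the silhouette into the render component and set the
# image-to-image options from a script. Names are assumptions.

render = op('compute_render')

# Connect the silhouette output to the render component's first input
op('null_silhouette').outputConnectors[0].connect(render.inputConnectors[0])

render.par.Useimagetoimage = True   # enable the image-to-image mode
render.par.Iterations = 30          # number of diffusion iterations
render.par.Guidance = 7.5           # guidance strength
```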
Customizing the Prompt and Changing the Image
To add more flexibility to the project, we will create a custom component that allows us to change the prompt. This lets us generate different images based on the prompt we enter. By binding the prompt parameter to the compute render tool, changing the prompt in one component is reflected in the other.
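A minimal sketch of that binding is shown below, assuming a control component named 'prompt_control' and a 'Prompt' custom parameter on the compute render tool; adjust the names to match your project.

```python
# Sketch: add a custom 'Prompt' string parameter to a control component and
# bind the render tool's prompt parameter to it. Component and parameter
# names are assumptions.

ctrl = op('prompt_control')                       # a Container COMP you created
page = ctrl.appendCustomPage('Settings')
prompt_par = page.appendStr('Prompt', label='Prompt')[0]
prompt_par.val = 'oil painting portrait, dramatic studio lighting'

# Bind the compute render tool's prompt parameter to the new custom parameter
target = op('compute_render').par.Prompt
target.bindExpr = "op('prompt_control').par.Prompt"
target.mode = ParMode.BIND
```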
Adding Additional Features
To enhance the project, we will add a few more features. We will display a loading indicator while the compute render tool is generating an image, show the camera input in the bottom corner of the frame, and provide the option to overlay the prompt text on the frame.
Conclusion
In conclusion, this AI portrait project allows you to generate unique portraits triggered by someone entering the frame. By customizing the prompt and adding additional features, you can create a personalized and interactive portrait experience. Have fun exploring different prompt ideas and share your creations with the Interactive and Immersive HQ community.
Highlights
- Create an AI portrait project that generates images triggered by someone entering the frame.
- Customize the prompt to generate unique portraits.
- Use background removal techniques for a clean silhouette.
- Utilize a compute render tool to generate the portrait image.
- Add additional features such as a loading sign and camera input display.
FAQ
Q: Can I use a different graphics card instead of the GeForce RTX for the background removal?
A: The NVIDIA Broadcast app requires an NVIDIA RTX GPU, so its background removal will not run on other graphics cards. If you do not have an RTX card, you can substitute another background-removal method, such as a Kinect, as mentioned above.
Q: Can I change the resolution of the generated images?
A: Yes, you can adjust the resolution settings in the project setup to match your preferences.
Q: How can I share my creations with the Interactive and Immersive HQ community?
A: You can share your creations by tagging Interactive and Immersive HQ on social media platforms or by joining the community and posting your work in the private forum. Don't forget to use the provided handles and hashtags.