Master MetaHuman Animator in Unreal Engine
Table of Contents
- Introduction
- MetaHuman Animator Overview
- System Requirements
- Capture Source Configuration
- Ingesting Takes
- Managing Footage Queue
- Ingestion Process
- Configuring MetaHuman DNA Rig
- Performing Mesh Solve
- View Modes for Fidelity Inspection
- Calibrating MetaHuman DNA Rig
- Adding Teeth Pose
- Preparing Asset for Performance
- MetaHuman Performance Asset
- Configuring Performance Asset
- Processing Animation
- Exporting Animation to Unreal Engine
- Animation Playback in Unreal Engine
- Back-solving Animation to Control Rig
- Conclusion
MetaHuman Animator: Streamlining Workflow
MetaHuman Animator is a powerful feature set available in the MetaHuman plugin for Unreal Engine. In this tutorial, we will guide you through the workflow of using MetaHuman Animator to streamline your animation process. From configuring capture sources to processing animation, we will cover each step in detail. So let's get started.
1. Introduction
Welcome to the tutorial for MetaHuman Animator, where we will explore the efficient workflow for this powerful tool. MetaHuman Animator is a feature set available in the MetaHuman plugin for Unreal Engine. This update requires Unreal Engine 5.2 and the latest version of the MetaHuman rig. In this tutorial, we will focus on the streamlined in-editor workflow of MetaHuman Animator. We will guide you through the process of configuring capture sources, ingesting takes, calibrating the MetaHuman DNA rig, and processing animation. By the end of this tutorial, you will have a clear understanding of the workflow, allowing you to create amazing content with MetaHuman Animator.
2. MetaHuman Animator Overview
MetaHuman Animator is a feature set available in the MetaHuman plugin for Unreal Engine. It provides a streamlined workflow for animating MetaHuman characters. With MetaHuman Animator, you can capture and process performance data, configure the MetaHuman DNA rig, and generate keyframes of facial animation. This powerful tool automates many of the complex tasks involved in facial animation, allowing animators to focus on creativity and storytelling. Whether you are a professional animator or a hobbyist, MetaHuman Animator offers a user-friendly interface and efficient workflow to bring your characters to life.
3. System Requirements
Before diving into the workflow of MetaHuman Animator, it's essential to ensure your system meets the necessary requirements. To use MetaHuman Animator, you will need Unreal Engine 5.2 and the latest version of the MetaHuman rig. Make sure you have these versions installed on your system before proceeding with the workflow. Additionally, allocate sufficient system resources, such as system memory and processing power, to handle the heavy data and workload of performance capture. We recommend configuring the global cache with at least four threads and a couple of gigabytes of system memory for optimal performance.
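As a rough sanity check, the resource guidance above can be expressed as a small helper. This is an illustrative sketch, not part of any Unreal API; the function name is invented, and "a couple of gigabytes" is interpreted as 2 GB for the example:

```python
def meets_cache_recommendation(threads: int, memory_gb: float) -> bool:
    """Check a machine against the global-cache guidance in this tutorial.

    The tutorial recommends at least four threads and a couple of
    gigabytes of system memory; we treat "a couple" as 2 GB here.
    """
    return threads >= 4 and memory_gb >= 2.0
```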
4. Capture Source Configuration
The first step in the MetaHuman Animator workflow is configuring the capture sources. Capture sources pair the device with its calibration data to capture performance footage. You need one capture source for each calibration used by a given device. For iPhone devices, which are factory calibrated, a single capture source is sufficient. Capture sources can operate in two modes: connected mode and archive mode. In connected mode, you configure the IP address of the device and ingest footage directly from it. In archive mode, you work with already downloaded footage stored at a specified location.
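To make the two modes concrete, here is a minimal sketch of how a capture source configuration could be modeled. This is illustrative Python, not Unreal's actual capture source asset; every class and field name is an assumption made for the example:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CaptureMode(Enum):
    CONNECTED = "connected"  # ingest footage directly from the device over the network
    ARCHIVE = "archive"      # work from footage already downloaded to disk

@dataclass
class CaptureSourceConfig:
    name: str
    mode: CaptureMode
    device_ip: Optional[str] = None     # required in connected mode
    archive_path: Optional[str] = None  # required in archive mode

    def validate(self) -> bool:
        """Each mode needs exactly the setting the tutorial describes."""
        if self.mode is CaptureMode.CONNECTED:
            return self.device_ip is not None
        return self.archive_path is not None
```

For example, a factory-calibrated iPhone needs only one connected-mode source: `CaptureSourceConfig("iPhone_A", CaptureMode.CONNECTED, device_ip="192.168.1.20")`.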
5. Ingesting Takes
After configuring capture sources, the next step is ingesting takes. This is done through the Capture Manager, an additional tool available under the Tools top menu. In the Capture Manager, you will see a list of capture sources within your project, along with their device type and status. Select the desired source and view the available takes. You can configure the destination path for the media at the bottom, which applies to all selected takes. Once you have selected the desired takes, add them to the queue for ingestion. It is important to note that adding to the queue does not automatically start the processing or importing. To begin the ingestion process, click Import All, and the system will start processing the footage.
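The queue behavior described above, where queuing takes does not start processing and Import All drains the queue, can be sketched as follows. The class and method names are invented for illustration and do not correspond to the Capture Manager's internals:

```python
class IngestQueue:
    """Toy model of the Capture Manager's take queue."""

    def __init__(self, destination_path: str):
        # One destination path applies to all takes selected for ingestion.
        self.destination_path = destination_path
        self.pending = []
        self.processed = []

    def add_takes(self, takes):
        # Queuing alone does NOT start processing or importing.
        self.pending.extend(takes)

    def import_all(self):
        # Mimics clicking "Import All": drains the queue in order,
        # recording where each take's media would land.
        while self.pending:
            take = self.pending.pop(0)
            self.processed.append((take, f"{self.destination_path}/{take}"))
        return self.processed
```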
6. Managing Footage Queue
Processing large volumes of footage can take some time and may result in an abundance of assets. To manage this, MetaHuman Animator provides an option to automatically save the new assets as the queue moves forward. This also helps prevent name clashes when multiple capture sources contain takes with the same name. After the footage ingestion, the target folder will contain a subfolder named after the capture source. Inside this folder, you will find a captured data asset and a folder containing finer-grain assets produced by the ingest process.
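The folder layout that prevents name clashes can be illustrated with a small path helper. This is a hypothetical sketch; the real ingest process determines its own layout:

```python
def ingest_target_path(root: str, capture_source_name: str, take_name: str) -> str:
    """Build the landing path for a take's captured data.

    Each capture source gets its own subfolder under the target folder,
    so two sources can both contain a take named "take_001" without
    their assets colliding.
    """
    return f"{root}/{capture_source_name}/{take_name}"
```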
7. Ingestion Process
After footage ingestion, the system requires information about the device that captured it. This information is provided through a new asset called the Capture Source. Capture sources work with the device and calibration data to ensure accurate performance capture. Each calibration used by a given device requires a separate capture source. For iPhone devices, which are factory calibrated, only one capture source is necessary. Configure the capture source with the necessary information and proceed to the next step.
8. Configuring MetaHuman DNA Rig
The MetaHuman DNA rig is the core component of MetaHuman Animator. It is responsible for encoding MetaHuman faces and calibrating them to the performer. With this release, the MetaHuman DNA rig is being extended to work with footage. Configure the MetaHuman DNA rig by selecting the neutral pose and using the Guided Workflow toolbar at the top. For optimal results, the neutral pose should have a relaxed expression, open eyes staring straight ahead, and a closed mouth with the seal of the lips in view and no gaps. These parameters affect the rig's rest state in its neutral pose.
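The neutral-pose requirements listed above can be summarized as a simple checklist validator. This is an illustrative sketch only; the field names are invented for the example and MetaHuman Animator performs no such dictionary check:

```python
def check_neutral_pose(pose: dict) -> list:
    """Return a list of problems with a candidate neutral pose.

    An empty list means the pose satisfies all the criteria the
    tutorial lists for a good rest state.
    """
    problems = []
    if not pose.get("relaxed_expression"):
        problems.append("expression is not relaxed")
    if not pose.get("eyes_open_forward"):
        problems.append("eyes are not open and looking straight ahead")
    if not pose.get("mouth_closed_no_gaps"):
        problems.append("mouth is not fully closed")
    if not pose.get("lip_seal_visible"):
        problems.append("lip seal is not in view")
    return problems
```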