Body Tracking with UE5

In this tutorial, you will learn how to animate 3D avatars with your own movements using Body Tracking.

ZED Blueprints

Main Stereolabs blueprints used to make the body tracking work


Put the BP_ZED_Initializer in your scene to configure all the features of the ZED camera. This actor is a direct reference to the C++ code of ZEDInitializer.cpp. Pay special attention to these parameters:

  • Input Type: Whether the SDK reads data from a live ZED camera, an SVO file, or a stream input.
  • Resolution, FPS, Depth Mode: Main parameters of the ZED camera. See the API Reference for more details. The default values (HD 1080p, 30 fps, and Depth Mode Ultra) will work best in most cases.
  • Loop and Real Time: If you’re using an SVO file, you will want to tick both of these in most cases. They enable SVO looping and sync the SDK processing time with the SVO timestamps.
  • Depth Occlusion: If enabled, virtual objects will be hidden by the render of the real scene, using the depth of each pixel of the camera.
  • Show Zed Image: If enabled, the ZED camera/SVO video stream will be rendered fullscreen in the level viewport at runtime. You may want to leave it on in some cases (mostly MR applications), but in this tutorial, we want the virtual scene to occupy the screen.
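For reference, these initializer options map onto the ZED SDK’s C++ init parameters. The following is a minimal sketch assuming the ZED SDK 4.x C++ API (field names such as `svo_real_time_mode` are taken from that version — verify against your installed `sl/Camera.hpp`):

```cpp
// Sketch only: assumed ZED SDK 4.x C++ names, not the UE5 plugin code itself.
#include <sl/Camera.hpp>

int main() {
    sl::InitParameters init;
    init.camera_resolution = sl::RESOLUTION::HD1080; // Resolution
    init.camera_fps = 30;                            // FPS
    init.depth_mode = sl::DEPTH_MODE::ULTRA;         // Depth Mode
    init.svo_real_time_mode = true;                  // "Real Time" for SVO input
    // Input Type: live camera by default; for an SVO file instead:
    // init.input.setFromSVOFile("recording.svo");

    sl::Camera zed;
    if (zed.open(init) != sl::ERROR_CODE::SUCCESS)
        return 1; // camera/SVO could not be opened
    zed.close();
    return 0;
}
```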


BP_BodyTracking_Visualizer

This blueprint manages the whole avataring process, from enabling body tracking to sending the skeleton data to the animation blueprint.

  • Enabling the Object Detection:
    • Automatically: If the “Start Body Tracking Automatically” option is ticked, body detection starts as soon as the ZED camera is ready.
    • Manually: If it is not ticked, it can be started manually from the Details panel.
  • Displaying a small viewport with the image from the ZED camera / SVO file inside the main viewport.
  • If a detection is new, spawning an avatar.
  • If it is an already tracked skeleton, finding the corresponding avatar in the avatars actor map.
  • Setting the Animation Blueprint’s Body Data and Skeletal Mesh variables (see AnimNode_ZEDPose.cpp below).
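The spawn-or-reuse logic above can be sketched as follows. This is illustrative code only — a hypothetical `Avatar` type keyed by the SDK’s per-person tracking ID, standing in for the Blueprint actor map:

```cpp
#include <map>
#include <string>

struct Avatar { std::string name; };

// Avatars actor map, keyed by the SDK's per-person tracking ID.
std::map<int, Avatar> avatars;

// Returns the avatar for this detection, spawning one if the ID is new.
Avatar& GetOrSpawnAvatar(int trackingId) {
    auto it = avatars.find(trackingId);
    if (it != avatars.end())
        return it->second;  // already tracked: reuse the existing avatar
    Avatar spawned{"Avatar_" + std::to_string(trackingId)};
    return avatars.emplace(trackingId, spawned).first->second;  // new: spawn
}
```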

BP_ZED_Manny & ABP_ZED_Manny

The BP_ZED_Manny Actor Blueprint includes a skeletal mesh using a specific animation blueprint, ABP_ZED_Manny. The procedure to create your own is detailed later in this article.

NB: As mentioned in this guide, for now, the skeleton used must be in T-pose to get the closest possible correspondence of movements.

The ABP_ZED_Manny is fed an SlObjectData structure containing all the data about the skeleton: mainly keypoint positions, bone orientations, and the location of the skeleton. A Control Rig node can be activated afterward, sticking the feet to the ground when they come close to it.

ZED Pose - Animation Blueprint node

This node is generated by the AnimNode_ZEDPose.cpp file; do not hesitate to take a peek at it. This is the core of the avataring: it applies the rotations of the bones of the SDK skeleton to the Unreal avatar’s bones (using the remap asset).

  • Object Data: SlObjectData structure used to retrieve the locations and orientations of keypoints and bones from the SDK. It must be set externally (e.g. from the BP_BodyTracking_Visualizer). Explore StereolabsBaseTypes.h for more details on this structure.
  • Remap Asset: This map of names is used to make the correspondence between the bones’ names in the SDK and those of the avatar.
  • Mirror on Z Axis: Enables mirror mode for the avatar. If you raise your right hand, it will raise its left.
  • Height Offset: Can be used to apply an external offset to the character’s Z location (height).
  • Rotation Slerp Intensity: Intensity of the smoothing applied to the avatar’s movements. You can tune it to balance movement latency against jitter.
  • Root Location Lerp Intensity: Intensity of the smoothing on the location of the root of the skeleton. Used to avoid sudden displacements that can happen when the camera data is unstable.
  • Skeletal Mesh: Used to give the skeletal mesh to the C++ script, which then manipulates its pose.
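The two smoothing intensities above are simple interpolation factors applied each frame. Here is a minimal sketch of the root-location case (illustrative code, not the plugin’s actual implementation — the rotation case works the same way but uses a quaternion slerp instead of a linear lerp):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// One smoothing step: move the current root location toward the new SDK
// sample by `intensity` in [0, 1]. 1.0 follows the raw data exactly;
// smaller values smooth more, at the cost of added latency.
Vec3 LerpRoot(const Vec3& current, const Vec3& target, double intensity) {
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = current[i] + (target[i] - current[i]) * intensity;
    return out;
}
```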

What is Body Tracking?

Body Tracking is very similar to the “classic” Object Detection module but uses another highly optimized AI model to detect people’s 3D skeletons, expressed as keypoints. It can detect up to 34 keypoints on a single person. For more details about the Body Tracking feature, please take a look at the Body Tracking documentation page.

In Unreal, we will use this 3D keypoint information (more precisely, the orientation of each keypoint) to animate a 3D humanoid avatar based on a person’s movements in real time.

Preparing your ZED Camera

Optimize the AI models

The Object Detection/Body Tracking modules use AI models that need to be optimized before their first use. The UE5 plugin does not do this by itself, so we advise you to optimize them externally using the ZED Diagnostic tool located in the ZED SDK installation folder (usually Program Files (x86)/ZED SDK/tools).

The process is described here: How can I optimize the ZED SDK AI models manually?

Ensure Floor Plane detection

The Object Detection module can use the floor plane position to make more assumptions. To do so, the floor plane should be visible in the image when starting the module, as in this image:

Setting Up the Scene

Basic Settings

Now we need to add a Blueprint from the ZED plugin. By default, content from plugins is hidden. To fix this, click View Options at the bottom right of the Content Browser and enable Show Plugin Content.

Now click on the folder icon beside Content and click on Stereolabs Content to switch to the plugin’s content folder.

In the Content Browser, go to Plugins -> Stereolabs Content -> ZED -> Blueprints and drag a BP_ZED_Initializer into the scene. This is the object that sets up your camera and handles communication with your app.

  • Select the BP_ZED_Initializer blueprint, and in the ZED section, uncheck the Show Zed Image parameter. This parameter has to be disabled so we can see the 3D scene rather than the ZED’s image in fullscreen.

  • In the Init Parameters section, set the Resolution to 1080p. This is not required but increases object detection accuracy.
  • Set Depth Mode to ULTRA. Also not required, but it can improve object detection accuracy.
  • If your camera is fixed and will not move, enable Set as Static in the Tracking Parameters section to prevent incorrect drift throughout the scene.

Body Tracking Settings

Open the ZED / Body Tracking section of the BP_ZED_Initializer.

Choose the Body Tracking Model you want, depending on your accuracy/performance needs.

Three body formats are available; you can refer to the Body Tracking documentation for details.

Multiple settings are available:

  • Image Sync: Synchronizes the body detection to the image grab.
  • Enable Tracking: If enabled, the ZED SDK will track bodies between frames, providing more accurate data and giving access to more information, such as velocity.
  • Enable Body Fitting: The fitting process uses the history of each tracked person to deduce all missing keypoints, thanks to the human kinematic constraints used by the body tracking module. It is also able to extract the local rotation between a pair of neighboring bones by solving the inverse kinematics problem.
  • Body Selection: Sets the keypoints that will be given by the ZED SDK. Full: all keypoints of the body format. Upper body: only the keypoints from the hips up (arms, head, torso).
  • Max Range: Defines an upper depth range for detections (in centimeters).
  • Prediction Timeout: Duration during which the SDK will predict the position of a lost body before its state is switched to SEARCHING.
  • Allow Reduced Precision Inference: Allows inference to run at a lower precision to improve runtime speed and memory usage.
  • Person Confidence Threshold: Sets the minimum confidence value for a detected person to be published. For example, if set to 40, the ZED SDK needs to be at least 40% confident that a detected person exists.
  • Minimum Keypoint Threshold: Minimum number of detected keypoints required to consider a person detected. Can be useful to discard partially occluded detections.
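As a rough correspondence, here is how these settings might look when set directly through the ZED SDK’s C++ API. This is a sketch assuming SDK 4.x names (`BodyTrackingParameters`, `BodyTrackingRuntimeParameters`, `enableBodyTracking`, `retrieveBodies`) — verify them against your installed headers before relying on this:

```cpp
// Sketch only: assumed ZED SDK 4.x C++ names; check sl/Camera.hpp.
#include <sl/Camera.hpp>

void ConfigureBodyTracking(sl::Camera& zed) {
    sl::BodyTrackingParameters params;
    params.enable_tracking = true;                               // Enable Tracking
    params.enable_body_fitting = true;                           // Enable Body Fitting
    params.body_format = sl::BODY_FORMAT::BODY_34;               // one of the three body formats
    params.body_selection = sl::BODY_KEYPOINTS_SELECTION::FULL;  // Body Selection
    params.max_range = 10.0f;                                    // Max Range (in the SDK's unit)
    params.prediction_timeout_s = 0.2f;                          // Prediction Timeout
    params.allow_reduced_precision_inference = true;             // Allow Reduced Precision Inference
    zed.enableBodyTracking(params);

    // Per-frame thresholds live in the runtime parameters:
    sl::BodyTrackingRuntimeParameters rt;
    rt.detection_confidence_threshold = 40;  // Person Confidence Threshold
    rt.minimum_keypoints_threshold = 5;      // Minimum Keypoint Threshold
    sl::Bodies bodies;
    zed.retrieveBodies(bodies, rt);
}
```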

Go to Content -> ZED -> Blueprints -> BodyTracking and add BP_BodyTracking_Visualizer into the scene as well. This object will manage the avataring of the 3D models in the scene.

Body Tracking Visualizer Settings

Movement Smoothing

Interpolation values used to smooth the rotations of the limbs. The default values filter out the occasional jitter that can occur in the SVO data when the target is standing still.


Self-Collision

This setting allows the virtual limbs to collide with the virtual body. It prevents the arms/legs from passing through the chest or through each other. This feature uses the colliders provided by the Physics Asset corresponding to the skeleton.

Foot IK

This setting allows the Control Rig in the ABP_ZED_Manny to take effect and compensate for small gaps between the feet and the floor when the feet are near it, effectively sticking them in place.

Bone Scaling [Experimental]

This feature makes the limbs of the ZED Avatar match the length of the SDK skeleton bones individually. However, this causes issues with Foot IK and Self-Collision, and should not be used in a production environment as is.

Height Offset

This value is applied as a flat height offset to the skeletons managed by the AnimZEDPose node.

Note: Unexpected behaviour can occur if “Stick Avatar On Floor” is also ticked: this Height Offset is a manual workaround for floor detection issues, while “Stick Avatar On Floor” is its automatic counterpart.

Stick Avatar On Floor

If this is enabled, a dynamic height offset will be applied to the avatar to stick the lower foot on top of the ground in the virtual scene.
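Conceptually, this dynamic offset amounts to measuring the gap between the lower foot and the ground, then shifting the whole avatar by that gap. A sketch with a hypothetical helper (Z values in centimeters, ground assumed horizontal):

```cpp
#include <algorithm>

// Returns the vertical offset to add to the avatar's root so that its
// lower foot lands exactly on the ground plane. Negative when a foot is
// below the ground (avatar is pushed up... or down, as needed).
double FloorStickOffset(double leftFootZ, double rightFootZ, double groundZ) {
    return groundZ - std::min(leftFootZ, rightFootZ);
}
```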

Using your own avatar [Deprecated version]

We now recommend using the UE5 Retargeter to use your own avatars with the body tracking feature. The old way of importing the avatar might cause issues, and we can’t guarantee it will keep working in future versions. Nevertheless, it may help in understanding the whole retargeting and animation setup.

More information about retargeting can be found on the official Unreal Engine documentation.


Importing and using your own 3D avatar and rig as base [Deprecated]

One 3D avatar is already available in the example project, but you can also import your own model. First, your 3D model needs to be rigged; otherwise it won’t be possible to animate it.

For now, our plugin can correctly animate an avatar only if it is imported in the right format. The avatar must respect two conditions:

  • The first condition is on the avatar’s reference pose. The reference pose is the default pose of the avatar you can see when you open the skeleton asset window. We need this reference pose to be in T-Pose, which corresponds to the pose where the avatar is standing up with its arms held horizontally.

  • The second condition is on the avatar’s global orientation when imported. We want to start from a known orientation, so we need the avatar to be facing the +X direction in its reference pose.

To respect the first condition, you might need to change your avatar’s reference pose if it isn’t already a T-Pose. You can change its reference pose using Blender, for instance.

Moreover, in order to meet the second condition, you’ll have to tick the Force Front X Axis checkbox to force the avatar to face the +X direction. This way, we make sure the avatar starts with a known orientation.

Importing procedure with Blender

  1. Import the mesh into Blender.
  2. Make sure the mesh is oriented following Blender’s default parameters: facing -Y, Z up.
  3. Rig the mesh.
     • If you’re using Mixamo to make your skeleton, ensure your scale is applied before exporting your fbx/obj to Mixamo.
  4. Pose the mesh in T-pose.
  5. When exporting from Blender, ensure your scale / transforms are applied beforehand, and use the default FBX export settings.
     • You can leave “Add leaf bones” checked or not, depending on your needs.
  6. Import to Unreal, checking the following settings:
     • Skeletal Mesh
     • Convert Scene
     • Force Front XAxis
     • If you’re using a T-pose which is not the rest pose of your rig (e.g. from Mixamo):
       • Update Skeleton Reference Pose
       • Use T0 As Ref Pose

Creating an Animation Blueprint

Once you have imported the avatar, the next thing to do is create an Animation Blueprint. To do so, right-click on the Skeletal Mesh you just imported and select Create -> Anim Blueprint.

Now, you need to configure this Anim Blueprint to receive the skeleton data from the ZED SDK and to apply it correctly onto the avatar.

  • Open the Anim blueprint and add a ZED Pose component.
  • Create a variable for each input of this component and link it to the Output Pose node.

Setting up the Remap Asset

Your avatar’s joints probably don’t have the same names as the ones given by our plugin. The Remap Asset variable is used to match the joints from the ZED SDK with the joints of your avatar, exactly like the LiveLink Remap asset.

  • Select the Remap Asset variable and, for each of the 34 joints from the ZED SDK, add its matching name from your avatar.

For example, “LEFT_SHOULDER” is mapped to “LeftForeArm”, or “LEFT_KNEE” to “LeftLeg”.
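A remap table of this kind is conceptually just a name-to-name map. An illustrative sketch — the `PELVIS` → `Hips` entry is a made-up example, and the real asset must cover all 34 joints with the bone names of your specific rig:

```cpp
#include <map>
#include <string>

// Illustrative remap table: SDK keypoint name -> avatar bone name.
// Only a few entries shown; bone names depend entirely on your rig.
const std::map<std::string, std::string> kRemap = {
    {"PELVIS",        "Hips"},         // hypothetical entry
    {"LEFT_SHOULDER", "LeftForeArm"},
    {"LEFT_KNEE",     "LeftLeg"},
};

// Lookup with a fallback: unmapped SDK joints are simply skipped.
std::string ToAvatarBone(const std::string& sdkName) {
    auto it = kRemap.find(sdkName);
    return it != kRemap.end() ? it->second : "";
}
```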

Creating an Actor blueprint

Now we have all the pieces to create a Blueprint that can be instantiated in the scene and animated using the ZED SDK skeleton data.

  • Create an Actor Blueprint and add a Skeletal Mesh component.
  • Set the Skeletal Mesh and the Anim class fields with the skeletal mesh you just imported and the Anim Blueprint previously created.

Your Actor is now ready to be automatically instantiated in the scene.

Setting up the Body Tracking Manager

Now, you need to configure the Body Tracking Manager so it will create a new instance of your Actor blueprint for each new detection in the scene and share the keypoint data with its anim blueprint.

  • Click on Edit BP_BodyTracking_Visualizer in the world hierarchy.
  • In the Event Graph, modify the parts of the blueprint highlighted in the following screenshot to use your own instance of the Anim Blueprint.
  • It will share the skeleton data from the ZED SDK with the ZED Pose node of the Anim Blueprint for preprocessing.
  • Then, the preprocessed data will be used to animate your avatar in real time.

Using your own avatar with the Retargeting system of UE5

Import your rigged mesh into Unreal Engine

Importing procedure with Blender

  1. Import the mesh into Blender.
  2. Make sure the mesh is oriented following Blender’s default parameters: facing -Y, Z up.
  3. Rig the mesh.
     • If you’re using Mixamo to make your skeleton, ensure your scale is applied before exporting your fbx/obj to Mixamo.
  4. Pose the mesh in T-pose. (optional)
  5. When exporting from Blender, ensure your scale / transforms are applied beforehand, and use the default FBX export settings.
     • You can leave “Add leaf bones” checked or not, depending on your needs.
  6. Import to Unreal, checking the following settings:
     • Skeletal Mesh
     • Convert Scene (optional)
     • Force Front XAxis (optional)
     • If you’re using a T-pose which is not the rest pose of your rig (e.g. from Mixamo): (optional)
       • Update Skeleton Reference Pose
       • Use T0 As Ref Pose

The first step is to import your rigged mesh into Unreal. If you can, import a character in T-Pose, facing X, or tick “Force Front X Axis” in the import options. The full process to import a mesh is described above.

Note: Having the avatar face X and be in T-Pose is not mandatory, as the pose will be adjusted later in the retargeting process anyway. However, it does make the setup process easier.

Create an IK Rig

Once the new avatar is imported, create an IK Rig targeting its skeletal mesh.

The ZED avatar uses the Mixamo rig. The Retarget Chains of the new avatar should match the ZED avatar’s as closely as possible:

  • Spine: from the pelvis to the upper torso. Do not include the Retarget Root bone.
  • Neck: from the neck to the head bone (not the top of the head).
  • LeftArm/RightArm: from the clavicle to the wrist.
  • LeftLeg/RightLeg: from the hip to the bottom of the foot.

Setup of the retargeter

Create an IK Retargeter from the ZED avatar.

Choose IKR_ZED_Manny as the source IK Rig. If it does not exist in your project, create it first from the SKM_ZED_Manny skeletal mesh located in ZED/Assets/Mannequin/Mesh/ZED_Manny, following the same procedure as for your custom mesh.

Correct the pose of the new avatar to match that of the ZED avatar.

You can test the accuracy of the retargeting and adjust the retargeter settings by playing any Mixamo animation.

Create an animation blueprint

The creation of the animation blueprint to make the new avatar move should be fairly simple. First, create an animation blueprint targeting the skeleton of the new avatar.

Add a Retarget Pose From Mesh node, referencing the IK Retargeter you just created. The “Source Mesh Component” can be left empty, as the new avatar’s mesh will be a child of the ZED avatar’s mesh. However, ensure that the “Use Attached Parent” option is ticked in the node’s Details.

Create an actor

Create an actor as a child of BP_ZED_Manny, the ZED avatar. Add a child Skeletal Mesh Component to its Skeletal Mesh Component.

Set the mesh of this child to the mesh of the new avatar, and the Anim Class to the one you just created. Check that the meshes overlap correctly, then set the “Visible” setting of the parent mesh to false and ensure that the Visibility Based Anim Tick Option is set to “Always Tick Pose and Refresh Bones” on the parent mesh.

Use the actor with the visualizer

Replace the BP_ZED_Manny actor with your new one in the BP_BodyTracking_Visualizer. Uncheck the “Avatars visible” option in the details panel, and click play. You should see your avatar walking in the scene.

Run the Scene

After a short initialization period, the app will pause for a few seconds as the body tracking module loads.

Once it does, you should see an Avatar imitating your movements in real-time.

Note: To see a similar scene already built, check out the L_BodyTracking level. There are also plenty of other sample scenes that will show you the basics of camera tracking, spatial mapping, and more.