Body Tracking with UE5

In this tutorial, you will learn how to animate 3D avatars with your own movements using Body Tracking.

ZED Blueprints

The main Stereolabs blueprints used to make body tracking work:

BP_ZED_Initializer

Put the BP_ZED_Initializer in your scene to configure all the features of the ZED camera. This actor is a direct reference to the C++ code of ZEDInitializer.cpp. Pay special attention to these parameters:

- Input Type: whether the SDK computes data from a ZED camera, an SVO file, or a stream input.
- Resolution, FPS, Depth Mode: the main parameters of the ZED camera. See the API Reference for more details. The default values (HD 1080p, 30 fps, Depth Mode Ultra) work best in most cases.
- Loop and Real Time: if you are using an SVO file, you will usually want to enable both. They enable SVO looping and sync the SDK processing time with the SVO timestamps.
- Depth Occlusion: if enabled, virtual objects are hidden by the render of the real scene, using the depth of each pixel of the camera.
- Show Zed Image: if enabled, the ZED camera/SVO video stream is rendered fullscreen in the level viewport at runtime. You may want to leave it on in some cases (mostly MR applications), but in this tutorial we want the virtual scene to occupy the screen.

BP_BodyTracking_Visualizer

This blueprint manages the whole avataring process, from enabling object detection to sending the skeleton data to the animation blueprint.

Enabling the Object Detection:

- Automatically: if the "Start Object Detection Automatically" option is ticked, object detection starts as soon as the ZED camera is ready.
- Manually: if it is not ticked, it can be started manually from the details panel.

In the Retrieve Objects callback, the blueprint:

- Puts a small viewport with the image from the ZED camera / SVO file in the main viewport.
- Loops over each retrieved object and checks its tracking state. If it is a new detection, it spawns an avatar.
- If it is an already tracked skeleton, it finds the corresponding avatar in the avatars actor map and sets the Animation Blueprint's Object Data and Skeletal Mesh variables (see AnimNode_ZEDPose.cpp below).
- Manages avatar height scaling if the "Match Body Height" option is ticked. For now, the estimated height is computed as a ratio between the tracked skeleton's height and the Unreal Engine avatar's height. The Unreal avatar's height is retrieved at the end of the spawning procedure. The scale of the avatar actor is then set at the end of the next Retrieve Objects call, as a check that the skeleton is visible enough for its height to be measured with sufficient accuracy.

BP_ZED_Manny & ABP_ZED_Manny

The BP_ZED_Manny Actor Blueprint includes a skeletal mesh using a specific animation blueprint, ABP_ZED_Manny. The procedure to create your own is detailed below in this article.

NB: as mentioned in this guide, for now, the skeleton used must be in T-Pose to get the closest possible correspondence of the movements.

ABP_ZED_Manny is fed an SlObjectData structure containing all the data about the skeleton, mainly key point positions, bone orientations, and the location of the skeleton. Both of these blueprints come with an alternate version implementing Foot IK to stick their feet to the ground and limit the ice-skating effect.

ZED Pose - Animation Blueprint node

This node is generated by the AnimNode_ZEDPose.cpp file; do not hesitate to take a peek at it. This is the core of the avataring: it applies the rotations of the bones of the SDK skeleton to the bones of the Unreal avatar (using the remap asset).

- Object Data: SlObjectData structure used to retrieve the locations and orientations of key points and bones from the SDK. It must be set externally (e.g. from the BP_BodyTracking_Visualizer). Explore StereolabsBaseTypes.h for more details on this structure.
- Remap Asset: this map of names is used to make the correspondence between the bone names in the SDK and those of the avatar.
- Mirror on Z Axis: enables mirror mode for the avatar. If you raise your right hand, it will raise its left.
- Height Offset: can be used to apply an external offset to the character's Z location (height).
- Rotation Slerp Intensity: intensity of the smoothing applied to the avatar's movements. You can tune it to trade off latency against jitter.
- Root Location Lerp Intensity: intensity of the smoothing applied to the location of the root of the skeleton. Used to avoid the sudden displacements that can happen with unstable data from the camera.
- Skeletal Mesh: used to pass the skeletal mesh to the C++ script, which then manipulates its pose.

What is Body Tracking?

Body Tracking is very similar to the "classic" Object Detection module but uses another highly optimized AI model to detect people's 3D skeletons, expressed as keypoints. It can detect up to 34 keypoints on a single person. For more details about the Body Tracking feature, please take a look at the Body Tracking documentation page.

In Unreal, we will use this 3D keypoint information (more precisely, the orientation of each keypoint) to animate a 3D humanoid avatar based on a person's movements in real time.

Preparing your ZED 2

The Object Detection module can use the floor plane position to make more assumptions. To do so, the floor plane should be visible in the image when starting the module.

Setting Up the Scene

Basic Settings

Now we've got to add a Blueprint from the ZED plugin. By default, content from plugins is hidden. To fix this, click on View Options at the bottom right of the Content Browser and enable Show Plugin Content. Now click on the folder icon beside Content and click on Stereolabs Content to switch to the plugin's content folder.
In the Content Browser, go to Plugins -> Stereolabs Content -> ZED -> Blueprints and drag a BP_ZED_Initializer into the scene. This is the object that sets up your camera and handles communication with your app.

Select the BP_ZED_Initializer blueprint and, in the ZED section, uncheck the Show Zed Image parameter. This parameter has to be disabled so we see the 3D scene and not the ZED's image in fullscreen.

In the Init Parameters section, set the Resolution to 1080p. This is not required but increases object detection accuracy. Set Depth Mode to ULTRA. Also not required, but it can improve object detection accuracy.

If your camera is fixed and will not move, enable Set as Static in the Tracking Parameters section to prevent incorrect drift throughout the scene.

Body Tracking Settings

Open the ZED / Object Detection section of the BP_ZED_Initializer. Change the Object Detection Model to any HUMAN_BODY_XX detection model. Multiple settings are available:

- Image Sync: synchronizes the object detection with the image grab.
- Enable Tracking: if enabled, the ZED SDK tracks objects between frames, providing more accurate data and giving access to more information, such as velocity.
- Enable Body Fitting: the fitting process uses the history of each tracked person to deduce all missing keypoints thanks to the human kinematic constraints used by the body tracking module. It can also extract the local rotation between pairs of neighboring bones by solving the inverse kinematics problem.
- Max Range: defines an upper depth range for detections (in centimeters).
- Person Confidence Threshold: sets the minimum confidence value for a detected person to be published. For example, if set to 40, the ZED SDK needs to be at least 40% confident that a detected person exists.
- Minimum Keypoint Threshold: minimum number of detected keypoints required to consider a person detected. Can be useful to discard partially occluded detections.
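To make the effect of the last three thresholds concrete, here is a minimal, engine-independent sketch of the filtering they describe. The `DetectedPerson` struct and `FilterDetections` function are illustrative stand-ins, not the plugin's actual types:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for a detected person; field names are illustrative,
// not the real ZED SDK structure.
struct DetectedPerson {
    float confidence;        // 0-100, compared against Person Confidence Threshold
    float depth_cm;          // distance from the camera, compared against Max Range
    int   visible_keypoints; // compared against Minimum Keypoint Threshold
};

// Keep only the detections that pass all three thresholds described above.
std::vector<DetectedPerson> FilterDetections(
    const std::vector<DetectedPerson>& in,
    float minConfidence, float maxRangeCm, int minKeypoints)
{
    std::vector<DetectedPerson> out;
    for (const DetectedPerson& p : in) {
        if (p.confidence >= minConfidence &&
            p.depth_cm <= maxRangeCm &&
            p.visible_keypoints >= minKeypoints) {
            out.push_back(p);
        }
    }
    return out;
}
```

For instance, with a confidence threshold of 40, a max range of 800 cm and a minimum of 20 keypoints, a person detected at 3 m with 50% confidence and 30 visible keypoints is kept, while a 30%-confidence detection is discarded.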
Go to Content -> ZED -> Blueprints -> BodyTracking and add a BP_BodyTracking_Visualizer into the scene as well. This object will manage the avataring of the 3D models in the scene.

Importing your own 3D avatar

One 3D avatar is already available in the example project, but you can also import your own model. First, your 3D model needs to be rigged, otherwise it won't be possible to animate it. For now, our plugin can animate an avatar correctly only if it is imported in the right format. The avatar must respect two conditions:

- The first condition concerns the avatar's reference pose. The reference pose is the default pose of the avatar that you see when you open the skeleton asset window. We need this reference pose to be a T-Pose, which corresponds to the pose where the avatar is standing up with its arms held horizontally.
- The second condition concerns the avatar's global orientation when imported. We want to start from a known orientation, so we need the avatar to face the +X direction in its reference pose.

To meet the first condition, you might need to change your avatar's reference pose if it isn't already a T-Pose. You can change it using Blender, for instance. To meet the second condition, tick the Force Front X Axis checkbox to force the avatar to look in the +X direction. This way, we make sure the avatar starts with a known orientation.

Creating an Animation Blueprint

Once you have imported the avatar, the next thing to do is to create an Animation Blueprint. Right-click on the Skeletal Mesh you just imported and select Create -> Anim Blueprint. Now, you need to configure this Anim Blueprint to receive the skeleton data from the ZED SDK and apply it correctly to the avatar. Open the Anim Blueprint and add a ZED Pose node. Create a variable for each input of this node and link it to the Output Pose node.
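Two of the ZED Pose node inputs you just exposed, Rotation Slerp Intensity and Root Location Lerp Intensity, control pose smoothing. Here is a minimal, engine-independent sketch of the idea (Unreal has its own FQuat/FVector math; these standalone `Quat`/`Vec3` types and functions are illustrative only):

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins for Unreal's FQuat and FVector, for illustration only.
struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Linear interpolation of the root location.
// alpha plays the role of Root Location Lerp Intensity.
Vec3 Lerp(const Vec3& a, const Vec3& b, float alpha) {
    return { a.x + (b.x - a.x) * alpha,
             a.y + (b.y - a.y) * alpha,
             a.z + (b.z - a.z) * alpha };
}

// Spherical linear interpolation between two unit quaternions.
// alpha plays the role of Rotation Slerp Intensity: 1 follows the raw SDK
// pose exactly, smaller values smooth jitter at the cost of extra latency.
Quat Slerp(Quat a, const Quat& b, float alpha) {
    float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    if (dot < 0.f) { a = { -a.x, -a.y, -a.z, -a.w }; dot = -dot; }
    if (dot > 0.9995f) {  // nearly identical rotations: fall back to normalized lerp
        Quat q = { a.x + (b.x - a.x) * alpha, a.y + (b.y - a.y) * alpha,
                   a.z + (b.z - a.z) * alpha, a.w + (b.w - a.w) * alpha };
        float n = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
        return { q.x/n, q.y/n, q.z/n, q.w/n };
    }
    float theta = std::acos(dot);
    float sa = std::sin((1.f - alpha) * theta) / std::sin(theta);
    float sb = std::sin(alpha * theta) / std::sin(theta);
    return { a.x*sa + b.x*sb, a.y*sa + b.y*sb, a.z*sa + b.z*sb, a.w*sa + b.w*sb };
}
```

Each frame, the smoothed pose would be `Slerp(previousBoneRotation, sdkBoneRotation, intensity)` per bone and `Lerp(previousRootLocation, sdkRootLocation, intensity)` for the root, which is why lower intensities hide sudden displacements caused by unstable camera data.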
Setting up the Remap Asset

Your avatar's joints probably don't have the same names as the ones used by our plugin. The Remap Asset variable is used to match the joints from the ZED SDK with the joints of your avatar, exactly like the LiveLink remap asset. Select the Remap Asset variable and, for each of the 34 joints from the ZED SDK, add the matching name from your avatar. For example, "LEFT_SHOULDER" is mapped to "LeftForeArm", and "LEFT_KNEE" to "LeftLeg".

Creating an Actor Blueprint

Now we have all the pieces to create a Blueprint that can be instantiated in the scene and animated using the ZED SDK skeleton data. Create an Actor Blueprint and add a Skeletal Mesh component. Set the Skeletal Mesh and Anim Class fields to the skeletal mesh you just imported and the Anim Blueprint previously created. Your Actor is now ready to be automatically instantiated in the scene.

Setting up the Body Tracking Manager

Now you need to configure the Body Tracking Manager so it creates a new instance of your Actor Blueprint for each new detection in the scene and shares the keypoint data with its Anim Blueprint. Click on Edit BP_BodyTracking_Visualizer in the world hierarchy. In the Event Graph, modify the parts of the blueprint highlighted in the following screenshot with your own instance of the Anim Blueprint. It will share the skeleton data from the ZED SDK with the ZED Pose node of the Anim Blueprint for preprocessing. Then, the preprocessed data will be used to animate your avatar in real time.

Run the Scene

After a short initialization period, the app will pause for a few seconds while the object detection module loads. Once it does, you should see an avatar imitating your movements in real time.

Note: to see a similar scene already built, check out the L_BodyTracking level. There are also plenty of other scenes for getting started that show you the basics of camera tracking, spatial mapping, and more.
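Looking back at the Remap Asset step: conceptually, the remap asset is just a name-to-name table that the ZED Pose node consults for every SDK joint. This standalone sketch illustrates the idea; the avatar-side bone names are hypothetical (Mixamo-style), and only a few of the 34 entries are shown:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative remap table: ZED SDK joint names on the left, the bone names
// of a hypothetical avatar on the right. Your avatar's names will differ;
// the real Remap Asset must cover all 34 SDK joints.
std::map<std::string, std::string> MakeRemapAsset() {
    return {
        { "PELVIS",        "Hips"         },
        { "NECK",          "Neck"         },
        { "LEFT_SHOULDER", "LeftForeArm"  },  // example mapping from the text above
        { "LEFT_KNEE",     "LeftLeg"      },  // idem
        // ... remaining joints omitted for brevity
    };
}

// Look up the avatar bone driven by an SDK joint; empty string if unmapped.
std::string RemapBone(const std::map<std::string, std::string>& remap,
                      const std::string& sdkJoint) {
    auto it = remap.find(sdkJoint);
    return it != remap.end() ? it->second : std::string();
}
```

An unmapped joint simply has no avatar bone to drive, which is why a complete table matters: any SDK joint missing from the remap asset leaves the corresponding part of the avatar unanimated.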