Camera Tracking with UE5
In this tutorial, you will learn how to drive a virtual camera using a real ZED camera's movements and orientation, by integrating ZED Positional Tracking into Unreal Engine 5.
ZED Blueprints #
These are the main Stereolabs blueprints and actors used to make camera tracking work.
Add a BP_ZED_Initializer to your scene to configure all the features of the ZED camera; this actor is a direct reference to the plugin's C++ code. If a BP_ZED_Initializer is present in the scene and the BP_ZED_GameMode is active, the plugin will look for a ZEDPawn in the scene (creating one if none is found) and instantiate the ZEDCamera on it.
Pay special attention to these parameters:
- Input Type : whether the SDK computes data from a live ZED camera, an SVO file, or a stream input.
- Resolution, FPS, Depth Mode : the main parameters of the ZED camera. See the API Reference for more details. The default values (HD 1080p, 30 FPS, and the Ultra depth mode) work best in most cases.
- Loop and Real Time : if you're using an SVO file, you will usually want to enable both. They loop the SVO playback and sync the SDK's processing time with the SVO timestamps.
- Tracking : In the tracking parameters, you can enable or disable Tracking. If enabled, the virtual ZEDCamera component will use input from the ZED camera’s odometry to move in the virtual scene.
- Enable IMU Fusion : If this option is enabled, the ZED SDK will use the internal IMU data of the ZED Camera, along with visual odometry, to find its orientation and position in the real world. This results in more accurate tracking, so you will want to keep this on most of the time.
- Area Memory : Enable the use of the Area Memory feature of the ZED Camera. It is not mandatory, but should produce more accurate positional tracking data, especially over time, if the environment does not change too much.
- Depth Occlusion : if enabled, virtual objects will be occluded by the render of the real scene, using the per-pixel depth measured by the camera.
- Show Zed Image : If enabled, the ZED Camera/SVO video stream will be rendered in fullscreen in the level viewport at runtime. You may want to leave it on in some cases (MR applications mostly) but in this tutorial, we want the virtual scene to occupy the screen.
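The Depth Occlusion behavior described above boils down to a per-pixel depth comparison. As a minimal sketch (hypothetical helper, not the plugin's rendering code):

```cpp
#include <cassert>

// Sketch of per-pixel depth occlusion: a virtual fragment is hidden
// when the real scene's measured depth at that pixel is closer to the
// camera than the virtual object's depth. The function name is
// illustrative, not part of the plugin's API.
bool IsVirtualPixelOccluded(float RealDepthMeters, float VirtualDepthMeters) {
    return RealDepthMeters < VirtualDepthMeters;
}
```

For example, a virtual object 2 m away is hidden behind a real wall measured at 1 m, but stays visible in front of a real wall at 3 m.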
The ZEDPawn holds the ZEDCamera actor and has a Cine Camera component.
- Camera : edit the Cine Camera component's settings, including intrinsic camera parameters such as focal length and aperture, plus post-processing options and other settings.
- Enable Lerp : enable or disable smoothing of the camera movements, trading off jittering against latency.
- Lerp Intensity : Intensity of the smoothing. The alpha of the linear interpolation that smooths movements is the duration since the previous frame multiplied by this value, and clamped between 0 and 1.
- Toggle Freeze : Tick this to freeze the virtual camera in place, allowing you to reposition the real one.
- Translation Multiplier : the per-axis factors applied to the real camera's translations to compute the virtual camera's translations.
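The Lerp Intensity rule above can be sketched in plain C++ (illustrative helpers, outside Unreal; the real plugin applies this to the ZEDPawn's transform):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the movement smoothing described above (not the plugin's
// actual code): the lerp alpha is the frame delta time multiplied by
// Lerp Intensity, clamped to [0, 1].
float SmoothAlpha(float DeltaSeconds, float LerpIntensity) {
    return std::clamp(DeltaSeconds * LerpIntensity, 0.0f, 1.0f);
}

// Standard linear interpolation toward the tracked target position.
float SmoothPosition(float Current, float Target, float Alpha) {
    return Current + (Target - Current) * Alpha;
}
```

A higher intensity makes alpha reach 1 sooner, meaning less smoothing (and less latency) but more visible jitter; a lower intensity smooths more at the cost of the virtual camera lagging behind the real one.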
The ZEDCamera actor is spawned under the ZEDPawn at the start of the scene. It mainly manages three things:
- Firstly, the rendering cameras and the planes where the plugin applies the textures created from the SDK data.
- Secondly, the optional recording or playback of an SVO file.
- Thirdly, the activation and deactivation of the camera tracking at runtime.
What is Camera Tracking? #
Camera Tracking is the most basic use of your ZED camera in Unreal Engine 5. It uses visual odometry, along with IMU sensor data if enabled, to track the ZED camera's position in the real world in real time. The ZEDCamera actor comes with a Cine Camera component, allowing advanced control over how the virtual scene is rendered in the final image. This sample also features camera repositioning, translation speed parameters, and basic camera stabilization, all configurable in the ZEDPawn details panel.
Setting Up the Scene #
Basic Settings #
First, we need to add a Blueprint from the ZED plugin, but by default, content from plugins is hidden. To fix this, click View Options at the bottom right of the Content Browser and enable Show Plugin Content.
Now click on the folder icon beside Content and click on Stereolabs Content to switch to the plugin’s content folder.
In the Content Browser, go to Plugins -> Stereolabs Content -> ZED -> Blueprints and drag a BP_ZED_Initializer into the scene. This is the object that sets up your camera and handles communication with your app.
- Select the BP_ZED_Initializer blueprint and, in the ZED section, uncheck the Show Zed Image parameter. This parameter must be disabled so we see the 3D scene rather than the ZED's image in fullscreen.
- Check that the Tracking is enabled in the Tracking Parameters of the BP_ZED_Initializer, along with Enable IMU Fusion.
- Check that Set as Static is disabled, since your camera will move.
- You can enable Set Floor as Origin to spawn the camera above the world’s origin, at a height matching the height of the real camera. If not, it will spawn at the world’s origin.
Adding a ZEDPawn #
For better control over where the ZEDCamera will start in the scene, you can also drag a ZEDPawn into it. It can be found either:
- in the Plugins -> Stereolabs C++ Classes -> ZED -> Public -> Core -> ZEDPawn
- or in the Place Actors panel.
With the BP_ZED_GameMode active, the BP_ZED_Initializer will look for a ZEDPawn in the scene. If it finds one, the starting location of the virtual camera in the virtual world will be the location of this ZEDPawn.
Note : whether or not the Set Floor as Origin parameter is enabled, the world origin's X and Y coordinates are replaced with the X and Y coordinates of the ZEDPawn (effectively relocating the world's origin above or under the ZEDPawn). Furthermore, if it is enabled, the ZEDPawn will also be offset by the real camera's height once the floor is found.
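The start-location rule described in the note can be sketched as follows (hypothetical names and struct, not the plugin's actual code):

```cpp
#include <cassert>

// Minimal stand-in for Unreal's FVector, for illustration only.
struct FVec3 { float X, Y, Z; };

// Sketch of the starting-location rule: the virtual camera's X and Y
// always come from the ZEDPawn; with Set Floor as Origin enabled, the
// pawn is additionally offset by the real camera's height above the
// detected floor.
FVec3 StartLocation(const FVec3& PawnLocation,
                    bool bSetFloorAsOrigin,
                    float RealCameraHeightAboveFloor) {
    FVec3 Start = PawnLocation;                    // X/Y follow the pawn
    if (bSetFloorAsOrigin) {
        Start.Z += RealCameraHeightAboveFloor;     // applied once the floor is found
    }
    return Start;
}
```

So a pawn placed at (1, 2, 0) with Set Floor as Origin enabled and a real camera 1.5 m above the floor would start the virtual camera at (1, 2, 1.5).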
ZEDPawn Features #
Movement smoothing #
Enable or disable Lerp, and adjust its intensity according to your needs.
In this example, the camera's FPS init parameter is set to 15 to simulate a low framerate. Without linear interpolation, a lot of jittering occurs, since the camera sensors are very accurate and every movement is noticeable.
Camera repositioning #
Click Toggle Freeze, reposition your real camera, then click it again to unfreeze and continue moving from the new position. The translations and rotations are applied in the camera's local space: moving the real camera forward moves it forward in the virtual scene.
Translation multiplier #
The real camera's translations are multiplied by the Translation Multiplier vector's values. This can be used to completely lock an axis, or to amplify translations.
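The per-axis scaling can be sketched like this (hypothetical helper and struct, not the plugin's actual code):

```cpp
#include <cassert>

// Minimal stand-in for Unreal's FVector, for illustration only.
struct FVec3 { float X, Y, Z; };

// Sketch of the Translation Multiplier: each axis of the real camera's
// translation delta is scaled by the matching component of the
// multiplier before being applied to the virtual camera.
FVec3 ApplyTranslationMultiplier(const FVec3& RealDelta, const FVec3& Multiplier) {
    return { RealDelta.X * Multiplier.X,
             RealDelta.Y * Multiplier.Y,
             RealDelta.Z * Multiplier.Z };
}
```

For example, a multiplier of (1, 1, 0) locks the vertical axis entirely, while (2, 2, 2) makes the virtual camera move twice as far as the real one.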
Run the Scene #
After a short initialization period, you will see the virtual scene from the ZEDPawn position. Your position may shift a short time after the level starts, if the floor is found, to adjust the virtual camera position to match the real one.
You can now move in the virtual scene by moving the camera in the real world!
Go further #
Nearly all the other samples use camera tracking at some level, since it is one of the most basic features provided by the essential BP_ZED_Initializer present in every level that uses the ZED. Feel free to play around with the ZEDPawn in those other maps; you can position it before starting the level to place the camera more accurately.
Note: To see a similar scene already built, check out the L_BodyTracking level. There are also plenty of other getting-started scenes that show you the basics of body tracking, spatial mapping, and more.