Tutorial - Motion Tracking

This tutorial shows how to use the ZED as a motion tracker. The program will loop until 1000 positions are grabbed. We assume that you have followed previous tutorials.

Getting Started

  • First, download the latest version of the ZED SDK.
  • Download the C++ or Python sample code.
  • Follow the instructions on how to build your project in C++ or Python on Windows and Linux.

Code Overview

Create a camera

As in previous tutorials, we create, configure and open the ZED.

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable positional tracking

Once the camera is opened, we must enable the positional tracking module in order to get the position and orientation of the ZED.

// Enable positional tracking with default parameters
sl::PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

In the above example, we keep the default tracking parameters. For the full list of available parameters, check the API documentation.
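Tracking behavior can also be tuned before the module is enabled. The sketch below shows two commonly adjusted fields; the field names (enable_area_memory, enable_pose_smoothing) are taken from the SDK 3.x API and should be verified against the API documentation for your SDK version.

```cpp
// Sketch: tuning positional tracking before enabling it
// (verify these field names against your SDK version)
sl::PositionalTrackingParameters tracking_parameters;
tracking_parameters.enable_area_memory = true;     // use spatial memory to correct drift
tracking_parameters.enable_pose_smoothing = false; // return raw, unsmoothed poses
err = zed.enablePositionalTracking(tracking_parameters);
```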

Capture pose data

Now that the ZED is opened and the positional tracking enabled, we create a loop to grab and retrieve the camera position. The camera position is given by the Pose class. This class contains the translation and orientation of the camera, as well as the image timestamp and the tracking confidence.

A pose is always linked to a reference frame. The SDK provides two reference frames: REFERENCE_FRAME::WORLD and REFERENCE_FRAME::CAMERA. For more information, check Coordinate Frames. In this tutorial, we get the camera position in the World Frame.

// Track the camera position during 1000 frames
int i = 0;
sl::Pose zed_pose;
while (i < 1000) {
    if (zed.grab() == ERROR_CODE::SUCCESS) {

        // Get the pose of the left eye of the camera with reference to the world frame
        zed.getPosition(zed_pose, REFERENCE_FRAME::WORLD);

        // Display the translation and timestamp
        printf("Translation: Tx: %.3f, Ty: %.3f, Tz: %.3f, Timestamp: %llu\n",
               zed_pose.getTranslation().tx, zed_pose.getTranslation().ty,
               zed_pose.getTranslation().tz, zed_pose.timestamp.getNanoseconds());

        // Display the orientation quaternion
        printf("Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n\n",
               zed_pose.getOrientation().ox, zed_pose.getOrientation().oy,
               zed_pose.getOrientation().oz, zed_pose.getOrientation().ow);

Inertial Data

If an IMU is available (e.g. ZED 2, ZED Mini), we can also access inertial data:

bool zed_mini = (zed.getCameraInformation().camera_model == MODEL::ZED_M);

First, we test that the opened camera is a ZED Mini; then we display some useful IMU data, such as the orientation quaternion and the linear acceleration.

if (zed_mini) { // Display IMU data

    // Get IMU data at the time the image was captured
    sl::SensorsData sensors_data;
    zed.getSensorsData(sensors_data, TIME_REFERENCE::IMAGE);

    // Filtered orientation quaternion
    printf("IMU Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n",
           sensors_data.imu.pose.getOrientation().ox, sensors_data.imu.pose.getOrientation().oy,
           sensors_data.imu.pose.getOrientation().oz, sensors_data.imu.pose.getOrientation().ow);

    // Raw acceleration
    printf("IMU Acceleration: x: %.3f, y: %.3f, z: %.3f\n",
           sensors_data.imu.linear_acceleration.x,
           sensors_data.imu.linear_acceleration.y, sensors_data.imu.linear_acceleration.z);
}

The loop runs until the ZED has been tracked for 1000 frames. We display the camera translation (in meters) in the console window, then disable tracking and close the camera before exiting the application.

        i++;
    }
}

// Disable positional tracking and close the camera
zed.disablePositionalTracking();
zed.close();
return 0;

You can now use the ZED as an inside-out positional tracker. Read the next tutorial to learn how to use spatial mapping.