Tutorial - Using Camera Tracking

This tutorial shows how to get the position and orientation of the camera in real time. The program will loop until 1000 positions have been grabbed. We assume that you have followed the previous tutorials.

Getting Started

  • First, download the latest version of the ZED SDK.
  • Download the C++ or Python sample code.

Code Overview

Open the camera

As in previous tutorials, we create, configure and open the ZED.

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable positional tracking

Once the camera is opened, we must enable the positional tracking module with enablePositionalTracking() in order to get the position and orientation of the ZED.

// Enable positional tracking with default parameters
PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

In the above example, we use the default tracking parameters set in the ZED SDK. For the list of available parameters, check the Tracking API docs.
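
For example, a few commonly used parameters can be changed before enabling the module. The sketch below is only an illustration and not part of the original sample; the field names (enable_area_memory, enable_imu_fusion, initial_world_transform) are assumptions based on the PositionalTrackingParameters struct and should be checked against the Tracking API docs for your SDK version.

// Illustrative sketch: customize tracking parameters before enabling the module
// (field names assumed, verify against the Tracking API docs)
PositionalTrackingParameters tracking_parameters;
tracking_parameters.enable_area_memory = true;   // build/reuse a spatial memory to reduce drift
tracking_parameters.enable_imu_fusion = true;    // fuse inertial data when an IMU is present
sl::Transform initial_pose;
initial_pose.setIdentity();                      // start tracking from the world origin
tracking_parameters.initial_world_transform = initial_pose;
err = zed.enablePositionalTracking(tracking_parameters);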

Capture pose data

Now that motion tracking is enabled, we create a loop to grab and retrieve the camera position. The camera position is given by the class Pose. This class contains the translation and orientation of the camera, as well as the image timestamp and the tracking confidence.

A pose is always linked to a reference frame. The SDK provides two reference frames: REFERENCE_FRAME::WORLD and REFERENCE_FRAME::CAMERA. For more information, see the Coordinate Frames section.

In this tutorial, we retrieve the camera position in the World Frame.

// Track the camera position during 1000 frames
int i = 0;
sl::Pose zed_pose;
while (i < 1000) {
    if (zed.grab() == ERROR_CODE::SUCCESS) {

        // Get the pose of the left eye of the camera with reference to the world frame
        zed.getPosition(zed_pose, REFERENCE_FRAME::WORLD);

        // Display the translation and timestamp
        printf("Translation: Tx: %.3f, Ty: %.3f, Tz: %.3f, Timestamp: %llu\n", zed_pose.getTranslation().tx, zed_pose.getTranslation().ty, zed_pose.getTranslation().tz, zed_pose.timestamp);

        // Display the orientation quaternion
        printf("Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n\n", zed_pose.getOrientation().ox, zed_pose.getOrientation().oy, zed_pose.getOrientation().oz, zed_pose.getOrientation().ow);
        i++;
    }
}
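
Note that getPosition() can also return the displacement relative to the previous frame by passing REFERENCE_FRAME::CAMERA, and the Pose class exposes the tracking confidence mentioned above. The snippet below is a minimal sketch, not part of the original sample; the pose_confidence field name is an assumption to be checked against the Pose API docs.

// Minimal sketch: frame-to-frame pose and tracking confidence
// (pose_confidence is an assumed field name)
sl::Pose delta_pose;
zed.getPosition(delta_pose, REFERENCE_FRAME::CAMERA); // pose relative to the previous frame
printf("Delta Tx: %.3f, Confidence: %d\n", delta_pose.getTranslation().tx, delta_pose.pose_confidence);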

Inertial Data

If an IMU is available (e.g. ZED 2, ZED Mini), the Positional Tracking module will internally fuse visual and inertial data to provide improved positional tracking.

You can also access IMU data using the code below:

SensorsData sensor_data;
if (zed.getSensorsData(sensor_data, TIME_REFERENCE::IMAGE) == ERROR_CODE::SUCCESS) {
    // Get the IMU orientation at the time the image was captured
    auto imu_orientation = sensor_data.imu.pose.getOrientation();
    // Get the IMU linear acceleration
    auto acceleration = sensor_data.imu.linear_acceleration;
    cout << "IMU Orientation: {" << imu_orientation << "}, Acceleration: {" << acceleration << "}\n";
}
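
Beyond orientation and acceleration, the IMU structure also reports angular velocity, and sensor data can be queried at the most recent sample time instead of the image time. The snippet below is a sketch under those assumptions; verify the angular_velocity field and the TIME_REFERENCE::CURRENT behavior in the Sensors API docs.

// Sketch (angular_velocity is an assumed field): read the latest IMU sample, not tied to an image
SensorsData latest_sensor_data;
if (zed.getSensorsData(latest_sensor_data, TIME_REFERENCE::CURRENT) == ERROR_CODE::SUCCESS) {
    auto angular_velocity = latest_sensor_data.imu.angular_velocity; // angular velocity of the camera
    cout << "IMU Angular Velocity: {" << angular_velocity << "}\n";
}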

For more information on Camera-IMU and other onboard sensors, check the Sensors section.

Close the Camera

After tracking the ZED camera position for 1000 frames, we disable the tracking module and close the camera.

// Disable positional tracking and close the camera
zed.disablePositionalTracking();
zed.close();
return 0;

Advanced Example

To learn how to retrieve and display the live position and orientation of the camera in a 3D window, transform pose data, and change coordinate systems and units, check the advanced Motion Tracking sample code.

Next Steps

Read the next tutorials to learn how to use Spatial Mapping and Object Detection.