Using the API

Depth Sensing Configuration

To configure depth sensing, use InitParameters at initialization and RuntimeParameters to change specific parameters during use.

C++:

// Set configuration parameters
InitParameters init_params;
init_params.depth_mode = DEPTH_MODE_ULTRA; // Use ULTRA depth mode
init_params.coordinate_units = UNIT_MILLIMETER; // Use millimeter units (for depth measurements)

Python:

# Set configuration parameters
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.DEPTH_MODE_ULTRA # Use ULTRA depth mode
init_params.coordinate_units = sl.UNIT.UNIT_MILLIMETER # Use millimeter units (for depth measurements)
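
InitParameters is set once before opening the camera, while RuntimeParameters is passed to grab() and can be changed at every frame. As an illustration, here is a minimal Python sketch of adjusting a runtime parameter between grabs; the sensing_mode field and SENSING_MODE enum spelling are assumed to follow the same naming convention as the enums above.

# Runtime parameters can be adjusted between grab() calls
# (sensing_mode / SENSING_MODE_FILL are assumed names following the enum style above)
runtime_params = sl.RuntimeParameters()
runtime_params.sensing_mode = sl.SENSING_MODE.SENSING_MODE_FILL # Fill occluded pixels in the depth map
# runtime_params is then passed to grab(), as shown in the next section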

For more information on depth configuration parameters, see Advanced Settings.

Getting Depth Data

To extract the depth map of a scene, use grab() to grab a new image and retrieveMeasure() to retrieve the depth map aligned on the left image. retrieveMeasure() can be used to retrieve a depth map, a confidence map, or a point cloud.

C++:

sl::Mat image;
sl::Mat depth_map;
if (zed.grab() == SUCCESS) {
  // A new image and depth map are available if grab() returns SUCCESS
  zed.retrieveImage(image, VIEW_LEFT); // Retrieve left image
  zed.retrieveMeasure(depth_map, MEASURE_DEPTH); // Retrieve depth
}

Python:

image = sl.Mat()
depth_map = sl.Mat()
runtime_parameters = sl.RuntimeParameters()
if zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
  # A new image and depth map are available if grab() returns SUCCESS
  zed.retrieve_image(image, sl.VIEW.VIEW_LEFT) # Retrieve left image
  zed.retrieve_measure(depth_map, sl.MEASURE.MEASURE_DEPTH) # Retrieve depth

Accessing Depth Values

The depth matrix stores 32-bit floating-point values which represent depth (Z) for each (X,Y) pixel. To access these values, use getValue().

C++:

float depth_value = 0;
depth_map.getValue(x, y, &depth_value);

Python:

# get_value() returns an error code along with the value
err, depth_value = depth_map.get_value(x, y)

By default, depth values are expressed in millimeters. Units can be changed using InitParameters::coordinate_units. Advanced users can retrieve images, depth and point clouds either in CPU memory (default) or in GPU memory using retrieveMeasure(*, *, MEM_GPU).
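
As an illustration, here is a minimal Python sketch of the GPU path, using the same retrieve_measure() arguments that appear later on this page:

# Retrieve the depth map directly into GPU memory instead of CPU memory (default)
depth_map_gpu = sl.Mat()
zed.retrieve_measure(depth_map_gpu, sl.MEASURE.MEASURE_DEPTH, sl.MEM.MEM_GPU)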

Displaying Depth Image

The 32-bit depth map can be displayed as a grayscale 8-bit image. To display the depth map, we scale its values to [0, 255], where 255 (white) represents the closest possible depth value and 0 (black) represents the most distant possible depth value. We call this process depth normalization. To retrieve a depth image, use retrieveImage(depth, VIEW_DEPTH). Do not use the 8-bit depth image in your application for any purpose other than displaying depth.

C++:

sl::Mat depth_for_display;
zed.retrieveImage(depth_for_display, VIEW_DEPTH);

Python:

depth_for_display = sl.Mat()
zed.retrieve_image(depth_for_display, sl.VIEW.VIEW_DEPTH)
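
To actually show this image in a window, one option is OpenCV. The sketch below is an illustration only: it assumes OpenCV is installed and that the Python Mat exposes get_data() to access its buffer as a numpy array, as in the SDK's Python samples.

import cv2
# Display the normalized 8-bit depth view (assumes get_data() returns a numpy array)
cv2.imshow("Depth", depth_for_display.get_data())
cv2.waitKey(1)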

Getting Point Cloud Data

A 3D point cloud with (X,Y,Z) coordinates and RGBA color can be retrieved using retrieveMeasure().

C++:

sl::Mat point_cloud;
zed.retrieveMeasure(point_cloud, MEASURE_XYZRGBA);

Python:

point_cloud = sl.Mat()
zed.retrieve_measure(point_cloud, sl.MEASURE.MEASURE_XYZRGBA)

To access a specific pixel value, use getValue().

C++:

sl::float4 point3D;
// Get the 3D point cloud values for pixel (i,j)
point_cloud.getValue(i, j, &point3D);
float x = point3D.x;
float y = point3D.y;
float z = point3D.z;
float color = point3D.w;

Python:

# Get the 3D point cloud values for pixel (i,j)
err, point3D = point_cloud.get_value(i, j)
x = point3D[0]
y = point3D[1]
z = point3D[2]
color = point3D[3]

The point cloud stores its data in 4 channels, using a 32-bit float for each channel. The last float stores the color information: the R, G, B, and alpha channels (4 x 8-bit) are packed into a single 32-bit float. You can choose between different color formats using MEASURE_XYZ<COLOR>. For example, BGRA color is available using retrieveMeasure(point_cloud, MEASURE_XYZBGRA).
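
To recover the individual color channels from that packed float, its 32-bit pattern can be reinterpreted as four bytes. The Python sketch below is an illustration only: it assumes the XYZRGBA format requested above, and the byte order may differ for other MEASURE_XYZ<COLOR> formats.

import numpy as np
# Reinterpret the 32-bit float of the 4th channel as four 8-bit color values
# (R,G,B,A order is an assumption for MEASURE_XYZRGBA; check against your format)
packed = np.array([point3D[3]], dtype=np.float32)
r, g, b, a = packed.view(np.uint8)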

Measuring Distance in Point Cloud

When measuring distances, use the 3D point cloud instead of the depth map. The Euclidean distance formula lets you calculate the distance of an object relative to the left eye of the camera.

C++:

sl::float4 point3D;
// Measure the distance of a point in the scene represented by pixel (i,j)
point_cloud.getValue(i, j, &point3D);
float distance = sqrt(point3D.x*point3D.x + point3D.y*point3D.y + point3D.z*point3D.z);

Python:

import math

# Measure the distance of a point in the scene represented by pixel (i,j)
err, point3D = point_cloud.get_value(i, j)
distance = math.sqrt(point3D[0]*point3D[0] + point3D[1]*point3D[1] + point3D[2]*point3D[2])

Getting Normal Map

Surface normals can be retrieved using retrieveMeasure(normal_map, MEASURE_NORMALS). Normal maps are useful for traversability estimation and real-time lighting. The output is a 4-channel 32-bit float matrix (X, Y, Z, empty), where the X, Y, Z values encode the direction of the normal vectors.
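
As an illustration, here is a minimal Python sketch following the retrieval pattern used above; the MEASURE_NORMALS spelling is assumed to follow the same enum convention as the other measures on this page.

normal_map = sl.Mat()
# Retrieve the per-pixel surface normals aligned on the left image
zed.retrieve_measure(normal_map, sl.MEASURE.MEASURE_NORMALS)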

Adjusting Depth Resolution

To improve the performance of your application and speed up data acquisition, you can retrieve a lower-resolution measure by specifying the width and height parameters in retrieveMeasure(). You can also specify where you would like the data to be available: in CPU (RAM) or GPU memory.

C++:

sl::Mat point_cloud;
// Retrieve a resized point cloud
// width and height specify the total number of columns and rows for the point cloud dataset
int width = zed.getResolution().width / 2;
int height = zed.getResolution().height / 2;
zed.retrieveMeasure(point_cloud, MEASURE_XYZRGBA, MEM_GPU, width, height);

Python:

point_cloud = sl.Mat()
# Retrieve a resized point cloud
# width and height specify the total number of columns and rows for the point cloud dataset
width = zed.get_resolution().width // 2
height = zed.get_resolution().height // 2
zed.retrieve_measure(point_cloud, sl.MEASURE.MEASURE_XYZRGBA, sl.MEM.MEM_GPU, width, height)

Code Example

For code examples, check out the Tutorial and Sample on GitHub.