Using the Depth Sensing API
Depth Sensing Configuration #
To enable depth sensing, set options in InitParameters
when initializing the camera. For runtime adjustments—such as toggling depth computation or changing sensing modes—use RuntimeParameters
while the camera is running.
// Set configuration parameters
InitParameters init_params;
init_params.depth_mode = DEPTH_MODE::ULTRA; // Use ULTRA depth mode
init_params.coordinate_units = UNIT::MILLIMETER; // Use millimeter units (for depth measurements)
# Set configuration parameters
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.ULTRA # Use ULTRA depth mode
init_params.coordinate_units = sl.UNIT.MILLIMETER # Use millimeter units (for depth measurements)
// Set depth mode to ULTRA
InitParameters init_parameters = new InitParameters();
init_parameters.depthMode = DEPTH_MODE.ULTRA; // Use ULTRA depth mode
init_parameters.coordinateUnits = UNIT.MILLIMETER; // Use millimeter units (for depth measurements)
For more information on depth configuration parameters, see Depth Settings.
Retrieving Depth Data #
To obtain the depth map of a scene, first call grab()
to capture a new frame, then use retrieveMeasure()
to access the depth data aligned with the left image. The retrieveMeasure()
function allows you to retrieve various types of data, including the depth map, confidence map, normal map, or point cloud, depending on the specified measure type.
sl::Mat image;
sl::Mat depth_map;
if (zed.grab() == ERROR_CODE::SUCCESS) {
// A new image and depth are available if grab() returns SUCCESS
zed.retrieveImage(image, VIEW::LEFT); // Retrieve left image
zed.retrieveMeasure(depth_map, MEASURE::DEPTH); // Retrieve depth
}
image = sl.Mat()
depth_map = sl.Mat()
runtime_parameters = sl.RuntimeParameters()
if zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
    # A new image and depth are available if grab() returns SUCCESS
    zed.retrieve_image(image, sl.VIEW.LEFT)  # Retrieve left image
    zed.retrieve_measure(depth_map, sl.MEASURE.DEPTH)  # Retrieve depth
sl.Mat image = new sl.Mat();
sl.Mat depth_map = new sl.Mat();
uint mWidth = (uint)zed.ImageWidth;
uint mHeight = (uint)zed.ImageHeight;
image.Create(mWidth, mHeight, MAT_TYPE.MAT_8U_C4, MEM.CPU); // Mat needs to be created before use.
depth_map.Create(mWidth, mHeight, MAT_TYPE.MAT_32F_C1, MEM.CPU); // Mat needs to be created before use.
sl.RuntimeParameters runtimeParameters = new sl.RuntimeParameters();
if (zed.Grab(ref runtimeParameters) == sl.ERROR_CODE.SUCCESS) {
// A new image and depth are available if Grab() returns SUCCESS
zed.RetrieveImage(image, VIEW.LEFT); // Retrieve left image
zed.RetrieveMeasure(depth_map, MEASURE.DEPTH); // Retrieve depth
}
Accessing Depth Values #
The depth map is stored in a sl::Mat
object, which acts as a 2D matrix where each element represents the distance from the camera to a specific point in the scene. Each pixel at coordinates (X, Y) contains a 32-bit floating-point value indicating the depth (Z) at that location, typically in millimeters unless otherwise configured.
To access the depth value at a particular pixel, use the getValue()
method provided by the SDK. This allows you to retrieve the distance from the camera to the object at the specified pixel coordinates.
float depth_value=0;
depth_map.getValue(x, y, &depth_value);
err, depth_value = depth_map.get_value(x, y)
depth_map.GetValue(x, y, out float depth_value);
By default, depth values are expressed in millimeters. Units can be changed using InitParameters::coordinate_units
. Advanced users can retrieve images, depth maps, and point clouds either in CPU memory (the default) or in GPU memory using retrieveMeasure(*, *, MEM::GPU)
.
Displaying Depth Image #
The 32-bit depth map can be displayed as a grayscale 8-bit image.
To display the depth map, the ZED SDK scales the real depth values to 8-bit values [0, 255], where 255 (white) represents the closest possible depth value and 0 (black) represents the most distant possible depth value. We call this process depth normalization.
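The normalization described above can be sketched in a few lines (a simplified illustration with hypothetical clamp bounds, not the SDK's actual implementation):

```python
def normalize_depth(depth_mm, min_mm=300.0, max_mm=20000.0):
    """Map a metric depth value (in mm) to an 8-bit intensity:
    near -> 255 (white), far -> 0 (black)."""
    d = min(max(depth_mm, min_mm), max_mm)   # clamp to the displayable range
    return round(255.0 * (max_mm - d) / (max_mm - min_mm))

print(normalize_depth(300.0))    # closest clamped depth -> 255
print(normalize_depth(20000.0))  # farthest clamped depth -> 0
```

Intermediate depths fall linearly between the two extremes, which is why the resulting image reads as a grayscale gradient from near (bright) to far (dark).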
To retrieve a depth image, you can use retrieveImage(depth, VIEW::DEPTH)
.
📌 Note: Do not use the 8-bit depth image in your application for other purposes than displaying depth.
sl::Mat depth_for_display;
zed.retrieveImage(depth_for_display, VIEW::DEPTH);
depth_for_display = sl.Mat()
zed.retrieve_image(depth_for_display, sl.VIEW.DEPTH)
sl.Mat depth_for_display = new sl.Mat();
uint mWidth = (uint)zed.ImageWidth;
uint mHeight = (uint)zed.ImageHeight;
depth_for_display.Create(mWidth, mHeight, MAT_TYPE.MAT_8U_C4, MEM.CPU); // Display views are 8-bit, 4-channel; Mat needs to be created before use.
zed.RetrieveImage(depth_for_display, VIEW.DEPTH);
Getting Point Cloud Data #
The ZED camera can also provide a 3D point cloud, which is a collection of points in 3D space representing the scene. Each point in the point cloud corresponds to a pixel in the depth map and contains its (X, Y, Z) coordinates along with color information (RGBA).
A 3D point cloud with (X,Y,Z) coordinates and RGBA color can be retrieved using retrieveMeasure()
.
sl::Mat point_cloud;
zed.retrieveMeasure(point_cloud, MEASURE::XYZRGBA);
point_cloud = sl.Mat()
zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
sl.Mat point_cloud = new sl.Mat();
uint mWidth = (uint)zed.ImageWidth;
uint mHeight = (uint)zed.ImageHeight;
point_cloud.Create(mWidth, mHeight, MAT_TYPE.MAT_32F_C4, MEM.CPU); // Mat needs to be created before use.
zed.RetrieveMeasure(point_cloud, MEASURE.XYZRGBA);
To access a specific pixel value, use getValue()
.
float4 point3D;
// Get the 3D point cloud values for pixel (i, j)
point_cloud.getValue(i, j, &point3D);
float x = point3D.x;
float y = point3D.y;
float z = point3D.z;
float color = point3D.w;
# Get the 3D point cloud values for pixel (i, j)
err, point3D = point_cloud.get_value(i, j)
x = point3D[0]
y = point3D[1]
z = point3D[2]
color = point3D[3]
float4 point3D = new float4();
// Get the 3D point cloud values for pixel (i, j)
point_cloud.GetValue(i, j, out point3D);
float x = point3D.x;
float y = point3D.y;
float z = point3D.z;
float color = point3D.w;
The point cloud stores its data on 4 channels using a 32-bit float for each channel. The last float is used to store color information, where R, G, B, and alpha channels (4 x 8-bit) are concatenated into a single 32-bit float.
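As an illustration of this packing scheme, the channels can be recovered by reinterpreting the float's bits (a standalone sketch using Python's struct module, independent of the SDK; note that a few byte patterns decode to NaN floats, whose payload bits are not guaranteed to survive a round trip through a Python float):

```python
import struct

def unpack_rgba(color_float):
    """Reinterpret the bits of a 32-bit float as four 8-bit channels."""
    r, g, b, a = struct.unpack('<4B', struct.pack('<f', color_float))
    return r, g, b, a

def pack_rgba(r, g, b, a):
    """Concatenate four 8-bit channels into the bits of one 32-bit float."""
    return struct.unpack('<f', struct.pack('<4B', r, g, b, a))[0]

color = pack_rgba(200, 100, 50, 255)
print(unpack_rgba(color))  # -> (200, 100, 50, 255)
```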
You can choose between different color formats using XYZ<COLOR>
. For example, BGRA color is available using retrieveMeasure(point_cloud, MEASURE::XYZBGRA)
.
Measuring distance in point cloud #
When measuring distances, use the 3D point cloud rather than the depth map: the depth map stores only the Z component, while the Euclidean norm of a point's (X, Y, Z) coordinates gives the actual distance between the object and the left eye of the camera.
float4 point3D;
// Measure the distance of a point in the scene represented by pixel (i,j)
point_cloud.getValue(i, j, &point3D);
float distance = sqrt(point3D.x * point3D.x + point3D.y * point3D.y + point3D.z * point3D.z);
import math
# Measure the distance of a point in the scene represented by pixel (i, j)
err, point3D = point_cloud.get_value(i, j)
distance = math.sqrt(point3D[0] * point3D[0] + point3D[1] * point3D[1] + point3D[2] * point3D[2])
float4 point3D = new float4();
// Measure the distance of a point in the scene represented by pixel (i,j)
point_cloud.GetValue(i, j, out point3D);
float distance = (float)Math.Sqrt(point3D.x * point3D.x + point3D.y * point3D.y + point3D.z * point3D.z);
Getting Normal Map #
Retrieving Surface Normals #
You can obtain a normal map by calling retrieveMeasure()
with the NORMALS
measure type. Surface normals are useful for applications such as traversability analysis and real-time lighting, as they describe the orientation of surfaces in the scene.
The normal map is stored as a 4-channel, 32-bit floating-point matrix, where the X, Y, and Z components represent the direction of the normal vector at each pixel. The fourth channel is unused.
sl::Mat normal_map;
zed.retrieveMeasure(normal_map, MEASURE::NORMALS);
normal_map = sl.Mat()
zed.retrieve_measure(normal_map, sl.MEASURE.NORMALS)
sl.Mat normal_map = new sl.Mat();
uint mWidth = (uint)zed.ImageWidth;
uint mHeight = (uint)zed.ImageHeight;
normal_map.Create(mWidth, mHeight, MAT_TYPE.MAT_32F_C4, MEM.CPU); // Mat needs to be created before use.
zed.RetrieveMeasure(normal_map, MEASURE.NORMALS);
To access the normal vector at a specific pixel, use the getValue()
method, which returns the (X, Y, Z) components of the normal.
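For example, a simple traversability check can be computed from the normal components alone (pure math, independent of the SDK; the "up" axis here is assumed to be +Y, which depends on the coordinate system configured at initialization):

```python
import math

def slope_degrees(nx, ny, nz):
    """Angle between a surface normal and the +Y 'up' axis, in degrees.
    Near 0 deg: horizontal surface (e.g. floor); near 90 deg: vertical (wall)."""
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    cos_angle = abs(ny) / norm           # |dot(normal, up)| for up = (0, 1, 0)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

print(slope_degrees(0.0, 1.0, 0.0))  # flat floor -> 0.0
print(slope_degrees(1.0, 0.0, 0.0))  # vertical wall -> 90.0
```

A robot could then mark pixels whose slope exceeds some threshold (say, 20 degrees) as non-traversable.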
Adjusting Depth Resolution #
To optimize performance and reduce data acquisition time, you can retrieve depth or point cloud data at a lower resolution by specifying the desired width and height in the retrieveMeasure()
function. Additionally, you can choose whether the data is stored in CPU (RAM) or GPU memory by setting the appropriate memory type parameter. This flexibility allows you to balance processing speed and resource usage according to your application’s needs.
sl::Mat point_cloud;
// Retrieve a resized point cloud
// width and height specify the total number of columns and rows for the point cloud dataset
auto width = zed.getResolution().width / 2;
auto height = zed.getResolution().height / 2;
zed.retrieveMeasure(point_cloud, MEASURE::XYZRGBA, MEM::GPU, width, height);
point_cloud = sl.Mat()
# Retrieve a resized point cloud
# width and height specify the total number of columns and rows for the point cloud dataset
width = zed.get_resolution().width // 2
height = zed.get_resolution().height // 2
zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA, sl.MEM.GPU, width, height)
sl.Mat point_cloud = new sl.Mat();
// Retrieve a resized point cloud
// width and height specify the total number of columns and rows for the point cloud dataset
int width = zed.ImageWidth / 2;
int height = zed.ImageHeight / 2;
point_cloud.Create((uint)width, (uint)height, MAT_TYPE.MAT_32F_C4, MEM.GPU); // Mat needs to be created in GPU memory before use.
zed.RetrieveMeasure(point_cloud, MEASURE.XYZRGBA, MEM.GPU, new Resolution(width, height));
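To make the savings concrete: each XYZRGBA point is four 32-bit floats, so halving both dimensions cuts the buffer to a quarter of its size (a back-of-the-envelope sketch; 1920x1080 is used as a hypothetical full resolution):

```python
def point_cloud_bytes(width, height):
    """Buffer size of an XYZRGBA point cloud: 4 channels x 4 bytes per pixel."""
    return width * height * 4 * 4

full = point_cloud_bytes(1920, 1080)            # ~33.2 MB per frame
half = point_cloud_bytes(1920 // 2, 1080 // 2)  # ~8.3 MB per frame
print(full // half)  # -> 4
```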
Code Example #
For code examples, check out the Tutorial and Sample on GitHub.