What's New


The new ZED SDK 3.4 brings major improvements to positional tracking area recognition and the ZEDfu spatial mapping application, along with many other fixes and SDK enhancements.

This release also adds support for JetPack 4.5.


General Improvements
  • Improved local streaming on the receiver side by optimizing the frame packet reception rate. This significantly reduces the number of frames dropped on the receiver.

Bug Fixes
  • Fixed a random crash that would occur when using sl::ObjectDetectionParameters::enable_mask_output = true.
  • Fixed a random crash that would occur when changing sl::ObjectDetectionRuntimeParameters::object_class_detection_confidence_threshold.
  • Fixed Camera::getStreamingDeviceList() when using multiple subnetworks.
  • Improved API Documentation of sl::FLIP_MODE.


General Improvements
  • Improved local streaming on server side that should reduce packet drops.
Bug Fixes
  • Fixed a bug when using SENSING_MODE::FILL on Jetson that caused a black depth image in fill mode.
  • Added missing texture_confidence_threshold in the RuntimeParameters::save/load functions. This parameter is now correctly saved and loaded.
  • Fixed sl::Mat::write() function when using U16_C1 mat format (provided by MEASURE::DEPTH_U16_MM and MEASURE::DEPTH_U16_MM_RIGHT).
  • Fixed an issue in SVO real-time mode where grab() returned ERROR_CODE::CAMERA_NOT_INITIALIZED when the next image in the SVO was not available within 2 seconds.
    The grab now waits for the next image without applying a 2-second timeout.
    This only affects SVO real-time mode.
  • Fixed memory address sanitizer issue in Object Detection module.
  • Fixed a bug with the InitParameters::load/save functions when using a streaming input type.
    The IP/port was not properly saved in the YAML file, so the load function returned false. It is now correctly saved and parsed during load.
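    A minimal sketch of the now-working round trip (the IP address, port, and file name are illustrative; requires the ZED SDK):

    ```cpp
    sl::InitParameters init_params;
    init_params.input.setFromStream("192.168.1.42", 30000); // illustrative sender IP/port
    init_params.save("init_params");      // writes init_params.yml, now including the stream input

    sl::InitParameters loaded;
    bool ok = loaded.load("init_params"); // previously returned false for a streaming input
    ```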

  • Fixed a bug with IMU online bias correction that led to corrupted stored bias values. The maximum correction has been decreased to avoid incorrect IMU orientation.
    This only affects the ZED Mini and ZED 2.

  • Fixed missing SensorsData::imu::pose_covariance and CameraInformation::sensors_configuration::camera_imu_transform when using Streaming or SVO input mode.
  • Fixed a bug in sl::FusedPointCloud::save() when trying to save a fused point cloud as an OBJ file.


New Features

Positional Tracking
  • Positional Tracking with area memory is now faster and more accurate, leading to improved global tracking accuracy.

    Previously, camera relocalization and loop closures required the camera to be close to a viewpoint and position saved in the database. Relocalization is now more robust and accurate, and will recognize places even when the camera is not close to the saved position.
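    As a reminder, area memory is enabled through the positional tracking parameters. A minimal C++ sketch (the area file name is illustrative; requires the ZED SDK):

    ```cpp
    #include <sl/Camera.hpp>

    int main() {
        sl::Camera zed;
        if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

        // Load a previously exported area memory database for relocalization.
        sl::PositionalTrackingParameters tracking_params;
        tracking_params.area_file_path = "office.area"; // illustrative file name
        zed.enablePositionalTracking(tracking_params);

        sl::Pose pose;
        while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
            // SEARCHING is reported while the SDK relocalizes against the database.
            sl::POSITIONAL_TRACKING_STATE state = zed.getPosition(pose, sl::REFERENCE_FRAME::WORLD);
            if (state == sl::POSITIONAL_TRACKING_STATE::OK) break;
        }
        zed.close();
        return 0;
    }
    ```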

  • Added a new OFFLINE mode in ZEDfu which greatly increases tracking and mapping quality on recorded SVOs.

    An offline two-pass mode for SVOs has been added to ZEDfu to better handle positional tracking loop closures and reduce drift. You can select the "Offline" mode in the main interface when using an SVO file in ZEDfu.

    This will first pre-process the SVO file by creating a complete relocalization database, then re-run the SVO file and use the database to improve positional tracking and mapping.

    This results in a much more precise fused mesh compared to the LIVE method. It also greatly reduces the LOST tracking states that would often occur when running spatial mapping.


  • Added new depth formats MEASURE::DEPTH_U16_MM and MEASURE::DEPTH_U16_MM_RIGHT.

    Retrieved depth maps can now have a MAT_TYPE::U16_C1 type with values stored in millimeters. This is useful for applications and integrations that require a 16-bit depth format.
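    The new format is retrieved like any other measure; a minimal sketch, assuming an opened camera `zed` (requires the ZED SDK):

    ```cpp
    sl::Mat depth_u16;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // U16_C1 depth map with values in millimeters
        zed.retrieveMeasure(depth_u16, sl::MEASURE::DEPTH_U16_MM);
        unsigned short center_mm = 0;
        depth_u16.getValue(depth_u16.getWidth() / 2, depth_u16.getHeight() / 2, &center_mm);
    }
    ```

    Note that a 16-bit millimeter depth map caps out at roughly 65.5 m.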

  • Added a parameter to import external OpenCV calibration files with InitParameters::optional_opencv_calibration_file.

    This allows users to calibrate their stereo camera with OpenCV's stereo calibration module and provide the resulting XML/YAML/JSON calibration file to the ZED SDK.
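    Usage is a single parameter set before opening the camera; a sketch, assuming a calibration file produced with cv::stereoCalibrate (the file name is illustrative; requires the ZED SDK):

    ```cpp
    sl::InitParameters init_params;
    init_params.optional_opencv_calibration_file = "stereo_calib.yml"; // illustrative path
    sl::Camera zed;
    sl::ERROR_CODE err = zed.open(init_params); // the SDK rectifies images using this calibration
    ```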

  • Added the Camera::updateSelfCalibration() function to launch a camera self-calibration update at runtime.

    This allows running self-calibration while the camera is in use, improving depth estimation in rare cases where mechanical shocks or thermal changes have affected left/right image alignment.

    Self-calibration can be called anytime after Camera::open() has succeeded. This function is similar to resetSelfCalibration(), which was removed in 3.0.
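    A minimal sketch of the call site, assuming an opened camera `zed` (requires the ZED SDK):

    ```cpp
    // Any time after a successful open(), e.g. after a shock or a large temperature change:
    if (zed.isOpened())
        zed.updateSelfCalibration(); // re-runs self-calibration at runtime
    ```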

  • Added sl::Resolution getResolution(RESOLUTION resolution) function.

    You can now get the actual resolution (width and height) of a specific camera video mode.
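    For example (requires the ZED SDK):

    ```cpp
    sl::Resolution res = sl::getResolution(sl::RESOLUTION::HD720);
    // res.width == 1280, res.height == 720 for HD720
    ```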

  • Increased image timestamp adjustment speed when the system clock changes (when using NTP/PTP synchronization).

    Previously, the maximum allowed correction of the image timestamp was 500 µs per frame; it is now up to 4 ms per frame.
    The correction is therefore much faster when a sudden jump occurs in the system clock.

  • Fixed an issue with SensorData::imu that provided different data between LIVE input and STREAMING input.

    This issue was reported on GitHub.
    Camera::getSensorsData() now provides the same values whether using LIVE or STREAMING input.

Object Detection
  • Added skeletons keypoints confidence in ObjectData::keypoint_confidence.

    It is now possible to get the confidence of every skeleton keypoint. This can be used to filter out low-confidence keypoints, or to help fuse keypoints seen by multiple stereo cameras.
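    A sketch of keypoint filtering, assuming object detection is already enabled on an opened camera `zed` and `od_runtime_params` is an ObjectDetectionRuntimeParameters instance; the confidence threshold is illustrative (requires the ZED SDK):

    ```cpp
    sl::Objects objects;
    zed.retrieveObjects(objects, od_runtime_params);
    for (const auto &obj : objects.object_list) {
        for (size_t k = 0; k < obj.keypoint.size(); ++k) {
            if (obj.keypoint_confidence[k] < 50.f) // illustrative threshold
                continue;                          // skip low-confidence keypoints
            const sl::float3 &kp = obj.keypoint[k];
            // ... use kp ...
        }
    }
    ```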

  • Added a max range parameter for Object Detection module using ObjectDetectionParameters::max_range.

    You can now set a maximum distance for 3D object detection. This limits the 3D tracking space and removes detections outside the given range.
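    A sketch of the parameter in use, assuming an opened camera `zed`; the range value is illustrative (requires the ZED SDK):

    ```cpp
    sl::ObjectDetectionParameters od_params;
    od_params.enable_tracking = true;
    od_params.max_range = 15.f; // illustrative; expressed in InitParameters::coordinate_units
    zed.enableObjectDetection(od_params);
    ```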

  • ZED Calibration Tool can now be used with a ZED 2.

  • Added a magnetometer calibration feature in ZED Sensor Viewer tool.

    You can start the calibration procedure by selecting "Magnetometer", then "Calibrate Magnetometer", and follow the guidelines.

  • Added Object Detection max_range, keypoint confidence for skeletons, and the getResolution() function.

  • Added a sleep_us function to make a program sleep for a given number of microseconds.

  • Added functions to save SpatialMappingParameters to a file and load them from one.

  • Added an ‘effective_rate’ attribute to BarometerData to retrieve the realtime data acquisition rate.

  • Added an attribute to the sl.Camera class to check whether positional tracking is enabled.

  • Added FLIP_MODE enum.

  • Added object_class_filter attribute to ObjectDetectionParameters to select which objects to track and detect.

  • Fixed transpose_mat, inverse_mat, zeros static methods in Matrix3f and Matrix4f classes.

  • Fixed ObjectData attribute setters.

  • Fixed method zeros() to create a sl.Orientation filled with zeros.

  • Fixed PointCloudChunk and FusedPointCloud vertices: they previously contained only 3D position data and now also include color information.

  • Fixed method to retrieve the chunks in a FusedPointCloud.

  • Fixed methods to get the current RuntimeParameters, StreamingParameters, RecordingParameters, RecordingStatus and StreamingProperties: some attributes could not be retrieved properly.

Bug Fixes
  • Fixed magnetometer data coordinate frame.

    Previously it was given in the coordinate frame of the magnetometer. It is now given in the coordinate frame of the IMU, so that the gyroscope, accelerometer and magnetometer on the ZED 2 share the same coordinate frame.

  • Fixed an issue with Tegra encoder limiting the number of calls to the recording module.

    Previously, it was limited to around 900-1000 enableRecording()/disableRecording() calls due to a bug in the Tegra encoder. Calls are now unlimited.


  • Added compatibility with ZED SDK 3.4.

  • Added support for UINT16 depth maps.

  • Changed parameters names in zedsrc to match the names used in the ZED SDK.

  • Added a new integration to allow using a ZED camera with OpenNI2.

  • Added RGB, Depth and Gray streams.

  • Added camera configuration with YAML files.

  • Added NiTE2 compatibility.


For older releases and changelog, see the ZED SDK release archive.

SDK Downloads

The ZED SDK allows you to add depth, motion sensing and spatial AI to your application. Available as a standalone installer, it includes applications, tools and sample projects with source code. Please check out our GitHub page and SDK documentation for additional resources.

Standard installer

This version of the installer includes the standard dynamic libraries, tools and samples.

Full installer

This version of the installer includes the standard dynamic libraries, tools and samples but also the static libraries and their dependencies.

Maintenance mode versions

These versions are outdated and no longer fully supported; their AI module runs older versions of the models, so performance and accuracy can be significantly lower.

Standard installer (legacy)

This version of the installer includes the standard dynamic libraries, tools and samples.

Full installer (legacy)

This version of the installer includes the standard dynamic libraries, tools and samples but also the static libraries and their dependencies.




Build applications with ZED and your favorite tools and languages using these integrations.


ZED World is a standalone application that allows you to experience mixed reality with stereo pass-through in VR headsets. Requires ZED Mini, Oculus Rift or HTC Vive.