STEREOLABS

ZED SDK

4.0 Early Access

What's New

2023-08-10

We are excited to announce the release of ZED SDK 4.0, which introduces a range of new features and enhancements to our ZED cameras. Our latest update supports the ZED X and ZED X Mini cameras, designed specifically for autonomous mobile robots in indoor and outdoor environments. We are also introducing an improved NEURAL depth mode, which offers even more accurate depth maps in challenging situations such as low-light environments and textureless surfaces.

We are proud to introduce the new multi-camera Fusion API, which makes it easier than ever to fuse data coming from multiple cameras. This module handles time synchronization and geometric calibration issues, along with 360° fusion of noisy data coming from multiple cameras and sensor sources. We believe that these updates will unlock even more potential for our users to create innovative applications that push the boundaries of what is possible with depth-sensing technology.

For those upgrading from SDK version 3.X to 4.0, we highly recommend checking out our migration guide to ensure a smooth transition.

Here's a closer look at some of the new features in ZED SDK 4.0:

4.0 (Early Access)

New Features

  • ZED X

    • Added support for the new ZED X and ZED X Mini cameras.
    • Introduced new VIDEO_SETTINGS, RESOLUTION, and sl::InputType parameters that let users run the same code on both GMSL and USB 3.0 cameras without any added complexity.
  • Multi-Camera Fusion

    • Introduced new Multi-Camera Fusion API. The new module allows seamless synchronization and integration of data from multiple cameras in real time, providing more accurate and reliable information than a single camera setup. Additionally, the Fusion module offers redundancy in case of camera failure or occlusions, making it a reliable solution for critical applications.
    • The new API can be found in the header sl/Fusion.hpp as sl::Fusion API. In 4.0 EA (Early Access), multi-camera fusion supports multi-camera capture, calibration, and body tracking fusion. All SDK capabilities will be added to the Fusion API in the final 4.0 release.
  • Geo-tracking

    • Introduced new Geo-tracking API for accurate global location tracking. During a geo-tracking session, the API can constantly update the device's position in the real world by combining data from an external GPS and ZED camera odometry as it moves, delivering latitude and longitude with centimeter-level accuracy.
    • By fusing visual odometry with GPS data, we can compensate for GPS dropouts in challenging outdoor environments and provide more accurate and reliable positioning information in real time.
  • Depth Perception

    • Improved NEURAL depth mode which is now more robust to challenging situations such as low-light, heavy compression, noise, and textureless areas such as interior plain walls, overexposed areas, and exterior sky.
    • Our new ZEDnet Gen 2 AI model now powers the depth sensing module for stereo depth perception, providing enhanced performance. The Neural depth in 4.0 offers a glimpse of what's to come, as we plan to roll out an even more robust model later in the year.
  • Body Tracking

    • Introduced the new Body Tracking Gen 2 module. The new module employs ML to infer up to 38 landmarks of a body from a single frame. It goes beyond the existing pose models, which are trained on only 17 key points, and enables localization of a new topology of 38 human body key points, making it ideal for advanced body tracking use cases.
    • Added a new Body Tracking model:
      • BODY_38: New body model with feet and simplified hands. The body fitting now provides full, accurate orientations for the feet and hands.
  • Object Detection

    • Introduced Object Detection as a standalone module, separate from Body Tracking.
    • Added concurrent execution of different AI models. Users can now run multiple instances of Object Detection and/or Body Tracking simultaneously. For example, body tracking could be run concurrently with a ball object detection model. The ZED SDK’s object detection model could also run in parallel with a user-defined custom object detection model. There are no limits on the number or type of models that can be run. A new instance_id parameter has been added for every object detection function.
    • The new concurrent execution of Object Detection and Body Tracking should enable the creation of accurate digital twins of real-world environments.
  • Platform Support

    • Added support for CUDA 11.8. Support for CUDA 12.0 will follow soon.
    • Added support for Python 3.11.
    • Added support for the new JetPack 5.1 (L4T 35.2). Support for JetPack 5.1.1 (L4T 35.3) will follow soon.
    • Dropped support for older JetPack versions (L4T 32.3 to 32.6).

4.0.7

SDK

  • Added L4T 35.4 compatibility.
  • Corrupted SVOs are now automatically repaired.
  • Reduced Spatial Mapping computing time.
  • Improved USB reliability on Linux.
  • Added automatic ROI exclusion of static objects.
  • Timestamps and dates are now logged in UTC format.
  • Improved confidence ranges on ZED X.
  • Improved the newly released Positional Tracking Mode QUALITY, making it faster and compatible with all depth modes.
  • Added isCameraSettingSupported method for camera video control support.
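The new isCameraSettingSupported method enables a check-before-set pattern for video controls. A minimal Python mock of that pattern (the class, method, and error-code names below are illustrative, not the actual ZED SDK API):

```python
class MockCamera:
    """Mock of a camera whose supported settings vary by model (e.g. a USB
    camera lacks GMSL-only controls such as DENOISING).

    Illustrative only; this is not the ZED SDK API.
    """

    _supported = {"BRIGHTNESS", "EXPOSURE", "GAIN"}

    def is_camera_setting_supported(self, setting: str) -> bool:
        # Query support before attempting to change a control.
        return setting in self._supported

    def set_camera_setting(self, setting: str, value: int) -> str:
        if not self.is_camera_setting_supported(setting):
            return "INVALID_FUNCTION_PARAMETERS"  # illustrative error code
        return "SUCCESS"

cam = MockCamera()
print(cam.set_camera_setting("EXPOSURE", 8))    # supported on this mock
print(cam.set_camera_setting("DENOISING", 50))  # unsupported on this mock
```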

Fusion

  • Added a stopPublishing method to Fusion API.
  • Reduced Camera grab time when publishing data to Fusion.
  • Added synchronized images and depth retrieval from within the Fusion API.

Bug Fixes

  • Fixed incorrect GNSS calibration when the master camera is not at the origin.
  • Fixed Fusion blocking when the publisher closes and reopens.
  • Added an InitParameters option to optionally detect and filter corrupted frames.
  • Fixed a crash in the SDK quill logger in super-verbose mode.
  • Fixed Positional Tracking converging to identity too easily.
  • Fixed Fusion.getPosition returning bad output when multiple cameras are set up.

Tools

  • Allowed ZED Calibration to calibrate recent ZED Minis.
  • Fixed all tools user interfaces on 4k screens.
  • Improved ZED Diagnostics GMSL reports.
  • Added a UI in ZED360 to add remote senders from IP address and port.
  • Added additional pop-up messages in ZED360 to improve user experience.

Samples

  • Reworked Python GNSS sample.
  • Updated multi-camera playback sample to synchronize SVOs.

Wrappers

  • Updated ROS Wrapper with the latest improvements recently released on ROS 2 Wrapper.
  • Added support for POSITIONAL_TRACKING_MODE to ROS 2 Wrapper.
  • Fixed Cython 3.0.x compatibility.
  • Fixed a quality discrepancy of the Body Tracking module between Live Link and the Unreal Engine plugin.

4.0.6

SDK

  • Improved GNSS-VIO Calibration and fusion.
    • Added several functions to make it easier to use: Fusion::getCurrentGNSSCalibrationSTD / Fusion::getGeoTrackingCalibration.
    • Added new options to control its behavior: target_translation_uncertainty / target_yaw_uncertainty / enable_rolling_calibration. Added new GNSS_CALIBRATION_STATE to get more details about its current state.
    • Added timestamp into GeoPose.
  • Added function Fusion::getCurrentTimeStamp to retrieve the timestamp of the current processed data.
  • Added the Spatial Mapping module to the Fusion API. You can now map your environment with multiple ZED cameras at the same time.
  • Improved Spatial Mapping accuracy, better reconstruction of shapes, improved detail when getting closer.
  • Added new attribute sl::CameraParameters::focal_length_metric that stores the real focal length in millimeters.
  • Added sl::PlaneDetectionParameters to better control the plane-at-hit function.
  • Improved the GMSL image capture pipeline; it requires fewer resources and improves latency.
  • Improved GMSL SVGA resolution stability.
  • Added support for specific carrier boards that do not use a muxer before the deserializer.
  • Switched USB to maximum priority on Linux (previously "2" = INTERACTIVE, now "3" = RECORD).
  • Improved USB stability on Linux.
  • Added GMSL auto-recovery.

Bug Fixes

  • Fixed a random issue when opening multiple cameras simultaneously.
  • Fixed the camera motion sensor randomly not being detected during camera opening.
  • Fixed timeout issue and automatic recovery when using 2 ZED X simultaneously.
  • Fixed SVO decoding random issue on H26X lossless SVO on Jetsons.
  • Fixed SVO recording missing IMU data for ZED X.
  • Fixed missing parameters in sl::Camera::getInitParameters in SVO playback mode.

Tools

  • Improved ZEDfu usability, added more parameters to better fit your needs.
  • Fixed point cloud aliasing in some 3D views.
  • Added tooltip in ZED360 to guide users through ZED Hub setup.
  • Added an option in ZEDExplorer to open the camera with the desired id (for example, ZEDExplorer -i 1).
  • Added arguments to SVOEditor (-start-timestamp and -end-timestamp) to cut an SVO between a start and an end timestamp.

Samples

  • Added Spatial Mapping multi camera using Fusion API.
  • Updated Geo-tracking samples to match the new API functionalities.

Wrappers

  • Python samples now match C++ ones.
  • Fixed a wrong ZED X ratio when images are retrieved from the ROS 2 wrapper in medium mode.

Plugin

  • Released the GStreamer v4.0.x package.

4.0.5

SDK

  • Introduced low ZED X camera framerates, adding 15 FPS for all available resolutions. Requires a Jetson driver update (v0.5.4).
  • Added a new InitParameters::grab_compute_capping_fps parameter to set an upper computation frequency limit on the grab function. This can be useful to get a known constant rate, or to limit the computation load while keeping a short exposure time by setting a high camera capture framerate.
  • Improved the Python installer for Windows, where SSL certificates could prevent correct wrapper installation. The script now tries to update the SSL certificates if needed before retrying.
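To illustrate what a compute cap like grab_compute_capping_fps does, here is a minimal, SDK-independent Python sketch of a grab loop throttled to a fixed processing rate (the function and variable names are hypothetical, not SDK identifiers):

```python
import time

def capped_grab_loop(grab, capping_fps: float, n_frames: int):
    """Call grab() at most capping_fps times per second.

    Sketch of the idea behind a compute-capping limit: the camera may
    capture at a high framerate (keeping exposure short), while the
    processing loop runs at a lower, fixed rate.
    """
    period = 1.0 / capping_fps
    timestamps = []
    next_tick = time.monotonic()
    for _ in range(n_frames):
        now = time.monotonic()
        if now < next_tick:
            time.sleep(next_tick - now)  # throttle until the next slot
        grab()
        timestamps.append(time.monotonic())
        next_tick += period
    return timestamps

# 5 frames at a 100 Hz cap take at least ~40 ms end to end.
ts = capped_grab_loop(lambda: None, capping_fps=100.0, n_frames=5)
print(round(ts[-1] - ts[0], 3))
```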

Bug Fixes

  • Resolved a bug that caused the GMSL IMU orientation to become invalid (not a number, or not a valid quaternion) in some cases. This could also lead to SVO file reading issues, since the frames were considered corrupted and skipped.

Tools

  • Fixed depth stabilization inconsistent results in ZEDDepthViewer while in playback mode, specifically when stopping or navigating to non-sequential frames.

4.0.4

SDK

  • Introducing a new PositionalTrackingParameters::POSITIONAL_TRACKING_MODE. By default the STANDARD mode is enabled and identical to the previous implementation. The new QUALITY mode enables higher accuracy, especially in challenging cases such as low-feature environments or repetitive textures. Please note that it requires more computation and is for now only available with ULTRA depth mode.
  • Improved the Custom Object Detection module with better error logging and smarter diagnostics. Added safeguards to avoid getting stuck when no depth is computed, when the box input is invalid, or when the multi-instance id is invalid, and added an internal timeout when no data is available.
  • Added color output to the Spatial Mapping mesh.
  • Added stability counter parameter in Spatial Mapping module. This defines the number of times each point has to be seen to be integrated into the spatial model. Decreasing the value will increase reactivity and integration speed at the cost of accuracy and noise increase.
  • Improved the Camera::setRegionOfInterest function: it now accepts multi-channel images and warns when the format is invalid instead of producing undefined behavior. Note that the image is internally converted to grayscale and only values equal to 0 are ignored.
  • Added new parameter BodyTrackingRuntimeParameters::skeleton_smoothing to configure the smoothing of the fitted skeleton, which ranges from 0 (low smoothing) to 1 (high smoothing).
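As a rough illustration of how a smoothing factor in [0, 1] behaves, here is a toy Python sketch that blends a previous pose with a new detection. This is not the SDK's actual fitting algorithm, only a reading of the parameter range (0 = raw detection, 1 = heavily smoothed):

```python
def smooth_keypoints(prev, new, smoothing: float):
    """Blend previous and new keypoint coordinates.

    Toy model of a [0, 1] smoothing factor: 0 keeps the raw detection
    (low smoothing), values near 1 keep mostly the previous pose (high
    smoothing). Illustrative only.
    """
    assert 0.0 <= smoothing <= 1.0
    return [smoothing * p + (1.0 - smoothing) * n for p, n in zip(prev, new)]

prev = [0.0, 0.0, 0.0]
new = [1.0, 2.0, 3.0]
print(smooth_keypoints(prev, new, 0.0))  # -> [1.0, 2.0, 3.0] (raw detection)
print(smooth_keypoints(prev, new, 0.8))  # heavily smoothed toward prev
```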

Bug Fixes

  • Fixed an issue that could occur with NEURAL depth mode where black vertical lines would corrupt the depth map.
  • Fixed regression in object detection module with MULTI_CLASS_BOX* models on Jetson L4T 32.7, introduced in 4.0.3.

Samples

  • Added a new modern YOLO C++ sample for the Custom Object Detection module that supports YOLOv8, YOLOv6 (v3), and YOLOv5 with TensorRT using an ONNX model.
  • Added a new YOLOv8 Python sample for the Custom Object Detection module that uses PyTorch and the native YOLOv8 Python package `ultralytics`.

Wrappers

  • Docker images are now available for SDK 4.X.

4.0.3

SDK

  • Added GMSL support for Nvidia Jetson Jetpack 35.3.
  • Fixed a new ZED Mini rectification issue that could lead to an image artifact.
  • Improved body fitting for BODY_FORMAT::BODY_38; it is now smoother and avoids the model getting stuck in unusual positions.
  • Reduced body jittering when standing still.
  • Improved Object Detection accuracy.
  • Improved AI overall loading time.

Tools

  • Fixed ZED Diagnostic CUDA 12 report issue.

Samples

  • Fixed python custom detection sample.

Wrappers

  • Updated the Matlab wrapper to support SDK 4.X.

4.0.2

SDK

  • Added new sl::FUSION_ERROR_CODE to provide more detailed feedback while using the Geo-tracking module.
  • Updated default streaming bitrate to keep a constant quality regardless of the selected resolution/fps/encoder.

Bug Fixes

  • Fixed Geo-tracking module initialization.
  • Fixed an issue leading to inconsistent hip positions with the body tracking Fusion module.
  • Fixed body tracking ACTION_STATE parameter when body tracking is not activated.

Support

  • Added CUDA 12.1 support for Windows.

Tools

  • Updated ZED Explorer command line options.
  • Added a function to enable camera streaming directly from ZED Explorer.
  • Fixed random ZED360 calibration issues leading to unstable results.
  • Fixed ZED360 issue when trying to open ZED X camera.

Samples

  • Reworked samples to clearly expose all of the available modules of the ZED SDK.

Wrappers

  • Added missing functions related to body tracking in the Python interface.

Known issues

  • Calibration tool does not support new ZED Mini with fisheye lenses.

4.0.1

Bug Fixes

  • Fixed body tracking fusion with BODY_FORMAT::BODY_34.
  • Fixed inconsistent behavior of BodyTrackingFusionRuntimeParameters::enable_body_fitting.
  • Fixed BodyTrackingFusionRuntimeParameters::skeleton_smoothing to be applied to BODY_FORMAT::BODY_38.
  • Camera::getCameraSettings and Camera::setCameraSettings now return ERROR_CODE for non available settings (particularly with ZED X cameras).
  • Fixed logger file creation without write permission.

Support

  • Jetson running on Jetpack 5.1 (L4T 35.2) now supports ZED X cameras.
  • Added ZED SDK support for Jetpack 5.1.1 (L4T 35.3), ZED X support coming soon.
  • Added CUDA 12.0 partial support, CUDA 12.1 general support coming soon.

Tools

  • Fixed Calibration tool on Windows.

Samples

  • Added Geo Tracking samples.

Known issues

  • Calibration tool does not support new ZED Mini with fisheye lenses.

Major 4.0 API Changes

Video

  • Added new camera models MODEL::ZED_X and MODEL::ZED_XM.
  • Added a new INPUT_TYPE::GMSL input type to sl::DeviceProperties for GMSL cameras (support for GMSL cameras is only available on hosts with an Nvidia Jetson SoC).
  • Added INPUT_TYPE input_type field to sl::DeviceProperties struct.
  • Added non-blocking camera reboot behavior, enabled using the new InitParameters::async_grab_camera_recovery parameter. When asynchronous automatic camera recovery is enabled and there is an issue with camera communication, grab() exits after a short period of time and returns the new ERROR_CODE::CAMERA_REBOOTING error code, while the recovery runs in the background until proper communication is restored. The default (and previous) ZED SDK behavior is synchronous: grab() blocks and returns only once camera communication is restored or the timeout has been reached.
  • setFromCameraID and setFromCameraSerialNumber methods from sl::InputType now take an optional BUS_TYPE to choose between USB or GMSL cameras. When unspecified, the method searches for available USB cameras first, then searches for GMSL.
  • Added new methods getInputType(), getConfiguration(), and isInit() for sl::InputType.
  • Added a scale utility method to sl::CameraParameters to easily convert camera intrinsic parameters for a given resolution.
  • Added RESOLUTION::HD1200 (1920x1200) and RESOLUTION::SVGA (960x600) resolutions for ZED X and ZED X Mini cameras.
  • Added RESOLUTION::AUTO resolution which sets RESOLUTION::HD720 resolution for USB cameras, and RESOLUTION::HD1200 resolution for GMSL cameras.
  • Added new parameters in VIDEO_SETTINGS for GMSL cameras only:
    • EXPOSURE_TIME: Image sensor exposure time in ms
    • ANALOG_GAIN: Analog sensor gain in dB
    • DIGITAL_GAIN: Digital ISP gain in dB
    • AUTO_EXPOSURE_TIME_RANGE: Defines the range of the exposure time in automatic control
    • AUTO_ANALOG_GAIN_RANGE: Defines the range of sensor gain in automatic control. Min/Max range between [1000 - 16000] mdB.
    • AUTO_DIGITAL_GAIN_RANGE: Defines the range of digital ISP gain in automatic control.
    • EXPOSURE_COMPENSATION: Exposure target compensation applied after AE. Reduces the overall illumination by a factor of F-stops. Value range is [0 - 100] (mapped between [-2.0, 2.0]).
    • DENOISING: Defines the level of denoising applied to both left and right images. Value range is [0 - 100].
  • Camera::getCameraSettings now returns an ERROR_CODE and uses a reference to retrieve the value.
  • Camera::setCameraSettings now returns an ERROR_CODE instead of void.
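The scale utility on sl::CameraParameters can be illustrated with the underlying math: pinhole intrinsics scale linearly with image dimensions. A standalone Python sketch of that assumed behavior (the function name is hypothetical, not the SDK's):

```python
def scale_intrinsics(fx, fy, cx, cy, src_res, dst_res):
    """Rescale pinhole intrinsics from one resolution to another.

    Sketch of what a scale utility on camera parameters does: focal
    lengths and the principal point scale linearly with width/height.
    """
    sx = dst_res[0] / src_res[0]  # horizontal scale factor
    sy = dst_res[1] / src_res[1]  # vertical scale factor
    return fx * sx, fy * sy, cx * sx, cy * sy

# HD1200 (1920x1200) down to SVGA (960x600), the two new ZED X resolutions.
# Intrinsic values here are made up for the example.
print(scale_intrinsics(1400.0, 1400.0, 960.0, 600.0, (1920, 1200), (960, 600)))
# -> (700.0, 700.0, 480.0, 300.0)
```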

Depth

  • SENSING_MODE has been removed from the API. To get the same behavior, use the new sl::RuntimeParameters.enable_fill_mode parameter.
  • Improved Fused Point Cloud extraction time by 2.5x.
  • Improved Positional Tracking loop closure time.
  • Improved Windows AI model download error handling: when there's no internet access or if the server can't be reached the exact error is now displayed. It will also auto-retry an alternative link if the primary server can't be reached.

Multi-Camera Fusion API

  • Added sl::Fusion API in a new header sl/Fusion.hpp. In 4.0 EA (Early Access), multi-camera fusion supports multi-camera capture, calibration, and body tracking fusion. All SDK capabilities will be added to the Fusion API in the final 4.0 release.
  • The Fusion API introduces publishers and subscribers to exchange data between source cameras and the fusion module.
  • A publisher will perform its computations and publish its data using the startPublishing() method.
  • A subscriber will use the subscribe() method from the sl::Fusion class to connect to all of the publishers and start retrieving data. The sl::Fusion.process() method takes care of all the heavy lifting of synchronizing and fusing the data, and produces the enhanced body tracking information in real-time.
  • Local workflow: sl::Fusion is available for a setup with multiple cameras on a single host, using intra-process communication. The publishers and the single subscriber run on the same machine in this configuration.
  • Distributed workflow: sl::Fusion is available for a setup with multiple cameras on multiple hosts, communicating on the local network, enabled by ZED Hub. Each publisher and the single subscriber run on separate machines such as ZED Boxes, in order to distribute computations with devices on the edge.
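The publisher/subscriber flow described above can be sketched as a toy Python mock. The classes below only mirror the pattern (startPublishing, subscribe, process, timestamp-based synchronization); they are not the sl::Fusion API:

```python
class Publisher:
    """Mock camera-side publisher: computes locally, then publishes."""

    def __init__(self, name):
        self.name = name
        self.outbox = []

    def start_publishing(self, shared_bus):
        # Register this publisher's output with the fusion side.
        shared_bus.append(self.outbox)

    def publish(self, timestamp, bodies):
        self.outbox.append((timestamp, bodies))

class Fusion:
    """Mock subscriber: connects to publishers and fuses their data."""

    def __init__(self):
        self.bus = []

    def subscribe(self, publisher):
        publisher.start_publishing(self.bus)

    def process(self, timestamp):
        # Take the latest sample at (or before) `timestamp` from every
        # publisher and merge them. The real module also handles geometric
        # calibration and noise; this only shows the synchronization idea.
        merged = []
        for outbox in self.bus:
            candidates = [b for t, b in outbox if t <= timestamp]
            if candidates:
                merged.extend(candidates[-1])
        return merged

cam_a, cam_b = Publisher("A"), Publisher("B")
fusion = Fusion()
fusion.subscribe(cam_a)
fusion.subscribe(cam_b)
cam_a.publish(100, ["skeleton_a"])
cam_b.publish(101, ["skeleton_b"])
print(fusion.process(timestamp=101))  # -> ['skeleton_a', 'skeleton_b']
```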

Geo-tracking API

  • The Geo-tracking API is a new module that brings global-scale location tracking to the ZED SDK. It improves the ZED camera's positional tracking capabilities by incorporating additional GNSS data to achieve global positioning. This is done by utilizing the new sl::Fusion.ingestGNSSdata method, which integrates the GNSS data with the existing tracking data to create a fused position. The resulting position can be accessed both locally and globally using the getPosition and getGeoPose methods, respectively. With this capability, users can achieve highly accurate positioning information that can be used in a variety of applications.
  • Added a GeoPose structure containing the tracked position in global world reference.
  • Added a GNSSData structure containing information about GNSS input data.
  • Added structures ECEF, LatLng, and UTM for different GPS/GNSS coordinate systems.
  • Added getGeoPose and getPosition methods in sl::Fusion that return the fused position in the local and global world frames.
  • Added Geo2Camera and Camera2Geo methods in sl::Fusion to convert from a position in the global world reference (world map) frame to a position in the local world frame, and vice versa.
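As background for the ECEF and LatLng structures above, the standard WGS84 geodetic-to-ECEF conversion behind such coordinate types looks like this (self-contained Python, not SDK code):

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0          # semi-major axis [m]
WGS84_E2 = 6.69437999014e-3  # first eccentricity squared

def latlng_to_ecef(lat_deg, lng_deg, alt_m=0.0):
    """Convert geodetic latitude/longitude/altitude to ECEF x, y, z [m]."""
    lat = math.radians(lat_deg)
    lng = math.radians(lng_deg)
    # Prime-vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lng)
    y = (n + alt_m) * math.cos(lat) * math.sin(lng)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z

# The equator/prime-meridian point lands on the semi-major axis:
print(latlng_to_ecef(0.0, 0.0))  # -> (6378137.0, 0.0, 0.0)
```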

Body Tracking

  • Body Tracking is now a standalone module, with its own specific parameters and functions.
  • The new BODY_TRACKING_MODEL enum contains the following models:
    • HUMAN_BODY_FAST: Keypoints based, specific to human skeleton, real-time performance even on Jetson or low-end GPU cards.
    • HUMAN_BODY_MEDIUM: Keypoints based, specific to human skeletons, compromise between accuracy and speed.
    • HUMAN_BODY_ACCURATE: Keypoints based, specific to human skeletons, state-of-the-art accuracy, requires powerful GPU.
    • The model now depends both on the desired accuracy and on the BODY_FORMAT (see below).
  • Changed BODY_FORMAT namings from POSE_XX to BODY_XX. The POSE_18 and POSE_34 formats have been renamed to BODY_18 and BODY_34 respectively.
  • Compared to ZED SDK 3.8, the HUMAN_BODY_FAST and HUMAN_BODY_ACCURATE models when using BODY_18 and BODY_34 are slower but more precise. Up to 50% more precise for the fast model with a runtime difference of around 2 ms on Jetson Orin.
  • Added a new human body format in BODY_FORMAT:
    • BODY_38: Adds additional key points to BODY_34, specifically on the feet and hands. The body fitting now provides accurate orientations for the feet and hands.
  • BODY_18 and BODY_34: The previous body models are still available.
  • Added the possibility to use multiple body tracking models at once using the instance_id parameter in sl::BodyTrackingParameters (and in related methods such as Camera::getBodyTrackingParameters, Camera::disableBodyTracking, ...).
  • Added a BODY_KEYPOINTS_SELECTION enum to filter key points for specific use cases. Currently available options include BODY_KEYPOINTS_SELECTION::FULL and BODY_KEYPOINTS_SELECTION::UPPER_BODY.
  • Analogous to object detection, body-tracking objects now have their own methods and parameters exposed separately:
    • Added a new sl::BodyData class which contains detection and tracking information on detected human bodies, such as keypoint, joint, and skeleton root information.
    • Added a new Camera::retrieveBodies method to retrieve human body detection data after a Camera::grab call, similar to Camera::retrieveObjects.
    • Added a new sl::Bodies class which is returned from Camera::retrieveBodies, similar to sl::Objects.
    • Added new sl::BodyTrackingParameters and sl::BodyTrackingRuntimeParameters classes which define all of the parameters related to body tracking.
  • Renamed BODY_PARTS to BODY_18_PARTS.
  • Renamed BODY_PARTS_POSE_34 to BODY_34_PARTS.
  • Added BODY_38_PARTS, which lists all key points for the new body format.
  • Renamed BODY_BONES to BODY_18_BONES.
  • Renamed BODY_BONES_POSE_34 to BODY_34_BONES.
  • Added a BODY_38_BONES bones container for the new body format.
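The 3.x to 4.0 renames listed above can be collected into a lookup table that a migration script might use (illustrative; all names come from this changelog):

```python
# Old (3.x) identifier -> new (4.0) identifier, per the renames above.
BODY_API_RENAMES = {
    "POSE_18": "BODY_18",
    "POSE_34": "BODY_34",
    "BODY_PARTS": "BODY_18_PARTS",
    "BODY_PARTS_POSE_34": "BODY_34_PARTS",
    "BODY_BONES": "BODY_18_BONES",
    "BODY_BONES_POSE_34": "BODY_34_BONES",
}

def migrate_identifier(name: str) -> str:
    """Map a 3.x body-tracking identifier to its 4.0 name (identity if new)."""
    return BODY_API_RENAMES.get(name, name)

print(migrate_identifier("BODY_PARTS_POSE_34"))  # -> BODY_34_PARTS
print(migrate_identifier("BODY_38_PARTS"))       # unchanged, new in 4.0
```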

Object Detection

  • Object detection models have been separated into object detection models and human body detection models, in two separate classes: OBJECT_DETECTION_MODEL and BODY_TRACKING_MODEL respectively.
  • The new OBJECT_DETECTION_MODEL enum contains the following models:
    • MULTI_CLASS_BOX_FAST: Any objects, bounding-box based.
    • MULTI_CLASS_BOX_MEDIUM: Any objects, bounding-box based, a compromise between accuracy and speed.
    • MULTI_CLASS_BOX_ACCURATE: Any objects, bounding-box based, more accurate but slower than the base model.
    • PERSON_HEAD_BOX_FAST: Bounding-box detector specialized for people's heads, particularly well suited for crowded environments; person localization is also improved.
    • PERSON_HEAD_BOX_ACCURATE: Bounding-box detector specialized for people's heads, particularly well suited for crowded environments; more accurate but slower than the base model.
    • CUSTOM_BOX_OBJECTS: For external inference, using your own custom model and/or framework. This mode disables the internal inference engine; the 2D bounding-box detection must be provided.
  • Added the possibility to use multiple object detection model instances at once using the instance_id parameter in sl::ObjectDetectionParameters (and in related methods such as Camera::getObjectDetectionParameters, Camera::disableObjectDetection, ...).
  • Renamed MULTI_CLASS_BOX to MULTI_CLASS_BOX_FAST in OBJECT_DETECTION_MODEL.
  • Renamed PERSON_HEAD_BOX to PERSON_HEAD_BOX_FAST in OBJECT_DETECTION_MODEL.
  • Renamed enable_mask_output to enable_segmentation in sl::ObjectDetectionParameters.
  • Removed keypoint_2d, keypoint, keypoint_confidence, local_position_per_joint, local_orientation_per_joint, and global_root_orientation from sl::ObjectData. These attributes have been moved to the sl::BodyData class for Body Tracking.
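For CUSTOM_BOX_OBJECTS, an external detector must supply 2D bounding boxes. A common ingestion step is converting a detector's center-based (cx, cy, w, h) output into corner points. A minimal Python sketch of that conversion (the actual sl:: ingestion types are not reproduced here):

```python
def xywh_to_corners(cx, cy, w, h):
    """Convert a center-based box (cx, cy, w, h) to four corner points.

    Many detectors (e.g. YOLO-family models) output center-based boxes,
    while box-ingestion APIs typically expect corner coordinates.
    """
    x0, y0 = cx - w / 2.0, cy - h / 2.0  # top-left
    x1, y1 = cx + w / 2.0, cy + h / 2.0  # bottom-right
    # Order: top-left, top-right, bottom-right, bottom-left
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

print(xywh_to_corners(100.0, 50.0, 40.0, 20.0))
# -> [(80.0, 40.0), (120.0, 40.0), (120.0, 60.0), (80.0, 60.0)]
```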

Samples

  • Added Multi Camera recording sample.
  • Added new Fusion samples to get started with the new Fusion API and fuse body tracking from multiple ZED Cameras.
  • Added new Geo-tracking samples, a recording, and a playback sample for offline data recording and visualization.
  • Updated Body Tracking samples with new AI models.
  • All samples are available in your ZED installation folder and on our Github.

Integrations

    Python, C#, and C wrappers

    Unity

    • Added support for ZED SDK 4.0.
    • Added support of BODY_38 in the Body Tracking scene.
    • The Fusion API is not available in the Unity plugin for the moment.
      • Use Live Link for Unity to fuse skeletons from multiple cameras in Unity.
    • Get started with our Unity plugin alongside its documentation.

    Live Link for Unity

    • Introduced a new sample that sends body tracking data from a C++ application to Unity via UDP. This sample supports single-camera setups as well as multi-camera setups using the Fusion API.
    • Get started with our Live Link for Unity sample alongside its documentation.

    Unreal Engine

    • Added support for ZED SDK 4.0.
    • Added support of BODY_38 in the Body Tracking level.
    • The Fusion API is not available in the UE5 plugin for the moment.
      • Use Live Link for Unreal Engine 5 to fuse skeletons from multiple cameras in UE5.
    • Get started with our Unreal Engine 5 plugin alongside its documentation.
    • Deprecated the custom Unreal Engine 4 build integration. It will not support ZED SDK 4.0.

    Live Link for Unreal Engine 5

    ROS 2

    • Added support for ZED X and ZED X Mini.
    • Added support for ZED SDK 4.0. Fusion API integration coming soon with native multi-camera support.
    • Improved Positional Tracking module.
      • Added the parameter pos_tracking.set_as_static for applications with a static camera monitoring a robotics environment.
      • New message on topic ~/pose/status with the current status of the pose from the ZED SDK.
      • New message on topic ~/odom/status with the current status of the odometry from the ZED SDK.
    • Added support for the Geo-tracking API fusion module.
    • Added support for the new Body Tracking module.
      • The RVIZ2 plugin supports the new BODY_38 skeleton models.
    • Improved Object Detection module.
      • Added the parameter object_detection.allow_reduced_precision_inference to allow inference to run at a lower precision to improve runtime and memory usage.
      • Added the parameter object_detection.max_range to define an upper-depth range for detections.
    • Added examples of Dockerfile to create Docker images for ROS 2 Humble on Desktop PC and NVIDIA Jetson devices.
    • New examples and tutorials.
    • Other new features and improvements. Full changelog available on GitHub.

    Known issues

    • Maxwell and Kepler GPUs (compute capabilities 35, 50, and 52) are not supported with CUDA 11.x, only with CUDA 10.2.

    Legacy

    For older releases and changelog, see the ZED SDK release archive.
