These versions may be dropped in a future release.
These drivers enable the ZED X to function on a wide range of NVIDIA Jetson devices (Xavier, Orin).
Aug 10, 2023
We are excited to announce the release of ZED SDK 4.0, which introduces a range of new features and enhancements to our ZED cameras. Our latest update supports the ZED X and ZED X Mini cameras, designed specifically for autonomous mobile robots in indoor and outdoor environments. We are also introducing an improved NEURAL depth mode, which offers even more accurate depth maps in challenging situations such as low-light environments and textureless surfaces.
We are proud to introduce the new multi-camera Fusion API, which makes it easier than ever to fuse data coming from multiple cameras. This module handles time synchronization and geometric calibration, along with 360° fusion of noisy data coming from multiple cameras and sensor sources. We believe that these updates will unlock even more potential for our users to create innovative applications that push the boundaries of what is possible with depth-sensing technology.
For those upgrading from SDK version 3.X to 4.0, we highly recommend checking out our migration guide to ensure a smooth transition.
Added support for the new ZED X and ZED X Mini cameras.
Added sl::InputType parameters that enable users to use the same code for both GMSL and USB 3.0 cameras without any added complexity.
Introduced new Multi-Camera Fusion API. The new module allows seamless synchronization and integration of data from multiple cameras in real time, providing more accurate and reliable information than a single camera setup. Additionally, the Fusion module offers redundancy in case of camera failure or occlusions, making it a reliable solution for critical applications.
The new sl::Fusion API can be found in a dedicated header. In 4.0 EA (Early Access), multi-camera fusion supports multi-camera capture, calibration, and body tracking fusion. All SDK capabilities will be added to the Fusion API in the final 4.0 release.
Fixed Camera2Geo returning an incorrect geoposition when the position is not in
Fixed the sl::Camera::reboot function, which was consistently returning CAMERA_NOT_DETECTED on Windows.
Fixed sl::Camera::resetPositionalTracking to keep the previous state of the parameter
retrieve_measure and fusion retrieve_image functions in the API.
The sl::Camera::startRegionOfInterestAutoDetection function is used to initiate the detection. With enough motion and frames grabbed, the detected region is automatically applied by default and is also available with
New sl::RegionOfInterestParameters have been added to fine-tune the detection.
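As an illustration of the workflow described above, automatic region-of-interest detection might be used as follows (a minimal sketch; the exact signatures and defaults of startRegionOfInterestAutoDetection and RegionOfInterestParameters should be checked against the API reference):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    // Start the automatic detection with default parameters;
    // sl::RegionOfInterestParameters can be tuned instead (see above).
    sl::RegionOfInterestParameters roi_params;
    zed.startRegionOfInterestAutoDetection(roi_params);

    // Grab frames; once enough motion has been observed across frames,
    // the detected region of interest is applied automatically by default.
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // ... regular processing ...
    }
    zed.close();
    return 0;
}
```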
QUALITY to make it work faster and with all depth modes.
Added the sl::Camera::isCameraSettingSupported method to query camera video control support.
Added sl::InitParameters::enable_image_validity_check. The check is performed in the sl::Camera::grab() method and requires some computation. If an issue is found, the method outputs a warning as the new sl::ERROR_CODE::CORRUPTED_FRAME. It is currently able to catch USB corrupted frames such as green- or purple-tinted frames or buffer corruption, and may also flag widely different left and right images as invalid.
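A minimal sketch of how the new validity check might be used, assuming enable_image_validity_check can simply be enabled on sl::InitParameters (check the API reference for the parameter's exact type and accepted values):

```cpp
#include <sl/Camera.hpp>
#include <iostream>

int main() {
    sl::InitParameters init;
    init.enable_image_validity_check = 1;  // enable the check (accepted values per the API reference)

    sl::Camera zed;
    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    while (true) {
        sl::ERROR_CODE err = zed.grab();
        if (err == sl::ERROR_CODE::CORRUPTED_FRAME) {
            // A tinted or otherwise corrupted frame was caught:
            // skip it instead of feeding it to downstream processing.
            std::cout << "Corrupted frame detected, skipping" << std::endl;
            continue;
        }
        if (err != sl::ERROR_CODE::SUCCESS) break;
        // ... process the valid frame ...
    }
    zed.close();
    return 0;
}
```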
Added the sl::Fusion::stopPublishing method to the Fusion API.
Fixed sl::Fusion::getPosition returning incorrect output when multiple cameras are set up.
Added POSITIONAL_TRACKING_MODE to the ROS 2 Wrapper.
enable_rolling_calibration. Added a new GNSS_CALIBRATION_STATE to get more details about its current state.
Added sl::Fusion::getCurrentTimeStamp to retrieve the timestamp of the currently processed data.
Added sl::CameraParameters::focal_length_metric, which stores the real focal length in millimeters.
Added sl::PlaneDetectionParameters to better control the plane-at-hit function.
INTERACTIVE, Now "3" =
Fixed sl::Camera::getInitParameters in SVO playback mode.
Added the sl::InitParameters::grab_compute_capping_fps parameter to set an upper computation frequency limit on the grab function. This can be useful to obtain a known, constant rate, or to limit the computation load while keeping a short exposure time by setting a high camera capture framerate.
Added sl::PositionalTrackingParameters::POSITIONAL_TRACKING_MODE. By default, the STANDARD mode is enabled and is identical to the previous implementation. The new QUALITY mode enables higher accuracy, especially in challenging cases such as low-feature environments or repetitive textures. Please note that it requires more computation and is, for now, only available with the ULTRA depth mode.
Improved the sl::Camera::setRegionOfInterest function: it now accepts multi-channel images and provides a warning when the format is invalid instead of undefined behavior. Please note that the mask is internally converted to grayscale and only values equal to 0 are ignored.
Added sl::BodyTrackingRuntimeParameters::skeleton_smoothing to configure the smoothing of the fitted skeleton, ranging from 0 (low smoothing) to 1 (high smoothing).
Fixed an issue in NEURAL depth mode where black vertical lines would corrupt the depth map.
Fixed an issue with the MULTI_CLASS_BOX* models on Jetson L4T 32.7 that was introduced in 4.0.3.
Improved fitting for sl::BODY_FORMAT::BODY_38; it is now smoother and avoids getting the model stuck in an unusual position.
Extended sl::FUSION_ERROR_CODE to provide more detailed feedback when using the Geo-tracking module.
ACTION_STATE parameter when body tracking is not activated.
sl::BodyTrackingFusionRuntimeParameters::skeleton_smoothing to be applied to
sl::Camera::setCameraSettings now returns an sl::ERROR_CODE for unavailable settings (particularly with ZED X cameras).
Added an INPUT_TYPE::GMSL field to sl::DeviceProperties for GMSL cameras (support for GMSL cameras is only available on hosts with an NVIDIA Jetson SoC).
INPUT_TYPE input_type field to
Added the InitParameters::async_grab_camera_recovery parameter. When asynchronous automatic camera recovery is enabled and there is an issue with the communication with the camera, grab() will exit after a short period of time and return the new ERROR_CODE::CAMERA_REBOOTING error code; the recovery will run in the background until proper communication is restored. The default (and previous ZED SDK) behavior is synchronous: the grab() function blocks and returns only once camera communication is restored or the timeout has been reached.
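The asynchronous recovery behavior described above might be handled like this (a sketch; error-handling details beyond the names quoted above are assumptions):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::InitParameters init;
    init.async_grab_camera_recovery = true;  // do not block grab() on camera loss

    sl::Camera zed;
    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    while (true) {
        sl::ERROR_CODE err = zed.grab();
        if (err == sl::ERROR_CODE::CAMERA_REBOOTING) {
            // Communication lost: recovery runs in the background.
            // The loop stays responsive and can keep servicing other work.
            continue;
        }
        if (err != sl::ERROR_CODE::SUCCESS) break;
        // ... process the frame ...
    }
    zed.close();
    return 0;
}
```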
The setFromCameraSerialNumber methods from sl::InputType now take an optional BUS_TYPE to choose between USB and GMSL cameras. When unspecified, the method searches for available USB cameras first, then for GMSL cameras.
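For illustration, opening a specific camera by serial number on the GMSL bus might look as follows (the serial number is a placeholder, and the method and enum names are taken from the note above; verify them against the API reference):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::InitParameters init;
    // 12345 is a placeholder serial number. Forcing BUS_TYPE::GMSL restricts
    // the search to GMSL cameras; omitting it searches USB first, then GMSL.
    init.input.setFromCameraSerialNumber(12345, sl::BUS_TYPE::GMSL);

    sl::Camera zed;
    return zed.open(init) == sl::ERROR_CODE::SUCCESS ? 0 : 1;
}
```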
sl::CameraParameters to easily convert camera intrinsic parameters for a given resolution.
Added RESOLUTION::HD1200 (1920x1200) and RESOLUTION::SVGA (960x600) resolutions for ZED X and ZED X Mini cameras.
Added a RESOLUTION::AUTO resolution, which selects RESOLUTION::HD720 for USB cameras and RESOLUTION::HD1200 for GMSL cameras.
Added new VIDEO_SETTINGS for GMSL cameras only:
EXPOSURE_TIME: Image sensor exposure time in ms
ANALOG_GAIN: Analog sensor gain in dB
DIGITAL_GAIN: Digital ISP gain in dB
AUTO_EXPOSURE_TIME_RANGE: Defines the range of the exposure time in automatic control
AUTO_ANALOG_GAIN_RANGE: Defines the range of sensor gain in automatic control. Min/Max range between [1000 - 16000] mdB.
AUTO_DIGITAL_GAIN_RANGE: Defines the range of digital ISP gain in automatic control.
EXPOSURE_COMPENSATION: Exposure target compensation applied after AE; reduces the overall illumination by a factor of F-stops. Values range is [0 - 100] (mapped to [-2.0, 2.0]).
DENOISING: Defines the level of denoising applied to both the left and right images. Values range is [0 - 100].
Camera::getCameraSettings now returns an ERROR_CODE and uses a reference to retrieve the value.
Camera::setCameraSettings now returns an ERROR_CODE instead of void.
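Combining the new support query with the updated return types, a hedged sketch (the gain value is purely illustrative):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    // GMSL-only controls may be absent on USB cameras: query support first.
    if (zed.isCameraSettingSupported(sl::VIDEO_SETTINGS::ANALOG_GAIN)) {
        // Both the setter and the getter now report an ERROR_CODE.
        sl::ERROR_CODE set_err =
            zed.setCameraSettings(sl::VIDEO_SETTINGS::ANALOG_GAIN, 2000);  // illustrative value
        int value = 0;
        sl::ERROR_CODE get_err =
            zed.getCameraSettings(sl::VIDEO_SETTINGS::ANALOG_GAIN, value);
        (void)set_err; (void)get_err;
    }
    zed.close();
    return 0;
}
```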
SENSING_MODE has been removed from the API. To get the same behavior, use the new
Added the new sl::Fusion API in a new header, sl/Fusion.hpp. In 4.0 EA (Early Access), multi-camera fusion supports multi-camera capture, calibration, and body tracking fusion. All SDK capabilities will be added to the Fusion API in the final 4.0 release.
Use the subscribe() method from the sl::Fusion class to connect to all of the publishers and start retrieving data. The sl::Fusion::process() method takes care of all the heavy lifting of synchronizing and fusing the data, and produces the enhanced body tracking information in real time.
sl::Fusion is available for a setup with multiple cameras on a single host, using intra-process communication. The publishers and the single subscriber run on the same machine in this configuration.
sl::Fusion is available for a setup with multiple cameras on multiple hosts, communicating on the local network, enabled by ZED Hub. Each publisher and the single subscriber run on separate machines such as ZED Boxes, in order to distribute computations with devices on the edge.
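A single-host sketch of this subscribe/process workflow; names not quoted above (InitFusionParameters, CommunicationParameters, CameraIdentifier, enableBodyTracking, retrieveBodies) are assumptions about the Fusion API and should be verified against the reference:

```cpp
#include <sl/Fusion.hpp>

int main() {
    sl::Fusion fusion;
    sl::InitFusionParameters init_params;  // assumed init structure
    fusion.init(init_params);

    // Subscribe to a camera publishing on this host (intra-process communication).
    sl::CameraIdentifier uuid(12345);      // placeholder camera serial number
    sl::CommunicationParameters comm;      // assumed defaults: intra-process
    fusion.subscribe(uuid, comm, sl::Transform());  // camera pose in the fused frame

    fusion.enableBodyTracking(sl::BodyTrackingFusionParameters());

    while (true) {
        // process() synchronizes and fuses the data from all publishers.
        if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS) {
            sl::Bodies bodies;
            sl::BodyTrackingFusionRuntimeParameters rt;
            fusion.retrieveBodies(bodies, rt);
            // ... use the fused skeletons ...
        }
    }
    return 0;
}
```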
Added the sl::Fusion::ingestGNSSData method, which integrates GNSS data with the existing tracking data to create a fused position. The resulting position can be accessed both locally and globally using the getPosition and getGeoPose methods, respectively. With this capability, users can achieve highly accurate positioning information for a variety of applications.
Added a GeoPose structure containing the tracked position in the global world reference.
Added a GNSSData structure containing information about GNSS input data.
UTM for different GPS/GNSS coordinate systems.
Added getPosition methods in sl::Fusion that return the fused position in the local and global world frames.
Added Camera2Geo methods in sl::Fusion to convert from a position in the global world reference (world map) frame to a position in the local world frame, and vice versa.
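Putting the GNSS pieces together, a hedged sketch of the fused-position loop (GNSSData accessors and the exact method signatures are assumptions; the coordinates are placeholders):

```cpp
#include <sl/Fusion.hpp>

int main() {
    sl::Fusion fusion;
    fusion.init(sl::InitFusionParameters());  // assumed init structure

    // ... subscribe to a camera publisher as usual ...

    while (true) {
        // Feed each new GNSS fix into the Fusion module.
        sl::GNSSData gnss;
        gnss.setCoordinates(48.8566, 2.3522, 35.0);  // lat, lng, alt (placeholders)
        fusion.ingestGNSSData(gnss);

        if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS) {
            sl::Pose local_pose;
            fusion.getPosition(local_pose);   // fused position, local world frame
            sl::GeoPose geo_pose;
            fusion.getGeoPose(geo_pose);      // fused position, global world frame
        }
    }
    return 0;
}
```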
The new BODY_TRACKING_MODEL enum contains the following models:
HUMAN_BODY_FAST: Keypoints based, specific to human skeletons, real-time performance even on Jetson or low-end GPU cards.
HUMAN_BODY_MEDIUM: Keypoints based, specific to human skeletons, a compromise between accuracy and speed.
HUMAN_BODY_ACCURATE: Keypoints based, specific to human skeletons, state-of-the-art accuracy, requires a powerful GPU.
BODY_FORMAT (see below)
BODY_FORMAT namings from
POSE_34 formats have been renamed to
HUMAN_BODY_ACCURATE models when using BODY_34 are slower but more precise: up to 50% more precise for the fast model, with a runtime difference of around 2 ms on Jetson Orin.
BODY_38: Adds additional key points to BODY_34, specifically on the feet and hands. The body fitting now provides accurate orientations for the feet and hands.
BODY_34: The previous body models are still available
sl::BodyTrackingParameters (and in related methods such as
Added a BODY_KEYPOINTS_SELECTION enum to filter key points for specific use cases. Currently available options include
Added the sl::BodyData class, which contains detection and tracking information for detected human bodies, such as keypoint, joint, and skeleton root information.
Added the Camera::retrieveBodies method to retrieve human body detection data after a Camera::grab call, similar to
Added the sl::Bodies class, which is returned by Camera::retrieveBodies, similar to
sl::BodyTrackingRuntimeParameters classes, which define all of the parameters related to body tracking.
Added BODY_38_PARTS, which lists all key points of the new body format.
Added BODY_38_BONES bone containers for the new body format.
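The new body tracking classes above might be combined as follows (a sketch; field names such as detection_model and body_format are assumptions based on the classes and enums listed above):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    sl::BodyTrackingParameters body_params;
    body_params.detection_model = sl::BODY_TRACKING_MODEL::HUMAN_BODY_FAST;
    body_params.body_format = sl::BODY_FORMAT::BODY_38;  // feet and hand key points
    zed.enableBodyTracking(body_params);

    sl::BodyTrackingRuntimeParameters rt_params;
    rt_params.skeleton_smoothing = 0.5f;  // 0 = low smoothing, 1 = high smoothing

    sl::Bodies bodies;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        zed.retrieveBodies(bodies, rt_params);
        for (const auto& body : bodies.body_list) {
            // body.keypoint holds the 3D key points of the BODY_38 skeleton
            (void)body;
        }
    }
    zed.close();
    return 0;
}
```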
The new OBJECT_DETECTION_MODEL enum contains the following models:
MULTI_CLASS_BOX_FAST: Any objects, bounding-box based.
MULTI_CLASS_BOX_MEDIUM: Any objects, bounding-box based; a compromise between accuracy and speed.
MULTI_CLASS_BOX_ACCURATE: Any objects, bounding-box based; more accurate but slower than the base model.
PERSON_HEAD_BOX_FAST: Bounding-box detector specialized in people's heads; particularly well suited for crowded environments, with improved person localization.
PERSON_HEAD_BOX_ACCURATE: Bounding-box detector specialized in people's heads; particularly well suited for crowded environments, with improved person localization; more accurate but slower than the base model.
CUSTOM_BOX_OBJECTS: For external inference, using your own custom model and/or framework. This mode disables the internal inference engine; the 2D bounding-box detection must be provided.
Added an instance_id parameter in sl::ObjectDetectionParameters (and in related methods such as Camera::disableObjectDetection, ...).
sl::ObjectData. These attributes have been moved to the sl::BodyData class for Body Tracking.
All samples are available in your ZED installation folder and on our GitHub.
Added support for BODY_38 in the Body Tracking scene.
Added pos_tracking.set_as_static for applications with a static camera monitoring a robotics environment.
Added the ~/pose/status topic, publishing the current status of the pose from the ZED SDK.
Added the ~/odom/status topic, publishing the current status of the odometry from the ZED SDK.
Added support for the BODY_38 skeleton models.
Added object_detection.allow_reduced_precision_inference to allow inference to run at a lower precision, improving runtime and memory usage.
Added object_detection.max_range to define an upper depth range for detections.
Start building exciting new applications that recognize and understand your environment.