The ZED SDK allows you to add depth, motion sensing and spatial AI to your application. Available as a standalone installer, it includes applications, tools and sample projects with source code.
ZED SDK 5.2 delivers major performance gains on Jetson, with up to 85% lower CPU load, improved GMSL driver reliability at a 200 Hz IMU rate, and sharper images in low-resolution modes. It also adds support for an advanced zero-copy NV12 interface on Jetson.
This release also introduces the new beta Sensors API (sl::Sensors), a unified interface for managing ZED cameras and Ouster LiDAR devices in a single pipeline — replacing the need for separate APIs and custom fusion code.
This release adds support for JetPack 7.1 / L4T 38.4, unlocking hardware video encoding and decoding on Jetson Thor. Alongside these platform updates, version 5.2 brings important improvements to positional tracking robustness and the Python API, as well as numerous other bug fixes and feature enhancements across the SDK.
Mar 2, 2026
**Bug Fixes**
- Fixed `Camera::retrieveImage` and `CameraOne::retrieveImage` with NV12 views when using the default resolution: the camera resolution is now used automatically instead of requiring the user to set it manually.
- Fixed `Sensors::enableObjectDetection` to work without explicitly setting `fused_objects_group_name` in `sl::ObjectDetectionSensorsParameters`. A default group name is now used automatically when the field is left empty, simplifying setup for single fused group configurations. Setting a custom name is still required when using multiple fused groups.
- Fixed `RawBuffer::getWidth()` and `RawBuffer::getHeight()` always returning 0 when using `Camera::retrieveImage(RawBuffer&)`. The buffer metadata now correctly reports the camera capture resolution.
- Fixed an issue in `Camera::grab()` when the corrupted frame detector was enabled.
- Fixed an issue leading to the left and right buffers being swapped in `RawBuffer` retrieval for ZED X cameras.
- Fixed `CameraOne::retrieveImage` behavior for `VIEW::LEFT_UNRECTIFIED`, which incorrectly returned the rectified image in 5.2.0.
- Fixed an issue where GMSL2 cameras could stay in state `sl::CAMERA_STATE::REBOOTING` indefinitely.
- Fixed `CustomObjectDetectionRuntimeParameters` per-class properties not working with custom object detection models (`ingestCustomBoxObjects` and `ingestCustomMaskObjects`). Per-class properties (confidence threshold, `is_grounded`, tracking parameters, class filtering, etc.) are now correctly applied when using `CUSTOM_BOX_OBJECTS` with `retrieveCustomObjects`.
- `Sensors::enableObjectDetection` now returns `SENSORS_ERROR_CODE::INVALID_FUNCTION_PARAMETERS` when called without adding at least one valid camera beforehand. If at least one camera was added, the behavior is unchanged: leaving `ObjectDetectionSensorsParameters::sensors_ids` empty still enables object detection with all available cameras.

**Improvements**
- Both one-to-one output (1×300×6) and one-to-many output (1×(nc+4)×8400) are handled by the existing custom ONNX detector pipeline.
- Improved performance of `sl::POSITIONAL_TRACKING_MODE::GEN_3` by up to 80%, specifically when `enable_area_memory` is disabled.
- Fixed an issue occurring when `enable_2d_ground_mode` was enabled.
- Fixed a crash in `Camera::saveAreaMap()` that could occur when closing the camera while a save operation was still in progress. Thread synchronization and pointer safety have been strengthened to prevent concurrent access issues.
- Fixed an issue when `enable_imu_fusion` was set to `false`.

**Python API**
- Added pickling support to data and parameter classes (`InitParameters`, `RuntimeParameters`, `Mat`, `Pose`, `InputType`, etc.). This enables `multiprocessing.Process`, `copy.deepcopy()`, joblib, and other serialization frameworks to work with pyzed objects out of the box.
- Device classes (`Camera`, `CameraOne`, `Fusion`, `Sensors`, `Lidar`) raise a clear `TypeError` when pickling is attempted, with a message guiding users to re-open devices in child processes.
- Fixed the `InitFusionParameters.sdk_gpu_id` property referencing the wrong internal member.
- Changed the `InitParameters` default value of `enable_image_validity_check` to `True` to match the C++ default.

**Tools**
- Fixed the `--resolution`, `--frequency`, `--input` and `--serial_number` CLI options being ignored when selecting a camera for recording, streaming, firmware update, or GUI launch.
- The default resolution is now `AUTO`, which resolves to the best default per camera model (e.g. HD720 for USB cameras, HD1200 for GMSL), consistent with `sl::InitParameters` defaults. Applies to both CLI and GUI.
- Fixed the `--resolution` and `--frequency` CLI options being parsed but never applied.
- Depth mode.

**Samples**
- Improved the sensor placer sample (`sensors_api/sensor_placer/cpp`) by adding interactive point-pair matching: Shift+Click the same physical feature on two sensor point clouds to automatically compute and apply the optimal alignment transform.

Feb 10, 2026
- Added `getCudaStream()` to `sl::CameraOne` to match the `sl::Camera` API, enabling unified CUDA stream management.
- Added `sdk_gpu_id` to `InitParametersOne`.
- `grab()` now returns `CORRUPTED_FRAME` only for actual image corruption (green/purple images). Other quality issues (lens obstruction, stereo mismatch, blur, low light) now return `SUCCESS`.
- Fixed `sl::CameraConfiguration::fps` not correctly returning the user-requested frame rate on Windows.
- Deprecated the `InitParameters::async_image_retrieval` parameter; it no longer has any effect.
- Added `sl::Camera::retrieveTensor` and `sl::CameraOne::retrieveTensor` to retrieve an `sl::Tensor` containing the input image pre-processed for inference, with SVO or a live camera. This method works with `sl::TensorParameters` to define input options for deep learning inference. Refer to Tutorial 12 for usage examples.
- Updated `convertCoordinateSystem` and `convertUnit` to accept a `cudaStream_t` as an argument, facilitating operations on GPU `sl::Mat` objects.
- Added `applyTransform` to apply a rotation and translation to a point cloud matrix.
- Added `InputType::setFromGMSLPort(int gmsl_port)` to open GMSL cameras based on their physical connection — useful for static production rigs where wiring remains constant even if serial numbers change.
- Added `NV12` to `MAT_TYPE` and `LEFT_NV12_UNRECTIFIED` / `RIGHT_NV12_UNRECTIFIED` to `VIEW`. These new views enable requesting an NV12 `sl::Mat` via `Camera::retrieveImage` or `CameraOne::retrieveImage`.
- For Advanced Users Only: introduced the `RawBuffer` API, enabling zero-copy access to native camera capture buffers (NvBufSurface / Argus) in NV12 format.

For older releases and changelog, see the ZED SDK release archive.
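The new GMSL-port input and NV12 views can be combined as below. This is a minimal, non-compilable sketch rather than a complete program: the port number is illustrative and error handling is omitted.

```cpp
// Sketch: open a GMSL camera by its physical port and request an NV12 view.
sl::InitParameters init;
init.input.setFromGMSLPort(0);           // physical port, not serial number

sl::Camera zed;
if (zed.open(init) == sl::ERROR_CODE::SUCCESS &&
    zed.grab() == sl::ERROR_CODE::SUCCESS) {
    sl::Mat nv12;                        // allocated with the new MAT_TYPE NV12
    zed.retrieveImage(nv12, sl::VIEW::LEFT_NV12_UNRECTIFIED);
}
zed.close();
```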
- Buffer memory is returned automatically when the `RawBuffer` goes out of scope.
- The RawBuffer API requires defining `SL_ENABLE_ADVANCED_CAPTURE_API` at compilation.
- Camera disconnections now report a `CAMERA_DISCONNECTED` error (requires driver 1.4.0+).
- Added the `getStreamingDeviceList()` static method to `CameraOne`.
- Fixed an issue where `Camera::getStreamingDeviceList()` would incorrectly report `CameraOne` devices.
- Added a `camera_model` property to `StreamingProperties`.
- Fixed an issue in `sl::Camera::setSVOPosition` and `sl::CameraOne::setSVOPosition` that could prevent seeking to the final frame of an SVO file.
- Added the `sl::KeyFrame` class to expose internal frames used for graph and map creation.
- Added new parameters to `ObjectTrackingParameters`:
  - `velocity_smoothing_factor`: tunes the "jitter" of bounding box velocity.
  - `min_velocity_threshold`: clamps low-speed movement to zero.
  - `prediction_timeout_s`: duration to predict a path after losing sight of an object.
  - `min_confirmation_time_s`: required "alive" time before a track is validated.

The Sensors API (`sl::Sensors`) is a major new addition that provides a single, unified interface for managing heterogeneous sensor systems — ZED stereo cameras, ZED One monocular cameras, and Ouster Lidar devices — eliminating the need for separate APIs and manual coordination. It complements the existing API: the original `Camera` and `CameraOne` classes are still supported and improved.

- Lidar devices are added via `sensors.add()`. Previously, Lidar integration required external libraries and custom fusion code.
- The same `sensors.add()` call works for any ZED model (ZED 2, ZED 2i, ZED X, ZED X One). The SDK auto-detects the camera type — no need to choose between `sl::Camera` and `sl::CameraOne`.
- Set each sensor's pose with `sensors.setSensorPose()` and retrieve data in any reference frame (`SENSOR`, `BASELINK`, or `WORLD`). Point clouds and images are automatically transformed.
- The `read()` + `grab()` separation now works across all sensors simultaneously, enabling custom inference between acquisition and SDK processing.
- Batched retrieval methods (e.g., `sensors.retrieveImage()`, `sensors.retrieveMeasure()`) return a `BatchedData<T>`, a map of sensor identifiers to results.
- `sensors.syncSVO()` automatically aligns SVO and OSF files to a common start timestamp for synchronized playback.
- AI modules can target sensor subsets via `sensors_ids` and `instance_module_id`, with separate fusion groups per sensor set.
- A single `sensors.enableRecording()` call records all cameras (SVO) and Lidars (OSF) with synchronized timestamps.
- `sensors.getProcessErrorCodes()` returns per-sensor error diagnostics using `BatchedData<sl::ERROR_CODE>`.
- `sensors.getHealthStatus()` provides per-sensor health, temperature, and connection status.

| Use Case | Recommendation |
|---|---|
| Single ZED camera | Continue using sl::Camera |
| Single ZED One | Continue using sl::CameraOne |
| Multi-camera (cameras only) | sl::Fusion still supported, sl::Sensors recommended |
| Any setup with Lidar | Use sl::Sensors (required) or the new sl::Lidar class |
| New multi-sensor projects | Use sl::Sensors |
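Put together, the calls listed above suggest a pipeline like the following. This is a non-compilable sketch of the beta API: `zed_input`, `lidar_input`, `recording_params`, and `running` are placeholders, and exact signatures may differ from the shipped headers.

```cpp
// Sketch: unified camera + Lidar pipeline with the beta Sensors API.
sl::Sensors sensors;
sensors.add(zed_input);    // any ZED model, auto-detected
sensors.add(lidar_input);  // Ouster Lidar, same call

// Record everything (SVO for cameras, OSF for Lidar) with synced timestamps.
sensors.enableRecording(recording_params);

while (running) {
    if (sensors.grab() == sl::SENSORS_ERROR_CODE::SUCCESS) {
        // Batched retrieval: one map of sensor identifiers to results.
        sl::BatchedData<sl::Mat> images;
        sensors.retrieveImage(images);
    }
}
```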
**SDK**
- Added the `ZED_SDK_H265_FALLBACK_MODE` environment variable:
  - `0` (default): fall back to H.264.
  - `1`: force x265 CPU encoding (high latency, intended for offline use).
  - `2`: return an error (no fallback).

**Python API**
- Added `from_string` methods to most enums.
- Added `check_ai_model_status`, `download_ai_model`, `optimize_ai_model`.
- Added `get_current_timestamp` and `read_fusion_configuration`, and the parameter `override_gravity` to `subscribe`.
- Added `get_communication_parameters`.
- Added `get_camera_settings_range` and `get_svo_position_at_timestamp`.
- Added `blob_from_images`.
- Added `get_coordinate_transform_conversion_3f`, `get_coordinate_transform_conversion_4f`, `convert_coordinate_system_transform`, `convert_coordinate_system_mat`, `get_unit_scale`, `convert_unit_transform`, `convert_unit_mat`, and `compute_rotation_matrix_from_gravity`.
- Added `is_contained_in_resolution`.
- Added `MESH_CREATION` and `TYPE_OF_INPUT_TYPE`.
- Fixed `np.array` typing issues.
- `identity` and `zeros` for Matrix classes are now correctly mapped as `@staticmethod`.
- Updated `sl.Input.get_type` to return the new `TYPE_OF_INPUT_TYPE`.
- Fixed `None` defaults.

**Tools**
- Added support for `--install_path` and the `ZED_DIR` environment variable.
- Added a `-l` flag to list Serial Number, ID, and GMSL Port.

**Samples**
- New samples demonstrating the `sl::Sensors` API with ZED cameras, ZED One cameras, and LiDAR — includes auto-detection, JSON configuration, SVO/OSF playback and recording, object detection, body tracking, reference frame modes, and OpenGL/Rerun viewers.
- New sensor placer tool (`sensors_api/sensor_placer/cpp`) for interactive multi-sensor placement and extrinsic calibration with live 3D point cloud visualization, floor plane detection, IMU gravity alignment, and JSON config save/load.
- New samples for the `sl::Tensor` and `sl::TensorParameters` API for deep learning inference input preparation via `Camera::retrieveTensor()`.
- Migrated samples from `sl::Mat` + `blobFromImage()` to the new `sl::Tensor` API with `retrieveTensor()`.
- Added `ObjectTrackingParameters` with `velocity_smoothing_factor` and `min_velocity_threshold` to the birds-eye viewer and custom detector samples.
- Switched from `GEN_1` to `GEN_3` in positional tracking samples; added TUM trajectory export and keyframe visualization.
- Updated samples to use `getRetrieveMeasureResolution()` for optimal resolution selection.
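The H.265 fallback mode is selected through the environment before launching the application; for example, to force CPU encoding for an offline job:

```shell
# 0 = fall back to H.264 (default), 1 = force x265 CPU encoding, 2 = return an error
export ZED_SDK_H265_FALLBACK_MODE=1
```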