The ZED SDK allows you to add depth, motion sensing and spatial AI to your application. Available as a standalone installer, it includes applications, tools and sample projects with source code.
ZED SDK 5.2 delivers major performance gains on Jetson with up to 85% lower CPU load, improved GMSL driver reliability at 200 Hz IMU rate, and sharper images in low-resolution modes. It adds support for an advanced zero-copy NV12 interface on Jetson.
This release also introduces the new beta Sensors API (sl::Sensors), a unified interface for managing ZED cameras and Ouster LiDAR devices in a single pipeline, eliminating the need for separate APIs and custom fusion code.
This release adds support for JetPack 7.1 / L4T 38.4, unlocking hardware video encoding and decoding on Jetson Thor. Alongside these platform updates, version 5.2 brings important improvements to positional tracking robustness and the Python API, as well as numerous other bug fixes and feature enhancements across the SDK.
Feb 10, 2026
### CameraOne

- Added `getCudaStream()` to `sl::CameraOne`, matching the `sl::Camera` API and enabling unified CUDA stream management (see the sketch below).
- Added `sdk_gpu_id` to `InitParametersOne`.
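A minimal sketch of the unified stream management this enables, assuming the standard `CameraOne` open flow; the `sl/CameraOne.hpp` header name follows the SDK's usual layout, and error handling is trimmed:

```cpp
// Share the SDK's CUDA stream with user kernels -- now possible on CameraOne too.
#include <sl/CameraOne.hpp>

int main() {
    sl::CameraOne zed;
    sl::InitParametersOne init;
    init.sdk_gpu_id = 0;  // new in 5.2: pin the SDK to a specific GPU
    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    // Same accessor as sl::Camera::getCudaStream(): enqueue your own kernels
    // on this stream to serialize them with the SDK's GPU work.
    cudaStream_t stream = zed.getCudaStream();
    (void)stream;

    zed.close();
    return 0;
}
```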
### Core SDK

- Fixed an issue where `grab()` could return `CORRUPTED_FRAME` instead of `SUCCESS`.
- Fixed `sl::CameraConfiguration::fps` not correctly returning the user-requested frame rate on Windows.
- Deprecated the `InitParameters::async_image_retrieval` parameter; it no longer has any effect.
- Added `sl::Camera::retrieveTensor` and `sl::CameraOne::retrieveTensor` to retrieve an `sl::Tensor` containing the input image pre-processed for inference, from an SVO or a live camera. The method works with `sl::TensorParameters` to define input options for deep learning inference; refer to Tutorial 12 for usage examples, and see the sketch after this list.
- Updated `convertCoordinateSystem` and `convertUnit` to accept a `cudaStream_t` argument, facilitating operations on GPU-memory `sl::Mat` objects.
- Added `applyTransform` to apply a rotation and translation to a point cloud matrix.
- Added `InputType::setFromGMSLPort(int gmsl_port)` to open GMSL cameras by their physical connection, useful for static production rigs where the wiring remains constant even if serial numbers change.
- Added `NV12` to `MAT_TYPE`, and `LEFT_NV12_UNRECTIFIED` / `RIGHT_NV12_UNRECTIFIED` to `VIEW`. These new views enable requesting an NV12 `sl::Mat` via `Camera::retrieveImage` or `CameraOne::retrieveImage`.
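A minimal sketch of the tensor retrieval path; the exact `retrieveTensor` argument order and the `sl::TensorParameters` defaults are assumptions here, so defer to Tutorial 12 for authoritative usage:

```cpp
// Retrieve a pre-processed inference input instead of converting an sl::Mat manually.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    sl::TensorParameters tensor_params;  // input options for the network (layout, size, ...)
    sl::Tensor input_tensor;             // the image pre-processed for inference

    if (zed.grab() == sl::ERROR_CODE::SUCCESS &&
        zed.retrieveTensor(input_tensor, tensor_params) == sl::ERROR_CODE::SUCCESS) {
        // Feed input_tensor's buffer to your inference runtime here.
    }
    zed.close();
    return 0;
}
```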
For advanced users only, this release introduces the RawBuffer API, enabling zero-copy access to native camera capture buffers (NvBufSurface / Argus) in NV12 format:

- Buffers are released automatically when the `RawBuffer` goes out of scope.
- Requires defining `SL_ENABLE_ADVANCED_CAPTURE_API` at compilation.

Other changes:

- Camera disconnections are now reported with a `CAMERA_DISCONNECTED` error, for both `Camera` and `CameraOne` (requires driver 1.4.0+).
- Added a `getStreamingDeviceList()` static method to `CameraOne`.
- Fixed an issue where `Camera::getStreamingDeviceList()` would incorrectly report CameraOne devices.
- Added a `camera_model` property to `StreamingProperties`.
- Improved the `sl::Camera::setSVOPosition` and `sl::CameraOne::setSVOPosition` methods.
- Fixed an issue in `setSVOPosition` that could prevent seeking to the final frame of an SVO file.
- Added the `sl::KeyFrame` class to expose the internal frames used for graph and map creation.
- Added new fields to `ObjectTrackingParameters` (a configuration sketch follows this list):
  - `velocity_smoothing_factor`: tunes the "jitter" of bounding-box velocity.
  - `min_velocity_threshold`: clamps low-speed movement to zero.
  - `prediction_timeout_s`: how long to keep predicting a path after losing sight of an object.
  - `min_confirmation_time_s`: required "alive" time before a track is validated.
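A short configuration sketch with illustrative values; the field names come from this release, but how the struct is attached to the object detection module is not spelled out in these notes, so that step is left as a commented placeholder:

```cpp
// Tune the new object-tracking velocity and lifecycle controls.
#include <sl/Camera.hpp>

void configureTracking() {
    sl::ObjectTrackingParameters tracking;
    tracking.velocity_smoothing_factor = 0.7f;  // damp bounding-box velocity jitter
    tracking.min_velocity_threshold    = 0.05f; // clamp low-speed movement to zero
    tracking.prediction_timeout_s      = 0.5f;  // keep predicting a lost track for 0.5 s
    tracking.min_confirmation_time_s   = 0.2f;  // a track must live 0.2 s before validation
    // Attach `tracking` to your detection setup before enabling the module
    // (hypothetical wiring -- check the 5.2 headers for the actual member).
}
```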
### Sensors API (Beta)

The Sensors API (`sl::Sensors`) is a major new addition: a single, unified interface for managing heterogeneous sensor systems (ZED stereo cameras, ZED One monocular cameras, and Ouster Lidar devices), eliminating the need for separate APIs and manual coordination. It is an addition to the existing API; the original `Camera` and `CameraOne` classes remain supported and improved.

- Native Lidar support: Ouster devices are added with a plain `sensors.add()` call. Previously, Lidar integration required external libraries and custom fusion code.
- Unified camera handling: the same `sensors.add()` call works for any ZED model (ZED 2, ZED 2i, ZED X, ZED X One). The SDK auto-detects the camera type, so there is no need to choose between `sl::Camera` and `sl::CameraOne`.
- Reference frames: set each sensor's pose with `sensors.setSensorPose()` and retrieve data in any reference frame (`SENSOR`, `BASELINK`, or `WORLD`). Point clouds and images are automatically transformed.
- The `read()` + `grab()` separation now works across all sensors simultaneously, enabling custom inference between acquisition and SDK processing.
- Batched retrieval: data methods return a `BatchedData<T>` (e.g., `sensors.retrieveImage()`, `sensors.retrieveMeasure()`), a map of sensor identifiers to results.
- Synchronized playback: `sensors.syncSVO()` automatically aligns SVO and OSF files to a common start timestamp.
- Per-module sensor selection via `sensors_ids` and `instance_module_id`, with separate fusion groups per sensor set.
- Unified recording: a single `sensors.enableRecording()` call records all cameras (SVO) and Lidars (OSF) with synchronized timestamps.
- Diagnostics: `sensors.getProcessErrorCodes()` returns per-sensor error diagnostics as `BatchedData<sl::ERROR_CODE>`, and `sensors.getHealthStatus()` provides per-sensor health, temperature, and connection status.

The table below summarizes which API to use; a minimal usage sketch follows it.

| Use Case | Recommendation |
|---|---|
| Single ZED camera | Continue using `sl::Camera` |
| Single ZED One | Continue using `sl::CameraOne` |
| Multi-camera (cameras only) | `sl::Fusion` still supported; `sl::Sensors` recommended |
| Any setup with Lidar | Use `sl::Sensors` (required) or the new `sl::Lidar` class |
| New multi-sensor projects | Use `sl::Sensors` |
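A minimal usage sketch of the Sensors API, restricted to the calls named above; the header name, the `add()` overload, and the `BatchedData` iteration style are assumptions, so check the new `sensors_api` samples for exact usage:

```cpp
// One pipeline for cameras and Lidars: add devices, grab once, retrieve per sensor.
#include <sl/Sensors.hpp>   // assumed header name for the beta API

int main() {
    sl::Sensors sensors;

    // The same add() call accepts any ZED model or an Ouster Lidar;
    // the SDK auto-detects the device type.
    sensors.add(sl::InputType());  // first available device (assumed overload)

    sensors.enableRecording();     // SVO for cameras, OSF for Lidars (assumed defaults)

    while (sensors.grab() == sl::ERROR_CODE::SUCCESS) {
        // Batched retrieval: a map-like container of sensor identifier -> result.
        auto images = sensors.retrieveImage();  // BatchedData<sl::Mat>
        for (auto &entry : images) {
            // entry.first: sensor id, entry.second: that sensor's image
        }
    }
    return 0;
}
```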
### Video Encoding

- Added the `ZED_SDK_H265_FALLBACK_MODE` environment variable to control the behavior when hardware H.265 encoding is unavailable:
  - `0` (default): fall back to H.264.
  - `1`: force x265 CPU encoding (high latency, intended for offline use).
  - `2`: return an error (no fallback).

### Python API

- Added `from_string` methods to most enums.
- Added `check_ai_model_status`, `download_ai_model`, and `optimize_ai_model`.
- Fusion: added `get_current_timestamp` and `read_fusion_configuration`, plus an `override_gravity` parameter on `subscribe`.
- Added `get_communication_parameters`.
- Added `get_camera_settings_range` and `get_svo_position_at_timestamp`.
- Added `blob_from_images`.
- Added `get_coordinate_transform_conversion_3f`, `get_coordinate_transform_conversion_4f`, `convert_coordinate_system_transform`, `convert_coordinate_system_mat`, `get_unit_scale`, `convert_unit_transform`, `convert_unit_mat`, and `compute_rotation_matrix_from_gravity`.
- Added `is_contained_in_resolution`.
- Added the `MESH_CREATION` and `TYPE_OF_INPUT_TYPE` enums.
- Fixed `np.array` typing issues.
- `identity` and `zeros` for Matrix classes are now correctly mapped as `@staticmethod`.
- Updated `sl.Input.get_type` to return the new `TYPE_OF_INPUT_TYPE`.
- Fixed handling of `None` defaults.

### Tools

- Installer: added support for the `--install_path` option and the `ZED_DIR` environment variable.
- Added a `-l` flag to list each camera's Serial Number, ID, and GMSL Port.

### Samples

- New samples demonstrate the `sl::Sensors` API with ZED cameras, ZED One cameras, and LiDAR, covering auto-detection, JSON configuration, SVO/OSF playback and recording, object detection, body tracking, reference frame modes, and OpenGL/Rerun viewers.
- New Sensor Placer sample (`sensors_api/sensor_placer/cpp`) for interactive multi-sensor placement and extrinsic calibration, with live 3D point cloud visualization, floor plane detection, IMU gravity alignment, and JSON config save/load.
- New tutorial covering the `sl::Tensor` and `sl::TensorParameters` API for deep learning inference input preparation via `Camera::retrieveTensor()`.
- Migrated the custom detector samples from `sl::Mat` + `blobFromImage()` to the new `sl::Tensor` API with `retrieveTensor()`.
- Added the new `ObjectTrackingParameters` fields `velocity_smoothing_factor` and `min_velocity_threshold` to the birds-eye viewer and custom detector samples.
- Switched positional tracking samples from `GEN_1` to `GEN_3`; added TUM trajectory export and keyframe visualization.
- Samples now use `getRetrieveMeasureResolution()` for optimal resolution selection.

For older releases and changelog, see the ZED SDK release archive.