Depth Modes

Stereolabs AI Depth leverages advanced neural networks to generate high-quality depth maps from stereo images, delivering reliable results even in challenging scenarios. Compared to traditional approaches, AI Depth provides superior accuracy in low-texture and low-light environments. This makes it especially well-suited for applications such as robotics, augmented reality (AR), and 3D mapping, where dependable depth perception is critical.

The ZED SDK provides multiple AI-powered depth modes, allowing you to tailor depth sensing to your application’s requirements. Each mode offers a different balance of accuracy, range, and computational speed, so you can optimize for precision, performance, or a mix of both depending on your use case.

NEURAL

The NEURAL depth mode uses AI-powered disparity estimation to deliver a strong balance of depth accuracy and processing speed. It is ideal for applications that require reliable depth perception without sacrificing real-time performance.

Neural Depth Computational Performance on embedded devices

Results are reported per platform, one table per embedded device:

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 5 | 26 |
| 2 | 30 | 6 | 50 |
| 4 | 30 | 26 | 53 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 9 | 59 |
| 2 | 23 | 20 | 94 |
| 4 | 10 | 39 | 96 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 12 | 22 | 90 |
| 2 | 5 | 25 | 93 |
| 4 | OOM | OOM | OOM |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 14 | 88 |
| 2 | 18 | 20 | 98 |
| 4 | 9 | 38 | 98 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 12 | 70 |
| 2 | 17 | 18 | 96 |
| 4 | 6 | 35 | 98 |

📌 Note: performance obtained with ZED SDK v5.0.1 RC, ZED X Driver v1.3.0, and a ZED X camera, using the multi-camera code example available on GitHub.

Neural Depth Accuracy (ZED X)

| Distance Range (m) | Mean Error | Standard Deviation* |
| --- | --- | --- |
| [0.3 - 4] | < 1% | Low |
| [4 - 6] | < 2.5% | Low |
| [6 - 9] | < 4% | Medium |
| [10 - 12] | < 6% | High |

(*) A lower standard deviation indicates more stable and accurate depth estimation, resulting in smoother and more reliable 3D point clouds. Higher deviation can lead to noise and distortion, producing wavy or unstable point clouds.

Enabling the NEURAL depth mode in the API

C++

```cpp
#include <sl/Camera.hpp>
using namespace sl;

// Set the depth mode to NEURAL
InitParameters init_parameters;
init_parameters.depth_mode = DEPTH_MODE::NEURAL;
```

Python

```python
import pyzed.sl as sl

# Set the depth mode to NEURAL
init_parameters = sl.InitParameters()
init_parameters.depth_mode = sl.DEPTH_MODE.NEURAL
```

C#

```csharp
using sl;

// Set the depth mode to NEURAL
InitParameters init_parameters = new InitParameters();
init_parameters.depthMode = DEPTH_MODE.NEURAL;
```

NEURAL LIGHT

The NEURAL_LIGHT depth mode provides AI-powered disparity estimation optimized for speed and efficiency. It enables real-time depth sensing with lower computational load, making it ideal for multi-camera setups and applications where fast processing is prioritized over maximum depth accuracy.

Neural Light Depth Computational Performance on embedded devices

Results are reported per platform, one table per embedded device:

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 5 | 11 |
| 2 | 30 | 6 | 23 |
| 4 | 30 | 22 | 46 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 2 | 23 |
| 2 | 30 | 5 | 47 |
| 4 | 30 | 14 | 81 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 24 | 80 |
| 2 | 13 | 27 | 80 |
| 4 | OOM | OOM | OOM |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 14 | 36 |
| 2 | 30 | 25 | 64 |
| 4 | 21 | 45 | 84 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 30 | 15 | 28 |
| 2 | 30 | 25 | 55 |
| 4 | 19 | 40 | 90 |

📌 Note: performance obtained with ZED SDK v5.0.1 RC, ZED X Driver v1.3.0, and a ZED X camera, using the multi-camera code example available on GitHub.

Neural Light Depth Accuracy (ZED X)

| Distance Range (m) | Mean Error | Standard Deviation* |
| --- | --- | --- |
| [0.3 - 3] | < 1% | Low |
| [3 - 5] | < 3% | Medium |
| [5 - 12] | < 8% | High |

(*) A lower standard deviation indicates more stable and accurate depth estimation, resulting in smoother and more reliable 3D point clouds. Higher deviation can lead to noise and distortion, producing wavy or unstable point clouds.

Enabling the NEURAL LIGHT depth mode in the API

C++

```cpp
#include <sl/Camera.hpp>
using namespace sl;

// Set the depth mode to NEURAL_LIGHT
InitParameters init_parameters;
init_parameters.depth_mode = DEPTH_MODE::NEURAL_LIGHT;
```

Python

```python
import pyzed.sl as sl

# Set the depth mode to NEURAL_LIGHT
init_parameters = sl.InitParameters()
init_parameters.depth_mode = sl.DEPTH_MODE.NEURAL_LIGHT
```

C#

```csharp
using sl;

// Set the depth mode to NEURAL_LIGHT
InitParameters init_parameters = new InitParameters();
init_parameters.depthMode = DEPTH_MODE.NEURAL_LIGHT;
```

NEURAL PLUS

The NEURAL_PLUS depth mode provides the highest depth accuracy and detail among all AI-powered modes. It is designed for applications that demand maximum precision and robustness, such as advanced robotics, inspection, and 3D reconstruction. While it requires more computational resources and delivers lower frame rates compared to other modes, NEURAL_PLUS excels in challenging environments and when capturing fine object details is critical.

Neural Plus Depth Computational Performance on embedded devices

Results are reported per platform, one table per embedded device:

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 29 | 7 | 90 |
| 2 | 17 | 11 | 90 |
| 4 | 8 | 21 | 97 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 12 | 2 | 92 |
| 2 | 5 | 4 | 98 |
| 4 | 2 | 14 | 98 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 3 | 20 | 97 |
| 2 | 1.3 | 25 | 98 |
| 4 | OOM | OOM | OOM |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 8 | 12 | 95 |
| 2 | 4 | 18 | 97 |
| 4 | 2 | 35 | 98 |

| Cameras | FPS | CPU (%) | GPU (%) |
| --- | --- | --- | --- |
| 1 | 8 | 10 | 94 |
| 2 | 3 | 19 | 98 |
| 4 | 1 | 34 | 98 |

📌 Note: performance obtained with ZED SDK v5.0.1 RC, ZED X Driver v1.3.0, and a ZED X camera, using the multi-camera code example available on GitHub.

Neural Plus Accuracy (ZED X)

| Distance Range (m) | Mean Error | Standard Deviation* |
| --- | --- | --- |
| [0.3 - 9] | < 1% | Low |
| [9 - 12] | < 2% | Medium |

(*) A lower standard deviation indicates more stable and accurate depth estimation, resulting in smoother and more reliable 3D point clouds. Higher deviation can lead to noise and distortion, producing wavy or unstable point clouds.

Enabling the NEURAL PLUS depth mode in the API

C++

```cpp
#include <sl/Camera.hpp>
using namespace sl;

// Set the depth mode to NEURAL_PLUS
InitParameters init_parameters;
init_parameters.depth_mode = DEPTH_MODE::NEURAL_PLUS;
```

Python

```python
import pyzed.sl as sl

# Set the depth mode to NEURAL_PLUS
init_parameters = sl.InitParameters()
init_parameters.depth_mode = sl.DEPTH_MODE.NEURAL_PLUS
```

C#

```csharp
using sl;

// Set the depth mode to NEURAL_PLUS
InitParameters init_parameters = new InitParameters();
init_parameters.depthMode = DEPTH_MODE.NEURAL_PLUS;
```

Depth Modes Comparison

| Depth Mode | Ideal Range (m) | Benefits | Limitations |
| --- | --- | --- | --- |
| NEURAL_LIGHT | [0.3 - 5] | • Fastest depth mode available<br>• Best for multi-camera setups<br>• Suited for mid-range obstacle avoidance | • Smallest ideal depth range<br>• May miss small objects or object details<br>• Slightly less robust to environmental light changes than NEURAL |
| NEURAL | [0.3 - 9] | • Balanced depth accuracy and performance<br>• Better object detail than NEURAL_LIGHT<br>• Suitable for most multi-camera applications<br>• Same robustness to environmental changes as NEURAL_PLUS | • Slower than NEURAL_LIGHT<br>• Less detail than NEURAL_PLUS |
| NEURAL_PLUS | [0.3 - 12] | • Highest object detail available<br>• Largest ideal depth range and best stability<br>• Best for detecting near, far, and small objects<br>• Most robust to environmental changes (rain, sun) and light reflections | • Slowest depth mode<br>• May not be suited for multi-camera setups |
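As a rough sketch of this guidance, the trade-offs can be condensed into a small selection helper. The function and its thresholds are illustrative only, not part of the ZED SDK:

```python
# Sketch: condense the comparison guidance into a mode chooser.
# The function name and thresholds are illustrative, not an SDK API.

def pick_depth_mode(num_cameras: int, max_range_m: float,
                    need_fine_detail: bool) -> str:
    """Suggest a depth mode name following the comparison guidance."""
    if need_fine_detail or max_range_m > 9:
        # Highest detail and largest ideal range, but slowest;
        # may struggle in multi-camera setups.
        return "NEURAL_PLUS"
    if num_cameras >= 2 and max_range_m <= 5:
        # Fastest mode, best suited to multi-camera rigs at mid range.
        return "NEURAL_LIGHT"
    # Balanced default for most applications.
    return "NEURAL"

print(pick_depth_mode(num_cameras=4, max_range_m=5,
                      need_fine_detail=False))  # prints NEURAL_LIGHT
```

In practice you would also weigh the per-device performance tables above, since frame rates and memory headroom vary widely across embedded platforms.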
📌 Note:

    • The depth range depends strongly on the camera baseline and optics: a larger baseline increases the usable depth range. These tests were conducted with a ZED X GS (2 mm lens), whose stereo baseline is 120 mm.
    • Jetson power profile: tests were conducted using MAXN without Super mode.
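The baseline dependence noted above follows from standard stereo geometry: depth uncertainty grows with the square of the range and shrinks linearly with focal length and baseline. A small sketch of that relation follows; the focal length and disparity uncertainty are illustrative assumptions, not measured ZED X GS values:

```python
# Standard stereo-geometry relation (not SDK-specific): for focal length
# f (pixels), baseline B (m), and disparity uncertainty dd (pixels),
# depth uncertainty at range z grows quadratically:
#     dz ~ z**2 * dd / (f * B)
# The numbers below are illustrative assumptions.

def depth_uncertainty_m(z_m: float, focal_px: float,
                        baseline_m: float, disp_err_px: float) -> float:
    return (z_m ** 2) * disp_err_px / (focal_px * baseline_m)

# Doubling the baseline halves the uncertainty at the same range:
a = depth_uncertainty_m(z_m=5.0, focal_px=700.0,
                        baseline_m=0.12, disp_err_px=0.25)
b = depth_uncertainty_m(z_m=5.0, focal_px=700.0,
                        baseline_m=0.24, disp_err_px=0.25)
print(a, b)  # b is half of a
```

This is why a wider-baseline camera such as the ZED X reaches usable accuracy at longer ranges than a narrow-baseline one, independently of the depth mode selected.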