The ZED Node

To start a ZED ROS node, you can use the command line:

$ roslaunch zed_wrapper zed.launch

or

$ roslaunch zed_wrapper zedm.launch

if you own a ZED-M camera.

Published Topics

The ZED node publishes data to the following topics:

  • Left camera

    • /zed/zed_node/rgb/image_rect_color: Color rectified image (left RGB image by default)
    • /zed/zed_node/rgb_raw/image_raw_color: Color unrectified image (left RGB image by default)
    • /zed/zed_node/rgb/camera_info: Color camera calibration data
    • /zed/zed_node/rgb_raw/camera_info: Color unrectified camera calibration data
    • /zed/zed_node/left/image_rect_color: Left camera color rectified image
    • /zed/zed_node/left_raw/image_raw_color: Left camera color unrectified image
    • /zed/zed_node/left/camera_info: Left camera calibration data
    • /zed/zed_node/left_raw/camera_info: Left unrectified camera calibration data
  • Right camera

    • /zed/zed_node/right/image_rect_color: Color rectified right image
    • /zed/zed_node/right_raw/image_raw_color: Color unrectified right image
    • /zed/zed_node/right/camera_info: Right camera calibration data
    • /zed/zed_node/right_raw/camera_info: Right unrectified camera calibration data
  • Stereo pair

    • /zed/zed_node/stereo/image_rect_color: Stereo rectified pair of images, side-by-side
    • /zed/zed_node/stereo_raw/image_raw_color: Stereo unrectified pair of images, side-by-side

Note: to retrieve the camera parameters you can subscribe to the topics /zed/zed_node/left/camera_info, /zed/zed_node/right/camera_info, /zed/zed_node/left_raw/camera_info and /zed/zed_node/right_raw/camera_info, as shown in the sketch below.
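
The following minimal rospy subscriber (a sketch; it assumes the default /zed namespace, and the node name is hypothetical) reads the intrinsic calibration from the left camera_info topic:

import rospy
from sensor_msgs.msg import CameraInfo

def info_cb(msg):
    # K is the 3x3 intrinsic matrix in row-major order:
    # [fx 0 cx; 0 fy cy; 0 0 1]
    fx, fy = msg.K[0], msg.K[4]
    cx, cy = msg.K[2], msg.K[5]
    rospy.loginfo('fx=%.1f fy=%.1f cx=%.1f cy=%.1f', fx, fy, cx, cy)

rospy.init_node('zed_camera_info_listener')  # hypothetical node name
rospy.Subscriber('/zed/zed_node/left/camera_info', CameraInfo, info_cb)
rospy.spin()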

  • Depth and point cloud

    • /zed/zed_node/depth/depth_registered: Depth map image registered on left image (32-bit float in meters by default)
    • /zed/zed_node/depth/camera_info: Depth camera calibration data
    • /zed/zed_node/point_cloud/cloud_registered: Registered color point cloud
    • /zed/zed_node/confidence/confidence_image: Confidence image
    • /zed/zed_node/confidence/confidence_map: Confidence image (floating point values)
    • /zed/zed_node/disparity/disparity_image: Disparity image
  • Tracking

    • /zed/zed_node/odom: Absolute 3D position and orientation relative to the Odometry frame (pure visual odometry for ZED, visual-inertial for ZED-M)
    • /zed/zed_node/pose: Absolute 3D position and orientation relative to the Map frame (Sensor Fusion algorithm + SLAM)
    • /zed/zed_node/pose_with_covariance: Camera pose referred to Map frame with covariance
    • /zed/zed_node/path_odom: Sequence of camera odometry poses in Map frame
    • /zed/zed_node/path_map: Sequence of camera poses in Map frame
  • Mapping

    • /zed/zed_node/point_cloud/fused_cloud_registered: Fused color point cloud. Note: published only if mapping is enabled, see mapping/mapping_enabled parameter
  • Inertial Data

    • /zed/zed_node/imu/data: Accelerometer, gyroscope, and orientation data in Earth frame
    • /zed/zed_node/imu/data_raw: Accelerometer and gyroscope data in Earth frame
  • Diagnostic

    • /diagnostics
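
As an illustration, the following minimal rospy sketch (assuming the default topic name and the default 32-bit float depth encoding) reads the depth at the image center:

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_cb(msg):
    # With the default openni_depth_mode=0, depth is a 32-bit float image in meters
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding='32FC1')
    h, w = depth.shape
    rospy.loginfo('center depth: %.2f m', depth[h // 2, w // 2])

rospy.init_node('zed_depth_listener')  # hypothetical node name
rospy.Subscriber('/zed/zed_node/depth/depth_registered', Image, depth_cb)
rospy.spin()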

ZED parameters

You can specify the parameters used by the ZED node by modifying the values in the files params/common.yaml, params/zed.yaml and params/zedm.yaml: params/zed.yaml contains the parameters specific to the ZED camera, params/zedm.yaml contains the parameters specific to the ZED-M camera, and params/common.yaml contains the parameters common to both cameras.

General parameters

Parameters with prefix general.

| Parameter | Description | Value |
| --- | --- | --- |
| camera_model | Type of Stereolabs camera | zed: ZED, zedm: ZED-M |
| camera_flip | Flip the camera data if it is mounted upside down | true, false |
| zed_id | Select a ZED camera by its ID. IDs are assigned by Ubuntu. Useful when multiple cameras are connected. The ID is ignored if an SVO path is specified | int, default 0 |
| serial_number | Select a ZED camera by its serial number | int, default 0 |
| resolution | Select the ZED camera resolution | 0: HD2K, 1: HD1080, 2: HD720, 3: VGA |
| frame_rate | Set the ZED camera video framerate | int |
| gpu_id | Select a GPU device for depth computation | int, default -1 (best device found) |
| base_frame | Frame_id of the frame that indicates the reference base of the robot | string, default=base_link |
| camera_frame | Frame_id of the center of the camera | string, default=zed_camera_center |
| left_camera_frame | Frame_id of the left camera | string, default=zed_left_camera_frame |
| left_camera_optical_frame | Frame_id of the optics of the left camera | string, default=zed_left_camera_optical_frame |
| right_camera_frame | Frame_id of the right camera | string, default=zed_right_camera_frame |
| right_camera_optical_frame | Frame_id of the optics of the right camera | string, default=zed_right_camera_optical_frame |
| verbose | Enable/disable the verbosity of the SDK | true, false |
| svo_compression | Set SVO compression mode | 0: RAW (no compression), 1: LOSSLESS (PNG/ZSTD), 2: LOSSY (JPEG), 3: AVCHD (H264, SDK v2.7+), 4: HEVC (H265, SDK v2.7+) |
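
Once the node is running, you can check which values were actually loaded by querying the parameter server. This assumes the parameters end up under the node's private namespace, which depends on your launch file:

$ rosparam list /zed/zed_node
$ rosparam get /zed/zed_node/general/resolution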

Video parameters

Parameters with prefix video.

| Parameter | Description | Value |
| --- | --- | --- |
| rgb_topic_root | Namespace for RGB image topics | string, default=rgb |
| right_topic_root | Namespace for right image topics | string, default=right |
| left_topic_root | Namespace for left image topics | string, default=left |
| stereo_topic_root | Namespace for stereo pair image topics | string, default=stereo |

Depth parameters

Parameters with prefix depth.

| Parameter | Description | Value |
| --- | --- | --- |
| quality | Select depth map quality | 0: NONE, 1: PERFORMANCE, 2: MEDIUM, 3: QUALITY, 4: ULTRA |
| sensing_mode | Select depth sensing mode | 0: STANDARD, 1: FILL |
| depth_stabilization | Enable depth stabilization. Stabilizing the depth requires additional computation as it enables tracking | 0: disabled, 1: enabled |
| openni_depth_mode | Convert 32-bit depth in meters to 16-bit in millimeters | 0: 32-bit float meters, 1: 16-bit uchar millimeters |
| depth_topic_root | Namespace for depth image topics | string, default=depth |
| point_cloud_topic | Topic for the point cloud message | string, default=point_cloud/cloud_registered |
| disparity_topic | Topic for the disparity message | string, default=disparity/disparity_image |
| confidence_root | Namespace for confidence topics | string, default=confidence |
| min_depth | Minimum value allowed for depth measures | Min: 0.3 (ZED) or 0.1 (ZED-M), Max: 3.0 |

Position parameters

Parameters with prefix tracking.

| Parameter | Description | Value |
| --- | --- | --- |
| publish_tf | Enable/disable publishing of TF frames | true, false |
| publish_map_tf | Enable/disable publishing of the map TF frame | true, false |
| world_frame | Name of the static World reference frame | string, default=map |
| map_frame | Frame_id of the pose message | string, default=map |
| odometry_frame | Frame_id of the odom message | string, default=odom |
| odometry_db | Path of the file that contains saved space visual information | string, default='' |
| pose_smoothing | Enable smooth pose correction for small drift correction | 0: disabled, 1: enabled |
| spatial_memory | Enable Loop Closing; disables covariance information in pose messages | true, false |
| floor_alignment | Indicates if the floor must be used as origin for height measures | true, false |
| initial_base_pose | Initial reference pose | vector, default=[0.0,0.0,0.0, 0.0,0.0,0.0] -> [X, Y, Z, R, P, Y] |
| pose_topic | Topic for the camera pose message | string, default=pose |
| publish_pose_covariance | Enable/disable publishing of the pose/odom covariance matrices | true, false |
| fixed_covariance | Disable/enable dynamic covariance calculation by the ZED SDK | true, false |
| fixed_cov_value | Value on the diagonal of the covariance matrix if fixed_covariance is enabled | float, default=1e-6 |
| odometry_topic | Topic for the camera odometry message | string, default=odom |
| init_odom_with_first_valid_pose | Indicates if the odometry must be initialized with the first valid pose received by the tracking algorithm | true, false |
| path_pub_rate | Publishing frequency (Hz) of the path messages | float, default=2.0 |
| path_max_count | Maximum number of poses kept in the pose arrays (-1 for infinite) | int, default=-1 |
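
As a usage example, the following minimal rospy sketch subscribes to the tracking output; it assumes the default topic names and the standard geometry_msgs/PoseStamped and nav_msgs/Odometry message types:

import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

def pose_cb(msg):
    # Pose in the Map frame
    p = msg.pose.position
    rospy.loginfo('map pose: [%.2f, %.2f, %.2f]', p.x, p.y, p.z)

def odom_cb(msg):
    # Pose in the Odometry frame
    p = msg.pose.pose.position
    rospy.loginfo('odom pose: [%.2f, %.2f, %.2f]', p.x, p.y, p.z)

rospy.init_node('zed_tracking_listener')  # hypothetical node name
rospy.Subscriber('/zed/zed_node/pose', PoseStamped, pose_cb)
rospy.Subscriber('/zed/zed_node/odom', Odometry, odom_cb)
rospy.spin()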

Mapping parameters

Parameters with prefix mapping.

Note: the mapping module requires SDK v2.8 or higher

| Parameter | Description | Value |
| --- | --- | --- |
| mapping_enabled | Enable/disable the mapping module | true, false |
| resolution | Resolution of the fused point cloud | 0: HIGH, 1: MEDIUM, 2: LOW |
| fused_pointcloud_freq | Publishing frequency (Hz) of the 3D map as fused point cloud | double, default=1.0 |

Inertial parameters (only ZED-M)

Parameters with prefix imu.

| Parameter | Description | Value |
| --- | --- | --- |
| imu_frame | Frame_id for the inertial messages | string, default=zed_imu_link |
| imu_topic_root | Namespace for inertial topics | string, default=imu |
| imu_pub_rate | Publishing frequency (Hz) of the IMU messages | float, default=100.0 (max 800.0) |
| imu_timestamp_sync | Synchronize the IMU message timestamp with the latest received frame | true, false |
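
A minimal rospy sketch for consuming the inertial data (assuming the default topic name; the node name is hypothetical) could look like this:

import rospy
from sensor_msgs.msg import Imu

def imu_cb(msg):
    a = msg.linear_acceleration
    rospy.loginfo('accel: [%.2f, %.2f, %.2f] m/s^2', a.x, a.y, a.z)

rospy.init_node('zed_imu_listener')  # hypothetical node name
rospy.Subscriber('/zed/zed_node/imu/data', Imu, imu_cb)
rospy.spin()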

Dynamic parameters

The ZED node lets you reconfigure these parameters dynamically:

| Parameter | Description | Value |
| --- | --- | --- |
| auto_exposure | Enable automatic exposure | true, false |
| exposure | Exposure value if auto_exposure is false | int [0,100] |
| gain | Gain value if auto_exposure is false | int [0,100] |
| confidence | Confidence threshold (lower values = more filtering) | int [0,100] |
| mat_resize_factor | Image resize factor | float [0.01,1.0] |
| max_depth | Maximum range allowed for depth. Values beyond this limit are reported as TOO_FAR | float [0.01,20.0] |
| point_cloud_freq | Publishing frequency of the point cloud (less than or equal to the frame_rate value) | float [0.1,60.0] |

To modify a dynamic parameter, you can use the GUI provided by the rqt stack:

$ rosrun rqt_reconfigure rqt_reconfigure
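
Dynamic parameters can also be changed programmatically with the dynamic_reconfigure Python client; the following sketch assumes the default node name /zed/zed_node:

import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('zed_dynparam_example')  # hypothetical node name
client = Client('/zed/zed_node', timeout=10)
# Disable auto exposure and set a manual exposure value,
# then tighten the depth confidence filtering
client.update_configuration({'auto_exposure': False, 'exposure': 50, 'confidence': 80})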

Transform frame

The ZED ROS wrapper broadcasts multiple coordinate frames that each provide information about the camera’s position and orientation. If needed, the reference frames can be changed in the launch file.

  • base_frame is the current position and orientation of the reference base of the robot
  • zed_camera_center is the current position and orientation of ZED, determined by visual odometry and the tracking module
  • zed_right_camera is the position and orientation of the ZED’s right camera
  • zed_right_camera_optical is the position and orientation of the ZED’s right camera optical frame
  • zed_left_camera is the position and orientation of the ZED’s left camera
  • zed_left_camera_optical is the position and orientation of the ZED’s left camera optical frame
  • imu_link is the origin of the inertial data frame (ZED-M only)

For RVIZ compatibility, the root frame map_frame is called map. The TF tree generated by the zed_wrapper follows the standard described in REP 105. The odometry frame is updated using only the “visual odometry” information. The map frame is updated by the tracking algorithm provided by the Stereolabs SDK, which fuses the inertial information from the IMU sensor when using a ZED-M camera.

map_frame (`map`)
└─ odometry_frame (`odom`)
   └─ base_frame (`base_link`)
      ├─ camera_frame (`zed_camera_center`)
      │  ├─ left_camera_frame (`zed_left_camera_frame`)
      │  │  └─ left_camera_optical_frame (`zed_left_camera_optical_frame`)
      │  └─ right_camera_frame (`zed_right_camera_frame`)
      │     └─ right_camera_optical_frame (`zed_right_camera_optical_frame`)
      └─ imu_frame (`imu_link`) (*only ZED-M*)
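
To use these frames in your own nodes, you can query the TF tree with tf2. The following minimal sketch (assuming the default frame names listed above) prints the position of the left camera frame relative to the robot base:

import rospy
import tf2_ros

rospy.init_node('zed_tf_example')  # hypothetical node name
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
# Wait up to 5 seconds for the transform to become available
t = buf.lookup_transform('base_link', 'zed_left_camera_frame',
                         rospy.Time(0), rospy.Duration(5.0))
rospy.loginfo('left camera w.r.t. base_link: %s', t.transform.translation)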

ZED-M

The ZED-M provides the same information as the ZED, plus the inertial data from the IMU sensor. The IMU data are used internally to generate the pose in the Map frame with the Tracking sensor fusion algorithm.

Note: The initial pose in Odometry frame can be set to the first pose received by the Tracking module by setting the parameter init_odom_with_first_valid_pose to true.

Services

The ZED node provides the following services:

  • set_initial_pose: Restarts the Tracking module, setting the initial pose of the camera to the value passed as a vector parameter -> [X, Y, Z, R, P, Y]
  • reset_tracking: Restarts the Tracking module, setting the initial pose to the value available in the param server
  • reset_odometry: Resets the odometry values, eliminating the drift due to the Visual Odometry algorithm and setting the new odometry value to the latest camera pose received from the tracking module
  • set_led_status: Sets the status of the blue LED -> True: LED ON, False: LED OFF (requires at least FW 1523 and SDK v2.8)
  • toggle_led: Toggles the status of the blue LED, returning the new status (requires at least FW 1523 and SDK v2.8)
  • start_svo_recording: Starts recording an SVO file. If no filename is provided, the default zed.svo is used. If the filename contains no path, the default recording folder is ~/.ros/
  • stop_svo_recording: Stops an active SVO recording
  • start_remote_stream: Starts streaming over the network to allow processing of ZED data on a remote machine (requires at least FW 1523 and SDK v2.8)
  • stop_remote_stream: Stops streaming over the network (requires at least FW 1523 and SDK v2.8)
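
The services can be inspected and called from the command line; for example (assuming the default namespace, and using rosservice info to check the exact request fields of each service):

$ rosservice list /zed/zed_node
$ rosservice info /zed/zed_node/reset_odometry
$ rosservice call /zed/zed_node/reset_odometry "{}"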

Note: currently the H26x SVO compression uses the resolution and the real FPS to calculate the bitrate, which can lead to quality issues when the FPS is low. On NVIDIA Jetson TX1 and Jetson TX2, the FPS is quite low at the beginning of the grab loop, so it is better to wait for a few grab calls before calling start_svo_recording, so that the camera FPS has stabilized at the requested value (15 FPS or more).

Remote SVO streaming

SDK v2.8 introduced the capability to acquire ZED data and stream it to a remote machine over the network. This feature is useful when the ZED is connected to a machine with limited CUDA capabilities while high-definition analysis is required. Starting the streaming without activating any other feature (i.e. depth processing, positional tracking, point cloud publishing, …) requires only the processing power for H264 or H265 compression.

To start SVO streaming, the start_remote_stream service must be called with the following parameters (an example call follows the list):

  • codec (def. 0): Defines the codec used for streaming (0: AVCHD [H264], 1: HEVC [H265]). Note: if HEVC (H265) is used, make sure the receiving host is compatible with HEVC decoding (essentially an NVIDIA Pascal card). If not, prefer AVCHD (H264), since every compatible NVIDIA card supports AVCHD decoding
  • port (def. 30000): Defines the port (uint16) the data will be streamed on. Note: the port must be an even number; any odd number will be rejected
  • bitrate (def. 2000): Defines the streaming bitrate in Kbits/s
  • gop_size (def. -1): Defines the GOP size in frames. Note: if the value is set to -1, the GOP size will match 2 seconds, depending on the camera FPS. The GOP size determines the maximum distance between IDR/I-frames. A very high GOP size results in slightly more efficient compression, especially on static scenes, but can cause more latency if IDR/I-frame packets are lost during streaming. The maximum allowed value is 256 (frames)
  • adaptative_bitrate (def. False): Enables/disables adaptive bitrate. Note: the bitrate will be adjusted according to the number of packets lost during streaming. If activated, the bitrate can vary between [bitrate/4, bitrate]. Currently, adaptive bitrate only works when the “sending” device is an NVIDIA Jetson (TX1, TX2, Xavier, Nano)
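
For example, a call with the default values listed above might look like the following. This is a sketch; the request field names should be verified with rosservice info before use:

$ rosservice call /zed/zed_node/start_remote_stream "{codec: 0, port: 30000, bitrate: 2000, gop_size: -1, adaptative_bitrate: false}"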

Receive a remote stream

To acquire the stream on a remote machine, start a ZED node with the following command:

$ roslaunch zed_wrapper zed.launch stream:=<sender_IP>:<port> 

For example:

$ roslaunch zed_wrapper zed.launch stream:=192.168.1.127:30000

Note: the stream contains only visual information. When using a ZED-M camera, the inertial topics are not part of the stream and are only available by subscribing to them using the standard ROS methods.

Mapping

The ZED node provides a basic mapping module that publishes a 3D map of the environment as a 3D color point cloud (/zed/zed_node/point_cloud/fused_cloud_registered). Mapping is disabled by default since it is a computationally heavy process; it can be enabled by setting the parameter mapping/mapping_enabled to true in the file params/common.yaml. To reduce the computational load, use the MEDIUM or LOW mapping resolution and decrease the publishing frequency of the point cloud topic.
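
A minimal rospy sketch for consuming the fused map (assuming the default topic name; the node name is hypothetical) is shown below; it simply counts the valid points received so far:

import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def cloud_cb(msg):
    # Count the valid (non-NaN) points currently in the fused map
    n = sum(1 for _ in point_cloud2.read_points(msg, skip_nans=True))
    rospy.loginfo('fused map: %d valid points', n)

rospy.init_node('zed_map_listener')  # hypothetical node name
rospy.Subscriber('/zed/zed_node/point_cloud/fused_cloud_registered', PointCloud2, cloud_cb)
rospy.spin()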

Diagnostic

The ZED node publishes diagnostic information that can be used by the robotic system through a diagnostic_aggregator node.

Using the Runtime monitor plugin of rqt, it is possible to view all the diagnostic information and check that the node is working as expected.

A brief explanation of each field:

  • Component: name of the diagnostic component
  • Message: summary of the status of the ZED node
  • HardwareID: Model of the ZED camera and its serial number
  • Capture: grabbing frequency (if video or depth data are subscribed) and the percentage with respect to the camera frame rate
  • Processing time: time in seconds spent processing the data and the time limit to achieve the maximum frame rate
  • Depth status: indicates if depth processing is being performed
  • Point Cloud: point cloud publishing frequency (if there is at least one subscriber) and the percentage with respect to the camera frame rate
  • Floor Detection: if floor detection is enabled, indicates whether the floor has been detected and the camera position correctly initialized
  • Tracking status: indicates the status of the tracking, if enabled
  • IMU: the publishing frequency of the IMU topics, if the camera is a ZED-M and there is at least one subscriber
  • SVO Recording: indicates if the SVO recording is active
  • SVO Compression time: average time spent on frame compressing
  • SVO Compression ratio: average frame compression ratio

Using multiple ZEDs

It is possible to use multiple ZED cameras with ROS. Simply launch the node with the zed_multi_cam.launch file:

$ roslaunch zed_wrapper zed_multi_cam.launch

Assigning a GPU to a camera

To improve performance, you can specify in the launch file the gpu_id of the graphics card that will be used for the depth computation. The default value (-1) selects the GPU with the highest number of CUDA cores. When using multiple ZEDs, you can assign each camera to a different GPU to increase performance.

Limitations

Performance

This wrapper lets you quickly prototype applications and interface the ZED with other sensors and packages available in ROS. However, the ROS layer introduces significant latency and a performance hit. If performance is a major concern for your application, please consider using the ZED SDK library.

Multiple ZEDs

The ZED camera uses the maximum bandwidth provided by USB 3.0 to output video. When using multiple ZEDs, you may need to reduce camera framerate and resolution to avoid corrupted frames (green or purple frames). You can also use multiple GPUs to load-balance computations and improve performance.