The ZED Node

To start a ZED ROS node, you can use the following command:

$ roslaunch zed_wrapper zed.launch

Published Topics

The ZED node publishes data to the following topics:

  • Left camera

    • zed/rgb/image_rect_color: Color rectified image (left RGB image by default)
    • zed/rgb/image_raw_color: Color unrectified image (left RGB image by default)
    • zed/rgb/camera_info: Color camera calibration data
    • zed/left/image_rect_color: Left camera color rectified image
    • zed/left/image_raw_color: Left camera color unrectified image
    • zed/left/camera_info: Left camera calibration data
  • Right camera

    • zed/right/image_rect_color: Color rectified right image
    • zed/right/image_raw_color: Color unrectified right image
    • zed/right/camera_info: Right camera calibration data
  • Depth and point cloud

    • zed/depth/depth_registered: Depth map image registered on left image (32-bit float in meters by default)
    • zed/depth/camera_info: Depth camera calibration data
    • zed/point_cloud/cloud_registered: Registered color point cloud
    • zed/confidence/confidence_image: Confidence image
    • zed/confidence/confidence_map: Confidence image (floating point values)
    • zed/disparity/disparity_image: Disparity image
  • Tracking

    • zed/odom: Absolute 3D position and orientation relative to the Odometry frame (pure visual odometry for ZED, visual-inertial for ZED Mini)
    • zed/pose: Absolute 3D position and orientation relative to the Map frame (Sensor Fusion algorithm + SLAM)
    • zed/pose_with_covariance: Camera pose relative to the Map frame with covariance (published if spatial_memory is false in the launch parameters)
    • zed/path_odom: Sequence of camera odometry poses in Map frame
    • zed/path_map: Sequence of camera poses in Map frame
  • Inertial Data

    • zed/imu/data: Accelerometer, gyroscope, and orientation data in Earth frame
    • zed/imu/data_raw: Accelerometer and gyroscope data in Earth frame
  • Diagnostic

    • /diagnostics
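
As a quick check that the node is publishing, you can subscribe to a few of the topics above from a minimal Python node. The sketch below is only an example: it assumes the default topic names listed above and the zed namespace used by zed.launch.

#!/usr/bin/env python
# Sketch: verify that the ZED node is publishing data on a few of the topics above.
# Assumes the default topic names and the "zed" namespace used by zed.launch.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import PointCloud2

def odom_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("Odometry position: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

def cloud_callback(msg):
    rospy.loginfo("Point cloud: %d x %d points", msg.width, msg.height)

if __name__ == "__main__":
    rospy.init_node("zed_topics_example")
    rospy.Subscriber("/zed/odom", Odometry, odom_callback)
    rospy.Subscriber("/zed/point_cloud/cloud_registered", PointCloud2, cloud_callback)
    rospy.spin()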

Launch file parameters

Specify your launch parameters in the zed_camera.launch file provided with the zed_wrapper package.

General parameters

Parameter | Description | Value
camera_model | Type of Stereolabs camera | 0: ZED, 1: ZED Mini
publish_tf | Enable/disable publishing of the TF frames | true, false
publish_map_tf | Enable/disable publishing of the map TF frame | true, false
camera_flip | Flip the camera data if it is mounted upside down | true, false
svo_file | Specify an SVO filename to play back | string, default=''
zed_id | Select a ZED camera by its ID. IDs are assigned by Ubuntu. Useful when multiple cameras are connected. The ID is ignored if an SVO path is specified | int, default '0'
serial_number | Select a ZED camera by its serial number | int, default '0'
resolution | Select the ZED camera resolution | '0': HD2K, '1': HD1080, '2': HD720, '3': VGA
verbose | Enable/disable the verbosity of the SDK | true, false
mat_resize_factor | Image resize factor | float [0.01,1.0]
frame_rate | Set the ZED camera video framerate | int
gpu_id | Select a GPU device for depth computation | int, default '-1' (best device found)
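
To verify which values were actually loaded, you can read the parameters back from the parameter server at runtime. The parameter paths below are an assumption (private parameters of a zed_wrapper_node running in the zed namespace); check the exact names with rosparam list.

#!/usr/bin/env python
# Sketch: read back a few general parameters after launch.
# The "/zed/zed_wrapper_node/" prefix is an assumption; check it with `rosparam list`.
import rospy

if __name__ == "__main__":
    rospy.init_node("zed_param_check")
    prefix = "/zed/zed_wrapper_node/"
    for name in ("camera_model", "resolution", "frame_rate", "mat_resize_factor"):
        value = rospy.get_param(prefix + name, "<not set>")
        rospy.loginfo("%s = %s", name, value)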

Video parameters

Parameter | Description | Value
gain | Gain value if auto-exposure is false | int [0,100]
exposure | Exposure value if auto-exposure is false | int [0,100]
auto_exposure | Enable the auto exposure | true, false
rgb_topic | Topic for the RGB image message | string, default=rgb/image_rect_color
rgb_raw_topic | Topic for the raw RGB image message | string, default=rgb/image_raw_color
rgb_cam_info_topic | Topic for the RGB camera parameters | string, default=rgb/camera_info
rgb_cam_info_raw_topic | Topic for the raw RGB camera parameters | string, default=rgb/camera_info_raw
left_topic | Topic for the Left image message | string, default=left/image_rect_color
left_raw_topic | Topic for the raw Left image message | string, default=left/image_raw_color
left_cam_info_topic | Topic for the Left camera parameters | string, default=left/camera_info
left_cam_info_raw_topic | Topic for the raw Left camera parameters | string, default=left/camera_info_raw
right_topic | Topic for the Right image message | string, default=right/image_rect_color
right_raw_topic | Topic for the raw Right image message | string, default=right/image_raw_color
right_cam_info_topic | Topic for the Right camera parameters | string, default=right/camera_info
right_cam_info_raw_topic | Topic for the raw Right camera parameters | string, default=right/camera_info_raw
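
To process the published images in OpenCV, the sensor_msgs/Image messages can be converted with cv_bridge. This is a minimal sketch, assuming the default left rectified topic; requesting the bgr8 encoding lets cv_bridge convert from the color encoding actually published by the node.

#!/usr/bin/env python
# Sketch: convert the left rectified image to an OpenCV matrix with cv_bridge.
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Request bgr8 so cv_bridge converts from whatever color encoding is published.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("ZED left", frame)
    cv2.waitKey(1)

if __name__ == "__main__":
    rospy.init_node("zed_image_example")
    rospy.Subscriber("/zed/left/image_rect_color", Image, image_callback)
    rospy.spin()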

Depth parameters

Parameter | Description | Value
quality | Select depth map quality | '0': NONE, '1': PERFORMANCE, '2': MEDIUM, '3': QUALITY, '4': ULTRA
sensing_mode | Select depth sensing mode | '0': STANDARD, '1': FILL
confidence | Confidence threshold (lower values = more filtering) | int [0,100]
max_depth | Maximum range allowed for depth. Values beyond this limit will be reported as TOO_FAR | float [0.01,20.0]
depth_stabilization | Enable depth stabilization. Stabilizing the depth requires an additional computational load as it enables tracking | 0: disabled, 1: enabled
depth_topic | Topic for the depth image message | string, default=depth/depth_registered
depth_cam_info_topic | Topic for the depth camera parameters | string, default=depth/camera_info
point_cloud_topic | Topic for the point cloud message | string, default=point_cloud/cloud_registered
disparity_topic | Topic for the disparity message | string, default=disparity/disparity_image
confidence_img_topic | Topic for the confidence image message (visualization) | string, default=confidence/confidence_image
confidence_map_topic | Topic for the confidence map message (elaboration) | string, default=confidence/confidence_map
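
With the default openni_depth_mode=0 the depth image is published as 32-bit floats in meters, so a pixel's value is directly its distance. A minimal sketch, assuming the default depth topic name:

#!/usr/bin/env python
# Sketch: print the depth (in meters) of the image center pixel.
# Assumes openni_depth_mode=0 (32-bit float, meters) and the default topic name.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="32FC1")
    u, v = depth.shape[1] // 2, depth.shape[0] // 2
    rospy.loginfo("Depth at center (%d, %d): %.2f m", u, v, depth[v, u])

if __name__ == "__main__":
    rospy.init_node("zed_depth_example")
    rospy.Subscriber("/zed/depth/depth_registered", Image, depth_callback)
    rospy.spin()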

Position parameters

Parameter | Description | Value
pose_frame | Frame_id of the pose message | string, default='map'
odometry_frame | Frame_id of the odom message | string, default='odom'
base_frame | Frame_id of the frame that indicates the center of the camera | string, default='zed_camera_center'
left_camera_frame | Frame_id of the left camera | string, default='zed_left_camera_frame'
left_camera_optical_frame | Frame_id of the optics of the left camera | string, default='zed_left_camera_optical_frame'
right_camera_frame | Frame_id of the right camera | string, default='zed_right_camera_frame'
right_camera_optical_frame | Frame_id of the optics of the right camera | string, default='zed_right_camera_optical_frame'
imu_frame | Frame_id of the inertial sensor (ZED Mini only) | string, default='imu_link'
odometry_db | Path of the file that contains the saved spatial memory information | string, default=''
openni_depth_mode | Convert the 32-bit depth in meters to 16-bit in millimeters | '0': 32-bit float meters, '1': 16-bit uchar millimeters
pose_smoothing | Enable smooth pose correction for small drift correction | 0: disabled, 1: enabled
spatial_memory | Enable the Loop Closing algorithm (disables covariance information in the pose messages) | true, false
floor_alignment | Indicates if the floor must be used as the origin for height measurements | true, false
initial_tracking_pose | Initial reference pose | vector, default='[0.0,0.0,0.0, 0.0,0.0,0.0]' -> [X, Y, Z, R, P, Y]
pose_topic | Topic for the camera pose message | string, default='pose'
odometry_topic | Topic for the camera odometry message | string, default='odom'
init_odom_with_first_valid_pose | Indicates if the odometry must be initialized with the first valid pose received from the tracking algorithm | true, false
imu_topic | Topic for the IMU message (ZED Mini only) | string, default='imu/data'
imu_raw_topic | Topic for the raw IMU message (ZED Mini only) | string, default='imu/data_raw'
imu_pub_rate | Frequency (Hz) of publication of the IMU messages | float, default=500.0
imu_timestamp_sync | Indicates whether the timestamp of the IMU topics refers to the last grabbed frame (true) or to the data acquisition time (false) | true, false
path_pub_rate | Frequency (Hz) of publication of the path messages | float, default=2.0
path_max_count | Maximum number of poses kept in the pose arrays (-1 for unlimited) | int, default=-1
publish_pose_covariance | Enable/disable publishing of the pose/odom covariance matrices | true, false
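
For example, the camera pose in the Map frame and the accumulated trajectory can be consumed as standard messages. The sketch below assumes the default topic names and the zed namespace, and that pose is published as geometry_msgs/PoseStamped and path_map as nav_msgs/Path (verify the types with rostopic info).

#!/usr/bin/env python
# Sketch: monitor the camera pose in the Map frame and the length of the map path.
import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Path

def pose_callback(msg):
    p = msg.pose.position
    rospy.loginfo("Pose in '%s': x=%.2f y=%.2f z=%.2f", msg.header.frame_id, p.x, p.y, p.z)

def path_callback(msg):
    rospy.loginfo("Map path contains %d poses", len(msg.poses))

if __name__ == "__main__":
    rospy.init_node("zed_pose_example")
    rospy.Subscriber("/zed/pose", PoseStamped, pose_callback)
    rospy.Subscriber("/zed/path_map", Path, path_callback)
    rospy.spin()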

Dynamic parameters

The ZED node lets you reconfigure these parameters dynamically:

  • confidence: Confidence threshold (lower values = more filtering)
  • auto_exposure: Enable/disable automatic control of exposure and gain
  • exposure: Exposure value when manually controlled (auto_exposure=0)
  • gain: Gain value when manually controlled (auto_exposure=0)
  • mat_resize_factor: Image/measures resize factor
  • max_depth: Maximum depth range

To modify a dynamic parameter, you can use the GUI provided by the rqt stack:

$ rosrun rqt_reconfigure rqt_reconfigure
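
The same parameters can also be changed programmatically through a dynamic_reconfigure client. In the sketch below the node name /zed/zed_wrapper_node is an assumption that depends on your launch file; the exact name is shown by rqt_reconfigure.

#!/usr/bin/env python
# Sketch: update dynamic parameters from code with a dynamic_reconfigure client.
# The node name "/zed/zed_wrapper_node" is an assumption; check it with rqt_reconfigure.
import rospy
from dynamic_reconfigure.client import Client

if __name__ == "__main__":
    rospy.init_node("zed_dynrec_example")
    client = Client("/zed/zed_wrapper_node", timeout=10)
    # Disable auto exposure, then set manual exposure/gain and the confidence threshold.
    config = client.update_configuration({
        "auto_exposure": False,
        "exposure": 60,
        "gain": 50,
        "confidence": 80,
    })
    rospy.loginfo("New configuration: %s", str(config))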

Transform frame

The ZED ROS wrapper broadcasts multiple coordinate frames that each provide information about the camera’s position and orientation. If needed, the reference frames can be changed in the launch file.

  • zed_camera_center is the current position and orientation of the ZED, determined by visual odometry and the tracking algorithm
  • zed_right_camera is the position and orientation of the ZED’s right camera
  • zed_right_camera_optical is the position and orientation of the ZED’s right camera optical frame
  • zed_left_camera is the position and orientation of the ZED’s left camera
  • zed_left_camera_optical is the position and orientation of the ZED’s left camera optical frame
  • imu_link is the origin of the inertial data frame (ZED Mini only)

For RViz compatibility, the root frame pose_frame is called map. The TF tree generated by the zed_wrapper reflects the standard described in REP 105. The odometry frame is updated using only the visual odometry information. The map frame is updated using the Tracking algorithm provided by the Stereolabs SDK, which fuses the inertial information from the IMU sensor when using a ZED Mini camera.

pose_frame (`map`)
└─odometry_frame (`odom`)
  └─base_frame (`zed_camera_center`)
    └─left_camera_frame (`zed_left_camera_frame`)
      └─left_camera_optical_frame (`zed_left_camera_optical_frame`)
    └─right_camera_frame (`zed_right_camera_frame`)
      └─right_camera_optical_frame (`zed_right_camera_optical_frame`)
    └─imu_frame (`imu_link`) (ZED Mini only)
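
These frames can be queried with the standard tf API. For example, the following sketch looks up the transform from map to zed_camera_center, i.e. the full camera pose including SLAM corrections (frame names as listed above; adjust them if you changed the launch parameters).

#!/usr/bin/env python
# Sketch: look up the camera pose from the TF tree using the default frame names.
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("zed_tf_example")
    listener = tf.TransformListener()
    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            # Pose of the camera center in the map frame (odometry + SLAM corrections).
            trans, rot = listener.lookupTransform("map", "zed_camera_center", rospy.Time(0))
            rospy.loginfo("map -> zed_camera_center: t=%s q=%s", str(trans), str(rot))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rospy.logwarn("Transform not available yet")
        rate.sleep()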

ZED Mini

The ZED Mini provides the same information as the ZED, plus the inertial data from the IMU sensor. The IMU data are used internally to generate the pose in the Map frame with the Tracking sensor fusion algorithm.

Note: The initial pose in the Odometry frame can be set to the first valid pose received from the Tracking algorithm by setting the parameter init_odom_with_first_valid_pose to true.

Services

The ZED node provides the following services:

  • set_initial_pose: Restarts the Tracking algorithm, setting the initial pose of the camera to the value passed as a vector parameter -> [X, Y, Z, R, P, Y]
  • reset_tracking: Restarts the Tracking algorithm, setting the initial pose to the value available on the param server
  • reset_odometry: Resets the odometry values, eliminating the drift accumulated by the Visual Odometry algorithm and setting the new odometry value to the latest camera pose received from the tracking algorithm
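
The services can be called from the command line with rosservice call or programmatically. The sketch below resolves the service type at runtime so it does not need the wrapper's service definitions; it assumes the services are advertised under the /zed namespace and that the reset services take an empty request (verify with rosservice list and rosservice info).

#!/usr/bin/env python
# Sketch: reset the ZED odometry by calling its ROS service.
# Assumes the /zed namespace and an empty request for the reset services;
# check the actual name and type with `rosservice list` and `rosservice info`.
import rospy
import rosservice  # resolves a service's type at runtime (part of ros_comm)

if __name__ == "__main__":
    rospy.init_node("zed_service_example")
    service_name = "/zed/reset_odometry"
    rospy.wait_for_service(service_name)
    srv_class = rosservice.get_service_class_by_name(service_name)
    reset_odometry = rospy.ServiceProxy(service_name, srv_class)
    response = reset_odometry()
    rospy.loginfo("reset_odometry response: %s", str(response))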

Diagnostic

The ZED node publishes diagnostic information that the robotic system can consume through a diagnostic_aggregator node.

Using the Runtime Monitor plugin of rqt, it is possible to get all the diagnostic information and check that the node is working as expected.

A brief explanation of each field:

  • Component: name of the diagnostic component
  • Message: summary of the status of the ZED node
  • HardwareID: Model of the ZED camera and its serial number
  • Capture: grabbing frequency (if video or depth data are subscribed) and the percentage with respect to the camera frame rate
  • Processing time: time in seconds spent processing the data and the time limit to achieve the maximum frame rate
  • Depth status: indicates if the depth processing is being performed
  • Point Cloud: point cloud publishing frequency (if there is at least one subscriber) and the percentage with respect to the camera frame rate
  • Floor Detection: if floor detection is enabled, indicates if the floor has been detected and the camera position correctly initialized
  • Tracking status: indicates the status of the tracking, if enabled
  • IMU: publishing frequency of the IMU topics, if the camera is a ZED Mini and there is at least one subscriber
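
The same information can also be read programmatically by subscribing to /diagnostics, which carries standard diagnostic_msgs/DiagnosticArray messages:

#!/usr/bin/env python
# Sketch: print the ZED diagnostic status from the standard /diagnostics topic.
import rospy
from diagnostic_msgs.msg import DiagnosticArray

def diagnostics_callback(msg):
    for status in msg.status:
        # Only report entries coming from the ZED node.
        if "zed" in status.name.lower():
            rospy.loginfo("%s: %s (level %d)", status.name, status.message, status.level)
            for kv in status.values:
                rospy.loginfo("  %s: %s", kv.key, kv.value)

if __name__ == "__main__":
    rospy.init_node("zed_diagnostics_example")
    rospy.Subscriber("/diagnostics", DiagnosticArray, diagnostics_callback)
    rospy.spin()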

Using multiple ZEDs

It is possible to use multiple ZED cameras with ROS. Simply launch the node with the zed_multi_cam.launch file:

$ roslaunch zed_wrapper zed_multi_cam.launch

Assigning a GPU to a camera

To improve performance, you can specify in the launch file the gpu_id of the graphics card that will be used for the depth computation. The default value (-1) selects the GPU with the highest number of CUDA cores. When using multiple ZEDs, you can assign each camera to a different GPU to increase performance.

Limitations

Performance

This wrapper lets you quickly prototype applications and interface the ZED with other sensors and packages available in ROS. However, the ROS layer introduces significant latency and a performance hit. If performance is a major concern for your application, please consider using the ZED SDK library.

Multiple ZEDs

The ZED camera uses the maximum bandwidth provided by USB 3.0 to output video. When using multiple ZEDs, you may need to reduce camera framerate and resolution to avoid corrupted frames (green or purple frames). You can also use multiple GPUs to load-balance computations and improve performance.