ROS 2 - ZED Node

The easiest way to start a ZED ROS 2 node is by using the command line:

ros2 launch zed_wrapper zed_camera.launch.py camera_model:=<camera model>
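
For example, assuming a ZED 2i is connected:

ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2i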

Published Topics #

The ZED node publishes data to the following topics:

  • Left camera

    • ~/left/camera_info: Left camera calibration data.
    • ~/left/image_rect_color: Left camera color rectified image.
    • ~/left_gray/image_rect_gray: Left camera gray rectified image.
    • ~/left_raw/camera_info: Left camera raw calibration data.
    • ~/left_raw/image_raw_color: Left camera color unrectified image.
    • ~/left_raw_gray/image_raw_gray: Left camera gray unrectified image.
    • ~/rgb/camera_info: RGB calibration data.
    • ~/rgb/image_rect_color: RGB color rectified image.
    • ~/rgb_gray/image_rect_gray: RGB gray rectified image.
    • ~/rgb_raw/camera_info: RGB raw calibration data.
    • ~/rgb_raw/image_raw_color: RGB color unrectified image.
    • ~/rgb_raw_gray/image_raw_gray: RGB gray unrectified image.
  • Right camera

    • ~/right/camera_info: Right camera calibration data.
    • ~/right/image_rect_color: Right camera color rectified image.
    • ~/right_gray/image_rect_gray: Right camera gray rectified image.
    • ~/right_raw/camera_info: Right camera raw calibration data.
    • ~/right_raw/image_raw_color: Right camera color unrectified image.
    • ~/right_raw_gray/image_raw_gray: Right camera gray unrectified image.
  • Stereo pair

    • ~/stereo/image_rect_color: side-by-side left/right rectified stereo pair. Calibration data are available in the topics: ~/left/camera_info and ~/right/camera_info.
    • ~/stereo_raw/image_raw_color: side-by-side left/right unrectified stereo pair. Calibration data are available in the topics: ~/left_raw/camera_info and ~/right_raw/camera_info.
  • Depth and point cloud

    • ~/depth/camera_info: Depth camera calibration data.
    • ~/depth/depth_registered: Depth map image registered on left image.
    • ~/depth/depth_info: minimum and maximum depth information (custom message).
    • ~/point_cloud/cloud_registered: Registered color point cloud.
    • ~/confidence/confidence_map: Confidence image (floating point values).
    • ~/disparity/disparity_image: Disparity image.
  • Sensors data

    • ~/left_cam_imu_transform: Transform from the left camera sensor to the IMU sensor.
    • ~/imu/data: Accelerometer, gyroscope, and orientation data in Earth frame.
    • ~/imu/data_raw: Accelerometer and gyroscope data in Earth frame.
    • ~/imu/mag: Raw magnetometer data (only ZED 2 and ZED 2i).
    • ~/atm_press: atmospheric pressure (only ZED 2 and ZED 2i).
    • ~/temperature/imu: temperature measured by the IMU sensor.
    • ~/temperature/left: temperature measured near the left CMOS sensor (only ZED 2 and ZED 2i).
    • ~/temperature/right: temperature measured near the right CMOS sensor (only ZED 2 and ZED 2i).
  • Positional tracking

    • ~/pose: Absolute 3D position and orientation relative to the Map frame (Sensor Fusion algorithm + SLAM).
    • ~/pose/status: the status of the positional tracking module (custom message).
    • ~/pose_with_covariance: Camera pose referred to Map frame with covariance.
    • ~/odom: Absolute 3D position and orientation relative to the Odometry frame (pure visual odometry for ZED, visual-inertial odometry for ZED Mini).
    • ~/odom/status: the status of the odometry from the positional tracking module (custom message).
    • ~/path_map: Sequence of camera poses in Map frame.
    • ~/path_odom: Sequence of camera odometry poses in Map frame.
  • Geo Tracking

    • ~/pose/filtered: the GNSS fused pose of the robot.
    • ~/pose/filtered/status: the status of the GNSS fused pose (custom message).
    • ~/geo_pose: the Latitude/Longitude pose of the robot.
    • ~/geo_pose/status: the status of the Latitude/Longitude pose (custom message).
  • 3D Mapping

    • ~/mapping/fused_cloud: Fused point cloud created when the enable_mapping service is called.
  • Object Detection

    • ~/obj_det/objects: Array of detected objects (custom message).
  • Body Tracking

    • ~/body_trk/skeletons: Array of detected skeletons (custom message).
  • Plane Detection

    • ~/plane: Detected plane (custom message).
    • ~/plane_marker: Detected plane object to be displayed on RVIZ 2.

Note: Each topic has a common prefix created by the launch file: ~ = /<namespace>/<node_name>/, where <namespace> is replaced by the value of the parameter camera_name and <node_name> is the name of the node (e.g. zed_node).
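
You can verify what the node is publishing with the standard ROS 2 CLI tools. For example, assuming the default camera_name zed and node name zed_node:

ros2 topic list | grep /zed/zed_node
ros2 topic hz /zed/zed_node/left/image_rect_color
ros2 topic echo /zed/zed_node/imu/data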

Image Transport #

The ROS 2 wrapper supports the image_transport stack, introduced with ROS 2 Crystal Clemmys.

All the image topics are published using an image_transport::CameraPublisher object, which correctly associates the sensor_msgs::msg::CameraInfo message with the corresponding sensor_msgs::msg::Image message and creates the relative compressed image streams.
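
As a consequence, each image topic also provides the standard image_transport compressed streams as sub-topics (e.g. <topic>/compressed). A minimal rclpy sketch subscribing to the compressed RGB stream (the topic name assumes the default /zed/zed_node prefix):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage

class CompressedListener(Node):
    def __init__(self):
        super().__init__('zed_compressed_listener')
        # The '/compressed' sub-topic is created automatically by image_transport
        self.sub = self.create_subscription(
            CompressedImage,
            '/zed/zed_node/rgb/image_rect_color/compressed',
            self.on_image, 10)

    def on_image(self, msg):
        self.get_logger().info(f'Received {msg.format} frame ({len(msg.data)} bytes)')

def main():
    rclpy.init()
    rclpy.spin(CompressedListener())

if __name__ == '__main__':
    main()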

QoS profiles #

All the topics are configured by default to use the following ROS 2 QoS profile:

  • Reliability: RELIABLE
  • History (Depth): KEEP_LAST (10)
  • Durability: VOLATILE
  • Lifespan: Infinite
  • Deadline: Infinite
  • Liveliness: AUTOMATIC
  • Liveliness lease duration: Infinite

Note: Read the official ROS 2 documentation about QoS and the related settings for details.

The QoS settings can be modified by changing the relative parameters in the YAML files, as described in this official ROS 2 design document.

When creating a subscriber, be sure to use a compatible QoS profile according to the following tables (see the subscriber sketch after the tables):

Compatibility of QoS durability profiles:

| Publisher | Subscriber | Connection | Result |
| --- | --- | --- | --- |
| Volatile | Volatile | Yes | Volatile |
| Volatile | Transient local | No | - |
| Transient local | Volatile | Yes | Volatile |
| Transient local | Transient local | Yes | Transient local |

Compatibility of QoS reliability profiles:

| Publisher | Subscriber | Connection | Result |
| --- | --- | --- | --- |
| Best effort | Best effort | Yes | Best effort |
| Best effort | Reliable | No | - |
| Reliable | Best effort | Yes | Best effort |
| Reliable | Reliable | Yes | Reliable |

In order for a connection to be made, all of the policies that affect compatibility must be compatible. For instance, if a publisher-subscriber pair has compatible reliability QoS profiles, but incompatible durability QoS profiles, the connection will not be made.

Note: for a detailed list of all the compatibility settings, please refer to the official ROS 2 documentation.
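
For example, a minimal rclpy subscriber using a QoS profile compatible with the default publisher settings listed above (RELIABLE could also be replaced by BEST_EFFORT, which is compatible with a RELIABLE publisher; the topic name assumes the default /zed/zed_node prefix):

import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy, DurabilityPolicy
from sensor_msgs.msg import Image

class ImageListener(Node):
    def __init__(self):
        super().__init__('zed_image_listener')
        # QoS profile matching the ZED node default publisher settings
        qos = QoSProfile(
            reliability=ReliabilityPolicy.RELIABLE,
            history=HistoryPolicy.KEEP_LAST,
            depth=10,
            durability=DurabilityPolicy.VOLATILE)
        self.sub = self.create_subscription(
            Image, '/zed/zed_node/left/image_rect_color', self.on_image, qos)

    def on_image(self, msg):
        self.get_logger().info(f'Image {msg.width}x{msg.height}')

def main():
    rclpy.init()
    rclpy.spin(ImageListener())

if __name__ == '__main__':
    main()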

Configuration parameters #

Specify your launch parameters in the common.yaml, zed.yaml, zedm.yaml, zed2.yaml, zed2i.yaml, zedx.yaml, zedxm.yaml, and virtual.yaml files available in the zed_wrapper/config folder.
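
The files use the standard ROS 2 parameter file layout, with one namespace per module. A minimal sketch (the parameter values below are illustrative; the files shipped in zed_wrapper/config are the reference):

/**:
    ros__parameters:
        general:
            camera_timeout_sec: 5
            pub_frame_rate: 15.0    # [DYNAMIC]
        depth:
            depth_mode: 'ULTRA'
            depth_confidence: 50    # [DYNAMIC]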

General parameters #

Namespace: general

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| camera_timeout_sec | Camera timeout (sec) if communication fails | int |
| camera_max_reconnect | Reconnection attempts after timeout, before shutting down | int |
| camera_flip | Flip the camera data if it is mounted upside down | true, false |
| serial_number | Select a ZED camera by its serial number | int |
| pub_resolution | The resolution used for output: 'NATIVE' to use the same general.grab_resolution, 'CUSTOM' to apply the general.pub_downscale_factor downscale factor to reduce bandwidth in transmission | string, 'NATIVE', 'CUSTOM' |
| pub_downscale_factor | Rescale factor used to rescale images before publishing when pub_resolution is 'CUSTOM' | double |
| pub_frame_rate* | Set the video/depth publish rate | double |
| gpu_id | Select a GPU device for depth computation | int |
| optional_opencv_calibration_file | Optional path where the ZED SDK can find a file containing the calibration information of the camera computed by OpenCV | string |

* Dynamic parameter

zed.yaml, zedm.yaml, zed2.yaml, zed2i.yaml, zedx.yaml, zedxm.yaml, and virtual.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| camera_model | Type of Stereolabs camera | string, 'zed', 'zedm', 'zed2', 'zed2i', 'zedx', 'zedxm', 'virtual' |
| camera_name | User name for the camera, can be different from the camera model | string |
| grab_resolution | The native camera grab resolution | string, 'HD2K', 'HD1200', 'HD1080', 'HD720', 'SVGA', 'VGA', 'AUTO' |
| grab_frame_rate | ZED SDK internal grabbing rate | int, 15, 30, 60, 90, 100, 120 |

SVO input parameters #

Namespace: svo

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| svo_path | Specify the SVO filename. Note: relative file paths are not allowed | string |
| svo_loop | Restart the SVO if the end of the file is reached | true, false |
| svo_realtime | If true, the SVO is played trying to respect the original frame rate, eventually skipping frames; otherwise every frame is processed respecting the pub_frame_rate setting | true, false |
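
For example, assuming the standard zed_camera.launch.py exposes the SVO path as a launch argument, a recorded file can be replayed with:

ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2i svo_path:=/path/to/recording.svo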

Streaming input parameters #

Namespace: stream

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| stream_address | Address of the local streaming server | string |
| stream_port | Port of the local streaming server (only even values are valid) | int |

Simulation input parameters #

Namespace: simulation

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| sim_enabled | Enable simulation input | true, false |
| sim_address | Address of the simulation server | string |
| sim_port | Port of the simulation server (only even values are valid) | int |

Video parameters #

Namespace: video

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| brightness* | Image brightness | int, [0,8] |
| contrast* | Image contrast | int, [0,8] |
| hue* | Image hue | int, [0,11] |
| saturation* | Image saturation | int, [0,8] |
| sharpness* | Image sharpness | int, [0,8] |
| gamma* | Image gamma | int, [0,8] |
| auto_exposure_gain* | Enable auto exposure and auto gain | true, false |
| exposure* | Exposure value if auto_exposure_gain is false | int, [0,100] |
| gain* | Gain value if auto_exposure_gain is false | int, [0,100] |
| auto_whitebalance* | Enable auto white balance | true, false |
| whitebalance_temperature* | White balance temperature | int, [28,65] |

* Dynamic parameter

zedx.yaml, zedxm.yaml, and virtual.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| exposure_time* | Defines the real exposure time in microseconds. Recommended for controlling exposure manually (instead of the video.exposure setting) | int, [28,30000] |
| auto_exposure_time_range_min* | Defines the minimum range of the exposure auto control in microseconds | int |
| auto_exposure_time_range_max* | Defines the maximum range of the exposure auto control in microseconds | int |
| exposure_compensation* | Defines the exposure-target compensation made after auto exposure. Reduces the overall illumination target by a factor of F-stops | int, [0,100] |
| analog_gain* | Defines the real analog gain (sensor) in mDB. Recommended for controlling sensor gain manually (instead of the video.gain setting) | int, [1000,16000] |
| auto_analog_gain_range_min* | Defines the minimum range of sensor gain in automatic control | int |
| auto_analog_gain_range_max* | Defines the maximum range of sensor gain in automatic control | int |
| digital_gain* | Defines the real digital gain (ISP) as a factor. Recommended for controlling ISP gain manually (instead of the video.gain setting) | int, [1,256] |
| auto_digital_gain_range_min* | Defines the minimum range of digital ISP gain in automatic control | int |
| auto_digital_gain_range_max* | Defines the maximum range of digital ISP gain in automatic control | int |
| denoising* | Defines the level of denoising applied to both left and right images | int, [0,100] |

* Dynamic parameter

Region of Interest parameters #

Namespace: region_of_interest

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| automatic_roi | Enable automatic ROI generation to detect the parts of the robot in the FoV and remove them from the processing. Note: if enabled, the value of manual_polygon is ignored | true, false |
| depth_far_threshold_meters | Filter how far objects in the ROI should be considered; useful for a vehicle, for instance | double |
| image_height_ratio_cutoff | Consider only the lower part of the image (by default the lower half); can be useful to filter out the sky | double |
| manual_polygon | A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to 1.0 to be resolution independent | string |
| apply_to_depth | Apply the ROI to depth processing | true, false |
| apply_to_positional_tracking | Apply the ROI to positional tracking processing | true, false |
| apply_to_object_detection | Apply the ROI to object detection processing | true, false |
| apply_to_body_tracking | Apply the ROI to body tracking processing | true, false |
| apply_to_spatial_mapping | Apply the ROI to spatial mapping processing | true, false |
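
As an illustration, a hypothetical manual ROI configuration in common.yaml might look like the following; the polygon is a list of normalized [x, y] vertices, and the exact string format expected by your wrapper version should be checked in the shipped configuration files:

region_of_interest:
    automatic_roi: false
    # Hypothetical polygon with normalized (resolution-independent) vertices
    manual_polygon: '[[0.25,0.33],[0.75,0.33],[0.75,0.5],[0.5,0.75],[0.25,0.5]]'
    apply_to_depth: true
    apply_to_positional_tracking: true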

Depth parameters #

Namespace: depth

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| depth_mode | Select depth map quality | string, 'NONE', 'PERFORMANCE', 'QUALITY', 'ULTRA', 'NEURAL' |
| depth_stabilization | Enable depth stabilization. Stabilizing the depth requires an additional computation load as it enables tracking | int, [0,100] |
| openni_depth_mode | Enable/disable the OpenNI depth format (16 bit, millimeters) | int, [0,1] |
| point_cloud_freq* | Frequency of the point cloud publishing (less than or equal to the grab_frame_rate value) | double |
| depth_confidence* | Depth confidence threshold (lower values = more filtering) | int, [0,100] |
| depth_texture_conf* | Depth texture confidence threshold (lower values = more filtering) | int, [0,100] |
| remove_saturated_areas* | If true, color saturated areas are not included in the depth map | true, false |

Note: To disable depth calculation, set depth_mode to 'NONE'. All the other parameters relative to depth will be ignored and only the video streams and sensors data will be published.

* Dynamic parameter

zed.yaml, zedm.yaml, zed2.yaml, zed2i.yaml, zedx.yaml, zedxm.yaml, and virtual.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| min_depth | Minimum depth value (from the camera) that will be computed | double |
| max_depth | Maximum range allowed for depth. Values beyond this limit will be reported as TOO_FAR | double, ]0.0,20.0] |


Positional tracking parameters (Odometry and Localization) #

Namespace: pos_tracking

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| pos_tracking_enabled | Enable positional tracking when the node starts | true, false |
| pos_tracking_mode | Use GEN_1 for the old behavior, GEN_2 for the new algorithm | string, 'GEN_1', 'GEN_2' |
| imu_fusion | Enable IMU fusion. When set to false, only the optical odometry will be used | true, false |
| publish_tf | Enable publishing of the transform odom -> base_link and all the other transforms | true, false |
| publish_map_tf | Enable publishing of the transform map -> odom. Only valid if publish_tf is true | true, false |
| map_frame | Frame_id of the pose message | string |
| odometry_frame | Frame_id of the odom message | string |
| area_memory_db_path | The full path of the space memory DB | string |
| area_memory | Enable the Loop Closing algorithm | true, false |
| reset_odom_with_loop_closure | Enable the automatic reset of the odometry when a loop closure event is detected | true, false |
| depth_min_range | Set this value to remove fixed zones of the robot in the FoV of the camera from the visual odometry evaluation | double |
| set_as_static | If 'true' the camera will be static and not move in the environment | true, false |
| set_gravity_as_origin | If 'true' align the positional tracking world to the IMU gravity measurement. Keep the yaw from the user initial pose | true, false |
| floor_alignment | Indicates if the floor must be used as origin for height measures | true, false |
| initial_base_pose | Initial reference pose. Useful if the base_link does not start from the origin | array |
| path_pub_rate | Frequency (Hz) of publishing of the path messages | double |
| path_max_count | Maximum number of poses kept in the pose arrays (-1 for infinite) | int |
| two_d_mode | Enable if the robot moves on a planar surface. Forces Z to fixed_z_value, roll and pitch to zero | true, false |
| fixed_z_value | Fixed value assigned to the Z coordinate if two_d_mode is true | double |
| transform_time_offset | The value added to the timestamp of the map -> odom and odom -> base_link transforms being generated | double |

Global Localization (GNSS Fusion) #

Namespace: gnss_fusion

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| gnss_fusion_enabled | Enable the fusion of 'sensor_msgs/NavSatFix' message information into pose data | true, false |
| gnss_fix_topic | Name of the GNSS topic of type NavSatFix to subscribe to [default: "/gps/fix"] | string |
| gnss_zero_altitude | Set to true to ignore GNSS altitude information | true, false |
| h_covariance_mul | Multiplier factor applied to the horizontal covariance of the received fix (X/Y plane) | double |
| v_covariance_mul | Multiplier factor applied to the vertical covariance of the received fix (Z axis) | double |
| gnss_frame | The TF frame of the GNSS sensor [default: "gnss_link"] | string |
| publish_utm_tf | Publish the utm -> map TF | true, false |
| broadcast_utm_transform_as_parent_frame | If 'true' publish utm -> map TF, otherwise map -> utm | true, false |
| enable_reinitialization | Determines whether reinitialization should be performed between GNSS and VIO fusion when a significant disparity is detected between GNSS data and the current fusion data. It becomes particularly crucial during prolonged GNSS signal loss scenarios | true, false |
| enable_rolling_calibration | If set to true, the fusion algorithm will use a rough VIO/GNSS calibration at first and then refine it. This allows you to quickly get a fused position | true, false |
| enable_translation_uncertainty_target | When enabled (set to true), the calibration process between GNSS and VIO accounts for the uncertainty in the determined translation, thereby facilitating the calibration termination. The maximum allowable uncertainty is controlled by the target_translation_uncertainty parameter | true, false |
| gnss_vio_reinit_threshold | Determines the threshold for GNSS/VIO reinitialization. If the fused position deviates outside the region defined by the product of the GNSS covariance and the gnss_vio_reinit_threshold, a reinitialization will be triggered | double |
| target_translation_uncertainty | Defines the target translation uncertainty at which the calibration process between GNSS and VIO concludes. By default, the threshold is set at 10 centimeters | double |
| target_yaw_uncertainty | Defines the target yaw uncertainty (in radians) at which the calibration process between GNSS and VIO concludes. By default, the threshold is set at 0.1 radians | double |

Mapping parameters #

Namespace: mapping

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| mapping_enabled | Enable/disable the mapping module | true, false |
| resolution | Resolution of the fused point cloud [0.01, 0.2] | double, default=0.1 |
| max_mapping_range | Maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0] | double |
| fused_pointcloud_freq | Publishing frequency (Hz) of the 3D map as a fused point cloud | double |
| clicked_point_topic | Topic published by Rviz when a point of the cloud is clicked. Used for plane detection | string, default "/clicked_point" |

Sensors parameters #

Namespace: sensors

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| publish_imu_tf | Enable publishing of the transform imu_link -> left_camera_frame | true, false |
| sensors_image_sync | Synchronize sensors messages with the latest published video/depth message | true, false |
| sensors_pub_rate | Frequency of publishing of sensors data (max: 400, min: grab rate) | double |

Object detection parameters #

Namespace: object_detection

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| od_enabled | Automatically enable Object Detection when the node starts | true, false |
| model | Detection model | string, 'MULTI_CLASS_BOX_FAST', 'MULTI_CLASS_BOX_MEDIUM', 'MULTI_CLASS_BOX_ACCURATE', 'PERSON_HEAD_BOX_FAST', 'PERSON_HEAD_BOX_ACCURATE' |
| allow_reduced_precision_inference | Allow inference to run at a lower precision to improve runtime and memory usage | true, false |
| max_range | Defines an upper depth range for detections [m] | double |
| confidence_threshold* | Minimum value of the detection confidence of an object | double, [0,99] |
| prediction_timeout | During this time [sec], the object will have an OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions | double |
| filtering_mode | Defines the filtering mode applied to raw detections | int, 0: NONE, 1: NMS3D, 2: NMS3D_PER_CLASS |
| mc_people* | Enable people detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |
| mc_vehicle* | Enable vehicle detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |
| mc_bag* | Enable bag detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |
| mc_animal* | Enable animal detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |
| mc_electronics* | Enable electronic device detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |
| mc_fruit_vegetable* | Enable fruit and vegetable detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |
| mc_sport* | Enable sport-related object detection for the MULTI_CLASS_BOX and MULTI_CLASS_BOX_ACCURATE models | true, false |

* Dynamic parameter

Body Tracking parameters #

Namespace: body_tracking

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| bt_enabled | True to enable Body Tracking | true, false |
| model | Detection model | string, 'HUMAN_BODY_FAST', 'HUMAN_BODY_MEDIUM', 'HUMAN_BODY_ACCURATE' |
| body_format | Skeleton format | string, 'BODY_18', 'BODY_34', 'BODY_38', 'BODY_70' |
| allow_reduced_precision_inference | Allow inference to run at a lower precision to improve runtime and memory usage | true, false |
| max_range | Defines an upper depth range for detections [m] | double |
| body_kp_selection | Defines the body selection output by the SDK when retrieveBodies is called | string, 'FULL', 'UPPER_BODY' |
| enable_body_fitting | Defines if the body fitting will be applied | true, false |
| enable_tracking | Defines if the object detection will track objects across the image flow | true, false |
| prediction_timeout_s | During this time [sec], the skeleton will have an OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions | double |
| confidence_threshold* | Minimum value of the detection confidence of skeleton key points | double, [0,99] |
| minimum_keypoints_threshold* | Minimum number of skeleton key points to be detected for a valid skeleton | int |

* Dynamic parameter

Streaming Server parameters #

Namespace: stream_server

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| stream_enabled | Enable the streaming server when the camera is open | true, false |
| codec | Encoding type used for image streaming | string, 'H264', 'H265' |
| port | Port used for streaming. The port must be an even number; any odd number will be rejected | int |
| bitrate | Streaming bitrate (in Kbits/s) used for streaming | int, [1000,60000] |
| gop_size | The GOP size determines the maximum distance between IDR/I-frames. A very high GOP size will result in slightly more efficient compression, especially on static scenes, but latency will increase | int, [max 256] |
| adaptative_bitrate | The bitrate will be adjusted depending on the number of packets dropped during streaming. If activated, the bitrate can vary between [bitrate/4, bitrate] | true, false |
| chunk_size | Stream buffers are divided into chunks of chunk_size bytes. You can lower the chunk_size value if the network drops many packets: this generates more chunks for a single image, but each chunk sent is lighter, avoiding inside-chunk corruption. Increasing this value can decrease latency | int, [1024,65000] |
| target_framerate | Frame rate of the streaming output. This frame rate must be less than or equal to the camera frame rate. Allowed frame rates are 15, 30, 60 or 100 if possible. Any other value will be discarded and the camera FPS will be used | int |

Debug parameters #

Namespace: debug

common.yaml

| Parameter | Description | Value |
| --- | --- | --- |
| sdk_verbose | Set the verbose level of the ZED SDK | int, 0 -> disable |
| debug_common | Enable general debug log outputs | true, false |
| debug_sim | Enable simulation debug log outputs | true, false |
| debug_video_depth | Enable video/depth debug log outputs | true, false |
| debug_camera_controls | Enable camera controls debug log outputs | true, false |
| debug_point_cloud | Enable point cloud debug log outputs | true, false |
| debug_positional_tracking | Enable positional tracking debug log outputs | true, false |
| debug_gnss | Enable GNSS fusion debug log outputs | true, false |
| debug_sensors | Enable sensors debug log outputs | true, false |
| debug_mapping | Enable mapping debug log outputs | true, false |
| debug_terrain_mapping | Enable terrain mapping debug log outputs | true, false |
| debug_object_detection | Enable object detection debug log outputs | true, false |
| debug_body_tracking | Enable body tracking debug log outputs | true, false |
| debug_roi | Enable region of interest debug log outputs | true, false |
| debug_streaming | Enable streaming debug log outputs | true, false |
| debug_advanced | Enable advanced debug log outputs | true, false |

Dynamic parameters #

The ZED node lets you reconfigure many parameters dynamically during the execution of the node. All the dynamic parameters are indicated with a * in the list above and with the tag [DYNAMIC] in the comments of the YAML files.

You can set the parameters using the CLI command ros2 param set, e.g.:

$ ros2 param set /zed/zed_node depth.confidence 80

If the parameter is set successfully, you will get a confirmation message:

Set parameter successful

If you try to set a parameter that is not dynamically reconfigurable, or you specify an invalid value, you will get an error of this type:

$ ros2 param set /zed/zed_node depth.confidence 150
Set parameter failed

and the ZED node will report a warning message explaining the error type:

1538556595.265117561: [zed.zed_node] [WARN] The param 'depth.confidence' requires an INTEGER value in the range ]0,100]

You can also use the Configuration -> Dynamic Reconfigure plugin of the rqt tool to set dynamic parameters through a graphical interface.
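
Dynamic parameters can also be set programmatically by calling the standard rcl_interfaces/srv/SetParameters service exposed by every ROS 2 node. A minimal rclpy sketch (the service name assumes the default /zed/zed_node):

import rclpy
from rclpy.node import Node
from rcl_interfaces.srv import SetParameters
from rcl_interfaces.msg import Parameter, ParameterValue, ParameterType

def main():
    rclpy.init()
    node = Node('zed_param_setter')
    client = node.create_client(SetParameters, '/zed/zed_node/set_parameters')
    client.wait_for_service()
    # Equivalent to: ros2 param set /zed/zed_node depth.confidence 80
    req = SetParameters.Request()
    req.parameters = [Parameter(
        name='depth.confidence',
        value=ParameterValue(type=ParameterType.PARAMETER_INTEGER, integer_value=80))]
    future = client.call_async(req)
    rclpy.spin_until_future_complete(node, future)
    result = future.result().results[0]
    print('Set parameter successful' if result.successful else f'Failed: {result.reason}')

if __name__ == '__main__':
    main()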

Transform frames #

The ZED ROS 2 wrapper broadcasts multiple coordinate frames, each of which provides information about the camera's position and orientation. If needed, the reference frames can be changed in the launch file.

  • <camera_name>_camera_link is the current position and orientation of the ZED base center. It corresponds to the bottom central fixing hole.
  • <camera_name>_camera_center is the current position and orientation of the ZED middle baseline, determined by visual odometry and the tracking module.
  • <camera_name>_right_camera is the position and orientation of the ZED’s right camera.
  • <camera_name>_right_camera_optical is the position and orientation of the ZED’s right camera optical frame.
  • <camera_name>_left_camera is the position and orientation of the ZED’s left camera.
  • <camera_name>_left_camera_optical is the position and orientation of the ZED’s left camera optical frame.
  • <camera_name>_imu_link is the origin of the inertial data frame (not available with ZED).
  • <camera_name>_mag_link is the origin of the magnetometer frame (ZED2/ZED2i only).
  • <camera_name>_baro_link is the origin of the barometer frame (ZED2/ZED2i only).
  • <camera_name>_temp_left_link is the origin of the left temperature frame (ZED2/ZED2i only).
  • <camera_name>_temp_right_link is the origin of the right temperature frame (ZED2/ZED2i only).

For RVIZ 2 compatibility, the root frame map_frame is called map. The TF tree generated by the zed_wrapper reflects the standard described in REP 105. The odometry frame is updated using only the "visual-inertial odometry" information (loop closure is not applied). The map frame is updated using the Positional Tracking algorithm provided by the Stereolabs SDK, fusing the inertial information from the IMU sensor and applying loop closure information.

map_frame (`map`)
└─odometry_frame (`odom`)
    └─camera_link (`<camera_name>_camera_link`)
      └─camera_frame (`<camera_name>_camera_center`)
        └─left_camera_frame (`<camera_name>_left_camera_frame`)
        | └─left_camera_optical_frame (`<camera_name>_left_camera_optical_frame`)
        | └─imu_frame (`<camera_name>_imu_link`)
        └─right_camera_frame (`<camera_name>_right_camera_frame`) 
          └─right_camera_optical_frame (`<camera_name>_right_camera_optical_frame`)
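
A minimal rclpy/tf2 sketch querying the camera position in the map frame (the frame name assumes the default camera_name zed):

import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformListener

class PoseLookup(Node):
    def __init__(self):
        super().__init__('zed_tf_lookup')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        try:
            # map -> <camera_name>_camera_link, latest available transform
            t = self.tf_buffer.lookup_transform('map', 'zed_camera_link', Time())
            p = t.transform.translation
            self.get_logger().info(f'Camera at x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')
        except Exception as ex:
            self.get_logger().warn(f'Transform not available yet: {ex}')

def main():
    rclpy.init()
    rclpy.spin(PoseLookup())

if __name__ == '__main__':
    main()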

When the Geo Tracking module is enabled, the TF tree also includes the utm frame, published as parent (utm -> map) or child (map -> utm) of the map frame depending on the broadcast_utm_transform_as_parent_frame parameter.

Services #

The ZED node provides the following services:

  • ~/reset_odometry: Resets the odometry values eliminating the drift due to the Visual Odometry algorithm. The new odometry value is set to the latest camera pose received from the tracking algorithm.
  • ~/reset_pos_tracking: Restarts the Tracking algorithm setting the initial pose to the value available in the param server or to the latest pose set with the service set_pose.
  • ~/set_pose: Restarts the Tracking algorithm setting the initial pose of the camera to the value passed as vector parameter -> [X, Y, Z, R, P, Y]
  • ~/enable_obj_det: enable/disable object detection.
  • ~/enable_body_trk: enable/disable body tracking.
  • ~/enable_mapping: enable/disable spatial mapping.
  • ~/enable_streaming: enable/disable a local streaming server.
  • ~/start_svo_rec: Start recording an SVO file. If no filename is provided the default zed.svo is used. If no path is provided with the filename the default recording folder is ~/.ros/
  • ~/stop_svo_rec: Stop an active SVO recording.
  • ~/toggle_svo_pause: set/reset SVO playing pause. Note: Only available if svo.svo_realtime is false and if an SVO has been chosen as input source.
  • ~/set_roi: Set the Region of Interest to the described polygon.
  • ~/reset_roi: Reset the Region of Interest to the full image frame.
  • ~/toLL: converts from the map coordinate frame to Latitude/Longitude. Note: only available if GNSS fusion is enabled.
  • ~/fromLL: converts from Latitude/Longitude to the map coordinate frame. Note: only available if GNSS fusion is enabled.

Services can be called using the rqt graphical tool by enabling the plugin Plugins -> Services -> Service caller.

You can also call a service by using the CLI command ros2 service call.

For example, to start Object Detection for the ZED 2 camera:

$ ros2 service call /zed/zed_node/enable_obj_det std_srvs/srv/SetBool "{data: true}"

and the service server will reply:

waiting for service to become available...
requester: making request: std_srvs.srv.SetBool_Request(data=True)

response:
std_srvs.srv.SetBool_Response(success=True, message='Object Detection started')
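
The same call can be made from an rclpy node, using the std_srvs/srv/SetBool type shown above:

import rclpy
from rclpy.node import Node
from std_srvs.srv import SetBool

def main():
    rclpy.init()
    node = Node('od_starter')
    client = node.create_client(SetBool, '/zed/zed_node/enable_obj_det')
    client.wait_for_service()
    # Request Object Detection to start (data=False would stop it)
    future = client.call_async(SetBool.Request(data=True))
    rclpy.spin_until_future_complete(node, future)
    resp = future.result()
    print(resp.success, resp.message)

if __name__ == '__main__':
    main()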

Assigning a GPU to a camera #

To improve performance, you can specify the gpu_id of the graphics card to be used for depth computation in the common.yaml configuration file. The default value (-1) selects the GPU with the highest number of CUDA cores. When using multiple ZED cameras, you can assign each camera to a different GPU to increase performance.