Create and use Docker images for ZED and ROS 2

You can find examples of how to build and run such a container in our GitHub repository.

You should modify the Dockerfile to include your application and start it automatically with the container by overriding the CMD.
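As a sketch of what that override could look like (the package name, workspace path, and launch file below are placeholders for your own application, not part of the provided Dockerfile):

```dockerfile
# Hypothetical additions at the end of the provided Dockerfile.
# Replace my_ros2_pkg / my_launch.py with your own package and launch file.
COPY ./my_ros2_pkg /root/ros2_ws/src/my_ros2_pkg
RUN /bin/bash -c "source /opt/ros/humble/setup.bash && cd /root/ros2_ws && colcon build"
CMD ["/bin/bash", "-c", "source /root/ros2_ws/install/setup.bash && ros2 launch my_ros2_pkg my_launch.py"]
```

Adjust the ROS 2 setup paths to match the distribution and workspace layout used in the Dockerfile you start from.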

Build the Docker images #

Choose a name for the image and replace <image_tag> with it in the commands below, e.g. zed-ubuntu22.04-cuda11.7-ros2-humble.
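For example, storing the tag in a shell variable keeps the build and run commands consistent (the tag name here is just an example):

```shell
# Example tag; any name works as long as you use it consistently
IMAGE_TAG="zed-ubuntu22.04-cuda11.7-ros2-humble"

# The variable can then stand in for <image_tag>, e.g.:
echo docker build -t "${IMAGE_TAG}" -f Dockerfile.u22-cu117-humble-release .
```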

Note: Pre-built Docker images for the latest version of the master branch are available on Docker Hub at stereolabs/zedbot.

Release image #

The Release image internally clones the master branch of this repository to build the ZED ROS 2 Wrapper code.

Build the image for desktop:

docker build -t "<image_tag>" -f Dockerfile.u22-cu117-humble-release .

or build the image for NVIDIA® Jetson:

docker build -t "<image_tag>" -f Dockerfile.l4t35_1-humble-release .

Devel image #

The devel image embeds the source code of the current branch so that you can modify it. For this reason, you must first copy the sources to a temporary folder that is reachable while building the Docker image. The folder can be removed once the image is built.

Create a temporary tmp_sources folder for the sources and copy the files:

mkdir -p ./tmp_sources
cp -r ../zed* ./tmp_sources
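If these commands are not run from the repository's docker folder, the zed* glob matches nothing and the copy silently does nothing; a quick sanity check before building (a sketch):

```shell
# Warn if the copy step produced an empty folder (e.g. wrong working directory)
if [ -z "$(ls -A ./tmp_sources 2>/dev/null)" ]; then
  echo "tmp_sources is missing or empty - re-run the copy step from the docker folder"
fi
```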

Build the image for desktop:

docker build -t "<image_tag>" -f Dockerfile.u22-cu117-humble-devel .

or build the image for NVIDIA® Jetson:

docker build -t "<image_tag>" -f Dockerfile.l4t35_1-humble-devel .

Remove the temporary sources to avoid issues in future builds:

rm -r ./tmp_sources

Note: It is important that the name of the temporary folder is tmp_sources because it is used in the Dockerfile we provide.
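The copy, build, and cleanup steps can also be wrapped in a single helper so tmp_sources is removed even when the build fails; a sketch assuming a bash shell (the function name is hypothetical):

```shell
# Hypothetical helper wrapping the devel build steps above (bash)
build_devel_image() {
  local image_tag="$1"
  mkdir -p ./tmp_sources
  cp -r ../zed* ./tmp_sources
  # The RETURN trap removes the sources when the function exits,
  # whether docker build succeeded or not
  trap 'rm -rf ./tmp_sources' RETURN
  docker build -t "${image_tag}" -f Dockerfile.u22-cu117-humble-devel .
}
```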

Run the Docker image #

NVIDIA® runtime #

NVIDIA® drivers must be accessible from the Docker image to run the ZED SDK code on the GPU. You’ll need:

  • The NVIDIA container runtime installed, following this guide
  • A specific Docker runtime environment with --gpus all or -e NVIDIA_DRIVER_CAPABILITIES=all
  • Docker privileged mode with --privileged

Volumes #

A few volumes should also be shared with the host.

  • /usr/local/zed/resources:/usr/local/zed/resources if you plan to use the AI modules of the ZED SDK (Object Detection, Skeleton Tracking, NEURAL depth). We suggest bind-mounting a folder to avoid downloading and optimizing the AI models each time the Docker image is restarted. The first time you use an AI model inside the Docker image, it is downloaded and optimized in the bind-mounted folder and stored there for the next runs.
  • /dev:/dev to share the video devices
  • For GMSL cameras (ZED X) you’ll also need:
    • /tmp:/tmp
    • /var/nvidia/nvcam/settings/:/var/nvidia/nvcam/settings/
    • /etc/systemd/system/zed_x_daemon.service:/etc/systemd/system/zed_x_daemon.service
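To keep the run command readable, the GMSL mounts can be collected in a bash array before launching the container (a sketch; the variable name is arbitrary):

```shell
# Volume mounts required for GMSL (ZED X) cameras, as listed above
GMSL_VOLUMES=(
  -v /dev:/dev
  -v /tmp:/tmp
  -v /var/nvidia/nvcam/settings/:/var/nvidia/nvcam/settings/
  -v /etc/systemd/system/zed_x_daemon.service:/etc/systemd/system/zed_x_daemon.service
)

# Expand the array inside the docker run command, e.g.:
echo docker run --runtime nvidia -it --privileged "${GMSL_VOLUMES[@]}" "<image_tag>"
```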

Start the Docker container #

The following command starts an interactive session:

docker run --runtime nvidia -it --privileged --ipc=host --pid=host -e NVIDIA_DRIVER_CAPABILITIES=all -e DISPLAY \
  -v /dev:/dev -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -v ${HOME}/zed_docker_ai/:/usr/local/zed/resources/ \
  <image_tag>

For GMSL cameras:

docker run --runtime nvidia -it --privileged --ipc=host --pid=host -e NVIDIA_DRIVER_CAPABILITIES=all -e DISPLAY \
  -v /dev:/dev \
  -v /tmp:/tmp \
  -v /var/nvidia/nvcam/settings/:/var/nvidia/nvcam/settings/ \
  -v /etc/systemd/system/zed_x_daemon.service:/etc/systemd/system/zed_x_daemon.service \
  -v ${HOME}/zed_docker_ai/:/usr/local/zed/resources/ \
  <image_tag>