Orchestrate containers

You have the flexibility to choose any orchestrator to manage your containers running the ZED SDK.

We provide examples for two popular orchestrators: Docker Compose and Kubernetes.

Docker Compose #

Docker Compose is straightforward to set up. Make sure that every Docker runtime parameter mentioned in this documentation also appears in the docker-compose.yaml file:

  • The privileged mode.
  • The nvidia runtime.
  • The NVIDIA_DRIVER_CAPABILITIES environment variable.
  • The shared volumes.

Once you have completed the necessary setup steps, you are ready to proceed. You can try it out using the example Compose files we provide.

USB camera example
version: '2.3'
services: 
  your_app:
    image: <your-image>
    runtime: nvidia
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=all
    privileged: true
    volumes:
      - /usr/local/zed/resources:/usr/local/zed/resources
      - /usr/local/zed/settings:/usr/local/zed/settings
      - /dev:/dev
GMSL camera example
version: '2.3'
services: 
  your_app:
    image: <your-image>
    runtime: nvidia
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=all
    privileged: true
    volumes:
      - /usr/local/zed/resources:/usr/local/zed/resources
      - /usr/local/zed/settings:/usr/local/zed/settings
      - /var/nvidia/nvcam/settings/:/var/nvidia/nvcam/settings/
      - /etc/systemd/system/zed_x_daemon.service:/etc/systemd/system/zed_x_daemon.service
      - /tmp/:/tmp/
      - /dev:/dev
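
To launch a stack, save one of the examples as docker-compose.yaml and use the standard Compose commands (a minimal sketch; depending on your installation, the CLI may be docker-compose instead of docker compose):
docker compose up -d              # start the service in the background
docker compose logs -f your_app   # follow the application logs
docker compose down               # stop and remove the containers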

Kubernetes #

To use Kubernetes, we advise setting up a cluster with k3s. Adding NVIDIA® Container Runtime support to k3s takes a few steps, described below.
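
If you do not have a cluster yet, a minimal sketch of a k3s installation using its official convenience script (as always, review a script before piping it to a shell):
curl -sfL https://get.k3s.io | sh -
# Check that the node registered correctly
sudo k3s kubectl get nodes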

  • Ensure the nvidia container runtime is properly installed. You can check it in the /etc/docker/daemon.json file:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
  • Add a RuntimeClass to your Kubernetes configuration:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
  • Reference the runtime class in the target pod spec:
spec:
  runtimeClassName: nvidia
  • You may need to deploy the NVIDIA device plugin, which exposes GPUs to the cluster as a system-wide DaemonSet: sudo k3s kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml. A few sanity checks are sketched after this list.
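
A minimal set of sanity checks, assuming Docker backs your cluster as configured above (the grep patterns are only illustrative):
# Restart Docker after editing /etc/docker/daemon.json, then list the registered runtimes
sudo systemctl restart docker
docker info | grep -i runtime
# The RuntimeClass created above should be listed
sudo k3s kubectl get runtimeclass
# Once the device plugin is running, the node should advertise nvidia.com/gpu
sudo k3s kubectl describe node | grep nvidia.com/gpu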

Once you have completed the necessary setup steps, you are ready to proceed. You can try it out using the example Deployment we provide.

Example Deployment configuration file for USB cameras
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zed-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      runtimeClassName: nvidia # the RuntimeClass created earlier
      containers:
        - name: your-container
          image: <your_image>
          securityContext:
            privileged: true
          env:
            - name: NVIDIA_DRIVER_CAPABILITIES
              value: "all"
          volumeMounts:
            - name: dev
              mountPath: /dev
          command: ["tail", "-f", "/dev/null"]
          resources:
            limits:
              nvidia.com/gpu: 1 # Assuming you want to allocate one GPU
            requests:
              nvidia.com/gpu: 1
      volumes:
        - name: dev
          hostPath:
            path: /dev
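
To deploy it, save the manifest to a file (zed-deployment.yaml below is just an example name) and apply it with k3s's bundled kubectl:
sudo k3s kubectl apply -f zed-deployment.yaml
# Check that the pod is running, then open a shell inside it
sudo k3s kubectl get pods -l app=your-app
sudo k3s kubectl exec -it deploy/zed-sample -- /bin/bash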

For GMSL cameras, mount the same additional host volumes as in the Docker Compose example, as sketched below.
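
A minimal sketch of those extra hostPath mounts for the Deployment above (the volume names are arbitrary identifiers chosen here for illustration):
          volumeMounts:
            - name: dev
              mountPath: /dev
            - name: nvcam-settings
              mountPath: /var/nvidia/nvcam/settings/
            - name: zed-x-daemon-service
              mountPath: /etc/systemd/system/zed_x_daemon.service
            - name: tmp
              mountPath: /tmp/
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: nvcam-settings
          hostPath:
            path: /var/nvidia/nvcam/settings/
        - name: zed-x-daemon-service
          hostPath:
            path: /etc/systemd/system/zed_x_daemon.service
        - name: tmp
          hostPath:
            path: /tmp/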