Using the ZED Camera With OpenCV

Note: This is for ZED SDK 1.2 only. Please see the latest OpenCV guide here.

Introduction

In this tutorial, you will learn how to use the ZED SDK to capture and display color and depth images from your ZED.

The code of this tutorial is a simplified version of the sample “ZED with OpenCV” available on our GitHub page.

[Image: depth_grab]

Prerequisites

Before starting this tutorial, make sure you have OpenCV installed and read our tutorial “How to Build an App with the ZED SDK” which will help you set up your coding environment and build applications with the ZED SDK.

Code

Here’s the complete source code:


/**********************************
** Using ZED with OpenCV
**********************************/

#include <iostream>
#include <cstring> // memcpy

// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// ZED
#include <zed/Camera.hpp>

// Input from keyboard
char keyboard = ' ';

int main(int argc, char** argv)
{
    // Initialize ZED color stream in HD and depth in Performance mode
    sl::zed::Camera* zed = new sl::zed::Camera(sl::zed::HD1080);
    sl::zed::ERRCODE err = zed->init(sl::zed::MODE::PERFORMANCE, 0, true);

    // Quit if an error occurred
    if (err != sl::zed::SUCCESS) {
        std::cout << "Unable to init the ZED: " << errcode2str(err) << std::endl;
        delete zed;
        return 1;
    }

    // Initialize color image and depth
    int width = zed->getImageSize().width;
    int height = zed->getImageSize().height;
    cv::Mat image(height, width, CV_8UC4);
    cv::Mat depth(height, width, CV_8UC4);

    // Create OpenCV windows
    cv::namedWindow("Image", cv::WINDOW_AUTOSIZE);
    cv::namedWindow("Depth", cv::WINDOW_AUTOSIZE);

    // Settings for windows
    cv::Size displaySize(720, 404);
    cv::Mat imageDisplay(displaySize, CV_8UC4);
    cv::Mat depthDisplay(displaySize, CV_8UC4);

    // Loop until 'q' is pressed
    while (keyboard != 'q') {

        // Grab frame and compute depth in FILL sensing mode
        if (!zed->grab(sl::zed::SENSING_MODE::FILL)) {

            // Retrieve left color image
            sl::zed::Mat left = zed->retrieveImage(sl::zed::SIDE::LEFT);
            memcpy(image.data, left.data, width * height * 4 * sizeof(uchar));

            // Retrieve depth map
            sl::zed::Mat depthmap = zed->normalizeMeasure(sl::zed::MEASURE::DEPTH);
            memcpy(depth.data, depthmap.data, width * height * 4 * sizeof(uchar));

            // Display image in OpenCV window
            cv::resize(image, imageDisplay, displaySize);
            cv::imshow("Image", imageDisplay);

            // Display depth map in OpenCV window
            cv::resize(depth, depthDisplay, displaySize);
            cv::imshow("Depth", depthDisplay);
        }

        keyboard = cv::waitKey(30);
    }

    delete zed;
    return 0;
}

Explanation

Let’s break down the code piece by piece.

Create a main C++ file and include standard headers for I/O, OpenCV (core, highgui and imgproc), and ZED (Camera.hpp). Then create your standard main function:

#include <iostream>
#include <cstring> // memcpy

// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// ZED
#include <zed/Camera.hpp>

int main(int argc, char** argv)
{
}

Do not forget to add the include paths and library links for OpenCV and the ZED SDK to your project.
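If your project uses CMake, the configuration might look like the sketch below. The ZED SDK include/library paths and the library name `sl_zed` are assumptions for illustration; adjust them to match your installation:

```cmake
cmake_minimum_required(VERSION 2.8)
project(zed_with_opencv)

find_package(OpenCV REQUIRED)

# Illustrative ZED SDK paths -- adjust to your install location
include_directories(${OpenCV_INCLUDE_DIRS} "/usr/local/zed/include")
link_directories("/usr/local/zed/lib")

add_executable(zed_with_opencv main.cpp)
target_link_libraries(zed_with_opencv ${OpenCV_LIBS} sl_zed)
```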

1. Initialize the ZED camera

Let’s create our ZED Camera object and select its resolution. Here, we will work in Full HD (HD1080): each image will have a 1920×1080 resolution. Please note that there are four resolution and framerate settings available, as detailed in our Developer’s guide. Then, we initialize the ZED with its depth computation mode (QUALITY, MEDIUM, PERFORMANCE or NONE), device number and verbosity.

// Initialize ZED color stream in HD and depth in Performance mode
sl::zed::Camera* zed = new sl::zed::Camera(sl::zed::HD1080);
sl::zed::ERRCODE err = zed->init(sl::zed::MODE::PERFORMANCE, 0, true);

// Quit if an error occurred
if (err != sl::zed::SUCCESS) {
    std::cout << "Unable to init the ZED: " << errcode2str(err) << std::endl;
    delete zed;
    return 1;
}
2. Create OpenCV windows

Now, assuming the initialization succeeded, we can create our OpenCV matrices and windows with the correct resolutions and types.

We use the width and height of the ZED frames to create the OpenCV matrices. Because we want to display the color image and the normalized depth image, each matrix must use the 8-bit / 4-channel format (CV_8UC4):

// Initialize color image and depth
int width = zed->getImageSize().width;
int height = zed->getImageSize().height;
cv::Mat image(height, width, CV_8UC4);
cv::Mat depth(height, width, CV_8UC4);

On the display side, we will create smaller windows to make sure they fit on our screen.
Let’s choose a 720 x 404 size and create the OpenCV windows.

// Create OpenCV windows
cv::namedWindow("Image", cv::WINDOW_AUTOSIZE);
cv::namedWindow("Depth", cv::WINDOW_AUTOSIZE);

// Settings for windows
cv::Size displaySize(720, 404);
cv::Mat imageDisplay(displaySize, CV_8UC4);
cv::Mat depthDisplay(displaySize, CV_8UC4);
3. Capture and display color image and depth

Let’s write our main loop. The simplest way is to create a ‘while’ loop and handle all the events with OpenCV.

To grab an image and depth, we first need to select a sensing mode. There are two sensing modes available: STANDARD and FILL. STANDARD mode is suited for applications such as collision avoidance and navigation, while FILL mode is targeted at augmented reality and computational imaging where a fully dense depth map is needed. Here, we decide to use the FILL sensing mode.

// Grab frame and compute depth in FILL sensing mode
if (!zed->grab(sl::zed::SENSING_MODE::FILL))

Then, we just retrieve the color image and depth map and copy their buffers into our two OpenCV matrices. As you can see, we can directly copy the buffer of the zed::Mat into the buffer of the cv::Mat:

// Get left image and copy it into the OpenCV buffer
sl::zed::Mat left = zed->retrieveImage(sl::zed::SIDE::LEFT);
memcpy(image.data, left.data, width * height * 4 * sizeof(uchar));

// Get depth image and copy it into the OpenCV buffer
sl::zed::Mat depthmap = zed->normalizeMeasure(sl::zed::MEASURE::DEPTH);
memcpy(depth.data, depthmap.data, width * height * 4 * sizeof(uchar));

Now our OpenCV matrices contain the correct buffers. We can use standard OpenCV functions to resize the images and display them on our screen:

// show image in the OpenCV window
cv::resize(image, imageDisplay, displaySize);
cv::imshow("Image", imageDisplay);

// show depth map in the OpenCV window
cv::resize(depth, depthDisplay, displaySize);
cv::imshow("Depth", depthDisplay);

And that’s all we need! You can now build and run the application. You should see the following two windows on your screen: “Depth” and “Image”.

[Image: depth_grab]
  • Guido

    Hi,
    how can i get the intrinsic calibration matrix of zed cameras?

  • 刘灿

    Hi,
    I want to use retrieveMeasure function to get unnormalized disparity instead of normalizeMeasure. According to your comments, I’ve also changed 8UC4 format to 32FC1 format, but I cannot get correct unscaled disparity. I don’t know where I’m wrong. Can anybody help me?

  • noora

    hey, i have connected the zed camera to jetson tk1 to view the depth
    map, but unfortunately i faced a large delay around 2 seconds between
    reality and the recorded video which effects my project. do you have any
    idea how to decease the delay?
    i’ll appreciate your help.
    thank you.

    • Syamprasad

      Have you figured out anything to increase the frame rate?

    • Aishanou Rait

      Yes I also face a delay in frame refresh. Is there a way to avoid it?

  • נתנאל בן-חיים

    hi, i want to convert svo file to mjpeg file or avi file
    is that possible?

    • Thomas Chow

      Yes. I have used the command under windows 10 as follows:-
      > “ZED SVO Converter” -f ZED_filename.SVO -v –filename.mp4

  • Mike

    Hi!
    When I write a simple program such as the one below with an empty main:
    //standard includes
    #include
    #include
    #include

    //opencv includes
    #include
    #include
    #include
    #include

    //ZED Includes
    #include

    using namespace sl::zed;
    using namespace std;

    //Main function

    int main(int argc, char **argv) {

    return 0;
    }

    I get the following error:
    1>mySVOReaderMain.obj : error LNK2019: unresolved external symbol “public: __thiscall sl::zed::Camera::Camera(enum sl::zed::ZEDResolution_mode,float,int)” (??0Camera@zed@sl@@QAE@W4ZEDResolution_mode@12@MH@Z) referenced in function “public: void __thiscall sl::zed::Camera::`default constructor closure'(void)” (??_FCamera@zed@sl@@QAEXXZ)

    Any idea how to fix this?

  • Miguel Angel Orozco López

    Hi, I’m using the method “writeDepthAs” to save depth Image as png file. And now I want to know de distance from every pixel of those png files. How I can do this, what is the right method???

  • Jim Zhan

    Hello,

    I am trying to use OpenCV-python to read color and depth image from the ZED camera.
    Is there any guideline on how to read the images?
    If OpenCV-python is not able to do it, how can I use Python to read the two images?

    Thank you!

  • Kim D

    Hi, i have a error about OpenCV example that contain STEREO_LEFT and STEREO_RIGHT.
    How can i exchange STEREO_LEFT and STEREO RIGHT to something equal?

  • Pablo Ramón Soria

    Hi,

    I am using the camera with OpenCV and I want to use it with higher resolution than the default one. Is it possible without using the ZED SDK?

    I am doing the following:

    mCamera->set(CV_CAP_PROP_FRAME_HEIGHT, 4416);

    the method returns 1, which means that it is done properly, but the resolution keeps 1280×480

    Thanks in advance

  • Koirala Anand

    the samples inside Zed sdk folder can be compiled without error and the solution can be built. But when trying to build the source from GitHub the cmake compiler gives the error as OpenCv directory not found. what i can notice is that the sdk samples use opencv2.4 but the GitHub source requires opencv3.1 which i have… set to correct path and even if i set the absolute path. can anyone help me…

    • Koirala Anand

      Open cv directory problem fixed when you just add a line of code like this SET(OpenCV_DIR C:/opencv/build) on the first line of the CmakeLists.txt file. C:/openCV/build is the path to the opencv build folder for your installation. use the ‘/’ slash instead of the ‘\’ slash. but when building the solution in Visual studio I am getting some syntax problems (namespace) Can someone help me?

  • Koirala Anand

    I am having namespace problems for Zed files when building solution in visual studio 2015… for the github sources… Can anyone help?