Camera Class Reference

This class is the main interface with the camera and the SDK features, such as: video, depth, tracking, mapping, and more.
Find more information in the detailed description below. More...

General Functions

 Camera ()
 Default constructor which creates an empty Camera object.
Parameters will be set when calling open(init_param) with the desired InitParameters . More...
 
 ~Camera ()
 The Camera destructor will call the close() function and clear the memory previously allocated by the object. More...
 
ERROR_CODE open (InitParameters init_parameters=InitParameters())
 Opens the ZED camera from the provided InitParameters.
This function will also check the hardware requirements and run a self-calibration. More...
 
bool isOpened ()
 Reports if the camera has been successfully opened. It has the same behavior as checking if open() returns SUCCESS. More...
 
void close ()
 If open() has been called, this function will close the connection to the camera (or the SVO file) and free the corresponding memory. More...
 
ERROR_CODE grab (RuntimeParameters rt_parameters=RuntimeParameters())
 This function will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.)
As measures are created in this function, its execution can last a few milliseconds, depending on your parameters and your hardware.
The exact duration will mostly depend on the following parameters: More...
 
CameraInformation getCameraInformation (Resolution image_size=Resolution(0, 0))
 Returns the calibration parameters, serial number and other information about the camera being used. More...
 
CUcontext getCUDAContext ()
 Gets the Camera-created CUDA context for sharing it with other CUDA-capable libraries. This can be useful for sharing GPU memories. More...
 
ERROR_CODE findPlaneAtHit (sl::uint2 coord, sl::Plane &plane)
 Checks the plane at the given left image coordinates. More...
 
ERROR_CODE findFloorPlane (sl::Plane &floorPlane, sl::Transform &resetTrackingFloorFrame, float floor_height_prior=INVALID_VALUE, sl::Rotation world_orientation_prior=sl::Matrix3f::zeros(), float floor_height_prior_tolerance=INVALID_VALUE)
 Detect the floor plane of the scene. More...
 
 Camera (const Camera &)=delete
 The Camera object cannot be copied. Therefore, its copy constructor is disabled. If you need to share a Camera instance across several threads or objects, please consider using a pointer. More...
 
Video
Resolution getResolution ()
 Returns the size of the grabbed images. More...
 
ERROR_CODE retrieveImage (Mat &mat, VIEW view=VIEW_LEFT, MEM type=MEM_CPU, int width=0, int height=0)
 Retrieves images from the camera (or SVO file). More...
 
int getCameraSettings (CAMERA_SETTINGS settings)
 Returns the current value of the requested camera setting. (gain, brightness, hue, exposure, etc.) More...
 
void setCameraSettings (CAMERA_SETTINGS settings, int value, bool use_default=false)
 Sets the value of the requested camera setting. (gain, brightness, hue, exposure, etc.) More...
 
float getCameraFPS ()
 Returns the framerate of the camera. More...
 
void setCameraFPS (int desired_fps)
 Sets a new target frame rate for the camera. More...
 
float getCurrentFPS ()
 Returns the current framerate at which the grab() method is successfully called. More...
 
timeStamp getTimestamp (sl::TIME_REFERENCE reference_time)
 Returns the timestamp in the requested TIME_REFERENCE. More...
 
timeStamp getCameraTimestamp ()
 This function has been deprecated. Please refer to getTimestamp(TIME_REFERENCE_IMAGE) which has the exact same behavior. More...
 
timeStamp getCurrentTimestamp ()
 This function has been deprecated. Please refer to getTimestamp(TIME_REFERENCE_CURRENT) which has the exact same behavior. More...
 
unsigned int getFrameDroppedCount ()
 Returns the number of frames dropped since grab() was called for the first time. More...
 
int getSVOPosition ()
 Returns the current playback position in the SVO file. More...
 
void setSVOPosition (int frame_number)
 Sets the playback cursor to the desired frame number in the SVO file. More...
 
int getSVONumberOfFrames ()
 Returns the number of frames in the SVO file. More...
 
SELF_CALIBRATION_STATE getSelfCalibrationState ()
 Returns the current status of the self-calibration. More...
 
void resetSelfCalibration ()
 Resets the camera's self calibration. This function can be called at any time after the open() function. More...
 
Depth Sensing
ERROR_CODE retrieveMeasure (Mat &mat, MEASURE measure=MEASURE_DEPTH, MEM type=MEM_CPU, int width=0, int height=0)
 Computed measures, like depth, point cloud, or normals, can be retrieved using this method. More...
 
float getDepthMaxRangeValue ()
 Returns the current maximum distance of depth estimation. More...
 
void setDepthMaxRangeValue (float depth_max_range)
 Sets the maximum distance of depth estimation (All values beyond this limit will be reported as TOO_FAR). More...
 
float getDepthMinRangeValue ()
 Returns the closest measurable distance by the camera, according to the camera and the depth map parameters. More...
 
int getConfidenceThreshold ()
 Returns the current confidence threshold value applied to the depth map. More...
 
void setConfidenceThreshold (int conf_threshold_value)
 Sets a threshold to reject depth values based on their confidence. More...
 
Positional Tracking
ERROR_CODE enableTracking (TrackingParameters tracking_parameters=TrackingParameters())
 Initializes and starts the positional tracking processes. More...
 
TRACKING_STATE getPosition (Pose &camera_pose, REFERENCE_FRAME reference_frame=REFERENCE_FRAME_WORLD)
 Retrieves the estimated position and orientation of the camera in the specified reference frame. More...
 
ERROR_CODE saveCurrentArea (String area_file_path)
 Saves the current area learning file. The file will contain spatial memory data generated by the tracking. More...
 
AREA_EXPORT_STATE getAreaExportState ()
 Returns the state of the spatial memory export process. More...
 
ERROR_CODE resetTracking (const Transform &path)
 Resets the tracking, and re-initializes the position with the given transformation matrix. More...
 
void disableTracking (String area_file_path="")
 Disables the positional tracking. More...
 
bool isTrackingEnabled ()
 Tells if the tracking module is enabled. More...
 
ERROR_CODE getIMUData (IMUData &imu_data, TIME_REFERENCE reference_time)
 Retrieves the IMU Data at a specific time reference. More...
 
ERROR_CODE setIMUPrior (const sl::Transform &transform)
 Set an optional IMU orientation hint that will be used to assist the tracking during the next grab(). More...
 
Spatial Mapping
ERROR_CODE enableSpatialMapping (SpatialMappingParameters spatial_mapping_parameters=SpatialMappingParameters())
 Initializes and starts the spatial mapping processes. More...
 
SPATIAL_MAPPING_STATE getSpatialMappingState ()
 Returns the current spatial mapping state. More...
 
void requestMeshAsync ()
 Starts the mesh generation process in a non-blocking thread from the spatial mapping process. More...
 
ERROR_CODE getMeshRequestStatusAsync ()
 Returns the mesh generation status. Useful after calling requestMeshAsync to know if you can call retrieveMeshAsync. More...
 
ERROR_CODE retrieveMeshAsync (Mesh &mesh)
 Retrieves the current generated mesh. More...
 
ERROR_CODE extractWholeMesh (Mesh &mesh)
 Extracts the current mesh from the spatial mapping process. More...
 
void requestSpatialMapAsync ()
 Starts the spatial map generation process in a non-blocking thread from the spatial mapping process. More...
 
ERROR_CODE getSpatialMapRequestStatusAsync ()
 Returns the spatial map generation status. This status lets you know whether the spatial map can be retrieved by calling retrieveSpatialMapAsync. More...
 
ERROR_CODE retrieveSpatialMapAsync (Mesh &mesh)
 Retrieves the current generated spatial map. More...
 
ERROR_CODE retrieveSpatialMapAsync (FusedPointCloud &fpc)
 Retrieves the current generated spatial map. More...
 
ERROR_CODE extractWholeSpatialMap (Mesh &mesh)
 Extracts the current spatial map from the spatial mapping process. More...
 
ERROR_CODE extractWholeSpatialMap (FusedPointCloud &fpc)
 Extracts the current spatial map from the spatial mapping process. More...
 
void pauseSpatialMapping (bool status)
 Pauses or resumes the spatial mapping processes. More...
 
void disableSpatialMapping ()
 Disables the spatial mapping process. More...
 
Recording
ERROR_CODE enableRecording (String video_filename, SVO_COMPRESSION_MODE compression_mode=SVO_COMPRESSION_MODE_LOSSLESS)
 Creates an SVO file to be filled by record(). More...
 
RecordingState record ()
 Records the current frame provided by grab() into the file. More...
 
bool isRecordingEnabled ()
 Tells if a recording is in progress. More...
 
void disableRecording ()
 Disables the recording initiated by enableRecording() and closes the generated file. More...
 
Streaming
ERROR_CODE enableStreaming (StreamingParameters streaming_parameters=StreamingParameters())
 Creates a streaming pipeline. More...
 
void disableStreaming ()
 Disables the streaming initiated by enableStreaming() More...
 
bool isStreamingEnabled ()
 Tells if the streaming is running (true) or still initializing (false) More...
 

Static Functions

static String getSDKVersion ()
 Returns the version of the currently installed ZED SDK. More...
 
static int isZEDconnected ()
 Returns the number of connected cameras. More...
 
static std::vector< sl::DeviceProperties > getDeviceList ()
 Lists all the connected devices with their associated information. More...
 
static std::vector< sl::StreamingProperties > getStreamingDeviceList ()
 Lists all the streaming devices with their associated information. More...
 
static ERROR_CODE sticktoCPUCore (int cpu_core)
 Only for Nvidia Jetson: Sticks the calling thread to a specific CPU core. More...
 

Detailed Description

This class is the main interface with the camera and the SDK features, such as: video, depth, tracking, mapping, and more.
Find more information in the detailed description below.

A standard program will use the Camera class like this:

#include <sl/Camera.hpp>
using namespace sl;

int main(int argc, char **argv) {
    // --- Initialize a Camera object and open the ZED
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION_HD720; // Use HD720 video mode
    init_params.camera_fps = 60; // Set fps at 60

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    sl::RuntimeParameters runtime_param;

    // --- Main loop grabbing images and depth values
    // Capture 50 frames and stop
    int i = 0;
    Mat image, depth;
    while (i < 50) {
        // Grab an image
        if (zed.grab(runtime_param) == SUCCESS) { // A new image is available if grab() returns SUCCESS
            // Display a pixel color
            zed.retrieveImage(image, VIEW_LEFT); // Get the left image
            sl::uchar4 centerRGB;
            image.getValue<sl::uchar4>(image.getWidth() / 2, image.getHeight() / 2, &centerRGB);
            std::cout << "Image " << i << " center pixel R:" << (int)centerRGB.r << " G:" << (int)centerRGB.g << " B:" << (int)centerRGB.b << std::endl;

            // Display a pixel depth
            zed.retrieveMeasure(depth, MEASURE_DEPTH); // Get the depth map
            float centerDepth;
            depth.getValue<float>(depth.getWidth() / 2, depth.getHeight() / 2, &centerDepth);
            std::cout << "Image " << i << " center depth:" << centerDepth << std::endl;

            i++;
        }
    }
    // --- Close the Camera
    zed.close();
    return 0;
}

Constructor and Destructor

◆ Camera() [1/2]

Camera ( )

Default constructor which creates an empty Camera object.
Parameters will be set when calling open(init_param) with the desired InitParameters .

The Camera object can be created like this:

Camera zed;

or

Camera* zed = new Camera();

◆ ~Camera()

~Camera ( )

The Camera destructor will call the close() function and clear the memory previously allocated by the object.

◆ Camera() [2/2]

Camera ( const Camera & )
delete

The Camera object cannot be copied. Therefore, its copy constructor is disabled. If you need to share a Camera instance across several threads or objects, please consider using a pointer.

See also
Camera()

Functions

◆ open()

ERROR_CODE open ( InitParameters  init_parameters = InitParameters())

Opens the ZED camera from the provided InitParameters.
This function will also check the hardware requirements and run a self-calibration.

Parameters
init_parameters: a structure containing all the initial parameters. default : a preset of InitParameters.
Returns
An error code giving information about the internal process. If SUCCESS is returned, the camera is ready to use. Every other code indicates an error and the program should be stopped.

Here is the proper way to call this function:

Camera zed;                 // Create a ZED camera object
InitParameters init_params; // Set configuration parameters
init_params.camera_resolution = RESOLUTION_HD720; // Use HD720 video mode
init_params.camera_fps = 60; // Set fps at 60

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != SUCCESS) {
    std::cout << toString(err) << std::endl; // Display the error
    exit(-1);
}
Note
If you are having issues opening a camera, the diagnostic tool provided in the SDK can help you identify the problem.
If this function is called on an already opened camera, close() will be called.
Windows: C:\Program Files (x86)\ZED SDK\tools\ZED Diagnostic.exe
Linux: /usr/local/zed/tools/ZED Diagnostic

◆ isOpened()

bool isOpened ( )
inline

Reports if the camera has been successfully opened. It has the same behavior as checking if open() returns SUCCESS.

Returns
true if the ZED is already setup, otherwise false.
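As a minimal sketch (the zed object and the open()/grab()/close() calls are the ones documented on this page), isOpened() can guard calls that require an opened camera:

```cpp
#include <sl/Camera.hpp>
using namespace sl;

int main(int argc, char **argv) {
    Camera zed;
    zed.open(); // Open with the default InitParameters

    // Only grab frames if the camera was successfully opened
    if (zed.isOpened()) {
        if (zed.grab() == SUCCESS) {
            // Use the grabbed frame
        }
        zed.close();
    } else {
        std::cout << "The ZED could not be opened" << std::endl;
    }
    return 0;
}
```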

◆ close()

void close ( )

If open() has been called, this function will close the connection to the camera (or the SVO file) and free the corresponding memory.

If open() wasn't called or failed, this function won't have any effects.

Note
If an asynchronous task is running within the Camera object, like saveCurrentArea(), this function will wait for its completion.
The open() function can then be called if needed.
Warning
If the CUDA context was created by open(), this function will destroy it. Please make sure to delete your GPU sl::Mat objects before the context is destroyed.

◆ grab()

ERROR_CODE grab ( RuntimeParameters  rt_parameters = RuntimeParameters())

This function will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.)
As measures are created in this function, its execution can last a few milliseconds, depending on your parameters and your hardware.
The exact duration will mostly depend on which runtime features (depth, point cloud, tracking, etc.) are enabled.

If no images are available yet, ERROR_CODE_NOT_A_NEW_FRAME will be returned.
This function is meant to be called frequently in the main loop of your application.

Parameters
rt_parameters: a structure containing all the runtime parameters. default : a preset of RuntimeParameters.
Returns
Returning SUCCESS means that no problem was encountered. Returned errors can be displayed using toString(error)
// Set runtime parameters after opening the camera
RuntimeParameters runtime_param;
runtime_param.sensing_mode = SENSING_MODE_STANDARD; // Use STANDARD sensing mode

Mat image;
while (true) {
    // Grab an image
    if (zed.grab(runtime_param) == SUCCESS) { // A new image is available if grab() returns SUCCESS
        zed.retrieveImage(image, VIEW_LEFT); // Get the left image
        // Use the image for your application
    }
}

◆ getCameraInformation()

CameraInformation getCameraInformation ( Resolution  image_size = Resolution(0, 0))

Returns the calibration parameters, serial number and other information about the camera being used.

As calibration parameters depend on the image resolution, you can provide a custom resolution as a parameter to get scaled information.
When reading an SVO file, the parameters will correspond to the camera used for recording.

Parameters
image_size: You can specify a size different from default image size to get the scaled camera information. default = (0,0) meaning original image size (aka getResolution() ).
Returns
CameraInformation containing the calibration parameters of the ZED, as well as serial number and firmware version.
Note
The returned parameters might vary between two executions due to the self-calibration being run in the open() method.
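A short sketch (assuming an already opened Camera object named zed, and the serial_number, firmware_version and calibration_parameters fields of the CameraInformation structure):

```cpp
// Assumes zed has been opened successfully (see open())
CameraInformation cam_info = zed.getCameraInformation();
std::cout << "Serial number: " << cam_info.serial_number << std::endl;
std::cout << "Firmware version: " << cam_info.firmware_version << std::endl;
// Focal length of the left camera, scaled to the current image resolution
std::cout << "Left camera fx: " << cam_info.calibration_parameters.left_cam.fx << std::endl;
```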

◆ getCUDAContext()

CUcontext getCUDAContext ( )

Gets the Camera-created CUDA context for sharing it with other CUDA-capable libraries. This can be useful for sharing GPU memories.

If you're looking for the opposite mechanism, where an existing CUDA context is given to the Camera, please check InitParameters::sdk_cuda_ctx

Returns
The CUDA context used for GPU calls.

◆ getResolution()

Resolution getResolution ( )

Returns the size of the grabbed images.

In live mode it matches InitParameters::camera_resolution.
In SVO mode the recording resolution will be returned.
All the default retrieveImage() and retrieveMeasure() calls will generate an image matching this resolution.

Returns
The grabbed images resolution.
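For example (assuming an already opened Camera object named zed):

```cpp
Resolution image_size = zed.getResolution();
std::cout << "Image size: " << image_size.width << "x" << image_size.height << std::endl;
```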

◆ retrieveImage()

ERROR_CODE retrieveImage ( Mat mat,
VIEW  view = VIEW_LEFT,
MEM  type = MEM_CPU,
int  width = 0,
int  height = 0 
)

Retrieves images from the camera (or SVO file).

Multiple images are available along with a view of various measures for display purposes.
Available images and views are listed here.
As an example, VIEW_DEPTH can be used to get a grayscale version of the depth map, but the actual depth values can be retrieved using retrieveMeasure().

Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.

Image size
By default, images are returned in the resolution provided by getResolution().
However, you can request custom resolutions. For example, requesting a smaller image can help you speed up your application.

Parameters
mat: [out] the Mat to store the image.
view: defines the image you want (see VIEW). default : VIEW_LEFT.
type: whether the image should be provided in CPU or GPU memory. default : MEM_CPU.
width: if specified, define the width of the output mat. If set to 0, the width of the ZED resolution will be taken. default : 0.
height: if specified, define the height of the output mat. If set to 0, the height of the ZED resolution will be taken. default : 0.
Returns
"SUCCESS" if the method succeeded, ERROR_CODE_INVALID_FUNCTION_PARAMETERS if the view mode requires a module not enabled (VIEW_DEPTH with DEPTH_MODE_NONE for example), ERROR_CODE_INVALID_RESOLUTION if the width/height is higher than camera.getResolution() or camera.getResolution() x 2 for side-by-side view mode (not supported), ERROR_CODE_FAILURE if another error occurred.
Note
As this function retrieves the images grabbed by the grab() function, it should be called afterward.
Mat leftImage, depthView; // create sl::Mat objects to store the images
while (true) {
    // Grab an image
    if (zed.grab() == SUCCESS) { // A new image is available if grab() returns SUCCESS
        zed.retrieveImage(leftImage, VIEW_LEFT); // Get the rectified left image
        zed.retrieveImage(depthView, VIEW_DEPTH); // Get a grayscale preview of the depth map

        // Display the center pixel colors
        sl::uchar4 leftCenter;
        leftImage.getValue<sl::uchar4>(leftImage.getWidth() / 2, leftImage.getHeight() / 2, &leftCenter);
        std::cout << "leftImage center pixel R:" << (int)leftCenter.r << " G:" << (int)leftCenter.g << " B:" << (int)leftCenter.b << std::endl;

        sl::uchar4 depthCenter;
        depthView.getValue<sl::uchar4>(depthView.getWidth() / 2, depthView.getHeight() / 2, &depthCenter);
        std::cout << "depthView center pixel R:" << (int)depthCenter.r << " G:" << (int)depthCenter.g << " B:" << (int)depthCenter.b << std::endl;
    }
}

◆ getCameraSettings()

int getCameraSettings ( CAMERA_SETTINGS  settings)

Returns the current value of the requested camera setting. (gain, brightness, hue, exposure, etc.)

Possible values (range) of each setting are available here.

Parameters
settings: the requested setting.
Returns
The current value for the corresponding setting. Returns -1 if it encounters an error.
int gain = zed.getCameraSettings(CAMERA_SETTINGS_GAIN);
std::cout << "Current gain value: " << gain << std::endl;
Note
Works only if the camera is open in live mode. (Settings aren't exported in the SVO file format)

◆ setCameraSettings()

void setCameraSettings ( CAMERA_SETTINGS  settings,
int  value,
bool  use_default = false 
)

Sets the value of the requested camera setting. (gain, brightness, hue, exposure, etc.)

Possible values (range) of each setting are available here.

Parameters
settings: the setting to be set.
value: the value to set.
use_default: will set default (or automatic) value if set to true. If so, the value parameter will be ignored. default: false.
//Set the gain to 50
zed.setCameraSettings(CAMERA_SETTINGS_GAIN, 50, false);
Warning
Setting CAMERA_SETTINGS_EXPOSURE or CAMERA_SETTINGS_GAIN to default will automatically set the other one to default.
Note
Works only if the camera is open in live mode.

◆ getCameraFPS()

float getCameraFPS ( )

Returns the framerate of the camera.

In live mode, this value should match InitParameters::camera_fps.
When playing an SVO file, this value matches the requested framerate of the recording camera.

Returns
The framerate at which the ZED is streaming its images (or the corresponding recorded value in SVO mode). Returns -1.f if it encounters an error.
Warning
The actual framerate (number of images retrieved per second) can be lower if the grab() function runs slower than the image stream or is called too often.
float camera_fps = zed.getCameraFPS();
std::cout << "Camera framerate: " << camera_fps << std::endl;
See also
getCurrentFPS()

◆ setCameraFPS()

void setCameraFPS ( int  desired_fps)

Sets a new target frame rate for the camera.

When a live camera is opened, this function allows you to override the value previously set in InitParameters::camera_fps.
It has no effect when playing back an SVO file.

Parameters
desired_fps: the new desired frame rate.
Deprecated:
This function is no longer supported and can cause stability issues.
Warning
This function isn't thread safe and will be removed in a later version.
If you want to artificially reduce the camera's framerate, you can lower the frequency at which you call the grab() method.
If an unsupported framerate is requested, the closest available setting will be used.
Note
Works only if the camera is open in live mode.

◆ getCurrentFPS()

float getCurrentFPS ( )

Returns the current framerate at which the grab() method is successfully called.

The returned value is based on the difference of camera timestamps between two successful grab() calls.

Returns
The current SDK framerate
Warning
The returned framerate (number of images grabbed per second) can be lower than InitParameters::camera_fps if the grab() function runs slower than the image stream or is called too often.
float current_fps = zed.getCurrentFPS();
std::cout << "Current framerate: " << current_fps << std::endl;

◆ getTimestamp()

timeStamp getTimestamp ( sl::TIME_REFERENCE  reference_time)

Returns the timestamp in the requested TIME_REFERENCE.

  • When requesting the TIME_REFERENCE_IMAGE timestamp, the UNIX nanosecond timestamp of the latest grabbed image will be returned.
    This value corresponds to the time at which the entire image was available in the PC memory. As such, it ignores the communication time, which corresponds to 2 or 3 frame-times depending on the fps (e.g. 33.3 ms to 50 ms at 60fps).
  • When requesting the TIME_REFERENCE_CURRENT timestamp, the current UNIX nanosecond timestamp is returned.


This function can also be used when playing back an SVO file.

Parameters
reference_time: The selected TIME_REFERENCE.
Returns
The timestamp in nanosecond. 0 if not available (SVO file without compression).
Note
As this function returns UNIX timestamps, the reference it uses is common across several Camera instances.
This can help you organize the grabbed images in a multi-camera application.
timeStamp last_image_timestamp = zed.getTimestamp(TIME_REFERENCE_IMAGE);
timeStamp current_timestamp = zed.getTimestamp(TIME_REFERENCE_CURRENT);
std::cout << "Latest image timestamp: " << last_image_timestamp << "ns from Epoch." << std::endl;
std::cout << "Current timestamp: " << current_timestamp << "ns from Epoch." << std::endl;

◆ getCameraTimestamp()

timeStamp getCameraTimestamp ( )

This function has been deprecated. Please refer to getTimestamp(TIME_REFERENCE_IMAGE) which has the exact same behavior.

Deprecated:
See getTimestamp() with TIME_REFERENCE_IMAGE.

◆ getCurrentTimestamp()

timeStamp getCurrentTimestamp ( )

This function has been deprecated. Please refer to getTimestamp(TIME_REFERENCE_CURRENT) which has the exact same behavior.

Deprecated:
See getTimestamp() with TIME_REFERENCE_CURRENT.

◆ getFrameDroppedCount()

unsigned int getFrameDroppedCount ( )

Returns the number of frames dropped since grab() was called for the first time.

A dropped frame corresponds to a frame that never made it to the grab function.
This can happen if two frames were extracted from the camera when grab() is called. The older frame will be dropped so as to always use the latest (which minimizes latency).

Returns
The number of frames dropped since the first grab() call.
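A minimal sketch (assuming an already opened Camera object named zed running its grab() loop):

```cpp
unsigned int dropped = zed.getFrameDroppedCount();
if (dropped > 0)
    std::cout << dropped << " frames dropped since the first grab() call" << std::endl;
```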

◆ getSVOPosition()

int getSVOPosition ( )

Returns the current playback position in the SVO file.

The position corresponds to the number of frames already read from the SVO file, starting from 0 to n.
Each grab() call increases this value by one (except when using InitParameters::svo_real_time_mode).

Returns
The current frame position in the SVO file. Returns -1 if the SDK is not reading an SVO.
Note
Works only if the camera is open in SVO playback mode.
See also
setSVOPosition() for an example.

◆ setSVOPosition()

void setSVOPosition ( int  frame_number)

Sets the playback cursor to the desired frame number in the SVO file.

This function allows you to move around within a played-back SVO file. After calling it, the next call to grab() will read the frame at the provided frame number.

Parameters
frame_number: the number of the desired frame to be decoded.
Note
Works only if the camera is open in SVO playback mode.
#include <sl/Camera.hpp>
using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_params;
    init_params.input.setFromSVOFile("path/to/my/file.svo");

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Loop between frame 0 and 50
    Mat leftImage;
    while (zed.getSVOPosition() < zed.getSVONumberOfFrames()-1) {
        std::cout << "Current frame: " << zed.getSVOPosition() << std::endl;

        // Loop if we reached frame 50
        if (zed.getSVOPosition() == 50)
            zed.setSVOPosition(0);

        // Grab an image
        if (zed.grab() == SUCCESS) {
            zed.retrieveImage(leftImage, VIEW_LEFT); // Get the rectified left image
            // Use the image in your application
        }
    }

    // Close the Camera
    zed.close();
    return 0;
}

◆ getSVONumberOfFrames()

int getSVONumberOfFrames ( )

Returns the number of frames in the SVO file.

Returns
The total number of frames in the SVO file (-1 if the SDK is not reading a SVO).
Note
Works only if the camera is open in SVO reading mode.
See also
setSVOPosition() for an example.

◆ getSelfCalibrationState()

SELF_CALIBRATION_STATE getSelfCalibrationState ( )

Returns the current status of the self-calibration.

When opening the camera, the ZED will self-calibrate itself to optimize the factory calibration.
As this process can continue for a short time after open() returns, this function allows you to check its status.
The self-calibration can be disabled using InitParameters::camera_disable_self_calib .

Returns
A status code giving information about the status of the self calibration.
Note
The first call to the grab() function will wait for the self-calibration to finish.
See also
SELF_CALIBRATION_STATE
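A sketch of how the status could be checked after open() (assuming an opened Camera object named zed; the SELF_CALIBRATION_STATE_FAILED value is taken from the SELF_CALIBRATION_STATE enum):

```cpp
SELF_CALIBRATION_STATE state = zed.getSelfCalibrationState();
if (state == SELF_CALIBRATION_STATE_FAILED) {
    // The previous calibration parameters will be used; optionally retry
    zed.resetSelfCalibration();
}
```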

◆ resetSelfCalibration()

void resetSelfCalibration ( )

Resets the camera's self calibration. This function can be called at any time after the open() function.

It will reset and optimize the calibration parameters against misalignment, convergence, and color mismatch. It can be called if the calibration file of the current camera has been updated while the application is running.

If the self-calibration didn't succeed, previous parameters will be used.

Note
The next call to the grab() function will wait for the self-calibration to finish.

◆ retrieveMeasure()

ERROR_CODE retrieveMeasure ( Mat mat,
MEASURE  measure = MEASURE_DEPTH,
MEM  type = MEM_CPU,
int  width = 0,
int  height = 0 
)

Computed measures, like depth, point cloud, or normals, can be retrieved using this method.


Multiple measures are available after a grab() call. A full list is available here.

Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.

Measure size
By default, measures are returned in the resolution provided by getResolution().
However, custom resolutions can be requested. For example, requesting a smaller measure can help you speed up your application.

Parameters
mat: [out] the Mat to store the measures.
measure: defines the measure you want. (see MEASURE), default : MEASURE_DEPTH
type: whether the measure should be provided in CPU or GPU memory. default : MEM_CPU.
width: if specified, define the width of the output mat. If set to 0, the width of the ZED resolution will be taken. default : 0
height: if specified, define the height of the output mat. If set to 0, the height of the ZED resolution will be taken. default : 0
Returns
"SUCCESS" if the method succeeded, ERROR_CODE_INVALID_FUNCTION_PARAMETERS if the view mode requires a module not enabled (VIEW_DEPTH with DEPTH_MODE_NONE for example), ERROR_CODE_INVALID_RESOLUTION if the width/height is higher than camera.getResolution() or camera.getResolution() x 2 for side-by-side view mode (not supported), ERROR_CODE_FAILURE if another error occurred.
Note
As this function retrieves the measures computed by the grab() function, it should be called afterward.

Measures containing "RIGHT" in their names require InitParameters::enable_right_side_measure to be enabled.
Mat depthMap, pointCloud;
int x = zed.getResolution().width / 2; // Center coordinates
int y = zed.getResolution().height / 2;

while (true) {
    if (zed.grab() == SUCCESS) { // Grab an image
        zed.retrieveMeasure(depthMap, MEASURE_DEPTH, MEM_CPU); // Get the depth map

        // Read a depth value
        float centerDepth = 0;
        depthMap.getValue<float>(x, y, &centerDepth, MEM_CPU); // each depth map pixel is a float value
        if (isnormal(centerDepth)) { // + Inf is "too far", -Inf is "too close", Nan is "unknown/occlusion"
            std::cout << "Depth value at center: " << centerDepth << " " << init_params.coordinate_units << std::endl;
        }

        zed.retrieveMeasure(pointCloud, MEASURE_XYZRGBA, MEM_CPU); // Get the point cloud

        // Read a point cloud value
        sl::float4 pcValue;
        pointCloud.getValue<sl::float4>(x, y, &pcValue); // each point cloud pixel contains 4 floats, so we are using a sl::float4
        if (isnormal(pcValue.z)) {
            std::cout << "Point cloud coordinates at center: X=" << pcValue.x << ", Y=" << pcValue.y << ", Z=" << pcValue.z << std::endl;
            unsigned char color[sizeof(float)];
            memcpy(color, &pcValue[3], sizeof(float));
            std::cout << "Point cloud color at center: R=" << (int)color[0] << ", G=" << (int)color[1] << ", B=" << (int)color[2] << std::endl;
        }
    }
}

◆ getDepthMaxRangeValue()

float getDepthMaxRangeValue ( )

Returns the current maximum distance of depth estimation.

When estimating the depth, the SDK uses this upper limit to turn higher values into TOO_FAR ones.

Returns
The current maximum distance that can be computed in the defined UNIT.
Note
This method doesn't return the highest value of the current depth map, but the highest possible one.

◆ setDepthMaxRangeValue()

void setDepthMaxRangeValue ( float  depth_max_range)

Sets the maximum distance of depth estimation (All values beyond this limit will be reported as TOO_FAR).

This method can be used to extend or reduce the depth perception range. However, the depth accuracy decreases with distance.

Parameters
depth_max_range: maximum distance in the defined UNIT.
Note
Changing this value has no impact on performance and does not affect the positional tracking or the spatial mapping. It only affects the depth, point cloud and normals.
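As a minimal sketch (assuming an already opened sl::Camera object named `zed`), the depth range can be adjusted and queried at runtime between two grab() calls:

```cpp
// Sketch: adjust the depth range at runtime.
// Assumes an opened sl::Camera named "zed"; units follow InitParameters::coordinate_units.
zed.setDepthMaxRangeValue(20.f); // depth values beyond 20 (e.g. meters) become TOO_FAR
std::cout << "Depth range: [" << zed.getDepthMinRangeValue()
          << ", " << zed.getDepthMaxRangeValue() << "]" << std::endl;
```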

◆ getDepthMinRangeValue()

float getDepthMinRangeValue ( )

Returns the closest measurable distance by the camera, according to the camera and the depth map parameters.

When estimating the depth, the SDK uses this lower limit to turn lower values into TOO_CLOSE ones.

Returns
The minimum distance that can be computed in the defined UNIT.
Note
This method doesn't return the lowest value of the current depth map, but the lowest possible one.

◆ getConfidenceThreshold()

int getConfidenceThreshold ( )

Returns the current confidence threshold value applied to the depth map.

Each depth pixel has a corresponding confidence. (MEASURE_CONFIDENCE)
This function returns the value currently used to reject unconfident depth pixels.
By default, the confidence threshold is set at 100, meaning that no depth pixel will be rejected.

Returns
The current threshold value between 0 and 100.
See also
setConfidenceThreshold()

◆ setConfidenceThreshold()

void setConfidenceThreshold ( int  conf_threshold_value)

Sets a threshold to reject depth values based on their confidence.

Each depth pixel has a corresponding confidence. (MEASURE_CONFIDENCE)
A low confidence value means more confidence and precision (but less density); a high confidence value means higher density but less certainty.
- setConfidenceThreshold(100) will allow values from 0 to 100. (no filtering)
- setConfidenceThreshold(90) will allow values from 0 to 90. (filtering out the most unconfident pixels)
- setConfidenceThreshold(30) will allow values from 0 to 30. (keeping most confident pixels and lowering the density of the depth map)

Parameters
conf_threshold_value: a value in [1,100].
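As a sketch (assuming an already opened sl::Camera object named `zed`), the threshold can be tightened to trade depth map density for certainty:

```cpp
// Sketch: keep only the more confident half of the confidence range.
// Assumes an opened sl::Camera named "zed".
zed.setConfidenceThreshold(50); // reject pixels whose confidence is above 50
if (zed.grab() == SUCCESS) {
    // Depth maps retrieved from now on only contain pixels with confidence in [0, 50]
}
std::cout << "Current threshold: " << zed.getConfidenceThreshold() << std::endl;
```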

◆ enableTracking()

ERROR_CODE enableTracking ( TrackingParameters  tracking_parameters = TrackingParameters())

Initializes and starts the positional tracking processes.

This function allows you to enable the position estimation of the SDK. It only has to be called once in the camera's lifetime.
When enabled, the position will be updated at each grab() call.
Tracking-specific parameters can be set by providing TrackingParameters to this function.

Parameters
tracking_parameters: A structure containing all the TrackingParameters . default : a preset of TrackingParameters.
Returns
ERROR_CODE_FAILURE if the area_file_path file wasn't found, SUCCESS otherwise.
Warning
The positional tracking feature benefits from a high framerate. We found HD720@60fps to be the best compromise between image quality and framerate.
#include <sl/Camera.hpp>
using namespace sl;
int main(int argc, char **argv) {
    // --- Initialize a Camera object and open the ZED
    // Create a ZED camera object
    Camera zed;
    // Set configuration parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION_HD720; // Use HD720 video mode
    init_params.camera_fps = 60; // Set fps at 60
    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }
    // Set tracking parameters
    TrackingParameters track_params;
    track_params.enable_spatial_memory = true;
    // Enable positional tracking
    err = zed.enableTracking(track_params);
    if (err != SUCCESS) {
        std::cout << "Tracking error: " << toString(err) << std::endl;
        exit(-1);
    }
    // --- Main loop
    while (true) {
        if (zed.grab() == SUCCESS) { // Grab an image and compute the tracking
            Pose cameraPose;
            zed.getPosition(cameraPose, REFERENCE_FRAME_WORLD); // Retrieve the camera pose
            std::cout << "Camera position: X=" << cameraPose.getTranslation().x << " Y=" << cameraPose.getTranslation().y << " Z=" << cameraPose.getTranslation().z << std::endl;
        }
    }
    // --- Close the Camera
    zed.close();
    return 0;
}

◆ getPosition()

TRACKING_STATE getPosition ( Pose camera_pose,
REFERENCE_FRAME  reference_frame = REFERENCE_FRAME_WORLD 
)

Retrieves the estimated position and orientation of the camera in the specified reference frame.


Using REFERENCE_FRAME_WORLD, the returned pose relates to the initial position of the camera. (TrackingParameters::initial_world_transform )
Using REFERENCE_FRAME_CAMERA, the returned pose relates to the previous position of the camera.

If the tracking has been initialized with TrackingParameters::enable_spatial_memory set to true (the default), this function can return TRACKING_STATE_SEARCHING.
This means that the tracking has lost its link to the initial reference frame and is currently trying to relocate the camera. However, it will keep providing position estimations.

Parameters
camera_pose[out]: the pose containing the position of the camera and other information (timestamp, confidence)
reference_frame: defines the reference from which you want the pose to be expressed. Default : REFERENCE_FRAME_WORLD.
Returns
The current state of the tracking process.


Extract Rotation Matrix : camera_pose.getRotation();
Extract Translation Vector: camera_pose.getTranslation();
Convert to Orientation / quaternion : camera_pose.getOrientation();

Note
The position is provided in the InitParameters::coordinate_system . See COORDINATE_SYSTEM for its physical origin.
Warning
This function requires the tracking to be enabled. enableTracking() .
// --- Main loop
while (true) {
    if (zed.grab() == SUCCESS) { // Grab an image and compute the tracking
        Pose cameraPose;
        zed.getPosition(cameraPose, REFERENCE_FRAME_WORLD); // Retrieve the camera pose in the world frame
        std::cout << "Camera position: X=" << cameraPose.getTranslation().x << " Y=" << cameraPose.getTranslation().y << " Z=" << cameraPose.getTranslation().z << std::endl;
        std::cout << "Camera Euler rotation: X=" << cameraPose.getEulerAngles().x << " Y=" << cameraPose.getEulerAngles().y << " Z=" << cameraPose.getEulerAngles().z << std::endl;
        std::cout << "Camera Rodrigues rotation: X=" << cameraPose.getRotationVector().x << " Y=" << cameraPose.getRotationVector().y << " Z=" << cameraPose.getRotationVector().z << std::endl;
        std::cout << "Camera quaternion orientation: X=" << cameraPose.getOrientation().x << " Y=" << cameraPose.getOrientation().y << " Z=" << cameraPose.getOrientation().z << " W=" << cameraPose.getOrientation().w << std::endl;
        std::cout << std::endl;
    }
}

◆ saveCurrentArea()

ERROR_CODE saveCurrentArea ( String  area_file_path)

Saves the current area learning file. The file will contain spatial memory data generated by the tracking.

If the tracking has been initialized with TrackingParameters::enable_spatial_memory set to true (the default), this function allows you to export the spatial memory.
Reloading the exported file in a future session with TrackingParameters::area_file_path initializes the tracking within the same reference frame.
This function is asynchronous and only triggers the file generation. You can use getAreaExportState() to get the export state. The positional tracking keeps running while exporting.

Parameters
area_file_path: save the spatial memory database in an '.area' file.
Returns
ERROR_CODE_FAILURE if the area_file_path file wasn't found, SUCCESS otherwise.
See also
getAreaExportState()
Note
Please note that this function will also flush the area database that was built / loaded.
Warning
If the camera wasn't moved during the tracking session, or not enough, the spatial memory won't be usable and the file won't be exported.
The getAreaExportState() function will return AREA_EXPORT_STATE_NOT_STARTED.
A few meters (~3m) of translation or a full rotation should be enough to get usable spatial memory.
However, as it should be used for relocation purposes, visiting a significant portion of the environment is recommended before exporting.
...
// --- Main loop
int i = 0;
while (i < 1000) { // Grab a bounded number of frames so the loop terminates
    if (zed.grab() == SUCCESS) { // Grab an image and compute the tracking
        i++;
    }
}
// Export the spatial memory for future sessions
zed.saveCurrentArea("office.area"); // The actual file will be created asynchronously.
std::cout << zed.getAreaExportState() << std::endl;
// --- Close the Camera
zed.close(); // The close method will wait for the end of the file creation using getAreaExportState().
return 0;
}

◆ getAreaExportState()

AREA_EXPORT_STATE getAreaExportState ( )

Returns the state of the spatial memory export process.

As saveCurrentArea() only starts the export, this function lets you know when the export has finished or whether it failed.

Returns
The current state of the spatial memory export process

◆ resetTracking()

ERROR_CODE resetTracking ( const Transform path)

Resets the tracking, and re-initializes the position with the given transformation matrix.

Parameters
path: Position of the camera in the world frame when the function is called. By default, it is set to identity.
Returns
SUCCESS if the tracking has been reset, ERROR_CODE_FAILURE otherwise.
Note
Please note that this function will also flush the accumulated or loaded spatial memory.
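As a sketch (assuming an opened sl::Camera named `zed` with tracking enabled; the translation value is purely illustrative), the tracking can be re-anchored at a known pose:

```cpp
// Sketch: restart the tracking at a known position.
// Assumes an opened sl::Camera named "zed" with tracking enabled.
sl::Transform new_origin; // identity by default
new_origin.setTranslation(sl::Translation(0.f, 1.5f, 0.f)); // illustrative: camera 1.5 units above the origin
if (zed.resetTracking(new_origin) != SUCCESS)
    std::cout << "Tracking could not be reset" << std::endl;
```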

◆ disableTracking()

void disableTracking ( String  area_file_path = "")

Disables the positional tracking.

The positional tracking is immediately stopped. If a file path is given, saveCurrentArea(area_file_path) will be called asynchronously. See getAreaExportState() to get the exportation state.
If the tracking has been enabled, this function will automatically be called by close() .

Parameters
area_file_path: if set, saves the spatial memory into an '.area' file. default : (empty)
area_file_path is the name and path of the database, e.g. "path/to/file/myArea1.area".
Note
The '.area' database depends on the depth map SENSING_MODE chosen during the recording. The same mode must be used to reload the database.

◆ isTrackingEnabled()

bool isTrackingEnabled ( )

Tells if the tracking module is enabled.

◆ getIMUData()

ERROR_CODE getIMUData ( IMUData imu_data,
TIME_REFERENCE  reference_time 
)

Retrieves the IMU Data at a specific time reference.

Calling getIMUData with TIME_REFERENCE_CURRENT gives you the latest IMU data received. Getting all the data requires calling this function at 800 Hz in a dedicated thread.
Calling getIMUData with TIME_REFERENCE_IMAGE gives you the IMU data at the time of the latest image grabbed.

The IMUData object contains two categories of data:

Time-fused pose estimation, accessible through the pose accessors (see below).

Raw values from the IMU sensor.

Parameters
imu_data: [out] the IMUData that inherits from sl::Pose, containing the orientation of the IMU (pose in world reference frame) and other information (timestamp, raw imu data)
reference_time: defines the time reference from when you want the pose to be extracted.
Returns
SUCCESS if IMUData has been filled


Extract Rotation Matrix : imu_data.getRotation();
Extract Orientation / Quaternion: imu_data.getOrientation();

Note
Translation is not provided when using the IMU only.
The quaternion (fused data) is given in the specified COORDINATE_SYSTEM of InitParameters.
Warning
: Until v2.4, the IMU rotation (quaternion, rotation matrix, etc.) was expressed relative to the left camera frame. This means that the camera_imu_transform now given was already applied on the fused quaternion. Starting from v2.5, this transformation is given in getCameraInformation().camera_imu_transform and not applied anymore to the fused quaternion, to keep the data in the IMU frame. Therefore, to get the same values calculated in v2.4 with the ZED SDK v2.5 (and up), you will need to apply the transformation as shown in the example below :
// Example to get the same quaternion between v2.4 and v2.5
// SDK v2.4 :
sl::IMUData imudata;
zed.getIMUData(imudata);
sl::Orientation quaternionOnImage = imudata.getOrientation(); // quaternion is given in left camera frame

// SDK v2.5 and up
sl::IMUData imudata;
zed.getIMUData(imudata); // quaternion ( imudata.getOrientation() ) is given in IMU frame
sl::Transform cIMuMatrix = zed.getCameraInformation().camera_imu_transform; // IMU to left camera transform
sl::Transform cIMuMatrix_inv = sl::Transform::inverse(cIMuMatrix);
sl::Transform dataOnImage = cIMuMatrix * imudata.pose_data * cIMuMatrix_inv;
sl::Orientation quaternionOnImage = dataOnImage.getOrientation(); // quaternion is given in left camera frame
Warning
In SVO reading mode, TIME_REFERENCE_CURRENT is currently not available (yielding ERROR_CODE_INVALID_FUNCTION_PARAMETERS). Only the quaternion data at TIME_REFERENCE_IMAGE is available. Other values will be set to 0.

◆ setIMUPrior()

ERROR_CODE setIMUPrior ( const sl::Transform transform)

Sets an optional IMU orientation hint that will be used to assist the tracking during the next grab().

This function can be used to assist the positional tracking rotation while using a ZED Mini.

Note
This function is only effective if a ZED Mini (ZED-M) is used.
It needs to be called before the grab() function.
Parameters
transform: the sl::Transform to be ingested into the IMU fusion. Note that only the rotation is used.
Returns
SUCCESS if the transform has been passed, ERROR_CODE_INVALID_FUNCTION_CALL otherwise (such as when used with a ZED camera, which has no IMU).
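As a sketch (assuming an opened ZED Mini as sl::Camera `zed`, and an orientation estimate obtained from an external source), the prior is fed in just before grabbing:

```cpp
// Sketch: feed an external orientation estimate to the IMU fusion (ZED Mini only).
// Assumes an opened sl::Camera named "zed". The identity transform here is
// purely illustrative; a real application would fill in a measured rotation.
sl::Transform prior; // identity rotation, stands in for an external estimate
if (zed.setIMUPrior(prior) == SUCCESS) {
    zed.grab(); // the prior assists the tracking during this grab
}
```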

◆ enableSpatialMapping()

ERROR_CODE enableSpatialMapping ( SpatialMappingParameters  spatial_mapping_parameters = SpatialMappingParameters())

Initializes and starts the spatial mapping processes.

The spatial mapping will create a geometric representation of the scene based on both tracking data and 3D point clouds.
The resulting output can be a Mesh or a FusedPointCloud. It can be obtained by calling extractWholeSpatialMap() or retrieveSpatialMapAsync(). Note that retrieveSpatialMapAsync() should be called after requestSpatialMapAsync().

Parameters
spatial_mapping_parameters: the structure containing all the specific parameters for the spatial mapping.
Default: a balanced parameter preset between geometric fidelity and output file size. For more information, see the SpatialMappingParameters documentation.
Returns
SUCCESS if everything went fine, ERROR_CODE_FAILURE otherwise.
Warning
The tracking (enableTracking() ) and the depth (RuntimeParameters::enable_depth ) need to be enabled to use the spatial mapping.
The performance greatly depends on the spatial_mapping_parameters. Lower SpatialMappingParameters.range_meter and SpatialMappingParameters.resolution_meter for higher performance. If the mapping framerate is too slow in live mode, consider using an SVO file, or choose a lower mesh resolution.
Note
This feature uses host memory (RAM) to store the 3D map. The maximum amount of available memory allowed can be tweaked using the SpatialMappingParameters.
Exceeding the maximum allowed memory immediately stops the mapping.
#ifndef NDEBUG
#error "Spatial mapping requires Release mode, not Debug."
#endif
#include <sl/Camera.hpp>
using namespace sl;
int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;
    // Set initial parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION_HD720; // Use HD720 video mode (default fps: 60)
    init_params.coordinate_system = COORDINATE_SYSTEM_RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system (the OpenGL one)
    init_params.coordinate_units = UNIT_METER; // Set units in meters
    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != SUCCESS)
        exit(-1);
    // Positional tracking needs to be enabled before using spatial mapping
    sl::TrackingParameters tracking_parameters;
    err = zed.enableTracking(tracking_parameters);
    if (err != SUCCESS)
        exit(-1);
    // Enable spatial mapping
    sl::SpatialMappingParameters mapping_parameters;
    err = zed.enableSpatialMapping(mapping_parameters);
    if (err != SUCCESS)
        exit(-1);
    // Grab data during 500 frames
    int i = 0;
    sl::Mesh mesh; // Create a mesh object
    while (i < 500) {
        // For each new grab, mesh data is updated
        if (zed.grab() == SUCCESS) {
            // In the background, spatial mapping will use newly retrieved images, depth and pose to update the mesh
            SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();
            // Print spatial mapping state
            std::cout << "Images captured: " << i << " / 500 || Spatial mapping state: " << spatialMappingState2str(mapping_state) << std::endl;
            i++;
        }
    }
    std::cout << std::endl;
    // Extract, filter and save the mesh in an obj file
    std::cout << "Extracting Mesh ..." << std::endl;
    zed.extractWholeMesh(mesh); // Extract the whole mesh
    std::cout << "Filtering Mesh ..." << std::endl;
    mesh.filter(sl::MeshFilterParameters::MESH_FILTER_LOW); // Filter the mesh (remove unnecessary vertices and faces)
    std::cout << "Saving Mesh in mesh.obj ..." << std::endl;
    mesh.save("mesh.obj"); // Save the mesh in an obj file
    // Disable tracking and mapping and close the camera
    zed.close();
    return 0;
}

◆ getSpatialMappingState()

SPATIAL_MAPPING_STATE getSpatialMappingState ( )

Returns the current spatial mapping state.

As the spatial mapping runs asynchronously, this function allows you to get reported errors or status info.

Returns
The current state of the spatial mapping process
See also
SPATIAL_MAPPING_STATE

◆ requestMeshAsync()

void requestMeshAsync ( )

Starts the mesh generation process in a non blocking thread from the spatial mapping process.

As mesh generation can take a long time depending on the mapping resolution and covered area, this function triggers the generation of a mesh without blocking the program. You can get info about the current mesh generation using getMeshRequestStatusAsync(), and retrieve the mesh using retrieveMeshAsync(...) .

Note
Only one mesh generation can be done at a time; while the previous generation is not finished, every new call will be ignored.
Deprecated:
Since SDK 2.8, use requestSpatialMapAsync instead
sl::Mesh mesh;
zed.requestMeshAsync();
while (zed.getMeshRequestStatusAsync() == ERROR_CODE_FAILURE) {
    // Mesh is still generating
}
zed.retrieveMeshAsync(mesh);
std::cout << "Number of triangles in the mesh: " << mesh.getNumberOfTriangles() << std::endl;

◆ getMeshRequestStatusAsync()

ERROR_CODE getMeshRequestStatusAsync ( )

Returns the mesh generation status. Useful after calling requestMeshAsync to know if you can call retrieveMeshAsync.

Returns
SUCCESS if the mesh is ready and not yet retrieved, otherwise ERROR_CODE_FAILURE.
Deprecated:
Since SDK 2.8, use getSpatialMapRequestStatusAsync instead


See requestMeshAsync() for an example.

◆ retrieveMeshAsync()

ERROR_CODE retrieveMeshAsync ( Mesh mesh)

Retrieves the current generated mesh.

After calling requestMeshAsync() , this function allows you to retrieve the generated mesh. The mesh will only be available when getMeshRequestStatusAsync() returned SUCCESS

Parameters
mesh: [out] The mesh to be filled.
Returns
SUCCESS if the mesh is retrieved, otherwise ERROR_CODE_FAILURE.
Note
This function only updates chunks that need to be updated and adds the new ones in order to improve update speed.
Warning
You should not modify the mesh between two calls of this function, as it can lead to a corrupted mesh.
Deprecated:
Since SDK 2.8, use retrieveSpatialMapAsync instead


See requestMeshAsync() for an example.

◆ extractWholeMesh()

ERROR_CODE extractWholeMesh ( Mesh mesh)

Extracts the current mesh from the spatial mapping process.

If the mesh object to be filled already contains a previous version of the mesh, only changes will be updated, optimizing performance.

Parameters
mesh: [out] The mesh to be filled.
Returns
SUCCESS if the mesh is filled and available, otherwise ERROR_CODE_FAILURE.
Warning
This is a blocking function. You should either call it in a thread or at the end of the mapping process.
As it can be long, calling this function in the grab loop will block the depth and tracking computation and therefore give bad results.
Deprecated:
Since SDK 2.8, use extractWholeSpatialMap instead


See enableSpatialMapping() for an example.

◆ requestSpatialMapAsync()

void requestSpatialMapAsync ( )

Starts the spatial map generation process in a non blocking thread from the spatial mapping process.

The spatial map generation can take a long time depending on the mapping resolution and covered area. This function will trigger the generation of a mesh without blocking the program. You can get info about the current generation using getSpatialMapRequestStatusAsync(), and retrieve the mesh using retrieveSpatialMapAsync(...) .

Note
Only one mesh can be generated at a time. If the previous mesh generation is not over, new calls of the function will be ignored.
sl::Mesh mesh;
zed.requestSpatialMapAsync();
while (zed.getSpatialMapRequestStatusAsync() == ERROR_CODE_FAILURE) {
    // Mesh is still generating
}
zed.retrieveSpatialMapAsync(mesh);
std::cout << "Number of triangles in the mesh: " << mesh.getNumberOfTriangles() << std::endl;

◆ getSpatialMapRequestStatusAsync()

ERROR_CODE getSpatialMapRequestStatusAsync ( )

Returns the spatial map generation status. Useful after calling requestSpatialMapAsync() to know if the mesh can be retrieved by calling retrieveSpatialMapAsync().

Returns
SUCCESS if the mesh is ready and not yet retrieved, otherwise ERROR_CODE_FAILURE.


See requestSpatialMapAsync() for an example.

◆ retrieveSpatialMapAsync() [1/2]

ERROR_CODE retrieveSpatialMapAsync ( Mesh mesh)

Retrieves the current generated spatial map.

After calling requestSpatialMapAsync() , this function allows you to retrieve the generated mesh. The mesh will only be available when getSpatialMapRequestStatusAsync() returns SUCCESS.

Parameters
mesh: [out] The mesh to be filled with the generated spatial map.
Returns
SUCCESS if the mesh is retrieved, otherwise ERROR_CODE_FAILURE.
Note
This function only updates the necessary chunks and adds the new ones in order to improve update speed.
Warning
You should not modify the mesh between two calls of this function, otherwise it can lead to corrupted mesh.


See requestSpatialMapAsync() for an example.

◆ retrieveSpatialMapAsync() [2/2]

ERROR_CODE retrieveSpatialMapAsync ( FusedPointCloud fpc)

Retrieves the current generated spatial map.

After calling requestSpatialMapAsync() , this function allows you to retrieve the generated fused point cloud. The fused point cloud will only be available when getSpatialMapRequestStatusAsync() returns SUCCESS.

Parameters
fpc: [out] The fused point cloud to be filled with the generated spatial map.
Returns
SUCCESS if the fused point cloud is retrieved, otherwise ERROR_CODE_FAILURE.
Note
This function only updates the necessary chunks and adds the new ones in order to improve update speed.
Warning
You should not modify the fused point cloud between two calls of this function, otherwise it can lead to a corrupted fused point cloud.


See requestSpatialMapAsync() for an example.

◆ extractWholeSpatialMap() [1/2]

ERROR_CODE extractWholeSpatialMap ( Mesh mesh)

Extracts the current spatial map from the spatial mapping process.

If the object to be filled already contains a previous version of the mesh, only changes will be updated, optimizing performance.

Parameters
mesh: [out] The mesh to be filled with the generated spatial map.
Returns
SUCCESS if the mesh is filled and available, otherwise ERROR_CODE_FAILURE.
Warning
This is a blocking function. You should either call it in a thread or at the end of the mapping process.
The extraction can be long, calling this function in the grab loop will block the depth and tracking computation giving bad results.


See enableSpatialMapping() for an example.

◆ extractWholeSpatialMap() [2/2]

ERROR_CODE extractWholeSpatialMap ( FusedPointCloud fpc)

Extracts the current spatial map from the spatial mapping process.

If the object to be filled already contains a previous version of the fused point cloud, only changes will be updated, optimizing performance.

Parameters
fpc: [out] The fused point cloud to be filled with the generated spatial map.
Returns
SUCCESS if the fused point cloud is filled and available, otherwise ERROR_CODE_FAILURE.
Warning
This is a blocking function. You should either call it in a thread or at the end of the mapping process.
The extraction can be long, calling this function in the grab loop will block the depth and tracking computation giving bad results.


See enableSpatialMapping() for an example.

◆ pauseSpatialMapping()

void pauseSpatialMapping ( bool  status)

Pauses or resumes the spatial mapping processes.

As spatial mapping runs asynchronously, using this function can pause its computation to free some processing power, and resume it again later.
For example, it can be used to avoid mapping a specific area or to pause the mapping when the camera is static.

Parameters
status: if true, the integration is paused. If false, the spatial mapping is resumed.
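As a sketch (assuming an opened sl::Camera named `zed` with spatial mapping enabled, and a hypothetical application-provided `isCameraStatic()` helper), mapping can be paused whenever the camera is known not to be moving:

```cpp
// Sketch: pause the map integration while the camera is static.
// Assumes an opened sl::Camera named "zed" with spatial mapping enabled.
// isCameraStatic() is a hypothetical helper supplied by the application.
if (zed.grab() == SUCCESS) {
    zed.pauseSpatialMapping(isCameraStatic()); // true pauses, false resumes
}
```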

◆ disableSpatialMapping()

void disableSpatialMapping ( )

Disables the spatial mapping process.

The spatial mapping is immediately stopped.
If the mapping has been enabled, this function will automatically be called by close().

Note
This function frees the memory allocated for the spatial mapping, consequently, the spatial map cannot be retrieved after this call.

◆ findPlaneAtHit()

ERROR_CODE findPlaneAtHit ( sl::uint2  coord,
sl::Plane plane 
)

Checks the plane at the given left image coordinates.

This function gives the 3D plane corresponding to a given pixel in the latest left image grabbed.
The pixel coordinates are expected to lie in [0, getResolution().width-1] x [0, getResolution().height-1].

Parameters
coord: [in] The image coordinate. The coordinate must be taken from the full-size image
plane: [out] The detected plane if the function succeeded
Returns
SUCCESS if a plane is found otherwise ERROR_CODE_PLANE_NOT_FOUND .
Note
The reference frame is defined by the RuntimeParameters::measure3D_reference_frame given to the grab() function.
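As a sketch (assuming an opened sl::Camera named `zed` and a successful grab() beforehand), the center pixel of the left image can be hit-tested:

```cpp
// Sketch: hit-test the center pixel of the latest left image.
// Assumes an opened sl::Camera named "zed" after a successful grab().
sl::uint2 center(zed.getResolution().width / 2, zed.getResolution().height / 2);
sl::Plane plane;
if (zed.findPlaneAtHit(center, plane) == SUCCESS) {
    sl::float3 n = plane.getNormal();
    std::cout << "Plane normal: " << n.x << ", " << n.y << ", " << n.z << std::endl;
} else {
    std::cout << "No plane found at this pixel" << std::endl;
}
```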

◆ findFloorPlane()

ERROR_CODE findFloorPlane ( sl::Plane floorPlane,
sl::Transform resetTrackingFloorFrame,
float  floor_height_prior = INVALID_VALUE,
sl::Rotation  world_orientation_prior = sl::Matrix3f::zeros(),
float  floor_height_prior_tolerance = INVALID_VALUE 
)

Detects the floor plane of the scene.

This function analyses the latest image and depth to estimate the floor plane of the scene.
It expects the floor plane to be visible and bigger than other candidate planes, like a table.

Parameters
floorPlane: [out] The detected floor plane if the function succeeded
resetTrackingFloorFrame: [out] The transform to align the tracking with the floor plane. The initial position will then be at ground height, with the axis aligned with the gravity. The positional tracking needs to be reset/enabled with this transform as a parameter (TrackingParameters.initial_world_transform)
floor_height_prior: [in] Prior set to locate the floor plane depending on the known camera distance to the ground, expressed in the same unit as the ZED. If the prior is too far from the detected floor plane, the function will return ERROR_CODE_PLANE_NOT_FOUND
world_orientation_prior: [in] Prior set to locate the floor plane depending on the known camera orientation to the ground. If the prior is too far from the detected floor plane, the function will return ERROR_CODE_PLANE_NOT_FOUND
floor_height_prior_tolerance: [in] Prior height tolerance, absolute value.
Returns
SUCCESS if the floor plane is found and matches the priors (if defined), otherwise ERROR_CODE_PLANE_NOT_FOUND
Note
The reference frame is defined by the sl::RuntimeParameters (measure3D_reference_frame) given to the grab() function. The length unit is defined by sl::InitParameters (coordinate_units). With the ZED, the assumption is made that the floor plane is the dominant plane in the scene. The ZED Mini uses the gravity as prior.
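As a sketch (assuming an opened sl::Camera named `zed` with tracking enabled and a successful grab() beforehand), the floor detection can be combined with resetTracking() to anchor the world origin on the ground:

```cpp
// Sketch: detect the floor and re-anchor the tracking on it.
// Assumes an opened sl::Camera named "zed" with tracking enabled,
// called after a successful grab().
sl::Plane floor_plane;
sl::Transform reset_transform;
if (zed.findFloorPlane(floor_plane, reset_transform) == SUCCESS) {
    // Restart the tracking so the world origin lies on the detected floor
    zed.resetTracking(reset_transform);
} else {
    std::cout << "Floor plane not found" << std::endl;
}
```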

◆ enableRecording()

ERROR_CODE enableRecording ( String  video_filename,
SVO_COMPRESSION_MODE  compression_mode = SVO_COMPRESSION_MODE_LOSSLESS 
)

Creates an SVO file to be filled by record().


SVO files are custom video files containing the unrectified images from the camera along with some metadata like timestamps or IMU orientation (if applicable).
They can be used to simulate a live ZED and test a sequence with various SDK parameters.
Depending on the application, various compression modes are available. See SVO_COMPRESSION_MODE.

Parameters
video_filename: filename of the SVO file.
compression_mode: can be one of the SVO_COMPRESSION_MODE enum. default : SVO_COMPRESSION_MODE_LOSSLESS.
Returns
an ERROR_CODE that defines if the SVO file was successfully created and can be filled with images.
Warning
This function can be called multiple times during the camera lifetime, but if video_filename already exists, the file will be erased.
Note
SVO_COMPRESSION_MODE::SVO_COMPRESSION_MODE_RAW is deprecated in recording mode.
#include <sl/Camera.hpp>
using namespace sl;
int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;
    // Set initial parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION_HD720; // Use HD720 video mode (default fps: 60)
    init_params.coordinate_units = UNIT_METER; // Set units in meters
    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }
    // Enable video recording
    err = zed.enableRecording("myVideoFile.svo", SVO_COMPRESSION_MODE_LOSSLESS);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }
    // Grab data during 500 frames
    int i = 0;
    while (i < 500) {
        // Grab a new frame
        if (zed.grab() == SUCCESS) {
            // Record the grabbed frame in the video file
            zed.record();
            i++;
        }
    }
    std::cout << "Video has been saved ..." << std::endl;
    zed.close();
    return 0;
}

◆ record()

RecordingState record ( )

Records the current frame provided by grab() into the file.

Calling this function after a successful grab() call saves the images into the video file opened by enableRecording() .

Returns
The recording state structure. For more details, see RecordingState.
Warning
The latest grabbed frame will be saved, so grab() must be called before.


See enableRecording() for an example.

◆ isRecordingEnabled()

bool isRecordingEnabled ( )

Tells if a recording is in progress.

◆ disableRecording()

void disableRecording ( )

Disables the recording initiated by enableRecording() and closes the generated file.

Note
This function will automatically be called by close() if enableRecording() was called.

See enableRecording() for an example.

◆ enableStreaming()

ERROR_CODE enableStreaming ( StreamingParameters  streaming_parameters = StreamingParameters())

Creates a streaming pipeline.

Parameters
streaming_parameters: the structure containing all the specific parameters for the streaming.
#include <sl/Camera.hpp>
using namespace sl;
int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;
    // Set initial parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION_HD720; // Use HD720 video mode (default fps: 60)
    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }
    // Enable streaming
    StreamingParameters stream_params;
    stream_params.port = 30000;
    stream_params.bitrate = 8000;
    err = zed.enableStreaming(stream_params);
    if (err != SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }
    // Grab data during 500 frames
    int i = 0;
    while (i < 500) {
        // Grab a new frame
        if (zed.grab() == SUCCESS) {
            i++;
        }
    }
    zed.close();
    return 0;
}
Returns
an ERROR_CODE that defines if the stream was started.
Possible Error Code :
* SUCCESS if streaming was successfully started
* ERROR_CODE_INVALID_FUNCTION_CALL if open() was not successfully called before.
* ERROR_CODE_FAILURE if streaming RTSP protocol was not able to start.
* ERROR_CODE_NO_GPU_COMPATIBLE if streaming codec is not supported (in this case, use AVCHD codec).

◆ disableStreaming()

void disableStreaming ( )

Disables the streaming initiated by enableStreaming()

Note
This function will automatically be called by close() if enableStreaming() was called. See enableStreaming() for an example.

◆ isStreamingEnabled()

bool isStreamingEnabled ( )

Tells if the streaming is running (true) or still initializing (false).

◆ getSDKVersion()

static String getSDKVersion ( )
static

Returns the version of the currently installed ZED SDK.

Returns
The ZED SDK version as a string with the following format: MAJOR.MINOR.PATCH
std::cout << Camera::getSDKVersion() << std::endl;

◆ isZEDconnected()

static int isZEDconnected ( )
static

Returns the number of connected cameras.

Returns
The number of connected cameras supported by the SDK (see MODEL).
Deprecated:
Use getDeviceList() instead.
Warning
This function has been deprecated in favor of getDeviceList(), which returns more information about the connected devices.

◆ getDeviceList()

static std::vector<sl::DeviceProperties> getDeviceList ( )
static

List all the connected devices with their associated information.

This function lists all the available cameras and provides their serial numbers, models, and other information.

Returns
The device properties for each connected camera
Warning
As this function returns an std::vector, it is only safe to use in Release mode (not Debug).
This is due to a known compatibility issue between release (the SDK) and debug (your app) implementations of std::vector.

◆ getStreamingDeviceList()

static std::vector<sl::StreamingProperties> getStreamingDeviceList ( )
static

List all the streaming devices with their associated information.

Returns
The streaming properties for each connected camera
Warning
As this function returns an std::vector, it is only safe to use in Release mode (not Debug).
This is due to a known compatibility issue between release (the SDK) and debug (your app) implementations of std::vector.
This function takes around 2 seconds to ensure that all network information has been captured. Consider running it in a separate thread.

◆ sticktoCPUCore()

static ERROR_CODE sticktoCPUCore ( int  cpu_core)
static

Only for Nvidia Jetson: Sticks the calling thread to a specific CPU core.

Parameters
cpu_core: int that defines the core the thread must run on, between 0 and 3 (cpu0, cpu1, cpu2, cpu3).
Returns
SUCCESS if the thread was successfully pinned to the core, otherwise a status error.
Warning
This function is only available on Nvidia Jetson. On other platforms, the result will always be 0 and no operation will be performed.