Camera Class Reference

This class is the main interface with the camera and the SDK features, such as video, depth, tracking, and mapping. More...

Functions

def close (self)
  If open() has been called, this function will close the connection to the camera (or the SVO file) and free the corresponding memory. More...
 
def open (self, py_init=InitParameters())
 Opens the ZED camera from the provided InitParameters. More...
 
def is_opened (self)
 Reports if the camera has been successfully opened. More...
 
def grab (self, py_runtime=RuntimeParameters())
This function grabs the latest images from the camera, rectifies them, and computes the measurements (depth, point cloud, tracking, etc.) based on the provided RuntimeParameters. As measures are created in this function, its execution can last a few milliseconds, depending on your parameters and your hardware. More...
 
def retrieve_image (self, Mat py_mat, view=VIEW.LEFT, type=MEM.CPU, resolution=Resolution(0, 0))
 Retrieves images from the camera (or SVO file). More...
 
def retrieve_measure (self, Mat py_mat, measure=MEASURE.DEPTH, type=MEM.CPU, resolution=Resolution(0, 0))
 Computed measures, like depth, point cloud, or normals, can be retrieved using this method. More...
 
def set_svo_position (self, int frame_number)
  Sets the playback cursor to the desired frame number in the SVO file. More...
 
def get_svo_position (self)
 Returns the current playback position in the SVO file. More...
 
def get_svo_number_of_frames (self)
 Returns the number of frames in the SVO file. More...
 
def set_camera_settings (self, VIDEO_SETTINGS settings, value=-1)
 Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.) More...
 
def set_camera_settings_roi (self, VIDEO_SETTINGS settings, Rect roi, eye=SIDE.BOTH, reset=False)
 Sets the ROI of the requested camera setting (AEC_AGC_ROI) More...
 
def get_camera_settings (self, VIDEO_SETTINGS setting)
 Returns the current value of the requested camera setting. More...
 
def get_camera_settings_roi (self, VIDEO_SETTINGS setting, Rect roi, eye=SIDE.BOTH)
Returns the currently used ROI for the requested camera setting (AEC_AGC_ROI). More...
 
def get_current_fps (self)
  Returns the current framerate at which the grab() method is successfully called. More...
 
def get_timestamp (self, TIME_REFERENCE time_reference)
 Returns the timestamp in the requested TIME_REFERENCE. More...
 
def get_frame_dropped_count (self)
  Returns the number of frames dropped since grab() was called for the first time. More...
 
def get_camera_information (self, resizer=Resolution(0, 0))
 Returns the calibration parameters, serial number and other information about the camera being used. More...
 
def get_runtime_parameters (self)
  Returns the runtime parameters used. More...
 
def get_init_parameters (self)
  Returns the init parameters used. More...
 
def get_positional_tracking_parameters (self)
 Returns the positional tracking parameters used. More...
 
def get_spatial_mapping_parameters (self)
 Returns the spatial mapping parameters used. More...
 
def get_object_detection_parameters (self)
 Returns the object detection parameters used. More...
 
def get_streaming_parameters (self)
  Returns the streaming parameters used. More...
 
def enable_positional_tracking (self, py_tracking=PositionalTrackingParameters())
  Initializes and starts the positional tracking processes. More...
 
def update_self_calibration (self)
 Performs a new self calibration process. More...
 
def get_sensors_data (self, SensorsData py_sensors_data, time_reference=TIME_REFERENCE.CURRENT)
Retrieves the sensors (IMU, magnetometer, barometer) data at a specific time reference. More...
 
def set_imu_prior (self, Transform transfom)
 Set an optional IMU orientation hint that will be used to assist the tracking during the next grab(). More...
 
def get_position (self, Pose py_pose, reference_frame=REFERENCE_FRAME.WORLD)
  Retrieves the estimated position and orientation of the camera in the specified reference frame. More...
 
def get_area_export_state (self)
 Returns the state of the spatial memory export process. More...
 
def save_area_map (self, area_file_path="")
 Saves the current area learning file. More...
 
def disable_positional_tracking (self, area_file_path="")
 Disables the positional tracking. More...
 
def is_positional_tracking_enabled (self)
 Tells if the tracking module is enabled.
 
def reset_positional_tracking (self, Transform path)
 Resets the tracking, and re-initializes the position with the given transformation matrix. More...
 
def enable_spatial_mapping (self, py_spatial=SpatialMappingParameters())
 Initializes and starts the spatial mapping processes. More...
 
def pause_spatial_mapping (self, bool status)
 Pauses or resumes the spatial mapping processes. More...
 
def get_spatial_mapping_state (self)
 Returns the current spatial mapping state. More...
 
def request_spatial_map_async (self)
Starts the spatial map generation process in a non-blocking thread from the spatial mapping process. More...
 
def get_spatial_map_request_status_async (self)
  Returns the spatial map generation status. More...
 
def retrieve_spatial_map_async (self, py_mesh)
  Retrieves the current generated spatial map. More...
 
def extract_whole_spatial_map (self, py_mesh)
 Extracts the current spatial map from the spatial mapping process. More...
 
def find_plane_at_hit (self, coord, Plane py_plane)
 Checks the plane at the given left image coordinates. More...
 
def find_floor_plane (self, Plane py_plane, Transform resetTrackingFloorFrame, floor_height_prior=float('nan'), world_orientation_prior=Rotation(Matrix3f().zeros()), floor_height_prior_tolerance=float('nan'))
  Detect the floor plane of the scene. More...
 
def disable_spatial_mapping (self)
 Disables the spatial mapping process. More...
 
def enable_streaming (self, streaming_parameters=StreamingParameters())
 Creates a streaming pipeline for images. More...
 
def disable_streaming (self)
Disables the streaming initiated by enable_streaming(). More...
 
def is_streaming_enabled (self)
Tells if the streaming is actually sending data (True) or still in configuration (False).
 
def enable_recording (self, RecordingParameters record)
 Creates an SVO file to be filled by record(). More...
 
def disable_recording (self)
 Disables the recording initiated by enable_recording() and closes the generated file. More...
 
def get_recording_status (self)
  Get the recording information More...
 
def pause_recording (self, value=True)
 Pauses or resumes the recording. More...
 
def get_recording_parameters (self)
 Returns the recording parameters used. More...
 
def enable_object_detection (self, object_detection_parameters=ObjectDetectionParameters())
 Initializes and starts the object detection module. More...
 
def disable_object_detection (self)
 Disables the Object Detection process. More...
 
def pause_object_detection (self, bool status)
 Pauses or resumes the object detection processes. More...
 
def retrieve_objects (self, Objects py_objects, object_detection_parameters=ObjectDetectionRuntimeParameters())
  Retrieve objects detected by the object detection module. More...
 
def get_objects_batch (self, list[ObjectsBatch] trajectories)
 Get a batch of detected objects. More...
 
def ingest_custom_box_objects (self, list[CustomBoxObjectData] objects_in)
Feeds the 3D object tracking function with 2D bounding boxes from your own detection algorithm. More...
 

Static Functions

def get_sdk_version ()
  Returns the version of the currently installed ZED SDK.
 
def get_device_list ()
  Lists all the connected devices with their associated information. More...
 
def get_streaming_device_list ()
  Lists all the streaming devices with their associated information. More...
 
def reboot (int sn, bool fullReboot=True)
  Performs a hardware reset of the ZED 2. More...
 

Detailed Description

This class is the main interface with the camera and the SDK features, such as video, depth, tracking, and mapping.

Find more information in the detailed description below.  

 A standard program will use the Camera class like this:  

import pyzed.sl as sl

def main():
    # --- Initialize a Camera object and open the ZED
    # Create a ZED camera object
    zed = sl.Camera()
    # Set configuration parameters
    init_params = sl.InitParameters()
    init_params.camera_resolution = sl.RESOLUTION.HD720 # Use HD720 video mode
    init_params.camera_fps = 60 # Set fps at 60
    # Open the camera
    err = zed.open(init_params)
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        exit(-1)
    runtime_param = sl.RuntimeParameters()
    runtime_param.sensing_mode = sl.SENSING_MODE.STANDARD
    # --- Main loop grabbing images and depth values
    # Capture 50 frames and stop
    i = 0
    image = sl.Mat()
    depth = sl.Mat()
    while i < 50:
        # Grab an image
        if zed.grab(runtime_param) == sl.ERROR_CODE.SUCCESS: # A new image is available if grab() returns SUCCESS
            # Display a pixel color
            zed.retrieve_image(image, sl.VIEW.LEFT) # Get the left image
            center_rgb = image.get_value(image.get_width() / 2, image.get_height() / 2)
            print("Image ", i, " center pixel R:", int(center_rgb[0]), " G:", int(center_rgb[1]), " B:", int(center_rgb[2]))
            # Display a pixel depth
            zed.retrieve_measure(depth, sl.MEASURE.DEPTH) # Get the depth map
            center_depth = depth.get_value(depth.get_width() / 2, depth.get_height() / 2)
            print("Image ", i, " center depth:", center_depth)
            i = i + 1
    # --- Close the Camera
    zed.close()
    return 0

if __name__ == "__main__":
    main()
 
Definition: sl.pyx:1

Functions

◆ close()

def close (   self)

 If open() has been called, this function will close the connection to the camera (or the SVO file) and free the corresponding memory.

If open() wasn't called or failed, this function won't have any effect.

Note
If an asynchronous task is running within the Camera object, like save_area_map(), this function will wait for its completion.  The open() function can then be called if needed.  
Warning
If the CUDA context was created by open(), this function will destroy it. Please make sure to delete your GPU sl.Mat objects before the context is destroyed.

◆ open()

def open (   self,
  py_init = InitParameters() 
)

Opens the ZED camera from the provided InitParameters.

This function will also check the hardware requirements and run a self-calibration.  

Parameters
py_init: a structure containing all the initial parameters. default : a preset of InitParameters.  
Returns
An error code giving information about the internal process. If SUCCESS is returned, the camera is ready to use. Every other code indicates an error and the program should be stopped.

 Here is the proper way to call this function:

zed = sl.Camera() # Create a ZED camera object
init_params = sl.InitParameters() # Set configuration parameters
init_params.camera_resolution = sl.RESOLUTION.HD720 # Use HD720 video mode
init_params.camera_fps = 60 # Set fps at 60
# Open the camera
err = zed.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    print(repr(err)) # Display the error
    exit(-1)
 

 

Note
If you are having issues opening a camera, the diagnostic tool provided in the SDK can help you identify the problem. If this function is called on an already opened camera, Camera.close() will be called.

◆ is_opened()

def is_opened (   self)

Reports if the camera has been successfully opened.

It has the same behavior as checking if open() returns SUCCESS.

Returns
True if the ZED camera is already set up, otherwise False.

◆ grab()

def grab (   self,
  py_runtime = RuntimeParameters() 
)

This function grabs the latest images from the camera, rectifies them, and computes the measurements (depth, point cloud, tracking, etc.) based on the provided RuntimeParameters. As measures are created in this function, its execution can last a few milliseconds, depending on your parameters and your hardware.

The exact duration will mostly depend on the RuntimeParameters you provide and on your hardware.

This function is meant to be called frequently in the main loop of your application.

Note
Since ZED SDK 3.0, this function is blocking. It means that grab() will wait until a new frame is detected and available. If no new frame is available before the timeout is reached, grab() will return ERROR_CODE.CAMERA_NOT_DETECTED, since the camera has probably been disconnected.
Parameters
py_runtime: a structure containing all the runtime parameters. default : a preset of RuntimeParameters.  
Returns
SUCCESS means that no problem was encountered. Returned errors can be displayed using repr(error).

 

# Set runtime parameters after opening the camera
runtime_param = sl.RuntimeParameters()
runtime_param.sensing_mode = sl.SENSING_MODE.STANDARD # Use STANDARD sensing mode
image = sl.Mat()
while True:
    # Grab an image
    if zed.grab(runtime_param) == sl.ERROR_CODE.SUCCESS: # A new image is available if grab() returns SUCCESS
        zed.retrieve_image(image, sl.VIEW.LEFT) # Get the left image
        # Use the image for your application
 

◆ retrieve_image()

def retrieve_image (   self,
Mat  py_mat,
  view = VIEW.LEFT,
  type = MEM.CPU,
  resolution = Resolution(0,0) 
)

Retrieves images from the camera (or SVO file).

 Multiple images are available along with a view of various measures for display purposes.  Available images and views are listed here. As an example, VIEW.DEPTH can be used to get a gray-scale version of the depth map, but the actual depth values can be retrieved using retrieve_measure() .

Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called. If your application can use GPU images, using the type parameter can increase performance by avoiding this copy. If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.

Image size
By default, images are returned in the resolution provided by get_camera_information() in CameraInformation.camera_resolution. However, you can request custom resolutions. For example, requesting a smaller image can help you speed up your application.

Parameters
py_mat: [out] the Mat to store the image.  
view: defines the image you want (see VIEW). default : VIEW.LEFT.  
type: whether the image should be provided in CPU or GPU memory. default : MEM.CPU
resolution: if specified, defines the Resolution of the output mat. If set to (0,0) , the ZED resolution will be taken. default : (0,0).
Returns
An ERROR_CODE :
- ERROR_CODE.SUCCESS if the method succeeded,
- ERROR_CODE.INVALID_FUNCTION_PARAMETERS if the view mode requires a module not enabled (VIEW.DEPTH with DEPTH_MODE.NONE for example),
- ERROR_CODE.INVALID_RESOLUTION if the resolution is higher than CameraInformation.camera_resolution provided by get_camera_information()

 

Note
As this function retrieves the images grabbed by the grab() function, it should be called afterwards.

 

# Create sl.Mat objects to store the images
left_image = sl.Mat()
depth_view = sl.Mat()
while True:
    # Grab an image
    if zed.grab() == sl.ERROR_CODE.SUCCESS: # A new image is available if grab() returns SUCCESS
        zed.retrieve_image(left_image, sl.VIEW.LEFT) # Get the rectified left image
        zed.retrieve_image(depth_view, sl.VIEW.DEPTH) # Get a grayscale preview of the depth map
        # Display the center pixel colors
        left_center = left_image.get_value(left_image.get_width() / 2, left_image.get_height() / 2)
        print("left_image center pixel R:", int(left_center[0]), " G:", int(left_center[1]), " B:", int(left_center[2]))
        depth_center = depth_view.get_value(depth_view.get_width() / 2, depth_view.get_height() / 2)
        print("depth_view center pixel R:", int(depth_center[0]), " G:", int(depth_center[1]), " B:", int(depth_center[2]))
 

◆ retrieve_measure()

def retrieve_measure (   self,
Mat  py_mat,
  measure = MEASURE.DEPTH,
  type = MEM.CPU,
  resolution = Resolution(0,0) 
)

Computed measures, like depth, point cloud, or normals, can be retrieved using this method.

 Multiple measures are available after a Camera.grab() call. A full list is available here.

Memory
By default, measures are copied from GPU memory to CPU memory (RAM) when this function is called. If your application can use GPU images, using the type parameter can increase performance by avoiding this copy. If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.

Measure size
By default, measures are returned in the resolution provided by get_camera_information() in CameraInformation.camera_resolution. However, custom resolutions can be requested. For example, requesting a smaller measure can help you speed up your application.

Parameters
py_mat: [out] the Mat to store the measures
measure: defines the measure you want (see MEASURE). default : MEASURE.DEPTH  
type: whether the measure should be provided in CPU or GPU memory. default : MEM.CPU.
resolution: if specified, defines the Resolution of the output mat. If set to Resolution(0,0), the ZED resolution will be taken. default : (0,0).
Returns
An ERROR_CODE :
- ERROR_CODE.SUCCESS if the method succeeded,
- ERROR_CODE.INVALID_FUNCTION_PARAMETERS if the measure requires a module not enabled (MEASURE.DEPTH with DEPTH_MODE.NONE for example),
- ERROR_CODE.INVALID_RESOLUTION if the resolution is higher than CameraInformation.camera_resolution provided by get_camera_information()
- ERROR_CODE.FAILURE if another error occurred.

 

Note
As this function retrieves the measures computed by the grab() function, it should be called after.

 

import math # for isfinite()
import struct # to unpack the RGBA color packed in a float

depth_map = sl.Mat()
point_cloud = sl.Mat()
resolution = zed.get_camera_information().camera_resolution
x = int(resolution.width / 2) # Center coordinates
y = int(resolution.height / 2)
while True:
    if zed.grab() == sl.ERROR_CODE.SUCCESS: # Grab an image
        zed.retrieve_measure(depth_map, sl.MEASURE.DEPTH, sl.MEM.CPU) # Get the depth map
        # Read a depth value
        err, center_depth = depth_map.get_value(x, y, sl.MEM.CPU) # each depth map pixel is a float value
        if math.isfinite(center_depth): # +Inf is "too far", -Inf is "too close", NaN is "unknown/occlusion"
            print("Depth value at center: ", center_depth, " ", init_params.coordinate_units)
        zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA, sl.MEM.CPU) # Get the point cloud
        # Read a point cloud value
        err, pc_value = point_cloud.get_value(x, y) # each point cloud pixel contains 4 floats, returned as a numpy array
        # Get 3D coordinates
        if math.isfinite(pc_value[2]):
            print("Point cloud coordinates at center: X=", pc_value[0], ", Y=", pc_value[1], ", Z=", pc_value[2])
            # Get color information by unpacking the unsigned char array containing RGBA values
            packed = struct.pack('f', pc_value[3])
            char_array = struct.unpack('BBBB', packed)
            print("Color values at center: R=", char_array[0], ", G=", char_array[1], ", B=", char_array[2], ", A=", char_array[3])
 

◆ set_svo_position()

def set_svo_position (   self,
int  frame_number 
)

 Sets the playback cursor to the desired frame number in the SVO file.

This function allows you to move around within a played-back SVO file. After calling, the next call to grab() will read the provided frame number.

Parameters
frame_number: the number of the desired frame to be decoded.
Note
Works only if the camera is open in SVO playback mode.

 

import pyzed.sl as sl

def main():
    # Create a ZED camera object
    zed = sl.Camera()
    # Set configuration parameters
    init_params = sl.InitParameters()
    init_params.set_from_svo_file("path/to/my/file.svo")
    # Open the camera
    err = zed.open(init_params)
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        exit(-1)
    # Loop between frame 0 and 50
    left_image = sl.Mat()
    while zed.get_svo_position() < zed.get_svo_number_of_frames() - 1:
        print("Current frame: ", zed.get_svo_position())
        # Loop if we reached frame 50
        if zed.get_svo_position() == 50:
            zed.set_svo_position(0)
        # Grab an image
        if zed.grab() == sl.ERROR_CODE.SUCCESS:
            zed.retrieve_image(left_image, sl.VIEW.LEFT) # Get the rectified left image
            # Use the image in your application
    # Close the Camera
    zed.close()
    return 0

if __name__ == "__main__":
    main()

◆ get_svo_position()

def get_svo_position (   self)

Returns the current playback position in the SVO file.

The position corresponds to the number of frames already read from the SVO file, starting from 0 to n.    Each grab() call increases this value by one (except when using InitParameters.svo_real_time_mode).

Returns
The current frame position in the SVO file. Returns -1 if the SDK is not reading an SVO.

 

Note
Works only if the camera is open in SVO playback mode.

See set_svo_position() for an example.

◆ get_svo_number_of_frames()

def get_svo_number_of_frames (   self)

Returns the number of frames in the SVO file.

Returns
The total number of frames in the SVO file (-1 if the SDK is not reading an SVO).

 

Note
Works only if the camera is open in SVO reading mode.
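
The frame count can be combined with get_svo_position() to report playback progress. A minimal pure-Python sketch, assuming you already hold the two integer values returned by these methods (the helper name is illustrative):

```python
def svo_progress(position, total_frames):
    """Return playback progress as a percentage, or None outside SVO mode."""
    if total_frames <= 0:  # get_svo_number_of_frames() returns -1 outside SVO playback
        return None
    return 100.0 * position / total_frames

# e.g. frame 50 of a 200-frame SVO file
print(svo_progress(50, 200))  # → 25.0
```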

◆ set_camera_settings()

def set_camera_settings (   self,
VIDEO_SETTINGS  settings,
  value = -1 
)

Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.)

Possible values (range) of each setting are available here.

 

Parameters
settings: the setting to be set  
value: the value to set, default : auto mode
# Set the gain to 50
 zed.set_camera_settings(sl.VIDEO_SETTINGS.GAIN, 50)

 

Warning
Setting VIDEO_SETTINGS.EXPOSURE or VIDEO_SETTINGS.GAIN to default will automatically set the other one to default as well.

 

Note
Works only if the camera is opened in live mode.

◆ set_camera_settings_roi()

def set_camera_settings_roi (   self,
VIDEO_SETTINGS  settings,
Rect  roi,
  eye = SIDE.BOTH,
  reset = False 
)

Sets the ROI of the requested camera setting (AEC_AGC_ROI)

 

Parameters
settings: the setting to be set
roi: the requested ROI
eye: the requested side. Default: SIDE.BOTH
reset: cancel the manual ROI and reset it to the full image. Default: False
roi = sl.Rect(42, 56, 120, 15)
zed.set_camera_settings_roi(sl.VIDEO_SETTINGS.AEC_AGC_ROI, roi, sl.SIDE.BOTH)

 

Note
Works only if the camera is opened in live mode.
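
If you prefer to specify the exposure/gain region as fractions of the image rather than in pixels, a small helper can convert the values before building the sl.Rect. A minimal sketch; the conversion itself is plain Python, the helper name is illustrative, and the (x, y, width, height) ordering follows the sl.Rect example above:

```python
def roi_from_fractions(img_width, img_height, fx, fy, fw, fh):
    """Convert a fractional region (values in 0.0-1.0) to integer pixel (x, y, w, h)."""
    return (int(fx * img_width), int(fy * img_height),
            int(fw * img_width), int(fh * img_height))

# Center quarter of a 1280x720 image
x, y, w, h = roi_from_fractions(1280, 720, 0.375, 0.375, 0.25, 0.25)
print(x, y, w, h)  # → 480 270 320 180
# roi = sl.Rect(x, y, w, h)
# zed.set_camera_settings_roi(sl.VIDEO_SETTINGS.AEC_AGC_ROI, roi)
```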

◆ get_camera_settings()

def get_camera_settings (   self,
VIDEO_SETTINGS  setting 
)

Returns the current value of the requested camera setting.

(gain, brightness, hue, exposure, etc.)    Possible values (range) of each setting are available here.  

Parameters
setting: the requested setting.  
Returns
The current value for the corresponding setting. Returns -1 if an error occurred.

 

gain = zed.get_camera_settings(sl.VIDEO_SETTINGS.GAIN)
 print("Current gain value: ", gain)
 

 

Note
Works only if the camera is open in live mode. (Settings aren't exported in the SVO file format)

◆ get_camera_settings_roi()

def get_camera_settings_roi (   self,
VIDEO_SETTINGS  setting,
Rect  roi,
  eye = SIDE.BOTH 
)

Returns the currently used ROI for the requested camera setting (AEC_AGC_ROI).

Parameters
setting: the requested setting.
roi: the current ROI used
eye: the requested side. Default: SIDE.BOTH  
Returns
An ERROR_CODE

 

roi = sl.Rect()
err = zed.get_camera_settings_roi(sl.VIDEO_SETTINGS.AEC_AGC_ROI, roi, sl.SIDE.BOTH)
print("Current ROI for AEC_AGC: " + str(roi.x) + " " + str(roi.y) + " " + str(roi.width) + " " + str(roi.height))
 

 

Note
Works only if the camera is open in live mode. (Settings aren't exported in the SVO file format)

◆ get_current_fps()

def get_current_fps (   self)

 Returns the current framerate at which the grab() method is successfully called.

 The returned value is based on the difference of camera timestamps between two successful grab() calls.

 

Returns
The current SDK framerate

 

Warning
The returned framerate (number of images grabbed per second) can be lower than InitParameters.camera_fps if the grab() function runs slower than the image stream or is called too often.

 

current_fps = zed.get_current_fps()
 print("Current framerate: ", current_fps)
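
Since the returned value is based on the difference of camera timestamps between two successful grab() calls, the underlying computation can be sketched in plain Python. A minimal illustration, assuming two consecutive image timestamps in nanoseconds (the helper name is illustrative):

```python
def fps_from_timestamps(prev_ts_ns, curr_ts_ns):
    """Framerate implied by two consecutive image timestamps (in nanoseconds)."""
    dt = (curr_ts_ns - prev_ts_ns) / 1e9  # elapsed time in seconds
    return 1.0 / dt if dt > 0 else 0.0

# Frames 16.67 ms apart -> about 60 fps
print(round(fps_from_timestamps(0, 16_666_667)))  # → 60
```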

◆ get_timestamp()

def get_timestamp (   self,
TIME_REFERENCE  time_reference 
)

Returns the timestamp in the requested TIME_REFERENCE.

 - When requesting the TIME_REFERENCE.IMAGE timestamp, the UNIX nanosecond timestamp of the latest grabbed image will be returned. This value corresponds to the time at which the entire image was available in the PC memory. As such, it ignores the communication time, which corresponds to 2 or 3 frame-times depending on the fps (ex: 33.3ms to 50ms at 60fps).

 - When requesting the TIME_REFERENCE.CURRENT timestamp, the current UNIX nanosecond timestamp is returned.

This function can also be used when playing back an SVO file.

 

Parameters
time_reference: The selected TIME_REFERENCE.
Returns
The Timestamp in nanosecond. 0 if not available (SVO file without compression).

 

Note
As this function returns UNIX timestamps, the reference it uses is common across several Camera instances.

 

last_image_timestamp = zed.get_timestamp(sl.TIME_REFERENCE.IMAGE)
current_timestamp = zed.get_timestamp(sl.TIME_REFERENCE.CURRENT)
print("Latest image timestamp: ", last_image_timestamp.get_nanoseconds(), "ns from Epoch.")
 print("Current timestamp: ", current_timestamp.get_nanoseconds(), "ns from Epoch.")
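
Because the returned timestamps are UNIX epoch values in nanoseconds, converting one to a human-readable date needs only the standard library. A small sketch, assuming ts_ns holds the integer obtained from get_nanoseconds() (the helper name is illustrative):

```python
from datetime import datetime, timezone

def ns_to_datetime(ts_ns):
    """Convert a UNIX nanosecond timestamp to a timezone-aware datetime."""
    return datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc)

ts_ns = 1_600_000_000_000_000_000
print(ns_to_datetime(ts_ns))  # → 2020-09-13 12:26:40+00:00
```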
 

 

◆ get_frame_dropped_count()

def get_frame_dropped_count (   self)

 Returns the number of frames dropped since grab() was called for the first time.

A dropped frame corresponds to a frame that never made it to the grab function. This can happen if two frames were extracted from the camera while grab() was called. The older frame is dropped so as to always use the latest one (which minimizes latency).

Returns
The number of frames dropped since the first grab() call.
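
A common use is to express the drop count as a percentage of the frames the camera produced, to detect when grab() is running too slowly. A minimal sketch; the two counts would come from get_frame_dropped_count() and your own counter of successful grab() calls (names are illustrative):

```python
def drop_rate(dropped, grabbed):
    """Fraction of produced frames that never reached grab(), in percent."""
    produced = dropped + grabbed
    if produced == 0:
        return 0.0
    return 100.0 * dropped / produced

# 5 dropped frames over 100 produced -> 5% drop rate
print(drop_rate(5, 95))  # → 5.0
```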

◆ get_camera_information()

def get_camera_information (   self,
  resizer = Resolution(0, 0) 
)

Returns the calibration parameters, serial number and other information about the camera being used.

As calibration parameters depend on the image resolution, you can provide a custom resolution as a parameter to get scaled information.  When reading an SVO file, the parameters will correspond to the camera used for recording.    

Parameters
resizer: You can specify a size different from default image size to get the scaled camera information. default = (0,0) meaning original image size.  
Returns
CameraInformation containing the calibration parameters of the ZED, as well as serial number and firmware version.

 

Note
The returned parameters might vary between two executions due to the self-calibration run in the open() method.

◆ get_runtime_parameters()

def get_runtime_parameters (   self)

 Returns the runtime parameters used.

Corresponds to the structure sent when the grab() function was called.

Returns
RuntimeParameters containing the parameters that define the behavior of grab()

◆ get_init_parameters()

def get_init_parameters (   self)

 Returns the init parameters used.

Corresponds to the structure sent when the open() function was called

 

Returns
InitParameters containing the parameters used to initialize the Camera object.

◆ get_positional_tracking_parameters()

def get_positional_tracking_parameters (   self)

Returns the positional tracking parameters used.

Corresponds to the structure sent when the Camera.enable_positional_tracking() function was called.

 

Returns
PositionalTrackingParameters containing the parameters used for positional tracking initialization.

◆ get_spatial_mapping_parameters()

def get_spatial_mapping_parameters (   self)

Returns the spatial mapping parameters used.

Corresponds to the structure sent when the Camera.enable_spatial_mapping() function was called.

 

Returns
SpatialMappingParameters containing the parameters used for spatial mapping initialization.

◆ get_object_detection_parameters()

def get_object_detection_parameters (   self)

Returns the object detection parameters used.

Corresponds to the structure sent when the Camera.enable_object_detection() function was called

 

Returns
ObjectDetectionParameters containing the parameters used for object detection initialization.

◆ get_streaming_parameters()

def get_streaming_parameters (   self)

 Returns the streaming parameters used.

Corresponds to the structure sent when the Camera.enable_streaming() function was called.    

Returns
StreamingParameters containing the parameters used for streaming initialization.

◆ enable_positional_tracking()

def enable_positional_tracking (   self,
  py_tracking = PositionalTrackingParameters() 
)

 Initializes and starts the positional tracking processes.

 This function allows you to enable the position estimation of the SDK. It only has to be called once in the camera's lifetime.

When enabled, the position will be updated at each grab call. Tracking-specific parameters can be set by providing PositionalTrackingParameters to this function.

 

Parameters
py_tracking: structure containing all the PositionalTrackingParameters . default : a preset of PositionalTrackingParameters.  
Returns
ERROR_CODE.FAILURE if the area_file_path file wasn't found, ERROR_CODE.SUCCESS otherwise.

 

Warning
The positional tracking feature benefits from a high framerate. We found HD720@60fps to be the best compromise between image quality and framerate.

 

import pyzed.sl as sl

def main():
    # --- Initialize a Camera object and open the ZED
    # Create a ZED camera object
    zed = sl.Camera()
    # Set configuration parameters
    init_params = sl.InitParameters()
    init_params.camera_resolution = sl.RESOLUTION.HD720 # Use HD720 video mode
    init_params.camera_fps = 60 # Set fps at 60
    # Open the camera
    err = zed.open(init_params)
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        exit(-1)
    # Set tracking parameters
    track_params = sl.PositionalTrackingParameters()
    track_params.enable_area_memory = True
    # Enable positional tracking
    err = zed.enable_positional_tracking(track_params)
    if err != sl.ERROR_CODE.SUCCESS:
        print("Tracking error: ", repr(err))
        exit(-1)
    # --- Main loop
    while True:
        if zed.grab() == sl.ERROR_CODE.SUCCESS: # Grab an image and compute the tracking
            camera_pose = sl.Pose()
            zed.get_position(camera_pose, sl.REFERENCE_FRAME.WORLD)
            print("Camera position: X=", camera_pose.get_translation()[0], " Y=", camera_pose.get_translation()[1], " Z=", camera_pose.get_translation()[2])
    # --- Close the Camera
    zed.close()
    return 0

if __name__ == "__main__":
    main()
 

◆ update_self_calibration()

def update_self_calibration (   self)

Performs a new self calibration process.

In some cases, due to temperature changes or strong vibrations, the stereo calibration becomes less accurate. Use this function to update the self-calibration data and get more reliable depth values.

Note
The self calibration will occur at the next grab() call.
This function is similar to the previous resetSelfCalibration() used in 2.X SDK versions.
Warning
New values will then be available in get_camera_information(); be sure to retrieve them to keep a consistent 2D <-> 3D conversion.

◆ get_sensors_data()

def get_sensors_data (   self,
SensorsData  py_sensors_data,
  time_reference = TIME_REFERENCE.CURRENT 
)

Retrieves the Sensors (IMU,magnetometer,barometer) Data at a specific time reference.

 Calling get_sensors_data with TIME_REFERENCE.CURRENT gives you the latest sensors data received. Getting all the data requires calling this function at 800Hz in a dedicated thread. Calling get_sensors_data with TIME_REFERENCE.IMAGE gives you the sensors data at the time of the latest image grabbed.

The SensorsData object contains the IMUData structure that was used in ZED SDK v2.X. IMU data is provided in two ways:

  • Time-fused pose estimation, accessed using data.imu.pose
  • Raw values from the IMU sensor:
    • data.imu.angular_velocity, corresponding to the gyroscope
    • data.imu.linear_acceleration, corresponding to the accelerometer

Both the gyroscope and the accelerometer are synchronized. The delta time between the previous and current values can be calculated using data.imu.timestamp.

     

    Note
    The IMU quaternion (fused data) is given in the specified COORDINATE_SYSTEM of InitParameters.

     

    Warning
    In SVO reading mode, TIME_REFERENCE.CURRENT is currently not available (it yields ERROR_CODE.INVALID_FUNCTION_PARAMETERS). Only the quaternion data and the barometer data (if available) at TIME_REFERENCE.IMAGE are available. Other values will be set to 0.
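The delta-time computation described above, plus a polling sketch. The accessor names (get_imu_data(), get_pose(), get_angular_velocity(), get_linear_acceleration()) follow the pyzed 3.x API and should be checked against your SDK version:

```python
def imu_delta_seconds(prev_ts_ns, cur_ts_ns):
    # Delta time between two IMU samples, given timestamps in nanoseconds
    return (cur_ts_ns - prev_ts_ns) * 1e-9

def poll_imu(zed):
    """Print the latest IMU sample (sketch; `zed` is an opened sl.Camera)."""
    import pyzed.sl as sl  # imported here so the helper stays self-contained
    sensors = sl.SensorsData()
    if zed.get_sensors_data(sensors, sl.TIME_REFERENCE.CURRENT) == sl.ERROR_CODE.SUCCESS:
        imu = sensors.get_imu_data()
        print("Fused orientation:", imu.get_pose().get_orientation().get())
        print("Angular velocity:", imu.get_angular_velocity())
        print("Linear acceleration:", imu.get_linear_acceleration())
```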

◆ set_imu_prior()

def set_imu_prior (   self,
Transform  transform 
)

Set an optional IMU orientation hint that will be used to assist the tracking during the next grab().

 This function can be used to assist the positional tracking rotation while using a ZED Mini.    

Note
This function is only effective if a ZED Mini (ZED-M) is used.   It needs to be called before the grab() function.  
Parameters
transform: Transform to be ingested into IMU fusion. Note that only the rotation is used.  
Returns
ERROR_CODE.SUCCESS if the transform has been passed, ERROR_CODE.INVALID_FUNCTION_CALL otherwise (e.g. when used with a ZED camera which doesn't have IMU data).
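A usage sketch (the helper name hint_tracking_orientation is hypothetical; the transform is assumed to come from an external source of orientation):

```python
def hint_tracking_orientation(zed, known_orientation):
    """Pass a known orientation to the IMU fusion before the next grab().

    Sketch: `known_orientation` is assumed to be an sl.Transform whose
    rotation part encodes the known camera orientation (only the rotation
    is used by set_imu_prior).
    """
    import pyzed.sl as sl  # imported here so the helper stays self-contained
    err = zed.set_imu_prior(known_orientation)
    return err == sl.ERROR_CODE.SUCCESS  # False e.g. on a camera without IMU
```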

◆ get_position()

def get_position (   self,
Pose  py_pose,
  reference_frame = REFERENCE_FRAME.WORLD 
)

 Retrieves the estimated position and orientation of the camera in the specified reference frame.

 Using REFERENCE_FRAME.WORLD, the returned pose relates to the initial position of the camera. (PositionalTrackingParameters.initial_world_transform ) Using REFERENCE_FRAME.CAMERA, the returned pose relates to the previous position of the camera.

 If the tracking has been initialized with PositionalTrackingParameters.enable_area_memory set to true (default), this function can return POSITIONAL_TRACKING_STATE.SEARCHING. This means that the tracking lost its link to the initial referential and is currently trying to relocate the camera. However, it will keep providing position estimations.

 

Parameters
camera_pose[out]: the pose containing the position of the camera and other information (timestamp, confidence)  
reference_frame: defines the reference frame in which you want the pose to be expressed. Default: REFERENCE_FRAME.WORLD.  
Returns
The current state of the tracking process.

 
Extract Rotation Matrix : camera_pose.get_rotation()
Extract Translation Vector : camera_pose.get_translation()  
Extract Orientation / quaternion : camera_pose.get_orientation()

 

while True :
 if zed.grab() == sl.ERROR_CODE.SUCCESS : # Grab an image and computes the tracking
  camera_pose = sl.Pose()
  zed.get_position(camera_pose, sl.REFERENCE_FRAME.WORLD)
  print("Camera position: X=", camera_pose.get_translation()[0], " Y=", camera_pose.get_translation()[1], " Z=", camera_pose.get_translation()[2])
  print("Camera Euler rotation: X=", camera_pose.get_euler_angles()[0], " Y=", camera_pose.get_euler_angles()[1], " Z=", camera_pose.get_euler_angles()[2])
  print("Camera Rodrigues rotation: X=", camera_pose.get_rotation_vector()[0], " Y=", camera_pose.get_rotation_vector()[1], " Z=", camera_pose.get_rotation_vector()[2])
  print("Camera quaternion orientation: X=", camera_pose.get_orientation()[0], " Y=", camera_pose.get_orientation()[1], " Z=", camera_pose.get_orientation()[2], " W=", camera_pose.get_orientation()[3])
 

◆ get_area_export_state()

def get_area_export_state (   self)

Returns the state of the spatial memory export process.

 As Camera.save_area_map() only starts the export, this function allows you to know when the export has finished or whether it failed.

Returns
The current state of the spatial memory export process.

◆ save_area_map()

def save_area_map (   self,
  area_file_path = "" 
)

Saves the current area learning file.

The file will contain spatial memory data generated by the tracking.

If the tracking has been initialized with PositionalTrackingParameters.enable_area_memory to true (default), the function allows you to export the spatial memory. Reloading the exported file in a future session with PositionalTrackingParameters.area_file_path initializes the tracking within the same referential. This function is asynchronous, and only triggers the file generation. You can use get_area_export_state() to get the export state. The positional tracking keeps running while exporting.

 

Parameters
area_file_path: saves the spatial memory database in an '.area' file.  
Returns
ERROR_CODE.FAILURE if the area_file_path file wasn't found, ERROR_CODE.SUCCESS otherwise.

See get_area_export_state()

 

Note
Please note that this function will also flush the area database that was built / loaded.    
Warning
If the camera wasn't moved during the tracking session, or not enough, the spatial memory won't be usable and the file won't be exported; get_area_export_state() will return AREA_EXPORTING_STATE.NOT_STARTED. A few meters (~3m) of translation or a full rotation should be enough to get usable spatial memory. However, as the map is meant for relocation purposes, visiting a significant portion of the environment is recommended before exporting.

 

while True :
 if zed.grab() == sl.ERROR_CODE.SUCCESS : # Grab an image and computes the tracking
  camera_pose = sl.Pose()
  zed.get_position(camera_pose, sl.REFERENCE_FRAME.WORLD)
  # Export the spatial memory for future sessions
  zed.save_area_map("office.area") # The actual file will be created asynchronously.
  print(repr(zed.get_area_export_state()))
# Close the camera
zed.close()
 

◆ disable_positional_tracking()

def disable_positional_tracking (   self,
  area_file_path = "" 
)

Disables the positional tracking.

 The positional tracking is immediately stopped. If a file path is given, save_area_map() will be called asynchronously. See get_area_export_state() to get the exportation state.  If the tracking has been enabled, this function will automatically be called by close() .

 

Parameters
area_file_path: if set, saves the spatial memory into an '.area' file. default : (empty)  
area_file_path is the name and path of the database, e.g. "path/to/file/myArea1.area".

 

Note
The '.area' database depends on the depth map SENSING_MODE chosen during the recording. The same mode must be used to reload the database.

◆ reset_positional_tracking()

def reset_positional_tracking (   self,
Transform  path 
)

Resets the tracking, and re-initializes the position with the given transformation matrix.

Parameters
path: Position of the camera in the world frame when the function is called. By default, it is set to identity.
Returns
ERROR_CODE.SUCCESS if the tracking has been reset, ERROR_CODE.FAILURE otherwise.

 

Note
Please note that this function will also flush the accumulated or loaded spatial memory.

◆ enable_spatial_mapping()

def enable_spatial_mapping (   self,
  py_spatial = SpatialMappingParameters() 
)

Initializes and starts the spatial mapping processes.

 The spatial mapping will create a geometric representation of the scene based on both tracking data and 3D point clouds.  The resulting output can be a Mesh or a FusedPointCloud. It can be obtained by calling extract_whole_spatial_map() or retrieve_spatial_map_async(). Note that retrieve_spatial_map_async() should be called after request_spatial_map_async().

 

Parameters
py_spatial: the structure containing all the specific parameters for the spatial mapping. Default: a balanced parameter preset between geometric fidelity and output file size. For more information, see the SpatialMappingParameters documentation.
Returns
ERROR_CODE.SUCCESS if everything went fine, ERROR_CODE.FAILURE otherwise.
Warning
The tracking (enable_positional_tracking() ) and the depth (RuntimeParameters.enable_depth ) need to be enabled to use the spatial mapping.
The performance greatly depends on the spatial_mapping_parameters.
Lower SpatialMappingParameters.range_meter and SpatialMappingParameters.resolution_meter for higher performance.  If the mapping framerate is too slow in live mode, consider using an SVO file, or choose a lower mesh resolution.

 

Note
This feature uses host memory (RAM) to store the 3D map. The maximum amount of memory allowed can be tweaked using the SpatialMappingParameters.  Exceeding the maximum memory allowed immediately stops the mapping.

 

import pyzed.sl as sl

def main() :
 # Create a ZED camera object
 zed = sl.Camera()
 # Set initial parameters
 init_params = sl.InitParameters()
 init_params.camera_resolution = sl.RESOLUTION.HD720 # Use HD720 video mode (default fps: 60)
 init_params.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP # Use a right-handed Y-up coordinate system (the OpenGL one)
 init_params.coordinate_units = sl.UNIT.METER # Set units in meters
 # Open the camera
 err = zed.open(init_params)
 if err != sl.ERROR_CODE.SUCCESS :
  exit(-1)
 # Positional tracking needs to be enabled before using spatial mapping
 tracking_parameters = sl.PositionalTrackingParameters()
 err = zed.enable_positional_tracking(tracking_parameters)
 if err != sl.ERROR_CODE.SUCCESS :
  exit(-1)
 # Enable spatial mapping
 mapping_parameters = sl.SpatialMappingParameters()
 err = zed.enable_spatial_mapping(mapping_parameters)
 if err != sl.ERROR_CODE.SUCCESS :
  exit(-1)
 # Grab data during 500 frames
 i = 0
 mesh = sl.Mesh() # Create a mesh object
 while i < 500 :
  # For each new grab, mesh data is updated
  if zed.grab() == sl.ERROR_CODE.SUCCESS :
   # In the background, spatial mapping will use newly retrieved images, depth and pose to update the mesh
   mapping_state = zed.get_spatial_mapping_state()
   # Print spatial mapping state
   print("Images captured: ", i, " / 500 || Spatial mapping state: ", repr(mapping_state))
   i = i + 1
 # Extract, filter and save the mesh in an obj file
 print("Extracting Mesh ...")
 zed.extract_whole_spatial_map(mesh) # Extract the whole mesh
 print("Filtering Mesh ...")
 mesh.filter(sl.MESH_FILTER.LOW) # Filter the mesh (remove unnecessary vertices and faces)
 print("Saving Mesh in mesh.obj ...")
 mesh.save("mesh.obj") # Save the mesh in an obj file
 # Disable tracking and mapping and close the camera
 zed.disable_spatial_mapping()
 zed.disable_positional_tracking()
 zed.close()
 return 0

if __name__ == "__main__" :
 main()
 

◆ pause_spatial_mapping()

def pause_spatial_mapping (   self,
bool  status 
)

Pauses or resumes the spatial mapping processes.

   As spatial mapping runs asynchronously, using this function can pause its computation to free some processing power, and resume it again later.  For example, it can be used to avoid mapping a specific area or to pause the mapping when the camera is static.  

Parameters
status: if true, the integration is paused. If false, the spatial mapping is resumed.
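A short sketch of the static-camera use case mentioned above (the helper name and the motion test are hypothetical):

```python
def integrate_only_when_moving(zed, camera_is_moving):
    """Pause mesh integration while the camera is static (sketch).

    `camera_is_moving` is assumed to come from your own motion test,
    e.g. thresholding the pose delta returned by get_position().
    """
    # status=True pauses the integration, status=False resumes it
    zed.pause_spatial_mapping(not camera_is_moving)
```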

◆ get_spatial_mapping_state()

def get_spatial_mapping_state (   self)

Returns the current spatial mapping state.

 As the spatial mapping runs asynchronously, this function allows you to get reported errors or status info.  

Returns
The current state of the spatial mapping process    See also SPATIAL_MAPPING_STATE

◆ request_spatial_map_async()

def request_spatial_map_async (   self)

 Starts the spatial map generation process in a non-blocking thread from the spatial mapping process.

   The spatial map generation can take a long time depending on the mapping resolution and covered area. This function will trigger the generation of a mesh without blocking the program.  You can get info about the current generation using get_spatial_map_request_status_async(), and retrieve the mesh using retrieve_spatial_map_async().

 

Note
Only one mesh can be generated at a time. If the previous mesh generation is not over, new calls of the function will be ignored.  

 

cam.request_spatial_map_async()
while cam.get_spatial_map_request_status_async() == sl.ERROR_CODE.FAILURE :
 # Mesh is generating
 pass
mesh = sl.Mesh()
if cam.get_spatial_map_request_status_async() == sl.ERROR_CODE.SUCCESS :
 cam.retrieve_spatial_map_async(mesh)
 nb_triangles = mesh.get_number_of_triangles()
 print("Number of triangles in the mesh: ", nb_triangles)
 

◆ get_spatial_map_request_status_async()

def get_spatial_map_request_status_async (   self)

 Returns the spatial map generation status.

This status allows you to know whether the mesh can be retrieved by calling retrieve_spatial_map_async().

Returns
ERROR_CODE.SUCCESS if the mesh is ready and not yet retrieved, otherwise ERROR_CODE.FAILURE

◆ retrieve_spatial_map_async()

def retrieve_spatial_map_async (   self,
  py_mesh 
)

 Retrieves the current generated spatial map.

   After calling request_spatial_map_async() , this function allows you to retrieve the generated mesh or fused point cloud. The Mesh or FusedPointCloud will only be available when get_spatial_map_request_status_async() returns ERROR_CODE.SUCCESS

 

Parameters
py_mesh: [out] The Mesh or FusedPointCloud to be filled with the generated spatial map.  
Returns
ERROR_CODE.SUCCESS if the mesh is retrieved, otherwise ERROR_CODE.FAILURE

 

Note
This function only updates the necessary chunks and adds the new ones in order to improve update speed.  
Warning
You should not modify the mesh / fused point cloud between two calls of this function; otherwise it can lead to a corrupted mesh / fused point cloud.

 See request_spatial_map_async() for an example.

◆ extract_whole_spatial_map()

def extract_whole_spatial_map (   self,
  py_mesh 
)

Extracts the current spatial map from the spatial mapping process.

 If the object to be filled already contains a previous version of the mesh / fused point cloud, only changes will be updated, optimizing performance.

 

Parameters
py_mesh: [out] The Mesh or FusedPointCloud to be filled with the generated spatial map.

 

Returns
ERROR_CODE.SUCCESS if the mesh is filled and available, otherwise ERROR_CODE.FAILURE

 

Warning
This is a blocking function. You should either call it in a thread or at the end of the mapping process. The extraction can be long; calling this function in the grab loop will block the depth and tracking computation, giving bad results.

◆ find_plane_at_hit()

def find_plane_at_hit (   self,
  coord,
Plane  py_plane 
)

Checks the plane at the given left image coordinates.

   This function gives the 3D plane corresponding to a given pixel in the latest left image grabbed.  The pixel coordinates are expected to be contained between 0 and get_camera_information().camera_resolution.width-1, and between 0 and get_camera_information().camera_resolution.height-1.

 

Parameters
coord: [in] The image coordinate. The coordinate must be taken from the full-size image  
plane: [out] The detected plane if the function succeeded  
Returns
ERROR_CODE.SUCCESS if a plane is found otherwise ERROR_CODE.PLANE_NOT_FOUND

 

Note
The reference frame is defined by the RuntimeParameters.measure3D_reference_frame given to the grab() function.
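A sketch querying the plane under the central pixel (the helper name is hypothetical; the coordinate is passed as an (x, y) pair and stays inside the full-size image, as required above):

```python
def plane_at_image_center(zed):
    """Query the 3D plane under the central pixel of the left image (sketch)."""
    import pyzed.sl as sl  # imported here so the helper stays self-contained
    res = zed.get_camera_information().camera_resolution
    coord = [res.width // 2, res.height // 2]  # inside the full-size image
    plane = sl.Plane()
    if zed.find_plane_at_hit(coord, plane) == sl.ERROR_CODE.SUCCESS:
        return plane.get_normal()  # expressed in measure3D_reference_frame
    return None  # ERROR_CODE.PLANE_NOT_FOUND
```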

◆ find_floor_plane()

def find_floor_plane (   self,
Plane  py_plane,
Transform  resetTrackingFloorFrame,
  floor_height_prior = float('nan'),
  world_orientation_prior = Rotation(Matrix3f().zeros()),
  floor_height_prior_tolerance = float('nan') 
)

 Detects the floor plane of the scene.

   This function analyses the latest image and depth to estimate the floor plane of the scene.    It expects the floor plane to be visible and bigger than other candidate planes, like a table.    

Parameters
py_plane: [out] The detected floor plane if the function succeeded  
resetTrackingFloorFrame: [out] The transform to align the tracking with the floor plane. The initial position will then be at ground height, with the axes aligned with gravity. The positional tracking needs to be reset/enabled with this transform.  
floor_height_prior: [in] Prior set to locate the floor plane depending on the known camera distance to the ground, expressed in the same unit as the ZED. If the prior is too far from the detected floor plane, the function will return ERROR_CODE.PLANE_NOT_FOUND  
world_orientation_prior: [in] Prior set to locate the floor plane depending on the known camera orientation to the ground. If the prior is too far from the detected floor plane, the function will return ERROR_CODE.PLANE_NOT_FOUND  
floor_height_prior_tolerance: [in] Prior height tolerance, absolute value.  
Returns
ERROR_CODE.SUCCESS if the floor plane is found and matches the priors (if defined), otherwise ERROR_CODE.PLANE_NOT_FOUND

 

Note
The reference frame is defined by the RuntimeParameters (measure3D_reference_frame) given to the grab() function. The length unit is defined by InitParameters (coordinate_units). With the ZED, the assumption is made that the floor plane is the dominant plane in the scene. The ZED Mini uses the gravity as prior.
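A sketch combining find_floor_plane() with reset_positional_tracking(), as suggested by the resetTrackingFloorFrame parameter (the helper name is hypothetical):

```python
def realign_tracking_with_floor(zed):
    """Detect the floor plane and reset the tracking onto it (sketch)."""
    import pyzed.sl as sl  # imported here so the helper stays self-contained
    floor_plane = sl.Plane()
    reset_transform = sl.Transform()
    if zed.find_floor_plane(floor_plane, reset_transform) == sl.ERROR_CODE.SUCCESS:
        # Restart the tracking at ground height, axes aligned with gravity
        zed.reset_positional_tracking(reset_transform)
        return True
    return False  # ERROR_CODE.PLANE_NOT_FOUND
```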

◆ disable_spatial_mapping()

def disable_spatial_mapping (   self)

Disables the spatial mapping process.

 The spatial mapping is immediately stopped. If the mapping has been enabled, this function will automatically be called by close() .  

Note
This function frees the memory allocated for the spatial mapping; consequently, the mesh cannot be retrieved after this call.

◆ enable_streaming()

def enable_streaming (   self,
  streaming_parameters = StreamingParameters() 
)

Creates a streaming pipeline for images.

 

Parameters
streaming_parameters: the structure containing all the specific parameters for the streaming.

 

import pyzed.sl as sl

def main() :
 # Create a ZED camera object
 zed = sl.Camera()
 # Set initial parameters
 init_params = sl.InitParameters()
 init_params.camera_resolution = sl.RESOLUTION.HD720 # Use HD720 video mode (default fps: 60)
 # Open the camera
 err = zed.open(init_params)
 if err != sl.ERROR_CODE.SUCCESS :
  print(repr(err))
  exit(-1)
 # Enable streaming
 stream_params = sl.StreamingParameters()
 stream_params.port = 30000
 stream_params.bitrate = 8000
 err = zed.enable_streaming(stream_params)
 if err != sl.ERROR_CODE.SUCCESS :
  print(repr(err))
  exit(-1)
 # Grab data during 500 frames
 i = 0
 while i < 500 :
  if zed.grab() == sl.ERROR_CODE.SUCCESS :
   i = i + 1
 zed.disable_streaming()
 zed.close()
 return 0

if __name__ == "__main__" :
 main()
 

 

Returns
an ERROR_CODE that defines if the stream was started.
Possible Error Code :  
- ERROR_CODE.SUCCESS if the streaming was successfully started  
- ERROR_CODE.INVALID_FUNCTION_CALL if open() was not successfully called before.  
- ERROR_CODE.FAILURE if streaming RTSP protocol was not able to start.
- ERROR_CODE.NO_GPU_COMPATIBLE if streaming codec is not supported (in this case, use H264 codec).

◆ disable_streaming()

def disable_streaming (   self)

 Disables the streaming initiated by enable_streaming()  

Note
This function will automatically be called by close() if enable_streaming() was called.

 See enable_streaming() for an example.

◆ enable_recording()

def enable_recording (   self,
RecordingParameters  record 
)

Creates an SVO file to be filled by record().

   SVO files are custom video files containing the un-rectified images from the camera along with some meta-data like timestamps or IMU orientation (if applicable).  They can be used to simulate a live ZED and test a sequence with various SDK parameters.  Depending on the application, various compression modes are available. See SVO_COMPRESSION_MODE.    

Parameters
record: RecordingParameters such as filename and compression mode    
Returns
an ERROR_CODE that defines if SVO file was successfully created and can be filled with images.    
Warning
This function can be called multiple times during ZED lifetime, but if video_filename already exists, the file will be erased.

   

import pyzed.sl as sl

def main() :
 # Create a ZED camera object
 zed = sl.Camera()
 # Set initial parameters
 init_params = sl.InitParameters()
 init_params.camera_resolution = sl.RESOLUTION.HD720 # Use HD720 video mode (default fps: 60)
 init_params.coordinate_units = sl.UNIT.METER # Set units in meters
 # Open the camera
 err = zed.open(init_params)
 if err != sl.ERROR_CODE.SUCCESS :
  print(repr(err))
  exit(-1)
 # Enable video recording
 record_params = sl.RecordingParameters("myVideoFile.svo", sl.SVO_COMPRESSION_MODE.H264)
 err = zed.enable_recording(record_params)
 if err != sl.ERROR_CODE.SUCCESS :
  print(repr(err))
  exit(-1)
 # Grab data during 500 frames
 i = 0
 while i < 500 :
  # Grab a new frame
  if zed.grab() == sl.ERROR_CODE.SUCCESS :
   # The grabbed frame is recorded in the video file
   i = i + 1
 zed.disable_recording()
 print("Video has been saved ...")
 zed.close()
 return 0

if __name__ == "__main__" :
 main()
 

◆ disable_recording()

def disable_recording (   self)

Disables the recording initiated by enable_recording() and closes the generated file.

 

Note
This function will automatically be called by close() if enable_recording() was called.

 See enable_recording() for an example.

◆ get_recording_status()

def get_recording_status (   self)

 Returns the current recording status.

Returns
The recording state structure. For more details, see RecordingStatus.
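A sketch using the status field of RecordingStatus to count the frames actually written to the SVO file (the helper name is hypothetical; enable_recording() is assumed to have been called already):

```python
def count_written_frames(zed, n_frames):
    """Grab n_frames and count how many were written to the SVO (sketch)."""
    import pyzed.sl as sl  # imported here so the helper stays self-contained
    written = 0
    for _ in range(n_frames):
        if zed.grab() == sl.ERROR_CODE.SUCCESS:
            status = zed.get_recording_status()
            if status.status:  # True if the last grabbed frame was written
                written += 1
    return written
```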

◆ pause_recording()

def pause_recording (   self,
  value = True 
)

Pauses or resumes the recording.

Parameters
value: if True, the recording is paused. If False, the recording is resumed.

◆ get_recording_parameters()

def get_recording_parameters (   self)

Returns the recording parameters used.

Corresponds to the structure sent when the enable_recording() function was called

 

Returns
RecordingParameters containing the parameters used for the recording initialization.

◆ enable_object_detection()

def enable_object_detection (   self,
  object_detection_parameters = ObjectDetectionParameters() 
)

Initializes and starts the object detection module.

 The object detection module will detect and track objects, people or animals in range of the camera, the full list of detectable objects is available in OBJECT_CLASS.

 Detected objects can be retrieved using the retrieve_objects() function.

 As detecting and tracking the objects is CPU and GPU-intensive, the module can be used synchronously or asynchronously using ObjectDetectionParameters.image_sync .  - Synchronous: the retrieve_objects() function will be blocking during the detection.  - Asynchronous: the detection is running in the background, and retrieve_objects() will immediately return the last objects detected.    

Parameters
object_detection_parameters: Structure containing all specific parameters for object detection.

 For more information, see the ObjectDetectionParameters documentation  

Returns
 - ERROR_CODE.SUCCESS if everything went fine.  
 - ERROR_CODE.OBJECT_DETECTION_NOT_AVAILABLE if the AI model is missing or corrupted. In this case, the SDK needs to be reinstalled.  
 - ERROR_CODE.OBJECT_DETECTION_MODULE_NOT_COMPATIBLE_WITH_CAMERA if the camera used does not have an IMU (original ZED camera). The IMU provides the gravity vector that helps in the 3D box localization; therefore the object detection module is only available for the ZED-M and ZED2 camera models.  
 - ERROR_CODE.SENSORS_NOT_DETECTED if the camera model is correct (ZED-M or ZED2) but the IMU is missing. This probably happens because InitParameters.sensors_required was set to true.  
 - ERROR_CODE.INVALID_FUNCTION_CALL if one of the ObjectDetectionParameters is not compatible with other module parameters (for example, the depth mode has been set to NONE).  
 - ERROR_CODE.FAILURE otherwise.

 

Note
This feature uses AI to locate objects and requires a powerful GPU. A GPU with at least 3GB of memory is recommended.

 

import pyzed.sl as sl

def main():
 # Create a ZED camera object
 zed = sl.Camera()
 # Open the camera
 err = zed.open()
 if err != sl.ERROR_CODE.SUCCESS :
  print(repr(err))
  exit(-1)
 # Set the object detection parameters
 object_detection_params = sl.ObjectDetectionParameters()
 object_detection_params.image_sync = True
 # Enable the object detection
 err = zed.enable_object_detection(object_detection_params)
 if err != sl.ERROR_CODE.SUCCESS :
  print(repr(err))
  exit(-1)
 # Grab an image and detect objects on it
 objects = sl.Objects()
 while True :
  if zed.grab() == sl.ERROR_CODE.SUCCESS :
   zed.retrieve_objects(objects)
   print(len(objects.object_list), " objects detected\n")
   # Use the objects in your application
 # Close the camera
 zed.disable_object_detection()
 zed.close()

if __name__ == "__main__":
 main()

◆ disable_object_detection()

def disable_object_detection (   self)

Disables the Object Detection process.

 The object detection module immediately stops and frees its memory allocations. If the object detection has been enabled, this function will automatically be called by close().

◆ pause_object_detection()

def pause_object_detection (   self,
bool  status 
)

Pauses or resumes the object detection processes.

If the object detection has been enabled with ObjectDetectionParameters.image_sync set to false (running asynchronously), this function will pause processing. While in pause, calling this function with status = false will resume the object detection. The retrieve_objects function will keep on returning the last objects detected while in pause.

Parameters
status: If true, object detection is paused. If false, object detection is resumed.

◆ retrieve_objects()

def retrieve_objects (   self,
Objects  py_objects,
  object_detection_parameters = ObjectDetectionRuntimeParameters() 
)

 Retrieves objects detected by the object detection module.

 This function returns the result of the object detection, whether the module is running synchronously or asynchronously.

 - Asynchronous: this function immediately returns the last objects detected. If the current detection isn't done, the objects from the last detection will be returned, and Objects.is_new will be set to false.  - Synchronous: this function executes detection and waits for it to finish before returning the detected objects.

It is recommended to keep the same Objects object as the input of all calls to this function. This will enable the identification and tracking of every object detected.    

Parameters
py_objects: [in,out] The detected objects will be saved into this object. If the object already contains data from a previous detection, it will be updated, keeping a unique ID for the same person.
object_detection_parameters: [in] Object detection runtime settings, can be changed at each detection. In async mode, the parameters update is applied on the next iteration.
Returns
ERROR_CODE.SUCCESS if everything went fine, ERROR_CODE.FAILURE otherwise
objects = sl.Objects()
while True :
 if zed.grab() == sl.ERROR_CODE.SUCCESS :
  zed.retrieve_objects(objects)
  object_list = objects.object_list
  for i in range(len(object_list)) :
   print(repr(object_list[i].label))
 

◆ get_objects_batch()

def get_objects_batch (   self,
list[ObjectsBatch]  trajectories 
)

Get a batch of detected objects.

Warning
This function needs to be called after retrieve_objects, otherwise trajectories will be empty. It is the retrieve_objects function that ingests the current/live objects into the batching queue.
Parameters
trajectories: list of ObjectsBatch that will be filled by the batching queue process. An empty list should be passed to the function
Returns
ERROR_CODE.SUCCESS if everything went fine, ERROR_CODE.INVALID_FUNCTION_CALL if batching module is not available (TensorRT!=7.1) or if object tracking was not enabled.
objects = sl.Objects() # Unique Objects to be updated after each grab
while True: # Main loop
 if zed.grab() == sl.ERROR_CODE.SUCCESS: # Grab an image from the camera
  zed.retrieve_objects(objects) # Call retrieve_objects so that objects are ingested in the batching system
  trajectories = [] # Create an empty list of trajectories
  zed.get_objects_batch(trajectories) # Get batch of objects
  print("Size of batch : {}".format(len(trajectories)))

◆ ingest_custom_box_objects()

def ingest_custom_box_objects (   self,
list[CustomBoxObjectData]  objects_in 
)

Feeds the 3D object tracking function with your own 2D bounding boxes from your own detection algorithm.

Parameters
objects_in: list of CustomBoxObjectData.
Returns
ERROR_CODE.SUCCESS if everything went fine
Note
The detection should be done on the current grabbed left image as the internal process will use all current available data to extract 3D information and perform object tracking.
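A sketch building the list from an external detector. The (x, y, w, h, class_id, score) tuple format and the helper names are hypothetical; the CustomBoxObjectData fields (bounding_box_2d, label, probability, is_grounded) follow the pyzed 3.x API, with bounding_box_2d expecting the four pixel corners of the box:

```python
def corners_from_xywh(x, y, w, h):
    # (x, y, width, height) -> 4 pixel corners, clockwise from top-left,
    # the layout expected by CustomBoxObjectData.bounding_box_2d
    return [[x, y], [x + w, y], [x + w, y + h], [x, y + h]]

def ingest_detections(zed, detections):
    """Feed boxes from an external 2D detector into the ZED tracker (sketch).

    `detections` is assumed to be a list of (x, y, w, h, class_id, score)
    tuples computed on the current left image.
    """
    import pyzed.sl as sl  # imported here so the helpers stay self-contained
    objects_in = []
    for (x, y, w, h, class_id, score) in detections:
        obj = sl.CustomBoxObjectData()
        obj.bounding_box_2d = corners_from_xywh(x, y, w, h)
        obj.label = class_id
        obj.probability = score
        obj.is_grounded = True  # the object is expected to lie on the floor
        objects_in.append(obj)
    return zed.ingest_custom_box_objects(objects_in)
```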

◆ get_device_list()

def get_device_list ( )
static

 Lists all the connected devices with their associated information.

 This function lists all the cameras available and provides their serial number, models and other information.  

Returns
The device properties for each connected camera
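A listing sketch (the DeviceProperties fields serial_number, camera_model and camera_state follow the pyzed 3.x API; the helper name is hypothetical):

```python
def list_connected_cameras():
    """Print the properties of every connected ZED camera (sketch)."""
    import pyzed.sl as sl  # imported here so the helper stays self-contained
    devices = sl.Camera.get_device_list()
    for dev in devices:
        print("SN:", dev.serial_number, "model:", dev.camera_model,
              "state:", dev.camera_state)
    return len(devices)
```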

◆ get_streaming_device_list()

def get_streaming_device_list ( )
static

 Lists all the streaming devices with their associated information.

   

Returns
The streaming properties for each connected camera

 

Warning
As this function returns an std::vector, it is only safe to use in Release mode (not Debug).  This is due to a known compatibility issue between release (the SDK) and debug (your app) implementations of std::vector.

◆ reboot()

def reboot (   int  sn,
  bool  fullReboot = True 
)
static

 Performs a hardware reset of the ZED 2.

 

Parameters
sn(int) : Serial number of the camera to reset, or 0 to reset the first camera detected.
fullReboot(bool) : If set to True, performs a full reboot (Sensors and Video modules). Default: True  
Returns
ERROR_CODE.SUCCESS if everything went fine, ERROR_CODE.CAMERA_NOT_DETECTED if no camera was detected, ERROR_CODE.FAILURE otherwise.

 

Note
This function only works for ZED2 and ZED2i cameras.    
Warning
This function will invalidate any sl.Camera object, since the device is rebooting.