ObjectDetectionSensorsParameters Struct Reference

Attributes

unsigned int instance_module_id = 0
 Id of the module instance. More...
 
std::set< SensorDeviceIdentifier > sensors_ids = {}
 List of sensor ids that will be used for this instance. By default empty, which means all available sensors in Sensors are used when the object detection instance is started. More...
 
sl::String fused_objects_group_name
 Specify which group this model belongs to. More...
 
bool enable_tracking = true
 Whether the object detection system includes object tracking capabilities across a sequence of images. More...
 
bool enable_segmentation = false
 Whether the object masks will be computed. More...
 
OBJECT_DETECTION_MODEL detection_model = OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_FAST
 sl::OBJECT_DETECTION_MODEL to use. More...
 
sl::String custom_onnx_file
 Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK. More...
 
sl::Resolution custom_onnx_dynamic_input_shape = sl::Resolution(512, 512)
 Input resolution for the YOLO-like ONNX file for custom object detection run in the ZED SDK. This resolution defines the input tensor size for dynamic-shape ONNX models only. The batch and channel dimensions are handled automatically; the model is assumed to take color images, like default YOLO models. More...
 
float max_range = -1.f
 Upper depth range for detections. More...
 
OBJECT_FILTERING_MODE filtering_mode = OBJECT_FILTERING_MODE::NMS3D
 Filtering mode that should be applied to raw detections. More...
 
float prediction_timeout_s = 0.2f
 Duration for which the ZED SDK keeps predicting an object that is no longer detected, before switching its state to sl::OBJECT_TRACKING_STATE::SEARCHING. More...
 

Variables

◆ instance_module_id

unsigned int instance_module_id = 0

Id of the module instance.

This is used to distinguish this object detection module instance from others, so that several instances can run side by side.
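
For illustration, a minimal sketch of configuring two independent instances; only fields documented on this page are used, and the call that actually starts each module is omitted as it depends on your integration:

    // Two independent object detection instances, distinguished by id.
    sl::ObjectDetectionSensorsParameters params_a;
    params_a.instance_module_id = 0; // e.g. a built-in multi-class detector

    sl::ObjectDetectionSensorsParameters params_b;
    params_b.instance_module_id = 1; // e.g. a custom-model detector
    // Retrieval typically addresses an instance by this same id.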

◆ sensors_ids

std::set<SensorDeviceIdentifier> sensors_ids = {}

List of sensor ids that will be used for this instance. By default empty, which means all available sensors in Sensors are used when the object detection instance is started.
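
As a sketch, restricting an instance to specific sensors; my_front_id and my_rear_id stand in for SensorDeviceIdentifier values obtained from your sensor enumeration:

    sl::ObjectDetectionSensorsParameters params;
    params.sensors_ids.insert(my_front_id); // placeholder identifier
    params.sensors_ids.insert(my_rear_id);  // placeholder identifier
    // Leaving the set empty (the default) uses all available sensors.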

◆ fused_objects_group_name

sl::String fused_objects_group_name

Specify which group this model belongs to.

In a multi-camera setup, multiple cameras can be used to detect objects, and multiple detectors having a similar output layout can see the same object. Therefore, Fusion will fuse together the outputs received from multiple detectors only if they are part of the same fused_objects_group_name.
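
A sketch of making two detectors fusable by sharing a group name (only the fields on this page are shown; the rest of the fusion setup is omitted):

    // Detector on camera 1: built-in multi-class model.
    sl::ObjectDetectionSensorsParameters cam1_params;
    cam1_params.fused_objects_group_name = "PEOPLE";

    // Detector on camera 2: custom model with a compatible output layout.
    sl::ObjectDetectionSensorsParameters cam2_params;
    cam2_params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
    cam2_params.fused_objects_group_name = "PEOPLE"; // same group, so Fusion may fuse the outputs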

◆ enable_tracking

bool enable_tracking = true

Whether the object detection system includes object tracking capabilities across a sequence of images.

◆ enable_segmentation

bool enable_segmentation = false

Whether the object masks will be computed.
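
For example, enabling tracking together with per-object masks:

    sl::ObjectDetectionSensorsParameters params;
    params.enable_tracking = true;     // keep object ids consistent across frames (default)
    params.enable_segmentation = true; // also compute object masks (off by default)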

◆ detection_model

OBJECT_DETECTION_MODEL detection_model = OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_FAST

sl::OBJECT_DETECTION_MODEL to use.

◆ custom_onnx_file

sl::String custom_onnx_file

Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK.

When detection_model is OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS, an ONNX model must be passed so that the ZED SDK can optimize it for your GPU and run inference on it.

The resulting optimized model will be saved for re-use in the future.

Attention
- The model must be a YOLO-like model.
- The caching uses the deserialized custom_onnx_file along with your GPU specs to decide whether to use the cached optimized model or to optimize the passed ONNX model again. If you change the weights of the ONNX file and pass the same path, the ZED SDK will detect the difference and optimize the new model.
Note
This parameter is ignored when detection_model is not OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS.
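
A sketch of pointing the module at a custom model (the path is a placeholder):

    sl::ObjectDetectionSensorsParameters params;
    params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
    params.custom_onnx_file = "/path/to/my_yolo_model.onnx"; // placeholder path
    // On first run the SDK optimizes the model for the local GPU and caches the
    // result; later runs with unchanged weights reuse the cached optimized model.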

◆ custom_onnx_dynamic_input_shape

sl::Resolution custom_onnx_dynamic_input_shape = sl::Resolution(512, 512)

Input resolution for the YOLO-like ONNX file for custom object detection run in the ZED SDK. This resolution defines the input tensor size for dynamic-shape ONNX models only. The batch and channel dimensions are handled automatically; the model is assumed to take color images, like default YOLO models.

Note
This parameter is only used when detection_model is OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS and the provided ONNX file is using dynamic shapes.
Attention
- Most models only support square images

Default: square 512x512 images (the input tensor will be 1x3x512x512)
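
For a dynamic-shape model, the input resolution can be overridden; a sketch assuming a 640x640 input:

    sl::ObjectDetectionSensorsParameters params;
    params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
    params.custom_onnx_file = "/path/to/dynamic_yolo.onnx"; // placeholder path
    params.custom_onnx_dynamic_input_shape = sl::Resolution(640, 640);
    // Effective input tensor: 1x3x640x640. Ignored for fixed-shape models.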

◆ max_range

float max_range = -1.f

Upper depth range for detections.

Default: -1.f (value set in sl::InitParameters.depth_maximum_distance)

Note
The value cannot be greater than sl::InitParameters.depth_maximum_distance and its unit is defined in sl::InitParameters.coordinate_units.
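
For example, assuming sl::UNIT::METER was chosen in sl::InitParameters.coordinate_units:

    sl::ObjectDetectionSensorsParameters params;
    params.max_range = 10.f; // discard detections beyond 10 m; must not exceed
                             // sl::InitParameters.depth_maximum_distance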

◆ filtering_mode

OBJECT_FILTERING_MODE filtering_mode = OBJECT_FILTERING_MODE::NMS3D

Filtering mode that should be applied to raw detections.

Default: sl::OBJECT_FILTERING_MODE::NMS3D (same behavior as in previous ZED SDK versions)

Note
This parameter is only used with the sl::OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_XXX and sl::OBJECT_DETECTION_MODEL::CUSTOM_BOX_OBJECTS detection models.
For custom objects, it is recommended to use sl::OBJECT_FILTERING_MODE::NMS3D_PER_CLASS or sl::OBJECT_FILTERING_MODE::NONE.
In the latter case, you might need to apply your own NMS filter before ingesting the boxes into the object detection module, as shown in the sketch below.
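
A sketch of a custom-box workflow where filtering is delegated to the caller:

    sl::ObjectDetectionSensorsParameters params;
    params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_BOX_OBJECTS;
    params.filtering_mode = sl::OBJECT_FILTERING_MODE::NONE;
    // With NONE, run your own NMS on the candidate boxes before ingesting
    // them into the object detection module.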

◆ prediction_timeout_s

float prediction_timeout_s = 0.2f

Duration for which the ZED SDK keeps predicting an object that is no longer detected, before switching its state to sl::OBJECT_TRACKING_STATE::SEARCHING.

It prevents the object state from jittering when there is a short misdetection.
The user can define their own prediction duration.

Note
During this time, the object will have sl::OBJECT_TRACKING_STATE::OK state even if it is not detected.
The duration is expressed in seconds.
Warning
prediction_timeout_s will be clamped to 1 second, as the prediction degrades over time.
Setting this parameter to 0 disables the ZED SDK predictions.
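
For example, tolerating up to half a second of missed detections:

    sl::ObjectDetectionSensorsParameters params;
    params.prediction_timeout_s = 0.5f; // keep predicting through 0.5 s of misses
    // 0.f disables prediction; values above 1.f are clamped to 1 second.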