Attributes
| unsigned int | instance_module_id = 0 |
| Id of the module instance. More... | |
| std::set< SensorDeviceIdentifier > | sensors_ids = {} |
| List of sensor ids that will be used for this instance. By default empty, which means all sensors available in Sensors will be used when the object detection instance is started. More... | |
| sl::String | fused_objects_group_name |
| Specify which group this model belongs to. More... | |
| bool | enable_tracking = true |
| Whether the object detection system includes object tracking capabilities across a sequence of images. More... | |
| bool | enable_segmentation = false |
| Whether the object masks will be computed. More... | |
| OBJECT_DETECTION_MODEL | detection_model = OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_FAST |
| sl::OBJECT_DETECTION_MODEL to use. More... | |
| sl::String | custom_onnx_file |
| Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK. More... | |
| sl::Resolution | custom_onnx_dynamic_input_shape = sl::Resolution(512, 512) |
| Input resolution of the YOLO-like ONNX file for custom object detection run in the ZED SDK. This resolution defines the input tensor size for dynamic-shape ONNX models only. The batch and channel dimensions are handled automatically; the input is assumed to be color images, as with default YOLO models. More... | |
| float | max_range = -1.f |
| Upper depth range for detections. More... | |
| OBJECT_FILTERING_MODE | filtering_mode = OBJECT_FILTERING_MODE::NMS3D |
| Filtering mode that should be applied to raw detections. More... | |
| float | prediction_timeout_s = 0.2f |
| Duration during which the ZED SDK keeps predicting an object that is no longer detected before switching its state to sl::OBJECT_TRACKING_STATE::SEARCHING. More... | |
| unsigned int instance_module_id = 0 |
Id of the module instance.
This is used to identify which object detection module instance is used.
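For illustration, a minimal sketch, assuming this struct is the ZED SDK's sl::ObjectDetectionParameters and that an opened sl::Camera named zed is available: two detection instances run side by side and are addressed by their instance_module_id.

```cpp
// Sketch only: two detection instances on one camera, distinguished by instance_module_id.
sl::ObjectDetectionParameters multiclass_params;
multiclass_params.instance_module_id = 0;
multiclass_params.detection_model = sl::OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_FAST;
zed.enableObjectDetection(multiclass_params);

sl::ObjectDetectionParameters head_params;
head_params.instance_module_id = 1;
head_params.detection_model = sl::OBJECT_DETECTION_MODEL::PERSON_HEAD_BOX_FAST;
zed.enableObjectDetection(head_params);

// Retrieve the results of a specific instance by passing its id.
sl::Objects head_objects;
zed.retrieveObjects(head_objects, sl::ObjectDetectionRuntimeParameters(), 1);
```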
| std::set<SensorDeviceIdentifier> sensors_ids = {} |
List of sensor ids that will be used for this instance. By default empty, which means all sensors available in Sensors will be used when the object detection instance is started.
| sl::String fused_objects_group_name |
Specify which group this model belongs to.
In a multi-camera setup, multiple cameras can be used to detect objects, and multiple detectors having a similar output layout can see the same object. Therefore, Fusion will fuse together the outputs received from multiple detectors only if they are part of the same fused_objects_group_name.
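As a sketch (the group name and surrounding setup are illustrative), two cameras running detectors with the same output layout can publish into one fused group so that Fusion matches their detections:

```cpp
// Illustrative only: the same group name on both detectors lets Fusion
// merge their outputs into a single set of fused objects.
sl::ObjectDetectionParameters cam_a_params, cam_b_params;
cam_a_params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
cam_b_params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
cam_a_params.fused_objects_group_name = "vehicles";  // camera A
cam_b_params.fused_objects_group_name = "vehicles";  // camera B: same group, so outputs are fused together
```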
| bool enable_tracking = true |
Whether the object detection system includes object tracking capabilities across a sequence of images.
| bool enable_segmentation = false |
Whether the object masks will be computed.
| OBJECT_DETECTION_MODEL detection_model = OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_FAST |
sl::OBJECT_DETECTION_MODEL to use.
| sl::String custom_onnx_file |
Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK.
When detection_model is OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS, an ONNX model must be passed so that the ZED SDK can optimize it for your GPU and run inference on it.
The resulting optimized model will be saved for re-use in the future.
The ZED SDK checks custom_onnx_file along with your GPU specs to decide whether to use the cached optimized model or to optimize the passed ONNX model. If you change the weights of the ONNX file and pass the same path, the ZED SDK will detect the difference and optimize the new model.
| sl::Resolution custom_onnx_dynamic_input_shape = sl::Resolution(512, 512) |
Input resolution of the YOLO-like ONNX file for custom object detection run in the ZED SDK. This resolution defines the input tensor size for dynamic-shape ONNX models only. The batch and channel dimensions are handled automatically; the input is assumed to be color images, as with default YOLO models.
Default: square 512x512 images (the input tensor will be 1x3x512x512)
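A minimal sketch, with a placeholder file path and resolution and assuming the struct name and an opened sl::Camera named zed as above, of running a custom YOLO-like ONNX model with a non-default dynamic input shape:

```cpp
// Hypothetical configuration for a custom YOLO-like ONNX model.
sl::ObjectDetectionParameters od_params;
od_params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
od_params.custom_onnx_file = "path/to/my_yolo_model.onnx";             // placeholder path
od_params.custom_onnx_dynamic_input_shape = sl::Resolution(640, 640);  // input tensor: 1x3x640x640
zed.enableObjectDetection(od_params);  // the first call optimizes the model and caches the result
```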
| float max_range = -1.f |
Upper depth range for detections.
Default: -1.f (value set in sl::InitParameters.depth_maximum_distance)
| OBJECT_FILTERING_MODE filtering_mode = OBJECT_FILTERING_MODE::NMS3D |
Filtering mode that should be applied to raw detections.
Default: sl::OBJECT_FILTERING_MODE::NMS3D (same behavior as previous ZED SDK versions)
| float prediction_timeout_s = 0.2f |
Duration during which the ZED SDK keeps predicting an object that is no longer detected before switching its state to sl::OBJECT_TRACKING_STATE::SEARCHING.
This prevents the object state from jittering when there is a short misdetection.
The user can define their own prediction duration.
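To tie the tracking, range, filtering and timeout attributes together, a hedged configuration sketch (all values are illustrative; max_range is expressed in the unit chosen in sl::InitParameters):

```cpp
// Illustrative values only.
sl::ObjectDetectionParameters od_params;
od_params.enable_tracking = true;                             // keep object ids stable across frames
od_params.enable_segmentation = false;                        // skip masks to save compute
od_params.max_range = 15.f;                                   // ignore detections farther than 15 units
od_params.filtering_mode = sl::OBJECT_FILTERING_MODE::NMS3D;  // default 3D non-maximum suppression
od_params.prediction_timeout_s = 0.5f;                        // keep predicting a lost object for 0.5 s
zed.enableObjectDetection(od_params);
```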