Basic Concepts - Unity

This section serves as an introduction to the ZED Plugin for Unity. It details some of the plugin’s basic concepts and features which you will find useful in most projects.

Cameras

When you create a new project and import the ZEDCamera package, you will find Mono and Stereo camera rigs in the Prefabs folder: ZED_Rig_Mono and ZED_Rig_Stereo. These prefabs are custom AR cameras that replace the regular Unity Camera in a scene. They mix the virtual 3D elements rendered by the Unity camera with the real-world video captured by the ZED.

Attached to ZED_Rig_Mono is Camera_Left, which contains a Frame holding the video source of the left camera. The ZED_Rig_Stereo prefab contains both the left and right video sources for passthrough AR. When you add a ZED rig to your Hierarchy, it appears at the world origin (0,0,0), facing the +Z direction.

Note: When adding a ZED rig to your project, delete the Main Camera from the Hierarchy so it does not interfere with the camera embedded in the prefab. If your use case requires multiple camera outputs in your scene, you may need multiple display windows visible in the editor.

Depth-aware AR

The ZED and ZED Mini cameras capture depth and normal maps of the surrounding environment in real-time. To create a believable and interactive AR/MR experience, the depth and normal buffers captured by the cameras are integrated into the Unity rendering pipeline. This makes several key AR features possible:

  • Object placement: Virtual objects can be placed anywhere in the real world without having to scan the environment first. See Object Placement to learn more.
  • Interactive collisions: Virtual elements can collide with real people and objects moving in the camera field of view.
  • Realistic shadows and lighting: Unity’s lights can cast shadows and project light onto the real world. See Lighting and Shadows to learn how to enable interactive AR lighting.
  • Depth occlusions: Virtual objects are naturally occluded by the surrounding environment. Occlusions are automatically enabled by the plugin.

The features above are accessible through the ZED Manager or support functions located in the Scripts folder of the package. Explore the different samples to learn how to use these support functions.
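As a starting point, the ZED Manager component can be fetched from any script on or under the rig with standard Unity APIs. A minimal sketch (the class name ZedFeatureProbe is hypothetical, chosen for illustration):

```csharp
using UnityEngine;

// Minimal sketch: locate the ZEDManager component that the rig prefabs carry.
// Attach this script to the rig or any of its children.
public class ZedFeatureProbe : MonoBehaviour
{
    private ZEDManager zedManager;

    void Start()
    {
        // The ZED_Rig_Mono / ZED_Rig_Stereo prefabs have ZEDManager on their
        // root object, so search upward from this object.
        zedManager = GetComponentInParent<ZEDManager>();
        if (zedManager == null)
            Debug.LogWarning("No ZEDManager found - is this object under a ZED rig?");
    }
}
```

With the component in hand, the support scripts and samples show how to drive the features listed above.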

ZED Manager

The ZED Manager lets you configure video, depth, and tracking parameters. Sensible defaults are set, so you often don't need to modify them when adding a rig prefab to your project.

After adding a ZED_Rig_Mono or ZED_Rig_Stereo to the Hierarchy view, click on it to access the parameters of the ZED Manager script in the Inspector window.

Input

  • Camera ID: You can connect up to four ZED or ZED Mini cameras to a single PC at once. This value decides which camera you’ll connect to. Note that the order of cameras is defined by the order in which Windows recognizes the devices.

  • Depth mode: Depth quality mode. PERFORMANCE mode is recommended. See Depth Modes.

  • Input Type: The ZED SDK can take input from one of three methods:

    • USB: Input from a live camera attached to your ZED. You can specify the resolution and FPS of that camera.
    • SVO: Load an SVO file recorded during a previous session with a live ZED. The file acts as if a live ZED were attached. Specify the path to the .SVO file, whether to loop it once finished, and whether to play each frame based on its timestamps or sequentially.
    • Stream: Input from a ZED on a remote device that’s actively streaming its camera input. Set the IP and Port to connect to. See the Streaming section of ZEDManager to broadcast a stream.
  • Resolution: Video mode of the ZED camera. Higher resolutions result in lower FPS. See Video Modes.

  • FPS: Desired FPS of the camera. The maximum FPS you’ll achieve depends on the resolution (see above bullet) but this setting allows you to lower it below the maximum.
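These input settings are normally chosen in the Inspector before entering Play mode. If you prefer to set them from a script, do so before ZEDManager opens the camera. The sketch below is a hedged example: the commented-out member and enum names mirror the Inspector labels but are assumptions to verify against ZEDManager.cs in your plugin version.

```csharp
using UnityEngine;

// Hedged sketch: adjust ZEDManager's input settings before it initializes.
public class ZedInputSetup : MonoBehaviour
{
    [SerializeField] private ZEDManager zedManager;

    void Awake()
    {
        // Example intent: HD720 at 60 FPS balances quality and frame rate.
        // The member names below are assumptions based on the Inspector
        // labels; check ZEDManager.cs before relying on them.
        // zedManager.resolution = sl.RESOLUTION.HD720;
        // zedManager.FPS = 60;
    }
}
```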

Motion Tracking

  • Enable Tracking: If positional tracking is enabled, the virtual rig automatically moves to match the real-world movement of the device using its own input (no need for an external tracker). This allows you to place virtual objects in the scene and make sure they keep their real-world position while you move. Disable this option when using external sensors for tracking. See Positional Tracking.
  • Enable Spatial Memory: Enable spatial memory to correct drift during tracking. There can be small pose jumps when a correction is required. See Spatial Memory.
  • Path Spatial Memory: Path to the file that will be loaded and/or saved by the Spatial Memory. Loading an existing file will allow absolute localization in an environment. Leaving blank will not save an area file at the end of a tracking session.
  • Estimate Initial Position: When the ZED first initializes, this causes it to estimate the camera’s pitch, roll, and height from the floor. This works better when more of the floor is visible.
  • Tracking Is Static: If enabled, the ZED will not move after the first frame (after the initial position is estimated, if that option is enabled). Use this when you need features that require tracking (such as object detection) but will not be moving the camera.
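Because positional tracking moves the rig's transforms directly, the tracked pose can be read with plain Unity APIs, with no plugin-specific calls. A minimal sketch, attached to Camera_Left or any child of the rig:

```csharp
using UnityEngine;

// Minimal sketch: with Enable Tracking on, this object's transform follows
// the real device, so its world pose is available through standard Unity APIs.
public class TrackedPoseLogger : MonoBehaviour
{
    void Update()
    {
        // Position in meters, rotation as Euler angles in degrees.
        Debug.Log("Camera pose: " + transform.position + " / "
                  + transform.rotation.eulerAngles);
    }
}
```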

Rendering

  • Depth Occlusion: When enabled, virtual pixels can be covered up by real pixels, allowing a virtual cube to be behind your real table, for example. Turn off to make virtual objects always appear over the real world.
  • AR Post-Processing: Whether to apply additional post-processing effects to make the pass-through experience feel more realistic. Requires extra performance but is usually worth it.
  • Camera Brightness: Control image brightness. Use this to darken the real-world video without affecting virtual objects.
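Rendering options such as occlusion can also be flipped at runtime, for instance to compare occluded and unoccluded views. The sketch below assumes ZEDManager exposes a public depthOcclusion flag matching the Inspector checkbox; verify the member name in ZEDManager.cs before using it.

```csharp
using UnityEngine;

// Hedged sketch: toggle depth occlusion with the O key at runtime.
public class OcclusionToggle : MonoBehaviour
{
    [SerializeField] private ZEDManager zedManager;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.O))
        {
            // depthOcclusion is an assumed public field mirroring the
            // "Depth Occlusion" checkbox in the Inspector.
            zedManager.depthOcclusion = !zedManager.depthOcclusion;
        }
    }
}
```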

Spatial Mapping

This section lets you scan your environment into a mesh. Useful for collisions where geometry must be persistent, for building navmeshes that AI can use to navigate, and for saving the meshes for later. It is also how you create a .area file used by the Spatial Memory feature of ZEDManager.

See our Spatial Mapping Unity guide for more details.

Recording

At runtime, you can record an .SVO video file of the ZED’s input, to be played back later and used as if it were input from a live ZED. Press the “Start Recording” button to begin.

Streaming

This section lets you broadcast your ZED’s input so that other devices can use it as input.

  • Enable Streaming Output: Check this to enable streaming.
  • Codec: The compression used to encode the output video.
  • Port: The port on which to broadcast the stream.
  • Bitrate: How much information to send at once. Lower settings result in lower video quality but are easier on the network.
  • GOP: Maximum GOP size for the codec. Setting to -1 removes the limit.
  • Adaptive Bitrate: Enable to automatically increase and decrease the bitrate based on performance.

Advanced Settings

Under normal circumstances, you never have to change any of these settings. However, they can be useful for debugging or very specific use cases.

  • Sensing Mode: How the ZED SDK handles missing depth values, e.g. pixels where the SDK couldn’t calculate depth because of occlusion or the subject being too near or too far. FILL estimates their values; STANDARD leaves them empty. FILL is recommended for most mixed reality applications.

  • Max Depth Range: Maximum depth at which the camera will display the real world, in meters. Pixels farther than this value will be invisible.

  • Confidence Threshold: How tolerant the ZED SDK is to low confidence values. Lower values filter out more pixels.

  • Image Enhancement: Whether to enable the new color/gamma curve added to the ZED SDK in v3.0. Exposes more detail in darker regions and removes a slight red bias.

  • Fade In At Start: Disable to remove the fade-in effect you see when the ZED is first connected.

  • Grey Out Skybox on Start: Removes color and color emissions from the skybox, which can cause unrealistic lighting effects in an AR scene. Leave this checked when the real environment is more prominent than the virtual environment, such as pass-through AR. Turn it off when virtual elements are predominant, such as greenscreen VR capture.

  • Don’t Destroy On Load: Enable to set the ZED rig’s DontDestroyOnLoad value, which will prevent its destruction when you change scenes.

  • AR Layer: The second AR rig used in pass-through AR mode needs to see nothing but the canvases in front of it. To accomplish this in a way that stays easy to understand, the plugin assigns the quad objects in the AR rig to the layer specified here, and it’s the only layer that the cameras in that rig can see. Assign this to an unused layer, and make sure not to put other objects in it.

  • Show Final AR Rig: In pass-through AR mode, the plugin uses a second, hidden AR rig to make final adjustments to the image before it’s sent to the headset. It’s hidden by default because it can be quite confusing to see, but this setting lets advanced users observe and modify this rig.

  • Enable Right Depth: Whether to enable depth measurements from the right camera. Required for depth effects in AR pass-through, but costs performance even when not used. AUTO enables it only if a ZEDRenderingPlane component set to the right eye is detected as a child of ZEDManager’s GameObject (as in the ZED rig prefabs).

  • Allow AR Pass-Through: If true, the ZED rig will enter ‘pass-through’ mode if it detects a stereo rig (at least two child cameras with ZEDRenderingPlane components, each set to a different eye) and a VR headset is connected. If false, it will never enter pass-through mode.

  • Set IMU Prior in AR: In AR pass-through mode, whether to compare the ZED’s IMU data against the reported position of the VR headset. This helps compensate for drift, but on rare occasions causes tracking issues.

  • Self-Calibration: If true, the ZED SDK will subtly adjust the ZED’s calibration during runtime to account for heat and other factors. Reasons to disable this are rare.