Tutorial - Using Spatial Mapping

This tutorial shows how to use a ZED stereo camera to capture a live 3D reconstruction of your environment. The code starts spatial mapping for 500 frames, extracts a mesh, filters it and saves it as an OBJ file.

Getting Started

  • First, download the latest version of the ZED SDK.
  • Download the C++ or Python sample code.

Code Overview

Open the camera

Let’s initialize the camera. In this example, we use HD720 mode at 60 fps (which also has a wide FOV) to make sure camera tracking is reliable. We also select a right-handed coordinate system with Y axis up, since it is the most common system used by 3D viewing software (Meshlab for example).

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
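Since zed.open() can fail (camera unplugged, unsupported mode, etc.), it is good practice to check the returned error code before going further. A minimal sketch, assuming the SDK's toString() helper for converting an ERROR_CODE to text:

// Quit the application if the camera could not be opened
if (err != ERROR_CODE::SUCCESS) {
    std::cout << "Camera open failed: " << toString(err) << std::endl;
    return EXIT_FAILURE;
}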

Enable positional tracking

Spatial mapping requires positional tracking to be enabled first with enablePositionalTracking().

PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);

Enable spatial mapping

Spatial mapping setup works the same way as positional tracking: we create a SpatialMappingParameters object and pass it to enableSpatialMapping().

SpatialMappingParameters mapping_parameters;
err = zed.enableSpatialMapping(mapping_parameters);

A detailed description of the SpatialMappingParameters class is beyond the scope of this tutorial. For more information, read the API documentation.
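As an illustration only, recent SDK versions expose fields such as resolution_meter, range_meter and save_texture on SpatialMappingParameters; the names and values below are assumptions to verify against the API documentation for your SDK version.

// Example tuning (field names and values are assumptions; check your SDK's API docs)
SpatialMappingParameters mapping_parameters;
mapping_parameters.resolution_meter = 0.05f; // 5 cm mesh resolution: finer mesh, more memory
mapping_parameters.range_meter = 5.f;        // Ignore geometry farther than 5 m from the camera
mapping_parameters.save_texture = true;      // Keep texture data so the saved mesh can be textured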

Run live 3D reconstruction

To start mapping an area, there is no need to call any function in the grab() loop. The ZED SDK checks in the background for new image, depth and position data, and automatically builds the 3D map asynchronously from this data.

In this tutorial, we simply grab 500 frames, check the 3D mapping state and then stop the loop to extract the resulting mesh.

// Grab data during 500 frames
int i = 0;
Mesh mesh; // Create a mesh object
while (i < 500) {
	if (zed.grab() == ERROR_CODE::SUCCESS) {
		// In the background, spatial mapping will use new images, depth and pose to create and update the mesh. No specific functions are required here.
		SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();
		// Print spatial mapping state
		cout << "\rImages captured: " << i << " / 500  |  Spatial mapping state: " << spatialMappingState2str(mapping_state) << "                     " << flush;
		i++;
	}
}

Extract mesh

After capturing the 3D map over 500 frames, we can retrieve the mesh stored in the Mesh object using extractWholeMesh(). This call blocks until the full mesh is available.

zed.extractWholeMesh(mesh); // Extract the whole mesh

Note: It is possible to asynchronously retrieve a mesh during mapping. See the Using Spatial Mapping API section.
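For reference, asynchronous retrieval typically follows a request/poll/retrieve pattern. The sketch below uses requestSpatialMapAsync(), getSpatialMapRequestStatusAsync() and retrieveSpatialMapAsync() as found in recent SDK versions; treat the exact names as assumptions and check the API documentation.

// Inside the grab() loop: ask the SDK to prepare a mesh snapshot without blocking
zed.requestSpatialMapAsync();

// On later iterations: poll until the snapshot is ready, then copy it into our Mesh
if (zed.getSpatialMapRequestStatusAsync() == ERROR_CODE::SUCCESS) {
    zed.retrieveSpatialMapAsync(mesh); // Non-blocking copy of the current mesh
}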

This mesh can be filtered (if needed) to remove duplicate vertices and unneeded faces. Then we save the mesh as an OBJ file for external use.

mesh.filter(MeshFilterParameters::MESH_FILTER::LOW); // Filter the mesh (remove unnecessary vertices and faces)
mesh.save("mesh.obj"); // Save the mesh in an obj file

Disable modules and exit

Once the mesh is extracted and saved, don’t forget to disable the mapping and tracking modules (in this order) and close the camera before exiting your application.

// Disable spatial mapping and positional tracking (in this order), then close the camera
zed.disableSpatialMapping();
zed.disablePositionalTracking();
zed.close();
return 0;

Advanced Example

To learn how to capture a live 3D mesh of the environment and display it as an overlay on the camera image, check the Live 3D Reconstruction sample code.

Next Steps

At this point, you know how to retrieve image, depth, odometry and 3D mapping data from ZED stereo cameras.

Read the next Object Detection tutorial to see how to detect and track people in 3D.