Since HTC’s Vive Pro launched with two front-facing cameras, many developers have asked us how using those cameras for pass-through augmented reality would compare to using our own ZED Mini. Now that HTC has released its SRWorks AR SDK, we tested Vive Pro AR and ZED Mini AR side by side.
As you can see, the Vive Pro’s advantages as an AR headset primarily come from its strength as a VR headset: it’s comfortable and integrated, and it’s great if you also do VR development. The ZED Mini excels on nearly all AR-specific qualities. The one exception is vertical field of view, which we’ll examine later.
The Vive Pro’s form factor is perhaps its biggest advantage. It does not require a separate attachment or USB cable. There is no mounting process, and the headset is more comfortable than either headset that the ZED Mini supports.
You need a minimum of a GTX 1060 to use the ZED Mini, but only a GTX 970 for Vive Pro AR. This is because, with higher resolution and more features, the ZED Mini is simply doing more at once. As we built the ZED SDK around NVIDIA CUDA, you can only use the ZED Mini with NVIDIA GPUs. SRWorks is also restricted to NVIDIA GPUs, likely for the same reason.
Ignoring VR purposes, the ZED Mini is technically more cost-effective. If you own no VR hardware, the required Vive Pro starter kit ($1,099) costs more than a ZED Mini combined with either compatible headset. If you already own base stations and controllers from an original Vive, you can buy the Vive Pro as a standalone headset for $799; but in that case, adding AR to your existing headset with a ZED Mini costs only $449.
The ZED Mini’s image is substantially sharper, given its 720p resolution over the Vive Pro’s 480p. The difference is like watching a low-res YouTube video on a large screen: it affects the enjoyment of every moment in the headset.
The ZED Mini also renders real and virtual objects at the same resolution, whereas Vive Pro AR renders virtual objects at the full Vive Pro resolution of 2880×1600. This makes them look distinctly different from the 480p world they are supposed to be a part of.
Vive Pro AR capture from desktop mirror
ZED Mini capture from desktop mirror (cropped for comparison)
With the Vive Pro cameras’ 96° and the ZED Mini’s 90° horizontal field of view, both effectively fill the headset from left to right. For vertical field of view, the Vive Pro AR’s is significantly higher, at 80° vs. 60°. As a result, you can see black borders at the top and bottom of the ZED Mini image when viewed from the headset, but not in Vive Pro AR.
This is a trade-off we deliberately made at Stereolabs. We could have selected lenses with a wider field of view for the ZED Mini. However, this would have lowered the effective resolution as your eye perceives it. The middle 60° of our eyesight – called Central Vision – is where we have the most nerves and see the majority of the data that gets sent to our brains. As such, we found it worth chopping off part of the image outside that range if it makes the image look better within it. However, that advantage is lost when the user looks at the edges of the screen, as the image borders enter Central Vision.
Blue circles represent Central Vision, where the eye perceives most of the detail it sends to the brain.
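The trade-off above can be made concrete with some back-of-the-envelope arithmetic: a sketch, assuming the Vive Pro’s 480p cameras deliver a 640×480 image and the ZED Mini’s 720p mode delivers 1280×720. Lens distortion makes the real density non-uniform, so treat these as rough averages.

```python
# Average horizontal pixel density across the field of view, using the
# illustrative figures above (640x480 @ 96 deg vs. 1280x720 @ 90 deg).

def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Average number of horizontal pixels per degree of field of view."""
    return h_pixels / h_fov_deg

zed_mini = pixels_per_degree(1280, 90)   # ~14.2 px/deg
vive_pro = pixels_per_degree(640, 96)    # ~6.7 px/deg

print(f"ZED Mini: {zed_mini:.1f} px/deg, Vive Pro: {vive_pro:.1f} px/deg")
```

The narrower lens concentrates more than twice as many pixels into each degree of Central Vision, which is why the ZED Mini’s image looks sharper despite the smaller vertical field of view.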
The frame rate of each is tied more to USB bandwidth than computing power; stereo cameras have to send twice the data of a standard camera with the same resolution, so there are limitations on how fast these images can be sent through USB 3.0.
When set to 480p, the limit is 90FPS for both cameras. The ZED Mini gets 60FPS at its default setting of 720p, which we’ve found to be the most comfortable balance of resolution and frame rate.
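A rough calculation shows why bandwidth, not computing power, is the ceiling. This sketch assumes an uncompressed stream at 2 bytes per pixel (e.g. YUV 4:2:2); the actual on-wire format and protocol overhead differ, so these are order-of-magnitude figures.

```python
# Raw data rate for two synchronized cameras sharing one USB link.
# Assumes 2 bytes/pixel (e.g. YUV 4:2:2); real formats and overhead vary.

def stereo_bandwidth_gbps(width: int, height: int, fps: int,
                          bytes_per_pixel: float = 2.0) -> float:
    """Uncompressed data rate in gigabits per second for a stereo pair."""
    bits_per_second = width * height * bytes_per_pixel * 8 * fps * 2  # x2: stereo
    return bits_per_second / 1e9

# ZED Mini default: 2 x 1280x720 @ 60 FPS
print(f"720p60 stereo: {stereo_bandwidth_gbps(1280, 720, 60):.2f} Gbps")
# 480p mode: 2 x 640x480 @ 90 FPS
print(f"480p90 stereo: {stereo_bandwidth_gbps(640, 480, 90):.2f} Gbps")
```

Both figures have to fit inside USB 3.0’s 5 Gbps signaling rate, which shrinks further after protocol overhead; pushing 720p past 60FPS quickly runs out of headroom.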
The latency of Vive Pro AR is 3.3x that of the ZED Mini, at 200ms and 60ms, respectively. The impact of latency varies with the experience. With slower, simpler apps, it may be bearable. With faster-paced apps, or any that require physical movement, 200ms will be impairing and may cause motion sickness.
Below we’ve captured both the virtual and real versions of the Vive controller in a scene, where the virtual version’s position is unaffected by the camera’s latency:
Vive Pro AR:
Synchronized capture of Vive Pro AR (left) and ZED Mini (right). Audio levels from the HMD mics are visible to show synchronization.
To understand the 3D world, both headsets calculate depth to every pixel in view. This is necessary for any case where a virtual object must act on a real object in 3D space. Both sensors calculate depth the way the human eye does, using stereo triangulation from two different views of the world.
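The geometry behind stereo triangulation reduces to one formula: a point’s depth is inversely proportional to its disparity, the horizontal shift between where it appears in the left and right images. A minimal sketch follows; the focal length is a hypothetical value, not either device’s actual calibration, though the ~63 mm baseline matches the ZED Mini’s camera spacing.

```python
# Core of stereo depth estimation: Z = f * B / d, where f is the focal
# length in pixels, B the distance between the cameras (baseline), and
# d the disparity in pixels. Values below are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in meters of a point with the given pixel disparity."""
    return focal_px * baseline_m / disparity_px

f = 700.0    # focal length in pixels (hypothetical)
b = 0.063    # baseline in meters (~63 mm, roughly human eye spacing)
print(depth_from_disparity(f, b, 22.05))  # a ~22 px shift -> ~2 m away
```

Because depth falls off as 1/d, small disparity errors on distant objects translate into large depth errors, which is why accuracy and range differ so much between depth pipelines.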
Depth estimation in the Vive Pro is done in software, just like the ZED Mini. HTC appears to use open-source stereo depth estimation techniques, which are very noisy indoors and produce sparse measurements. The ZED Mini uses in-house stereo depth sensing technology that Stereolabs has developed over several years. See the video below for a comparison.
Depth maps from Vive Pro AR (top) and ZED Mini (bottom)
Depth accuracy, range and completeness are very different between the two. Higher accuracy means objects will behave as expected; virtual shadows cast on the real world will appear at the correct angle, virtual balls will bounce correctly on your floor, etc. With less accuracy, lighting effects can appear noisy, and a virtual object might fly through a real one when it should collide with it. Beyond the maximum range of a depth sensor, no spatially aware interactions can occur. Finally, without a complete depth map, AR occlusions cannot happen in dynamic environments.
Developers can build applications for either Vive Pro AR or ZED Mini, directly with each SDK or through Unity or Unreal plugins. The ZED Mini’s SDK is more expansive; however, it was designed for general-purpose use, so developing pass-through AR without the plugins will likely take more implementation work than with the Vive Pro SDK.
The ZED Unity plugin currently has more features to simplify development. As these are public, high-level features, advanced developers could implement any such features present in one plugin but not the other.
Both are capable of spatial mapping for proper physics and other applications, but the ZED Mini can scan faster and at longer range.
Vive Pro AR Spatial Mapping
ZED Mini Spatial Mapping
SRWorks requires pre-scanning an environment for a number of features, whereas the ZED Mini doesn’t require pre-scanning for most of its features. See below.
Visual Effects

Both ZED Mini and Vive Pro AR support depth occlusion – having real objects hide virtual objects behind them – and casting virtual shadows on real objects. However, the Vive Pro AR requires pre-scanning for both of these features.
Vive Pro AR occlusion
Vive Pro AR shadows
ZED Mini does not require pre-scanning for either feature, instead deriving the proper effects from its live depth map in real time. Besides convenience, this also allows occlusions and shadows to work with moving real objects.
ZED Mini occlusion and shadows
The ZED Mini can apply realistic virtual lighting to objects in real-time. SRWorks does not support this feature at the moment.
ZED Mini virtual light projection in the real world
As both have access to a depth map, both can use a Z-test to determine whether a moving object has hit the real world. This technique doesn’t provide enough data for proper physics simulation, but it is useful for things like projectiles.
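The Z-test amounts to comparing a virtual object’s depth at its screen position against the real depth measured at that same pixel. The sketch below shows the idea on a plain 2D list of distances; a real implementation would sample the GPU depth texture, and the tolerance value here is a hypothetical choice.

```python
# Depth-map Z-test for collisions: if the virtual object is at or behind
# the real surface under its pixel, treat it as a hit. Illustrative only.

def hits_real_world(depth_map, px: int, py: int, object_depth_m: float,
                    tolerance_m: float = 0.05) -> bool:
    """True if the virtual object at pixel (px, py) has reached the real
    surface seen at that pixel, within a small tolerance."""
    real_depth = depth_map[py][px]
    return object_depth_m >= real_depth - tolerance_m

# A 4x4 depth map describing a wall 2 m away
wall = [[2.0] * 4 for _ in range(4)]
print(hits_real_world(wall, 1, 2, 1.5))   # projectile still in front -> False
print(hits_real_world(wall, 1, 2, 2.01))  # projectile reached the wall -> True
```

Since the test only knows the depth of the nearest visible surface, it can tell you *that* something was hit but not its shape or surface normal, which is why it falls short of full physics simulation.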
As of version 2.4, the ZED Mini supports real-time plane detection, allowing you to turn flat surfaces into virtual meshes near-instantaneously, complete with a collider. This lets you accurately bounce or place objects off real flat surfaces without pre-scanning. SRWorks only includes plane detection internally to assist with pre-scanning.
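At its core, plane detection means fitting a flat surface to a patch of 3D points. This sketch shows only the least-squares fit of z = a·x + b·y + c; real plane detection must also segment which points actually belong to the plane, which this omits.

```python
# Least-squares plane fit: the geometric core of plane detection.
import numpy as np

def fit_plane(points: np.ndarray):
    """points: (N, 3) array of 3D points. Returns (a, b, c) such that
    z is approximately a*x + b*y + c over the patch."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

# Points sampled from a flat floor 0.5 m below the camera
floor = np.array([[x, y, 0.5] for x in range(3) for y in range(3)], float)
a, b, c = fit_plane(floor)
print(round(a, 6), round(b, 6), round(c, 6))  # ~0, ~0, ~0.5
```

Once the plane coefficients are known, building a mesh and collider on that surface is straightforward, which is what makes near-instant placement and bouncing possible without a full environment scan.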
The ZED Mini can piggyback on the headset’s tracking or use its own. However, as the Vive Lighthouse tracking system is the most accurate available, it will almost always be the best choice if you are already constrained to a tracked environment (as in most situations).
If using the Oculus Rift, the ZED Mini’s tracking can allow users to use the headset without base stations, technically allowing for world-scale AR.
Creating AR for a headset is hard. It means solving countless problems from capture to rendering to display, where each solution has to be implemented perfectly to make the experience believable – and endurable. Calibration, depth calculation, optimization, latency mitigation, and the many nuances of how the eye expects the world to look each take time and experience to solve.
AR is not the primary function of the Vive Pro, and SRWorks simply needs more time. The challenge is software, not hardware. Stereo vision is an unforgiving science that takes years to get right. But companies like HTC walking down that path validate a bright future for stereo vision and pass-through augmented reality. For now, if you’re looking for a comfortable, interactive and realistic AR experience, try the ZED Mini with your Rift or Vive.