Umbra 3 for VR

Umbra 3 Visibility Solution for VR

Reach 120 fps or build bigger scenes with the Umbra 3 Stereo Camera

Virtual reality platforms such as the Oculus Rift, the Valve prototype and other upcoming VR systems set very different content requirements from what video game developers have been used to. Besides the challenge of creating new control and input methods, VR worlds require an immense amount of visual detail. This is where the Umbra 3 Visibility Solution helps: it optimizes rendering performance so that the highest degree of fidelity can be achieved automatically. Umbra 3 is ready and optimized for VR applications, today.

Drivers for high frame rates in VR

Currently a typical game runs at 30 frames per second, which means it has 33 milliseconds to process each frame. With VR the requirements are much stricter, for two reasons.

Immersion

Real-time graphical performance is critically important in VR applications. Oculus VR currently recommends an absolute minimum frame rate of 60 fps, below which players may feel disoriented. A low-persistence display, such as the Oculus Rift prototype “Crystal Cove”, requires substantially higher frame rates of 120 fps or more to maintain a comfortable experience and immersion. Displaying 120 frames each second leaves only 8.3 milliseconds for the rendering engine and the GPU to process each frame.

Low overall latency

Oculus VR recommends a soft limit of 20 milliseconds for overall motion-to-photons latency. To reach this target, each frame must obviously be processed much faster than that.

The above reasons mean that the developer has only about one fourth of the time budget she is accustomed to. With so much less time available for rendering, visual fidelity suffers significantly compared with modern PC or console games. This is why current VR demo applications use old, relatively simple content and present very small scenes.
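
To make the numbers concrete, the frame budgets above follow directly from the target frame rates; here is a minimal calculation in plain C++ (no engine or SDK assumed):

    #include <cstdio>

    int main()
    {
        // Frame budget in milliseconds = 1000 ms / target frame rate.
        const double budget30  = 1000.0 / 30.0;   // ~33.3 ms: typical game today
        const double budget60  = 1000.0 / 60.0;   // ~16.7 ms: VR absolute minimum
        const double budget120 = 1000.0 / 120.0;  //  ~8.3 ms: low-persistence VR

        std::printf("30 fps: %.1f ms  60 fps: %.1f ms  120 fps: %.1f ms\n",
                    budget30, budget60, budget120);
        // 33.3 ms / 8.3 ms = 4: roughly one fourth of the accustomed budget.
        return 0;
    }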

Umbra 3 is a toolkit for optimizing rendering performance and cutting down overall CPU and GPU time, which makes it the perfect tool for making the virtual reality experience possible. The following two chapters take a closer look at Umbra 3 and how it supports the stereo rendering required by virtual reality applications.

How Umbra 3 works

Umbra 3 is a solution for determining which parts of a 3D scene or model produce visible pixels on the screen from any given viewpoint. This information can be used to add more detail to the visible parts within the same frame budget, or to improve the overall performance of the application. The process is called occlusion culling. In simpler terms, Umbra 3 makes sure that only what actually needs to be shown on screen at any given moment is rendered, as efficiently as possible.
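
In engine terms, the pattern looks roughly like the sketch below. The VisibilityQuery type and its queryVisibleObjects call are hypothetical stand-ins used for illustration, not the actual Umbra 3 interface:

    // Sketch of a per-frame occlusion culling pass. All names here are
    // illustrative stand-ins, not the actual Umbra 3 interface.
    #include <vector>

    struct Camera { float position[3]; float viewProjection[16]; };
    struct Object { int id; /* mesh, material, transform, ... */ };

    struct VisibilityQuery
    {
        // Returns indices of objects that may produce visible pixels from the
        // given camera; everything else can be skipped for this frame.
        std::vector<int> queryVisibleObjects(const Camera& camera) const
        {
            (void)camera;
            // Placeholder: a real query consults precomputed visibility data.
            return {};
        }
    };

    void drawObject(const Object& object)
    {
        (void)object; // placeholder for the engine's draw submission
    }

    void renderFrame(const VisibilityQuery& query, const Camera& camera,
                     const std::vector<Object>& scene)
    {
        // 1. Ask what is visible from this camera for this frame.
        const std::vector<int> visible = query.queryVisibleObjects(camera);

        // 2. Submit only those objects; hidden geometry never reaches the GPU.
        for (int index : visible)
            drawObject(scene[index]);
    }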

Umbra 3 overview

The visibility determination is very fast, and it is trivial to parallelize the processing over several CPU cores to speed it up even further.
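
The Umbra 3 runtime has its own mechanisms for distributing this work; purely as a rough, hypothetical illustration of the idea, independent visibility queries (for the main camera, shadow-casting lights and so on) can be launched from worker threads, for example with std::async:

    // Rough illustration only: launching independent visibility queries on
    // worker threads with std::async. Names are placeholders, not the Umbra 3 API.
    #include <future>
    #include <vector>

    struct ViewPoint { float position[3]; float viewProjection[16]; };

    std::vector<int> queryVisibleObjects(const ViewPoint& view)
    {
        (void)view;
        // Placeholder: the real query consults precomputed visibility data.
        return {};
    }

    std::vector<std::vector<int>> cullAllViews(const std::vector<ViewPoint>& views)
    {
        std::vector<std::future<std::vector<int>>> jobs;
        jobs.reserve(views.size());

        // Each view (main camera, shadow-casting lights, ...) gets its own task.
        for (const ViewPoint& view : views)
            jobs.push_back(std::async(std::launch::async,
                                      [&view] { return queryVisibleObjects(view); }));

        std::vector<std::vector<int>> results;
        results.reserve(jobs.size());
        for (auto& job : jobs)
            results.push_back(job.get());
        return results;
    }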

Unlike traditional methods, Umbra 3 is completely automatic and does not require level artists to model any special occlusion geometry. And because Umbra 3 does all of its runtime processing on the CPU, it is very easy to plug into any game engine.

Umbra 3 Stereo Visibility

The stereo effect in VR requires that the image is rendered twice, once for each eye. A naive occlusion culling solution would have to perform the visibility test individually for each eye in order to produce correct results.
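
For contrast, a naive stereo culling pass might look something like this sketch, with two full queries per frame and their results combined; the types and the queryVisibleObjects call are placeholders, not the Umbra 3 API:

    // Naive stereo culling: one full visibility query per eye, every frame.
    // All names are illustrative placeholders, not the Umbra 3 API.
    #include <set>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct EyeCamera
    {
        Vec3  position;            // world-space eye position
        float viewProjection[16];
    };

    std::vector<int> queryVisibleObjects(const EyeCamera& eye)
    {
        (void)eye;
        // Placeholder: the real query consults precomputed visibility data.
        return {};
    }

    std::set<int> cullStereoNaive(const EyeCamera& leftEye, const EyeCamera& rightEye)
    {
        // Two separate queries: roughly double the culling cost per frame.
        const std::vector<int> left  = queryVisibleObjects(leftEye);
        const std::vector<int> right = queryVisibleObjects(rightEye);

        // Each eye sees a slightly different set, so the union is needed
        // before objects can be safely skipped for both renders.
        std::set<int> combined(left.begin(), left.end());
        combined.insert(right.begin(), right.end());
        return combined;
    }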

Umbra 3 supports stereo rendering natively through a feature called the Stereo Camera, which lets both eyes use the results of a single occlusion culling operation – effectively halving the required processing time.

The Stereo Camera is a runtime operation that returns the set of objects visible from anywhere within a specified sphere, instead of from a single point in space. The query origin is placed exactly between the viewer's two eyes, and the sphere is sized to encompass both eye positions, which guarantees correct results.
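
A Stereo Camera style query could then be set up along the lines of the sketch below. The querySphereVisibility call and its parameters are assumptions made for illustration; only the sphere placement, centered between the eyes with a radius that covers both, comes from the description above:

    // Single sphere-based visibility query that covers both eyes at once.
    // The querySphereVisibility call is hypothetical; only the sphere placement
    // follows the Stereo Camera description in the text.
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 midpoint(const Vec3& a, const Vec3& b)
    {
        return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    }

    static float distance(const Vec3& a, const Vec3& b)
    {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Placeholder: returns everything visible from any point inside the sphere,
    // rather than from a single camera position.
    std::vector<int> querySphereVisibility(const Vec3& center, float radius)
    {
        (void)center; (void)radius;
        return {};
    }

    std::vector<int> cullStereo(const Vec3& leftEyePos, const Vec3& rightEyePos)
    {
        // Query origin exactly between the eyes, radius just large enough
        // to contain both eye positions.
        const Vec3  center = midpoint(leftEyePos, rightEyePos);
        const float radius = distance(leftEyePos, rightEyePos) * 0.5f;

        // One query; the conservative result is valid for both eye renders.
        return querySphereVisibility(center, radius);
    }

In this sketch the radius is simply half the distance between the eyes, so the query sphere is only as large as it needs to be to cover both viewpoints.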

The Stereo Camera is a unique property of the Umbra 3 algorithm.

Example of Stereo Camera in action

Left eye                                                                                   Right eye

Above is an example of a 3D scene culled individually from each eye, visualized from a bird's-eye perspective. The parts shown in red are determined to be hidden and will not be drawn. It is clear that the visibility differs between the eyes, so using just one of the results for rendering both eyes would cause bad visual glitches. We need the combined set of visible objects.

Stereo Camera

Above is the same scene, but now the visibility is determined using the Stereo Camera, placed between the two eyes. The resulting set of visible objects is correct for both eyes. The Stereo Camera saves CPU processing time by determining visibility for both eyes simultaneously!