When VRFocus has reported on Foveated Rendering in the past it has generally been in conjunction with some sort of eye tracking hardware from companies like Tobii. However, that's not necessarily needed, as Oculus has recently explained in a developer blog post.
For those unaware of the technique, foveated rendering is a graphical process that helps reduce the processing load on a GPU. This is achieved by reducing the quality of areas in a player's peripheral vision, whilst keeping maximum clarity in the areas where they are looking. As mentioned, this technique works very well with eye tracking tech, allowing a system to know exactly where someone is looking so that quality, and thus processing cost, can be accurately reduced everywhere else.
Oculus has now highlighted a technique called Mask-based Foveated Rendering (MBFR), which: ‘decreases the shading rate of the peripheral region of the eye buffers by dropping some of the pixels, based on a checkerboard pattern.’ With the technique developers can drop different proportions of pixels; for example, in the image above 50 percent of the pixels have been dropped: the left part is the masked portion, the right is the original – or ground truth – image, while the centre shows the reconstructed result.
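To make the idea concrete, here is a minimal sketch of how such a checkerboard drop mask could be built. This is illustrative only, not Oculus' actual implementation: the function name, the circular foveal region, and the fixed 50 percent peripheral shading rate are all assumptions for the example.

```python
def checkerboard_mask(height, width, fovea_center, fovea_radius):
    """Return a 2-D list of booleans: True = shade this pixel, False = drop it.

    Pixels inside an assumed circular foveal region are always shaded;
    peripheral pixels are shaded in a checkerboard pattern, i.e. a
    50 percent shading rate outside the fovea.
    """
    cy, cx = fovea_center
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            # Fully shade pixels within the foveal circle around the gaze point.
            in_fovea = (x - cx) ** 2 + (y - cy) ** 2 <= fovea_radius ** 2
            # Elsewhere, keep only pixels on one colour of the checkerboard.
            on_checker = (x + y) % 2 == 0
            row.append(in_fovea or on_checker)
        mask.append(row)
    return mask

# Tiny 8x8 buffer with the "gaze" at the centre.
mask = checkerboard_mask(8, 8, fovea_center=(4, 4), fovea_radius=2)
```

In a real renderer the mask would be applied per eye buffer on the GPU (for example via stencil or depth tricks), but the pattern itself is this simple.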
Oculus explains that: “MBFR reduces GPU pixel shading cost by dropping a subset of the pixels in the world rendering passes. But it also introduces extra cost in the post-processing passes for reconstructing the dropped pixels.” In practice this means developers can expect GPU performance savings of more than 10 percent in pixel shading, partly offset by the extra cost of reconstructing those dropped pixels.
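The reconstruction step mentioned in the quote can be sketched as a simple neighbour-averaging fill. Again this is a hedged illustration, not the reconstruction filter Oculus ships: averaging the four adjacent shaded pixels is just one plausible way to fill a checkerboard hole.

```python
def reconstruct(image, mask):
    """Fill dropped pixels by averaging their shaded 4-neighbours.

    image: 2-D list of floats; dropped pixels may hold garbage.
    mask:  same shape, True where the pixel was actually shaded.
    With a checkerboard drop pattern, every dropped pixel's horizontal
    and vertical neighbours are shaded, so the fill is always defined.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                continue  # shaded pixel: keep as-is
            neighbours = [image[ny][nx]
                          for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]]
            if neighbours:
                out[y][x] = sum(neighbours) / len(neighbours)
    return out

# 2x2 example: checkerboard mask, dropped pixels filled from neighbours.
image = [[1.0, 0.0], [0.0, 3.0]]
mask = [[True, False], [False, True]]
filled = reconstruct(image, mask)  # → [[1.0, 2.0], [2.0, 3.0]]
```

This per-pixel pass is exactly the kind of extra post-processing cost the quote refers to: the shading you save in the world passes is traded against a cheap but non-zero fill pass.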
While demonstrated using Epic Games’ Robo Recall for Oculus Rift, this technique would most benefit mobile headsets like Oculus Go, due to the restricted processing power of those devices.
For studios wishing to experiment with MBFR, it is currently included in Oculus’ GitHub UE4 repository and works with Unreal Engine 4.19 and the 4.20 preview versions. For any further updates on Oculus’ latest developer techniques, keep reading VRFocus.