Facebook Reality Labs (FRL) has been experimenting with a number of ways to improve the realism of virtual reality (VR), through both hardware and software. During the Facebook Developers Conference (F8) in May 2018, the company unveiled Half Dome, a prototype headset with a varifocal mechanism. Now the lab has revealed DeepFocus, an AI-powered platform designed to render blur in real time across a range of focal distances.
What DeepFocus and Half Dome are both trying to achieve is something our eyes do naturally: a defocus effect. As the gif above demonstrates, when our eyes focus on objects at different distances, whatever lies outside the plane of focus appears blurred. While this may seem simple, replicating the effect in VR isn't exactly easy, but doing so opens up a whole range of use cases for the technology.
The first is the goal of truly realistic experiences inside VR. “Our end goal is to deliver visual experiences that are indistinguishable from reality,” says Marina Zannoli, a vision scientist at FRL, via the Oculus Blog. “Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world, and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry.”
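The depth-dependent blur Zannoli describes can be illustrated with a classic thin-lens circle-of-confusion model. The sketch below is not FRL's method (DeepFocus uses a learned neural renderer rather than an analytic formula); it is a minimal illustration of why objects away from the focus plane look blurry, with all parameter names chosen for the example.

```python
def coc_diameter(obj_dist, focus_dist, focal_len, aperture):
    """Circle-of-confusion diameter for a thin lens (illustrative, not FRL's model).

    obj_dist:   distance from the lens to the object (metres)
    focus_dist: distance at which the lens is focused (metres)
    focal_len:  focal length of the lens (metres)
    aperture:   aperture diameter (metres)

    Objects exactly at focus_dist produce a zero-diameter blur spot;
    the spot grows as the object moves away from the focus plane,
    which is the "naturally blurry" effect described above.
    """
    return (aperture * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))


# An in-focus object is sharp; farther-from-focus objects blur more.
in_focus = coc_diameter(1.0, 1.0, 0.05, 0.01)   # 0.0
near_miss = coc_diameter(2.0, 1.0, 0.05, 0.01)  # small blur
far_miss = coc_diameter(3.0, 1.0, 0.05, 0.01)   # larger blur
```

In a renderer, a per-pixel blur kernel sized by this diameter would approximate the defocus effect; DeepFocus instead learns to produce such blur directly from eye-tracked focal depth.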
Another important aspect of DeepFocus is comfort. The more natural VR looks and feels, the easier it is to use. “This is about all-day immersion,” says Douglas Lanman, FRL’s Director of Display Systems Research. “Whether you’re playing a video game for hours or looking at a boring spreadsheet, eye strain, visual fatigue and just having a beautiful image you’re willing to spend your day with, all of that matters.”
While FRL is currently using DeepFocus with Half Dome, the software has been designed to be platform agnostic, which is why the DeepFocus team is open-sourcing the work and data set for engineers developing new VR systems, vision scientists, and other researchers studying perception. As further updates from FRL are released, VRFocus will let you know.