Every year during Oculus Connect, Oculus Chief Scientist Michael Abrash holds a session to discuss his thoughts about virtual reality (VR), its future and how the industry might get there – these are in-depth but worth a watch. All of the company’s R&D used to be under Oculus Research until last year, when it was renamed Facebook Reality Labs. Every so often there are little sneak peeks at what the labs are working on, but now they’re going to be even more transparent, thanks to a new blog series.
Abrash made the announcement via the Oculus Blog today, revealing that it’ll be a year-long series of posts delving into the various Facebook Reality Labs, highlighting a different team and what they’re working on for the future.
“I expect these blog posts to be markers on the journey to the AR/VR future,” notes Abrash. “Over the coming months, you’ll see deep dives into optics and displays, computer vision, audio, graphics, haptic interaction, brain/computer interface, and eye/hand/face/body tracking.”
The series will hopefully reveal all sorts of interesting developments and breakthroughs the teams are making. The first blog post focuses on lifelike avatars and the connections people make inside a digital world. The project, called Codec Avatars, is run by Yaser Sheikh, the Director of Research at Facebook Reality Labs in Pittsburgh.
Seeking to overcome the challenges of distance between people, the project uses a combination of 3D capture technology and AI systems to build realistic avatars of users quickly and simply, a stepping stone towards digital online interaction that’s as normal as the real world.
The team at FRL Pittsburgh have been working on this challenge for a number of years, with Sheikh joining Facebook in 2015. Their work was showcased during F8 2018 with two realistic digital people animated in real time. Since then: “We’ve completed two capture facilities, one for the face and one for the body,” says Sheikh. “Each one is designed to reconstruct body structure and to measure body motion at an unprecedented level of detail. Reaching these milestones has enabled the team to take captured data and build an automated pipeline to create photorealistic avatars.”
The capture system FRL Pittsburgh has developed gathers 180 gigabytes of data per second, thanks to hundreds of cameras, each capturing data at a rate of 1 GB per second. A proprietary algorithm then uses the data to create a unique avatar for the individual scanned.
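To put those figures in perspective, here is a minimal back-of-the-envelope sketch using only the numbers quoted above; the 60-second session length is an assumption for illustration, not a detail from Facebook Reality Labs.

```python
# Rough throughput arithmetic for the FRL Pittsburgh capture rig,
# based on the figures quoted in the article.
GB_PER_CAMERA_PER_SEC = 1    # each camera captures ~1 GB of data per second
TOTAL_GB_PER_SEC = 180       # aggregate rate quoted for the whole rig

# Camera count implied by the two quoted rates
implied_cameras = TOTAL_GB_PER_SEC // GB_PER_CAMERA_PER_SEC

# Storage for a hypothetical 60-second capture session (assumption)
session_seconds = 60
session_gb = TOTAL_GB_PER_SEC * session_seconds
session_tb = session_gb / 1000

print(f"Implied cameras: {implied_cameras}")                 # → 180
print(f"60 s session: {session_gb} GB (~{session_tb:.1f} TB)")  # → 10800 GB (~10.8 TB)
```

At these rates a single minute of capture produces on the order of ten terabytes of raw data, which helps explain why an automated processing pipeline is a milestone in its own right.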
The type of technology FRL Pittsburgh is using isn’t going to rapidly become available for the everyday consumer to put themselves into VR, but it certainly showcases the steps that need to be taken to eventually get there. Do read the post as it is fairly extensive, and as Abrash details more projects Facebook Reality Labs is working on, VRFocus will let you know.