Google’s creating higher quality streamed video with Equi-Angular Cubemaps

360-degree and VR videos are going to start looking even better.

Viewers are used to watching videos on YouTube and, depending on bandwidth, dropping the resolution to keep the stream steady, whatever the cost to quality. For immersive 360-degree content, however, that trade-off isn’t so simple. While poor internet speeds can ruin virtual reality (VR) viewing, low resolutions can also destroy immersion and make content an unwatchable mess. So the YouTube and Daydream teams have been working together on new techniques to solve the issue, one of which is the Equi-Angular Cubemap (EAC).

In a new series of blog posts, Google VR will be taking an in-depth look at improving VR experiences, and with immersive video gaining popularity as it becomes more widely available, the company has chosen to start there.

So what are Equi-Angular Cubemaps, and how do they improve 360-degree content? 360-degree videos require a significant number of pixels to create a decent experience: ideally 60 pixels per degree of immersive content, to match human visual acuity. Current device capabilities and internet speeds don’t generally allow that, so clever projection methods are needed instead.


First there’s equirectangular projection, which uses lines of latitude and longitude to form a rectangular grid, but as Chip Brown, Staff Software Engineer on Daydream, explains: “when used for video transmission, it has serious problems. First, the poles get a lot of pixels, and the equator gets relatively few.”
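To see why in concrete terms, here is a small illustrative sketch (my own, not Google’s code, and the 3840-pixel width is just an assumed 4K frame): every row of an equirectangular frame holds the same number of pixels, but the latitude circle it covers shrinks with the cosine of the latitude, so rows near the poles pack far more pixels per degree of actual scene than the equator does.

```python
import math

def equirect_density_ratio(lat_deg, width=3840):
    """Horizontal pixels per degree of arc along a latitude circle.

    Every row of an equirectangular frame holds `width` pixels, but the
    circle of latitude it covers spans only 360 * cos(latitude) degrees
    of scene, so density climbs sharply toward the poles.
    """
    circle_degrees = 360 * math.cos(math.radians(lat_deg))
    return width / circle_degrees if circle_degrees else float("inf")

# Equator vs. 80 degrees north: the polar row is nearly 6x as dense.
print(equirect_density_ratio(0))    # ~10.7 px/deg
print(equirect_density_ratio(80))   # ~61.4 px/deg
```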

Then you’ve got cube maps, which deform the sphere into a cube that can then be unfolded. “This is an improvement over equirectangular projections, but it still causes substantial variation in pixel density,” explains Brown. “The problem is that the centers of the cube faces are close to the sphere, while the corners are further away.”
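The uneven density Brown describes can be quantified with a little trigonometry. This sketch is my own illustration, not code from the post: a cube face maps its coordinate u in [-1, 1] to viewing angle atan(u), so a texel at the centre of a face spans exactly twice the angle of a texel at its edge.

```python
import math

def cubemap_angle_per_texel(u, texels=512):
    """Angle (degrees) covered by one texel at face coordinate u in [-1, 1].

    Since the face coordinate maps to viewing angle theta = atan(u),
    d(theta)/du = 1 / (1 + u^2): texels near the face centre (u = 0)
    cover twice the angle of texels at the edge (u = 1), meaning the
    edges and corners soak up pixels the centre of view never gets.
    """
    du = 2.0 / texels                      # width of one texel in u
    return math.degrees(du / (1 + u * u))

print(cubemap_angle_per_texel(0.0))   # centre texel: ~0.224 deg
print(cubemap_angle_per_texel(1.0))   # edge texel:   ~0.112 deg
```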

This is where Equi-Angular Cubemaps come in. “The traditional cubemap has samples of varying length depending on the sample’s location on the cube face. EAC is specifically constructed to keep these lengths equal, creating uniformly allocated pixels,” said Brown. “The EAC formula is mathematically precise in 2D, but only an approximation of an equal angle pixel distribution in 3D, albeit a pretty good approximation with manageable distortion.”
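The per-face mapping behind that construction, as documented in Google’s post, relates the EAC texture coordinate t to the standard cubemap coordinate x by t = (4/π)·atan(x), i.e. x = tan(πt/4). This short sketch (my own illustration) shows the consequence Brown describes: uniform steps in the texture coordinate become uniform steps in viewing angle.

```python
import math

def eac_to_face_coord(t):
    """Map an EAC texture coordinate t in [-1, 1] to a cube-face
    coordinate. EAC spaces texels by equal angle, t = (4/pi) * atan(x),
    so the inverse is x = tan(pi * t / 4)."""
    return math.tan(math.pi * t / 4)

def viewing_angle_deg(t):
    """Viewing angle (degrees) of EAC texture coordinate t."""
    return math.degrees(math.atan(eac_to_face_coord(t)))

# Equal steps in t yield equal 11.25-degree steps in viewing angle,
# unlike a plain cubemap, where angles bunch up toward the face edges.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(round(viewing_angle_deg(t), 2))   # 0.0, 11.25, 22.5, 33.75, 45.0
```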

[Image: frame comparison of EAC versus traditional projection]

As you can see from the image above, the result is higher quality video for consumers, creating a far more engaging experience.

YouTube’s engineers haven’t stopped there; they’ve also developed a Projection Independent Mesh, which they hope will eventually become a widely agreed-upon industry standard.

“A Projection Independent Mesh describes the projection by including a 3D mesh along with its texture mapping in the video container. The video rendering software simply renders this mesh as per the texture mapping specified and does not need to understand the details of the projection used,” said software engineer Anjali Wheeler. “Some 360-degree cameras do not capture the entire field of view. For example, they may not have a lens to capture the top and bottom or may only capture a 180-degree scene. Our proposal supports these cameras and allows replacing the uncaptured portions of the field of view by a static geometry and image.”
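To make Wheeler’s description concrete, here is a minimal, hypothetical sketch of what a projection-independent mesh amounts to: vertices carrying both a 3D position on the viewing geometry and a texture coordinate into the video frame. The names and structure below are my own illustration, not the actual container format; the point is that a player drawing these triangles needs no knowledge of the projection baked into the UVs.

```python
import math
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float; y: float; z: float   # direction on the viewing sphere
    u: float; v: float             # texture coordinate in the video frame

def equirect_mesh(rows=4, cols=8):
    """Build a coarse sphere mesh whose UVs encode an equirectangular
    projection. The renderer just draws the triangles as the texture
    mapping specifies; the projection details live entirely in the mesh.
    """
    verts = []
    for r in range(rows + 1):
        lat = math.pi * (r / rows - 0.5)           # -pi/2 .. pi/2
        for c in range(cols + 1):
            lon = 2 * math.pi * (c / cols - 0.5)   # -pi .. pi
            verts.append(Vertex(
                math.cos(lat) * math.sin(lon),
                math.sin(lat),
                math.cos(lat) * math.cos(lon),
                u=c / cols, v=1 - r / rows))
    tris = []
    for r in range(rows):
        for c in range(cols):
            a = r * (cols + 1) + c      # top-left corner of this quad
            b = a + cols + 1            # vertex directly below it
            tris += [(a, a + 1, b), (a + 1, b + 1, b)]
    return verts, tris

verts, tris = equirect_mesh()
print(len(verts), len(tris))   # 45 vertices, 64 triangles
```

Swapping in a different mesh (a hemisphere for a 180-degree camera, say, with static geometry filling the uncaptured half) changes nothing in the renderer, which is exactly the interoperability the proposal is after.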

Android already benefits from EAC projection streamed using a projection independent mesh, and the techniques will be coming to iOS and desktop soon.

To keep up to date on Google’s latest advancements in VR, keep reading VRFocus.

