Have you ever dreamed of living out your favourite movie scene? Want to be right in the middle of a squadron of X-Wings during the Death Star run, or dodging Orcs and Goblins in Lord of the Rings? A new augmented reality (AR) tool called Volume may soon allow you to do just that.
The developer behind Volume has utilised machine learning to create a tool that allows 2D videos and images to be converted into 3D spaces. The tool is still in relatively early stages of development, but the team has already successfully produced some proof-of-concept videos.
The team behind Volume hopes the tool will eventually become a means of access for storytelling, archiving and cultural reconstruction. The app can predict depth from 2D footage, reconstruct it in 3D and place the characters within the user's space in AR, enabling the user to see the movie in an entirely new way.
The developers, Or Fleisher and Shirin Anlen, have already released a brief demo showing a scene from hit Quentin Tarantino movie Pulp Fiction converted into an AR experience. A video of the demo can be viewed below.
Volume was inspired by recent research into immersive digital platforms and methods of 3D depth prediction. It uses a state-of-the-art machine learning approach: a convolutional neural network 'observes' RGBD images and learns a model of how to reconstruct 2D images into a 3D space.
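To give a sense of the underlying idea: once a depth value has been predicted for each pixel, the 2D image can be lifted into 3D by back-projecting every pixel through a pinhole camera model. The sketch below is not Volume's actual code; it is a minimal illustration using NumPy, with hypothetical camera intrinsics (`fx`, `fy`, `cx`, `cy`) chosen for the example.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a per-pixel depth map into a 3D point cloud.

    Uses the standard pinhole camera model: a pixel (u, v) with
    depth z maps to the 3D point ((u - cx) * z / fx,
    (v - cy) * z / fy, z). Intrinsics here are illustrative.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # One (x, y, z) point per pixel, shape (h, w, 3)
    return np.stack([x, y, z], axis=-1)

# Toy example: a 4x4 depth map of a flat surface one unit away.
# In Volume's pipeline, this map would come from the CNN's prediction
# rather than being hand-written.
depth = np.ones((4, 4))
cloud = backproject(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

The resulting point cloud is what an AR renderer could then place inside the user's space; the hard part, which the neural network handles, is producing a plausible depth map from an ordinary RGB frame in the first place.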
The intention is for Volume to become an end-to-end web app, easily accessible from any compatible web browser and capable of taking any input, from 2D still images to video sequences or GIFs, and converting it into an immersive 3D experience. The aim is for Volume to support content for AR, virtual reality (VR) and web platforms, with its models being flexible enough to support a variety of uses, from academic research to media and entertainment.