Several companies are working to improve display quality for virtual reality (VR), with many concentrating on increasing resolution and refresh rate, though this comes with an attendant additional load on the processor and graphics card. Electronics company LG are trying a different solution, one which involves artificial intelligence (AI).
LG Display have been working together with Sogang University in South Korea to create an AI-powered algorithm that can reduce latency and motion blur in VR content.
High latency has been shown to be one of the factors that can cause simulation sickness symptoms, along with motion blur. Higher-resolution displays can even exacerbate this problem, since they require more calculations, which can increase latency.
The LG technology uses an AI algorithm that converts low-resolution video into higher resolutions in real time. The deep learning algorithm only uses the internal memory of the device to do this, so no additional hardware would be needed.
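LG has not published the details of its algorithm, but real-time deep learning upscalers commonly rely on a cheap "sub-pixel convolution" (pixel shuffle) step: the network predicts several low-resolution feature maps, and a simple memory reshuffle turns them into one higher-resolution image. The sketch below shows just that reshuffle step in plain NumPy; the function name and shapes are illustrative, not LG's actual implementation.

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) array into (C, H*r, W*r).

    This is the sub-pixel convolution step used by real-time
    super-resolution networks (e.g. ESPCN): the network outputs
    r^2 feature maps per colour channel, and this reshuffle
    interleaves them into an r-times-larger image with no extra
    arithmetic, which is why it suits low-latency upscaling.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

# Upscale a single-channel 2x2 "image" by a factor of 2:
# the network would supply these 4 (= 1 * 2^2) feature maps.
low_res = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
high_res = pixel_shuffle(low_res, 2)
print(high_res.shape)  # (1, 4, 4)
```

Because the heavy lifting happens at low resolution and the final step is only an index rearrangement, this style of upscaler keeps both compute and memory traffic modest, consistent with LG's claim that no additional hardware is required.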
LG says the AI technology could reduce latency and motion blur by up to five times for VR devices, with the additional benefit of also reducing energy consumption, since there would be less of a load on the GPU. It might also be possible to enable lower-end GPUs to produce high-quality VR experiences by utilising this technology.
In order to test the technology, LG and Sogang University created a motor-powered rig that could measure the latency and blur in VR headsets by mimicking the optical view and head movements of a human.
That isn’t the only area where LG have been active in VR, as the company has also been working with Google to create a super high-resolution display to allow for richer and more realistic VR experiences. The proof-of-concept display featured a 120-degree field of view, a resolution of 4,800 x 3,840 pixels per eye, and a 120Hz refresh rate.
For future news on developments in VR technology, keep checking back with VRFocus.