One of the serious drawbacks of the current generation of VR headsets is the low resolution of their built-in displays. This results in a blurry picture and a pronounced screen-door effect, in which the user sees, through the magnifying lenses, the dark gaps between individual pixels. Google apparently intends to solve this problem once and for all by presenting a new generation of displays for virtual-reality devices that far exceed existing solutions in image quality.
According to information published online, at the Display Week 2018 event, which begins on May 20, Google plans to demonstrate 4.3-inch displays with a resolution of 18 megapixels, that is, 5657 × 3182 pixels. The panels will be manufactured by the South Korean company LG, and the main consumers of the product will be makers of next-generation VR devices. The pixel density of these OLED displays will be an impressive 1443 pixels per inch, which should eliminate the discomfort and image deficiencies seen in VR games and applications.
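The "18 megapixels" figure follows directly from the quoted panel resolution, as a quick check shows:

```python
# Sanity check of the quoted panel resolution against the megapixel claim.
width, height = 5657, 3182            # per-panel resolution from the report
total_pixels = width * height         # 18,000,574 pixels
megapixels = total_pixels / 1_000_000
print(round(megapixels, 1))           # 18.0, matching the "18 megapixels" claim
```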
"A white OLED display with a structural color filter made it possible to achieve an unattainable pixel density. To improve the mobility of electrons, we decided to use an LTPS-panel of n-type (LTPS-short for Low-Temperature Polycrystalline Silicon), "the official Google statement says.
Unfortunately, no VR headset currently on the market can boast such a resolution. Even the most advanced headset, the HTC Vive Pro, which is only about to go on sale, produces a picture of 1440 × 1600 pixels on each of its two 3.5-inch displays at a refresh rate of 90 Hz. The previous generation of headsets (Oculus Rift and HTC Vive) offers a resolution of 1080 × 1200 pixels per eye.
The new Google display is not only capable of showing a picture at a resolution of 3182p but can also maintain a refresh rate above 90 Hz. Given the huge number of pixels, the developers are well aware that picture quality of this level will require incredibly powerful hardware. To reduce the computational load, Google engineers plan to use eye tracking together with foveated rendering, a technique that renders at full detail only the part of the image where the user's vision is focused. Similar approaches have already been proposed by other companies, including Nokia.
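The gaze-contingent idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not Google's actual implementation): full resolution is kept only inside a circular region around the reported gaze point, while the periphery is replaced by a cheap, blocky downsampled version of the frame. The function name, parameters, and the numpy-based "renderer" are all assumptions for illustration.

```python
import numpy as np

def foveated_frame(scene_hi: np.ndarray, gaze_xy: tuple, radius: int,
                   downscale: int = 4) -> np.ndarray:
    """Sketch of foveated rendering on a (H, W) grayscale frame.

    Pixels within `radius` of the gaze point keep full resolution;
    everywhere else a downsampled (blocky) version stands in for the
    cheaper peripheral render pass.
    """
    h, w = scene_hi.shape
    # Cheap peripheral image: nearest-neighbor downsample by striding,
    # then upsample back by pixel repetition.
    low = scene_hi[::downscale, ::downscale]
    periphery = np.repeat(np.repeat(low, downscale, axis=0),
                          downscale, axis=1)[:h, :w]
    # Circular mask of full-resolution pixels around the gaze point.
    ys, xs = np.mgrid[0:h, 0:w]
    fovea = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    return np.where(fovea, scene_hi, periphery)

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = foveated_frame(frame, gaze_xy=(32, 32), radius=8)
```

In a real headset the savings come from never rendering the peripheral pixels at full resolution in the first place; here the blending merely illustrates where the detail budget is spent.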