THE FUTURE OF VR
Looking 10 years forward
In the future, we'll view the first consumer Rift (CV1) the way we do the first iPhone: a revolutionary product representing the start of an entirely new industry, but merely the primordial beginnings of something much greater. It will become an antiquated museum piece resting behind a fingerprint-smudged plexiglass display. Here are some of the hardware and software advancements we can expect that will put CV1 to shame:
Wider field of view: Field of view (FOV) determines how much of your vision the display fills. The larger the FOV, the less of the outside world you see and the more immersed you feel in a virtual environment. At 90 degrees, a high FOV was one of the Rift's key innovations (compared to competing HMDs with paltry 45-degree FOVs). Eventually, HMDs will match the human eye's field of view of nearly 180 degrees.
Higher resolution screens: CV1 is rumored to feature a resolution of 1080p or higher. The higher the resolution, the fewer individual pixels the eye perceives, eliminating the screen-door effect and resulting in a stronger sense of presence. Once display technology advances, 4K (four times the resolution of 1080p HD, also known as Ultra HD) and even 8K screens will become standard. But implementing OLED panels with higher pixel density creates another challenge: higher resolutions like 4K require significantly more powerful graphics cards. Keep in mind, VR requires rendering two frames at a time (one for each eye) to create the stereoscopic 3D illusion. A 4K HMD will happen, but it might take a while before graphics chips become cheap and powerful enough to make it viable.
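To make the doubled rendering load concrete, here is a minimal sketch of per-eye camera placement. Everything in it is illustrative (the function names, the stand-in "renderer," and the 64 mm interpupillary distance are assumptions, not any real SDK's API): the point is simply that every displayed frame requires the scene to be drawn twice, from two slightly offset viewpoints.

```python
# Minimal sketch of stereoscopic rendering: each frame is drawn twice,
# once per eye, with the camera shifted by half the interpupillary
# distance (IPD). All names and values here are illustrative assumptions.

AVERAGE_IPD_M = 0.064  # ~64 mm, a commonly cited average IPD (assumption)

def per_eye_camera_positions(camera_pos, ipd=AVERAGE_IPD_M):
    """Shift the camera left/right by half the IPD for each eye."""
    x, y, z = camera_pos
    half = ipd / 2.0
    return {"left": (x - half, y, z), "right": (x + half, y, z)}

def render_stereo_frame(camera_pos):
    """Stand-in renderer: returns one 'image' label per eye, showing
    that the scene must be rendered twice per displayed frame."""
    return {eye: f"frame@{pos}"
            for eye, pos in per_eye_camera_positions(camera_pos).items()}
```

That doubled draw, at 4K per eye, is exactly why graphics hardware becomes the bottleneck.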
Foveated rendering: One potential way to mitigate the computational strain higher resolutions place on graphics cards is foveated rendering, an experimental technique that increases frame rate by mimicking the human eye. Its name comes from a part of the eye called the fovea, a small region in the center of the retina responsible for sharp vision and fine detail. To better understand the fovea, stick your thumb out in front of your face like you're hitchhiking. Notice how the area around your thumbnail is sharpest while everything around it appears blurry. Visual acuity drops off dramatically the farther you look from the center of your focus.
This phenomenon can be simulated in VR by rendering, each frame, layers of decreasing resolution that radiate outward from the point of focus. Thus, only a small circular portion of the frame is rendered at the highest possible resolution, delivering a huge boost in frame rate. While the center of focus can be fixed in the middle of the display or moved around manually, the real magic begins with eye tracking. Once a user's gaze can be tracked to a high degree of accuracy within an HMD, the center of focus can move with it so the user is always looking at the sharpest region. Of course, integrating accurate eye-tracking technology into slim HMDs presents an enormous technical challenge today, but it will undoubtedly be accomplished within a decade.
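The layered idea above can be sketched in a few lines. This is a toy model, not any shipping implementation: the ring radii and resolution scales are made-up illustrative values, and real systems blend between layers rather than switching abruptly.

```python
# Toy sketch of foveated rendering: pixels are binned into concentric
# rings around the gaze point, and rings farther from the fovea are
# rendered at coarser resolution. Radii and scales are illustrative.

import math

# (outer radius in pixels, resolution scale), sharpest ring first
FOVEATION_RINGS = [(100, 1.0), (250, 0.5), (float("inf"), 0.25)]

def resolution_scale(px, py, gaze_x, gaze_y, rings=FOVEATION_RINGS):
    """Return the fraction of full resolution used at pixel (px, py)."""
    distance = math.hypot(px - gaze_x, py - gaze_y)
    for radius, scale in rings:
        if distance <= radius:
            return scale
    return rings[-1][1]

def shading_cost(width, height, gaze_x, gaze_y):
    """Relative shading work vs. full resolution everywhere. A scale of
    0.5 means half the pixel density per axis, i.e. 1/4 the work."""
    work = sum(resolution_scale(x, y, gaze_x, gaze_y) ** 2
               for x in range(0, width, 8)      # sample every 8th pixel
               for y in range(0, height, 8))
    return work / ((width * height) / 64)
```

Because only the small foveal ring pays full cost, the overall shading cost comes out well below 1.0, which is the frame-rate win the technique promises.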
Path-tracing: Path-tracing is a rendering technique that traces rays of light reflecting off 3D geometry, generating images nearly indistinguishable from real life. Incredibly processor-intensive, path-tracing is often used to produce animated movies, where a single frame can take hours to render. Real-time path-tracing is slowly becoming possible with powerful graphics cards and new game engines like Brigade. Combined with foveated rendering, path-tracing could be the best technology to fully simulate real life.
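The core loop of a path tracer is simple to state even if it is brutally expensive to run: follow a ray, bounce it off geometry while losing some energy, and record any light it picks up. The sketch below is a heavily simplified one-dimensional toy (an infinite floor under a sky that brightens toward the zenith); the constants and scene are invented for illustration and bear no relation to a production renderer.

```python
# Heavily simplified Monte Carlo path-tracing sketch: follow one light
# path at a time, bounce it diffusely, and accumulate the light it
# finds. Scene and constants are illustrative assumptions only.

import random

SKY_RADIANCE = 1.0   # light "emitted" when a ray escapes upward
FLOOR_ALBEDO = 0.7   # fraction of light the floor reflects
MAX_BOUNCES = 4      # terminate runaway paths

def trace(dir_y, rng=random):
    """One light path in a toy scene: an infinite floor at y=0 under a
    sky whose brightness increases toward the zenith (scaled by dir_y)."""
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        if dir_y > 0:                        # ray escapes upward into the sky
            return throughput * SKY_RADIANCE * dir_y
        throughput *= FLOOR_ALBEDO           # ray hits the floor, loses energy
        dir_y = rng.uniform(0.0, 1.0)        # diffuse bounce back upward
    return 0.0                               # path ended without finding light

def estimate(samples=5000, seed=1):
    """Average many random downward-aimed paths; with more samples the
    noisy estimate converges toward the scene's true brightness."""
    rng = random.Random(seed)
    return sum(trace(-1.0, rng) for _ in range(samples)) / samples
```

The reason path-tracing takes hours per frame in movies is that this averaging must be repeated for millions of pixels with hundreds or thousands of samples each before the noise fades.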
3D audio: 3D audio, also known as binaural audio, simulates how sound sources behave in the real world, greatly adding to the sense of presence in virtual worlds. By mimicking the imperceptible delay between the moments a sound reaches each ear, it lets a listener tell where a sound is coming from in space. Distance matters too: a person's voice grows louder as you walk closer to them. As game engines begin to support 3D audio, VR games and experiences will feel all the more real.
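That "imperceptible delay" is known as the interaural time difference (ITD), and a classic spherical-head approximation puts numbers on it. In the sketch below, the head radius is an assumed average and the function names are my own; the point is that a source directly to one side arrives at the far ear well under a millisecond late, yet that tiny offset is enough for the brain to localize it.

```python
# Sketch of the interaural time difference (ITD) behind binaural audio,
# using a Woodworth-style spherical-head approximation. The head radius
# is an assumed average; function names are illustrative.

import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C
HEAD_RADIUS = 0.0875     # meters, assumed average head radius

def interaural_time_difference(azimuth_deg):
    """ITD in seconds for a distant source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    # Woodworth approximation: r * (theta + sin(theta)) / c
    return HEAD_RADIUS * (theta + math.sin(theta)) / SPEED_OF_SOUND

def delay_in_samples(azimuth_deg, sample_rate=44100):
    """The same delay expressed as whole audio samples, which is how an
    audio engine would actually offset one ear's channel."""
    return round(interaural_time_difference(azimuth_deg) * sample_rate)
```

At 90 degrees this works out to roughly 0.65 milliseconds, about 29 samples at CD quality: a delay far too short to hear consciously, yet exactly what binaural rendering reproduces.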
Convergence of VR and AR into Replaced Reality
In the future, our vision will be constantly augmented with digital information. Unlike virtual reality, which completely simulates a computer-generated environment, augmented reality (AR) brings the virtual world into the real world through cameras and advanced computer vision software. In the mid-2000s, AR became synonymous with black-and-white patterns, similar to QR codes, printed out on paper. When held in front of a laptop camera, virtual objects appeared to stick to the piece of paper as if they were really there, moving along with the paper's position and orientation. With the introduction of the iPhone, AR made its way to mobile platforms with apps like Layar and Blippar. While AR has always been fun to play around with, it hasn't become integrated into everyday life the way technologists hoped it would. The lack of accurate tracking data, and of mobile processors fast enough to simultaneously track the real world and render frames, prevented AR from reaching its full potential.
New smartphone prototypes like Google's Project Tango aim to change that. The best way to understand Tango is to think of it as a Kinect in your pocket: an amalgamation of advanced 3D depth sensors, an ultra-low-power co-processor, and computer vision software that combine to understand the world around us the way a human does. For example, pointing the phone around a room impressively scans it into a 3D model in real time. While developers are only now getting their hands on Tango, this kind of technology will soon enable massive innovation in the smartphone industry and bring true augmented reality into the hands of consumers for the first time.
The logical future of Tango lies in its integration with wearable computers like Google Glass, where it's constantly recording and understanding the world around you, giving context to software that previously had none. But the more exciting future lies in putting this technology into an HMD, where you can tap in and out of the real world, seamlessly switching between AR and VR. One researcher in London is already working on something like this.
Eventually, VR and AR will converge into something different altogether called replaced reality, or "RR." Imagine a future Oculus Rift with stereoscopic 3D depth cameras that analyze the environment and project computer-generated frames onto a curved piece of glass in real time. Walking around New York City, for example, could look like 15th-century Japan. We're many years away from this dream, but it will happen.
VR and AR will redefine the computer and the desk workspace as computers enter the third dimension. New user-interface metaphors and body-tracking input devices will be created to support this system. Traditional desktops will be replaced with a system comprising body/hand-tracking cameras, a computer base station, and a Rift (or other HMD). Eventually both the computer and the tracking device will be integrated into the HMD, consolidating everything into one gadget. The HMD will essentially become an infinite screen, creating a new kind of technology called "borderless computing." While it's hard to predict exactly what this software might look like, today's computer form factors and user interfaces will be completely different in the future.
VR as an art-form: codifying the grammar of VR software
When motion picture cameras were first invented, people didn't know what to do with them. Film was scoffed at; some believed movies could never be better than plays at the theater. Then came D.W. Griffith, who figured out that he could tell stories by cutting from shot to shot. He created meaning through the juxtaposition of different scenes where there was none previously. As film technology improved and more people gained access to film cameras, the language of film was figured out, which led to the "rules" of composition, editing, and storytelling we have today. As the film industry matured, people got more creative with the medium, bending time and space to achieve their unique directorial visions.
This early film industry bears a striking resemblance to the VR industry today. But stories in VR lack cuts and other editorial elements inherent to film. In the future, the language and rules of VR as an art form will be figured out. In some sense, they already are being figured out: developers like Denny Unger are quickly learning what works and what doesn't. His game The Gallery: Six Elements employs techniques like snap turns and contextual depth of field to ease the player into a pleasurable experience.
Coexistence of VR with other mediums
VR will become the predominant form of entertainment, communication, and creative expression around the world, but it will not kill movies and TV shows, at least in the near future. To understand this, just look at the history of transformative communication technologies: TV did not kill off radio, the internet did not kill off TV, and so on. Each medium presents a completely different experience with unique benefits and disadvantages.
Instead, the different arts will morph and blend together into something impossible to predict this early on. The lines between video games and movies in particular will rapidly blur.
April 7th, 2014