
Oculus Rift: First Thoughts

I recently had a chance to play around with the Oculus Rift development kit, and came away thoroughly impressed.  I’ll side with Cliff Bleszinski’s comments from SXSW: “There are two types of people when it comes to the Oculus Rift – there are those who haven’t seen it, and those who have seen it and believe.”

The experience of wearing the Rift is transportative, if not outright transformative.  The bulk and weight feel natural in terms of fit and distribution on the head, and the sense of the screen itself quickly fades from view, replaced by whatever it’s showcasing – it gets out of the way as a platform and lets the content speak for the capabilities.  Its strength as a portal to the work of other creators and designers will hopefully cement its place as a mainstream product and bestow major success and market share on the Oculus team (and their “just happy to be here, folks!” founder Palmer Luckey, who comes across as a really nice guy who loves what he does – I can’t help but wish him success).  Deep pockets can sink early ventures pretty easily if Sony, Microsoft, or others decide to set their sights on the emerging VR market after the hard work of helping it onto its feet has already been done.

Back to the device though: the dev kit has a lower resolution and more bulk than the target specs for the consumer release, but it is still impressive in its own right, at least until you want to read something (more on that later).  The stereoscopy is exceptional and provides not only depth but a serious demonstration of the scale of things as well.  A typical trope of first-person PoV in video games relies on making game worlds absolutely massive in order to create a sense of scope and scale to impress (and then lets the character run through them at a bazillion miles an hour on indefatigable and inhuman legs capable of bounding over improbable obstacles, so as to make them still navigable instead of tiresome).  The Tuscany demo provided by the Oculus team, however, manages to sit right between “comfortable” and “spacious” with a model and world that in any other mainstream title would come across as “quaint”, a piece of set dressing hardly worth exploration.

This is helped along considerably by the proprioceptive projection invoked by the medium – that is, the sense of the environment meshes so well with the brain’s expectations of how “the I that is me” relates to the world, that it (the brain) slips easily into the sense of reality we usually construct from our physical surroundings.  Standing on the balcony overlooking the courtyard and out to the sea, I wanted to crouch down and inspect the stonework of the banister.  Not only did it immediately relate to a concept of my own scale in that environment, but my brain craved additional subtle details it expected but found absent: I wanted my voice to echo off the wall, to feel shifting air and patterns of temperature as I moved from sun to shade or turned relative to wind, even humidity and smells.  I wanted to touch things not only to measure their position relative to myself, but also to become aware of their texture, solidity, and age.

The fact that I jumped so quickly into a realm of subtlety is a credit to the visual experience.  Human eyes are constantly making minute adjustments to compensate for how we bounce ourselves around, even for the tiny head shakes that result from speech.  The very sensitive 120 Hz positional sampling in the Rift caught and balanced these perfectly, providing a sense of stability and responsiveness unrivaled by any kind of 3D VR experience I’ve had before this, managing to simply disappear.

In fact, the last time I remember anything close to this was the first time I played Doom (when it first came out in 1993, a full 20 years ago) and felt an emotional response to hearing the grunt of an unvanquished Imp somewhere in the level.  In that case the gameplay was engrossing, though not entirely immersive, until I reflected back on it later that first evening: my memory was not of the keyboard, screen, and speakers, but of the environment itself – it had provided enough detail for my brain to fill in the rest and appreciate it the same way it did its other concepts of space.  The Rift does that up front, so that upon reflection the rest of the subtlety comes into play.  In fact, I found myself referring to my physical presence as “the real world” as distinct from the world I was experiencing and inhabiting (as opposed to “physical” and “virtual”).  My sense of orientation relative to the desk and my developer friend whose kit I was using, even with his voice providing some reference, became completely un-grounded – that’s just not the world my brain was in at the time.  This young lady’s reaction is quite illustrative.

Now to be fair, it does have its limitations.  Content developers are going to have to think a lot about interface – movement is very different when you add head tracking as an additional means of orientation.  The Team Fortress 2 demo does a good job of this, and separates the aiming reticle from the viewport (in a typical first-person shooter it’s embedded in the center of the screen, and aiming the camera and the weapon are one and the same).  The combination of look + mouse + keyboard for movement quickly felt natural, and will hopefully be used as a starting point or template for others.
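The decoupling can be sketched in a few lines: treat the head tracker and the mouse as two independent rotations, and clamp the mouse offset so the reticle never leaves the visible field of view. This is a minimal illustration of the idea only – the function names and the 30° limit are my own assumptions, not anything from the TF2 demo or the Oculus SDK.

```python
import math

def view_vector(yaw, pitch):
    """Convert yaw/pitch angles (radians) into a unit direction vector."""
    return (
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
        math.cos(pitch) * math.cos(yaw),
    )

def aim_direction(head_yaw, head_pitch, mouse_yaw, mouse_pitch,
                  limit=math.radians(30)):
    """Aim = head orientation plus a mouse-driven offset, with the
    offset clamped so the reticle stays on screen.  The camera uses
    (head_yaw, head_pitch) alone; only the weapon uses this sum."""
    dy = max(-limit, min(limit, mouse_yaw))
    dp = max(-limit, min(limit, mouse_pitch))
    return view_vector(head_yaw + dy, head_pitch + dp)
```

With the head at rest and no mouse offset, the aim points straight ahead; swinging the mouse far to the side only drags the reticle to the clamp limit, while turning the head moves both camera and reticle together.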

Other details will also need to be worked out:

  • Movement matters – a walking view should feel like walking, complete with subtle shifts in position or bounce as weight moves from foot to foot, so one is not impossibly glide-stepping or rolling around the virtual environment in a wheelie chair.
  • Image textures are not enough – if a section of wall is simply painted to look like a grate, a 2D interface might let you get away with it.  But in full 3D, the eyes immediately register it as an utterly flat plane, making that image (no matter how nice) look like cheap wallpaper.  Bump mapping will help, but people are also going to want to stick their noses a lot of places they haven’t previously, so anything a person will ever be able to peer through or around will need some life breathed into it.  For that matter, UV registration (the process by which shape and image are matched up) will need a lot of precision work as well – the corner of that brick had better match up with the corner of the picture.
  • Structural integrity needs to be considered – in virtual space an object need not be 2-sided, or even 3-dimensional.  A pane of glass (or a railing, for that matter) can be depthless, and just because a box shows 3 sides doesn’t mean the other ones exist.  But unless the point of the virtual world is to explore Klein-style mathematical constructs, maintaining Euclidean geometry and physics is important for preserving the illusion.
  • Eliminate lag at all costs – if no other subtlety can be preserved, make sure that the physical head turn indicators in the headset translate seamlessly into the virtual representation.  Stutter or lag in visual perception is more 4th-wall shattering than anything else, in addition to being more nausea-inducing than awkward and/or rapid movement.
  • Reading is right out – the TF2 HUD was a wonderful experience, to have it really appear as though it were floating on top of the rest of the fluid environment, but in order to be viable as an interface it needs large, high-contrast text near the center of the field of view. It’s like going back to the barely post-DOS games of yore, and means that textual interaction has to be kept to a minimum and rely on other cues (such as color coding, simple distinct glyphs, etc.).  This will probably be worked out in successive iterations with higher resolution, but for now is a distinct limitation.
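On the lag point: one common mitigation is predictive tracking – extrapolate the head pose forward by the expected render latency so the displayed frame matches where the head will be when the frame actually reaches the eyes, not where it was when the sensor was read. A toy sketch of the idea (the function and data shapes are my own illustration, not the Oculus SDK’s API):

```python
def predict_yaw(samples, latency):
    """Extrapolate head yaw forward by the render latency, using the
    angular velocity estimated from the last two tracker samples.

    samples: list of (timestamp_seconds, yaw_radians) tuples
    latency: expected sensor-to-display delay in seconds
    """
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    velocity = (y1 - y0) / (t1 - t0)  # rad/s from finite difference
    return y1 + velocity * latency
```

A real tracker would filter the velocity estimate and predict the full orientation, not just yaw, but even this linear extrapolation shows why high-rate sampling matters: the shorter the interval between samples and the smaller the latency, the less the prediction can overshoot during rapid head turns.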

I’m excited for what this can do to expand the possibilities of experiencing virtual worlds, and not just for gaming.  I’ve recently begun to do my sculpting digitally, trading in polymer clays for a pen and tablet – way less mess, no set-up and clean-up, and I don’t have to bother with planning out all my internal support structures in advance (letting me stay spontaneous throughout the course of an entire project).  An infinite level of detail, independent object addressing, layers, even “undo” are giving me as much freedom in a computer as I experienced when moving photography into Photoshop.  Combining that with more natural modes of manipulation (still waiting on my twice-delayed Leap Motion controller) and perception will further decrease the barriers between imagination and creation.

Navigating infoscapes is another big one I’m looking forward to, and will have another write-up in that regard soon.

But really? One of the biggest reasons I’m excited for this is McArdle’s disease: when I talk about inhuman and indefatigable feats in navigating virtual worlds, that goes doubly for me.  Even with good physical therapy and conditioning there’s stuff I just can’t do anymore, and being able to strap on a different set of eyes and overcome physical limitations is thoroughly enticing.
