Why HTC's Vive VR headset offers a completely different experience – both for users and creators

All images: HTC

Daryl Atkins, creative director at Rewind:VR – which has created VR experiences for Bjork, the BBC and Red Bull – explains how you have to think differently about a VR system that lets you move round the room.

The HTC Vive system (above) can deliver immersive room-scale experiences, making it one of the most anticipated VR headsets this year [selling 15,000 units in 10 minutes when it went on sale yesterday for US$800/£570. Ed]. Rather than the 'outside-in' constellation system that forms the basis of the Oculus Rift's infrared tracking, the Vive takes a somewhat different approach. The system comes with two base stations (dubbed 'lighthouses') which, when placed in opposite corners of the room, fill it with cleverly structured laser light (see below). This allows the headset's 37 photosensors (70 in total, including each controller) to calculate their relative position in space. Not only does this enable high-fidelity, fluid tracking, it also allows the user to roam freely within around 5m² of space without the constraint of having to remain largely facing forward.

[To see this in action, watch Mark Hachman from Digital Arts' sister site IDG TV try it at the CES consumer tech show in January (below). Ed]

The obvious content for the home user is of course gaming. Given HTC's development partnership with Valve – the US games giant behind the Steam digital distribution system and the Half-Life, Portal and Dota franchises – this is no surprise: the ability to explore and roam a virtual world is at the heart of the medium, and an obvious place where this immersive hardware resonates.

This technology not only enables but encourages the user to engage in exploring, ducking, crouching and interacting with their virtual world in a very physical way. The hand controllers work best when used as tools or weapons because of the way they feel in your hand, and as of the latest pre-release version we have just received, the build quality (despite early concerns) is extremely high.

I refer to this differentiation between free-roaming experiences and forward-facing ones as 'forward bias'. While Oculus has focussed on the latter, the Valve system has taken a different approach. Both have their inherent strengths and weaknesses, but they remain different in terms of user experience. Content creators should therefore author dynamic, complementary experiences for each method to maximise the user's engagement. Some experiences designed for the Vive system don't translate directly to the Oculus, and vice versa.

Simply put, the Vive (above) works consistently in 360 degrees over the entire play-space. The Oculus CV1 and Touch controllers (below) are intuitive, ergonomic and high-fidelity, but are best suited to the front 180 degrees due to the tracking camera's positioning. Although the Oculus system will track you facing backwards, it's much more susceptible to occlusion, and as such Oculus aren't focussing on room-scale experiences for now. This split between approaches creates a dividing line in the way we design experiences for each system.

Image: Oculus

These are not the same challenges that content producers face in traditional cross-platform authoring; they have a deeper impact on the core experience design of how we interact in VR. Although it may appear simple to just move your rear-positioned interactions to the front, doing so can dramatically change the feeling of the experience, or, worse, break it.

So here are five key design implications of room-scale VR.

Responsive game design

This goes deeper once we understand that dynamic play-spaces are now a practical modulator for gameplay. The Vive system allows the user to define the area in which they want to play based on the space they have available. With the Vive, we ideally want to create dynamic versions of a level layout (if it's intended for end users), dependent on the play-space the user has defined. A player shouldn't be penalised for having a smaller play-space, or, worse, be unable to play at all.
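To make the idea concrete, here is a minimal sketch in Python of how a game might pick a level-layout variant responsive to the user's defined play-space. The variant names, sizes and `pick_layout` function are hypothetical illustrations, not part of any real SDK:

```python
from dataclasses import dataclass

@dataclass
class LayoutVariant:
    name: str
    min_width: float   # metres of play-space required
    min_depth: float

# Hypothetical variants, richest first; the last is an always-playable fallback
VARIANTS = [
    LayoutVariant("roomscale", 3.0, 3.0),
    LayoutVariant("standing", 2.0, 1.5),
    LayoutVariant("seated", 0.0, 0.0),
]

def pick_layout(play_width: float, play_depth: float) -> LayoutVariant:
    """Return the richest layout variant that fits the user's play-space."""
    for variant in VARIANTS:
        if play_width >= variant.min_width and play_depth >= variant.min_depth:
            return variant
    return VARIANTS[-1]
```

The key design point is the fallback: a user with a tiny play-space still gets a playable (seated) version of the level rather than being locked out.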

Conversely, for competitive and eventually online gameplay, we don't want to give the player a strategic advantage from having a smaller play-space, such as making it easier to reach items or complete tasks. Games where speed and navigation have strategic implications could be affected dramatically by dynamic play-spaces. Much as the growing adoption of mobile content gave rise to responsive design, we are now seeing an increasing demand for some level of responsive gameplay on the horizon.

It's not all a massive headache for experience designers, however: the system opens many avenues for creating rich, immersive experiences that bring the user much closer to the world we are creating, fundamental and multi-sensory experiences the user can inhabit. Another benefit of room-scale tracking is how quickly the user grasps the fundamentals of the input method, because it relies heavily on real-world gesture, movement and scale. Picking up a box, for example, is a much more intuitive action than the abstract one a traditional game controller requires.

Input methods

As for the controllers, the Vive's require a grip similar to holding a sword or torch, with input from triggers, buttons and a touchpad. This is fantastic for 'tool' grips, as well as guns and golf clubs, for example. The Touch controllers, however, feel much more natural when it comes to raw hand fidelity: the resting grip and finger sensors give the user very natural, intuitive use of their hands.

Both work well, but they don't translate directly between each other. Surgeon Simulator (below) is good fun on the Vive, but fantastic on the Oculus Touch. It's simply better suited to the content and the input method, making it far more enjoyable to play. Although many developers understandably wish to ship products that cater for a range of hardware, it's necessary to augment the experience to maximise intuitive input and improve comfort on each piece of hardware. This could be achieved by altering the grip angle of a gun, perhaps, or by arranging interactions in the scene to make them easier for the user to execute.

Traditionally, games rely on an input controller to translate the player through a larger world. While this works well for content on a screen, in VR the effect is massively undesirable. Problems with vertigo, vection and motion sickness challenge our vestibular system, creating an unnerving and unstable physical response in the user.

Challenges in translation

One of the early challenges of both room-scale and forward-bias experiences is developing new and novel ways of dealing with the limitations of physical play-space. One solution for navigation is to move the world rather than the player. Vehicles or moving platforms are ideal for keeping the user fixed in a relative sense, and can reintroduce the controller as an input method to navigate larger worlds without proving too uncomfortable.
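The 'move the world, not the player' idea can be sketched very simply: each frame, every world object is translated by the inverse of the player's intended velocity, so the tracked player rig never moves. This Python fragment is an illustrative sketch only; the function name and flat-tuple representation are assumptions for the example:

```python
def move_world(world_positions, player_velocity, dt):
    """Translate world objects opposite to the player's intended motion,
    leaving the (room-tracked) player rig stationary."""
    vx, vy, vz = player_velocity
    return [(x - vx * dt, y - vy * dt, z - vz * dt)
            for (x, y, z) in world_positions]
```

Because the player's real body stays put while the vehicle or platform carries the scene past them, the visual motion has a plausible physical cause, which is why this trick tends to be more comfortable than free artificial locomotion.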

Another successful method is teleportation. This provides a usable and comfortable method of large-scale translation when your narrative exists beyond your physical walls. It has been used successfully by a few early developers, most notably in Epic Games' Bullet Train (below), where it features not only as translation but as a strategic element of the gameplay. From our user testing at Rewind we have found it important never to re-orientate the player's heading, and to choose teleport locations that ensure they don't arrive face-to-face with a wall. It also helps to keep something in common in the player's field of view, to make the teleportation mechanic feel natural.


The temptation with room-scale VR is to overdo it. This works well for short, arcade-style experiences, but as we move to longer formats another consideration for designing wide-field exploration is fatigue. Input, gaze and motion fatigue limit how much we can and should ask our viewer to move around and interact. We need to stage the action to create peaks and troughs in how much movement we require from the viewer.

Big moments in the action, or the translation of a protagonist across the frame, can be a great opportunity to ask the user to follow. Similarly, a moment where the user is prompted to explore the scene can motivate people to enhance the experience by using this ability. Such motion should be both justified and spaced strategically: as a core mechanic in longer-format experiences, strenuous motion can prove tiring, and the art is in calling for it effectively and sparingly.

The same applies to input methods. We must be careful to design user interactions that don't ask too much of the user, and to allow regular breaks from high-frequency or sustained elevated actions.

Beyond gaming

Whilst gaming is going to be important, HTC has also announced an impressive roster of content partners for Vive, including HBO, Lionsgate and Google. No word on what this content will involve, but it's arguable that TV and film-style content may have a greater draw than games when it comes to attracting the casual consumer towards virtual reality.

Although the high-end consumer headsets will provide the very best VR has to offer, the high entry price and the need for ancillary computers, cables and space may prove too fussy for the everyday living room. Samsung's Gear VR and PlayStation VR may provide a more attractive platform for the mass adoption of VR in homes. We will certainly be looking forward to immersive theatre, music and movie tie-ins providing other, lateral applications of virtual reality beyond just games.
