Boneworks

This article is a repost promoting content originally published elsewhere.

I know only a small percentage of you use VR and to everyone else I might as well be telling you how spiffy the handrails are up in this ivory tower, but for what it’s worth, Boneworks is the first game in a while to make me think VR might be getting somewhere. It’s not there yet. The physics is full of little niggles, as you might expect from a game trying to juggle so much. The major issue with the climbing is that only your hands and head can be moved and your in-game legs just flop around getting in the way of things like two stubborn trails of cum dangling off your mum’s chin, but forget all that.

Speaking of VR, Yahtzee’s still playing with it and thinks it’s improving, which is high praise. So there’s hope yet.

I really need to dig my heavyweight gear out of the attic, but I’m waiting until we (eventually) move house. And I absolutely agree with Yahtzee’s observation about the value of VR games in which you can sit down, sometimes.

Difference Between VR and AR

This weekend, my sister Sarah challenged me to define the difference between Virtual Reality and Augmented Reality. And the more I talked about the differences between them, the more I realised that I don’t have a concrete definition, and I don’t think that anybody else does either.

A man wearing a VR headset while writing on a whiteboard.
AR: the man sees simulated reality and, probably, the whiteboard.
VR: the man sees simulated reality only, which may or may not include a whiteboard.
Either way: what the hell is he doing?

After all: from a technical perspective, any fully-immersive AR system – for example a hypothetical future version of the Microsoft Hololens that solves the current edition’s FOV problems – is a theoretical superset of any current-generation VR system. That AR augments the reality you can genuinely see, rather than replacing it entirely, becomes irrelevant if that AR system can superimpose a virtual environment covering your entire view. So the argument that AR, unlike VR, covers only part of your vision is not a reliable definition of the difference.

Spectrum showing AR view of the Louvre on the left (with 5% of the view occluded by UI) and VR view of a 3D model of the Louvre on the right (with 100% of the view occluded). The middle of the spectrum remains undefined.
Defining the difference by how much of your view is occluded by machine-generated images fails to pin down where the boundary between AR and VR lies. 5%? 50%? 99%?
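The arbitrariness of any occlusion-based boundary can be shown in a couple of lines. This is a toy sketch of my own, not anybody’s real taxonomy: whatever threshold you pick, the same half-occluded experience can land on either side of it.

```python
# Toy illustration (not a real standard): classifying an experience as
# "AR" or "VR" purely by how much of the view is machine-generated.
# The threshold is entirely arbitrary -- which is exactly the problem.

def classify_by_occlusion(occluded_fraction: float, threshold: float = 0.5) -> str:
    """Label an experience by the fraction of the view that is
    computer-generated. Any choice of threshold is indefensible."""
    if not 0.0 <= occluded_fraction <= 1.0:
        raise ValueError("occluded_fraction must be between 0 and 1")
    return "VR" if occluded_fraction >= threshold else "AR"

# The Louvre examples from the figure:
print(classify_by_occlusion(0.05))        # 5% overlay UI -> "AR"
print(classify_by_occlusion(1.00))        # full 3D model -> "VR"
# But the middle of the spectrum is undefined:
print(classify_by_occlusion(0.50))        # "VR"...
print(classify_by_occlusion(0.50, 0.51))  # ...or "AR"? Both defensible.
```

Nudge the threshold by one percentage point and the same experience changes category, which is a good sign that the category was never well-defined to begin with.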

This isn’t a new conundrum. Way back in 1994, when the Sega VR-1 was our idea of cutting edge, Milgram et al. developed a series of metaphorical spectra to describe the relationship between different kinds of “mixed reality” systems. The core difference, they argue, is whether the computer-generated content represents a “world” in itself (VR) or is just an “overlay” (AR).

But that’s unsatisfying for the same reason as above. The HTC Vive headset can be configured to use its front-facing camera(s) to fade seamlessly from the game world to the real world as the player gets close to the boundaries of their play space. This is a safety feature, but it doesn’t have to be: there’s no reason that a HTC Vive couldn’t be adapted to function as what Milgram would describe as a “class 4” device, which is functionally the same as a headset-mounted AR device. So what’s the difference?
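For illustration, here’s roughly how such a boundary fade might behave. This is a minimal sketch of my own, not Valve’s actual Chaperone code, and the rectangular play space and half-metre fade distance are assumptions: the headset blends linearly from the game world to the camera feed as the player nears a wall.

```python
# A sketch (not Valve's implementation) of fading from a rendered world
# to a passthrough camera feed near the edge of a rectangular play space.

def passthrough_blend(x: float, y: float,
                      half_width: float, half_depth: float,
                      fade_distance: float = 0.5) -> float:
    """Return 0.0 (all game world) .. 1.0 (all camera feed), based on the
    player's distance to the nearest play-space boundary. x and y are
    metres from the centre of the play area."""
    # Distance to the nearest wall of the rectangle (negative if outside).
    distance_to_edge = min(half_width - abs(x), half_depth - abs(y))
    if distance_to_edge >= fade_distance:
        return 0.0            # safely inside: pure game world
    if distance_to_edge <= 0:
        return 1.0            # at or past the boundary: pure passthrough
    return 1.0 - distance_to_edge / fade_distance  # linear cross-fade

# A 2m x 2m play area: the centre is fully virtual, the wall fully real.
print(passthrough_blend(0.0, 0.0, 1.0, 1.0))   # 0.0
print(passthrough_blend(0.9, 0.0, 1.0, 1.0))   # 0.8
print(passthrough_blend(1.0, 0.0, 1.0, 1.0))   # 1.0
```

Everything in between those extremes is a device showing you *some* reality and *some* simulation at once, which is exactly the ambiguity Milgram’s classes struggle with.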

You might argue that the difference between AR and VR is content-based: that is, it’s the thing that you’re expected to focus on that dictates which is which. If you’re expected to look at the “real world”, it’s an augmentation, and if not then it’s a virtualisation. But that approach fails to describe Google’s tech demo of putting artefacts in your living room via augmented reality (which I’ve written about before), because your focus is expected to be on the artefact rather than the “real world” around it. The real world only exists to help with the interpretation of scale: it’s not what the experience is about and your countertop is as valid a real world target as the Louvre: Google doesn’t care.

Milgram et al. (1994)'s Table 1: Some major differences between classes of Mixed Reality (MR) displays, showing seven classes: 1. monitor-based video with CG overlays, 2. headset-based video with CG overlays, 3. headset-based see-through display with CG overlays, 4. headset-based see-through video with CG overlays, 5. monitor/CG world with video overlays, 6. headset-based CG world with video overlays, and 7. CG-based world with real object intervention.
Categories 3 and 4 would probably be used to describe most contemporary AR; categories 6 and 7 to describe contemporary VR.

Many researchers echo Milgram’s idea that what turns AR into VR is when the computer-generated content completely covers your vision.

But even if we accept this explanation, the definition gets muddied by the wider field of “extended reality” (XR). Originally an umbrella term to cover both AR and VR (and “MR”, if you believe that’s a separate and independent thing), XR gets used to describe interactive experiences that cover other senses, too. If I play a VR game with real-world “props” that I can pick up and move around, but that appear differently in my vision, am I not “augmenting” reality? Is my experience, therefore, more or less “VR” than if the interactive objects exist only on my screen? What about if – as in a recent VR escape room I attended – the experience is enhanced by fans to simulate the movement of air around you? What about smell? (You know already that somebody’s working on bridging virtual reality with Smell-O-Vision.)

Dan and Robin concluding a VR Escape Room.
Not sure if you’re real or if you’re dreaming? Don’t ask an XR researcher; they don’t know either.

Increasingly, then, I’m beginning to feel that XR itself is a spectrum, and a pretty woolly one. Just as it’s hard to specify in a concrete way where the boundary exists between being asleep and being awake, it’s hard to mark where “our” reality gives way to the virtual and vice-versa.

It’s based upon the addition of information to our senses, by a computer, and there can be more (as in fully-immersive VR) or less (as in the subtle application of AR) of it… but the edges are very fuzzy. I guess that the spectrum of the visual experience of XR might look a little like this:

Comic showing different levels of "XR-ness", from the most to the least: The Matrix has you now, fully immersive VR, headset AR, "Magic Window" AR, conventional videogaming, HUDs, using google maps while walking, imagining playing Tetris, reality.

Honestly, I don’t know any more. But I don’t think my sister does either.


An Unusual Workday

Some days, my day job doesn’t seem like a job that a real person would have at all. It seems like something out of a sitcom. Today, I have:

  • Worn a bear mask in the office (panda in my case; seen below alongside my head of department, in a grizzly mask).
    Bears in the office
  • Chatted about popular TV shows that happen to contain libraries, for inclusion in a future podcast series.
  • Experimented with Web-based augmented reality as a possible mechanism for digital exhibition content delivery. (Seen this thing from Google Arts & Culture? If you don’t have an AR-capable device to hand, here’s a video of what it’s like.)
    Virtual Reality at the Bodleian
  • Implemented a demonstrative XSS payload targeting a CMS (as a teaching tool, to demonstrate how a series of minor security vulnerabilities can cascade into one huge one).
  • Gotten my ‘flu jab.

Not every day is like this. But sometimes, just sometimes, one can be.

Review of Raw Data

This review originally appeared on Steam. See more reviews by Dan.

The most intense and engaging VR experience I’ve ever had.

Whether you’re dodging and diving behind cover while you fire your pistol or you’re getting up-close with the androids as you swing your laser sword, you’re always on the move in this immersive, high-energy VR shooter. The teleport mechanic minimises motion sickness even for those who suffer badly, the graphics are nothing short of beautiful, and there’s nothing quite so terrifying as the moment that you realise that THERE’S ONE OF THEM BEHIND YOU! MOVE!

Immersive Storytelling – Thoughts on Virtual Reality, part 3

This is the (long-overdue) last in a three-part blog post about telling stories using virtual reality. Read all of the parts here.

For the first time in two decades, I’ve been playing with virtual reality. This time around, I’ve been using new and upcoming technologies like Google Cardboard and the Oculus Rift. I’m particularly interested in how these new experiences can be used as a storytelling medium by content creators, and the lessons we’ll learn about immersive storytelling by experimenting with them.

Annabel plays with a Google Cardboard with a Samsung Galaxy S6 Edge attached.
There are few user interfaces as simple as moving your own head. Even Annabel – who struggles with the idea that some screens aren’t touchscreens – manages.

It seems to me that the biggest questions that VR content creators will need to start thinking about as we collectively begin to explore this new (or newly-accessible) medium are:

How do we make intuitive user interfaces?

This question mostly relates to creators making “interactive” experiences. Superficially, VR gives user experience designers a running start because there’s little that’s as intuitive as “turning your head to look around” (and, in fact, trying the technology out on a toddler convinced me that it’s only adults – who already have an expectation of what a computer interface ought to be – who’ll find this challenging). On the other hand, most interactive experiences demand more user interaction than simply looking around, and therein lies the challenge. Using a keyboard while you’re wearing a headset is close to impossible (trust me, I’ve tried), although the augmented-reality approach of the Hololens and potentially even the front-facing webcam that’s been added to the HTC Vive PRE might be used to mitigate this. A gamepad is workable, but it’s slightly immersion-breaking in some experiences to hold your hands in a conventional “gamer pose”, as I discovered while playing my Gone Home hackalong: this was the major reason I switched to using a Wiimote.

A pair of Oculus Rift "touch" controllers.
All of the major VR manufacturers are working on single-handed controllers with spatial awareness and accessible buttons. Some also support haptic feedback so that you can “feel” UI components.

So far, I’ve seen a few attempts that don’t seem to work, though. The (otherwise) excellent educational solar system exploration tool Titans of Space makes players stare at on-screen buttons for a few seconds to “press” them, which is clunky and unintuitive: in the real world, we don’t press buttons with our eyes! I understand why they’ve done this: they’re ensuring that their software has the absolute minimum interface requirement that’s shared between the platforms that it supports, but that’s a concern too! If content creators plan to target two or more of the competing systems that will launch this year alone, will they have to make usability compromises?
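The dwell-to-press mechanic is easy to sketch, for the curious. This is my reconstruction of the idea, not Titans of Space’s actual code, and the two-second dwell time is an assumption: a button “fires” only once the gaze has rested on it continuously for the full dwell time, and glancing away resets the clock.

```python
# A reconstruction of gaze "dwell" selection: the lowest-common-denominator
# input method shared by headsets that have no controller at all.

class DwellButton:
    def __init__(self, dwell_seconds: float = 2.0):
        self.dwell_seconds = dwell_seconds
        self.gaze_time = 0.0   # how long the gaze has rested on us so far

    def update(self, gazing_at_me: bool, dt: float) -> bool:
        """Call once per frame with the frame time dt.
        Returns True on the single frame the 'press' fires."""
        if not gazing_at_me:
            self.gaze_time = 0.0      # looking away resets the timer
            return False
        before = self.gaze_time
        self.gaze_time += dt
        # Fire exactly once, when the accumulated gaze crosses the threshold.
        return before < self.dwell_seconds <= self.gaze_time

button = DwellButton(dwell_seconds=2.0)
frames = [True] * 130                  # gaze held for 130 frames at 60 fps
presses = [button.update(g, 1 / 60) for g in frames]
print(presses.count(True))             # fires exactly once, around frame 120
```

The sketch also makes the usability problem concrete: the user pays a mandatory multi-second delay on *every* interaction, and can accidentally trigger a button merely by reading it too slowly.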

There’s also the question of how we provide ancillary information to players: the long-established paradigms of “health in the bottom left, ammo in the bottom right” don’t work so obviously when they’re hidden in your peripheral vision. Games like Elite Dangerous have tackled this problem from their inception by making a virtualised “real” user interface comprised of the “screens” in the spaceship around you, but it’s an ongoing challenge for titles that target both VR and conventional platforms in future. Wareable made some great observations about these kinds of concerns, too.

How do we tell stories without forced visual framing?

In my previous blog post, I talked about a documentary that used 360° cameras to “place” the viewer among the protesters that formed the subject of the documentary. In order to provide some context and to reduce the disorientation experienced by “jumping” from location to location, the creator opted to insert “title slides” between scenes with text explaining what would be seen next. But title slides necessitate that the viewer is looking in a particular direction! In the case of this documentary and several other similar projects I’ve seen, the solution was to put the title in four places – at each of the four cardinal directions – so that no matter which way you were looking, you’d probably be able to find one. But title slides are only a small part of the picture.
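The four-cardinal-directions trick is simple to express in code. A sketch under my own assumptions (yaw measured in degrees, card copies placed at the compass points): wherever the viewer happens to face, some copy of the title is within 45° of the centre of their view.

```python
# Place a copy of the title card at each cardinal direction, so a copy
# is always near the viewer's line of sight regardless of facing.

def title_card_yaws() -> list[float]:
    """Yaw angles (degrees) at which copies of the title card are placed."""
    return [0.0, 90.0, 180.0, 270.0]

def angular_distance(a: float, b: float) -> float:
    """Shortest angle, in degrees, between two headings."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def nearest_card(viewer_yaw: float) -> float:
    """The copy the viewer is closest to facing. With four cardinal
    copies, no viewer is ever more than 45 degrees from a card."""
    return min(title_card_yaws(),
               key=lambda card: angular_distance(viewer_yaw, card))

print(nearest_card(350.0))   # 0.0 -- the viewer need only glance slightly right
```

It’s a blunt instrument, though: it solves “can the viewer find the text?” but not the deeper problem of directing attention within the scene itself.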

Two gamers wearing Oculus Rift headsets.
Does anybody else see photos like this and get reminded of the pictures of hooded captives at interrogation camps?

Directors producing content – whether interactive or not – for virtual reality will have to think hard about the implications of the fact that their camera (whether a physical camera or – slightly easier and indeed more-controllable – a simulated camera in a 3D-rendered world) can look in any direction. Sets must be designed to be all-encompassing, which poses huge challenges for the traditional methods of producing film and television programmes. Characters’ exits and entrances must be through believable portals: they can’t simply walk off to the left and stop. And, of course, the content creator must find a way to get the audience’s attention when they need it: watching the first few minutes of Backstage with an Elite Ballerina, for example, puts you in a spacious dance studio with a spritely ballerina to follow… but there’s nothing to stop you looking the other way (perhaps by accident), and – if you do – you might miss some of the action or find it difficult to work out where you’re supposed to be looking. Expand that to a complex, busy scene like, say… the ballroom scene in Labyrinth… and you might find yourself feeling completely lost within a matter of minutes (of course, a feeling of being lost might be the emotional response that the director intends, and hey – VR is great for that!).

Sarah and Jareth dance in the ballroom scene of Labyrinth.
You’re looking the wrong way. Turn around, and you’ll see the best part of the movie.

The potential for VR in some kinds of stories is immense, though. How about a murder mystery story played out in front of you in a dollhouse (showing VR content “in miniature” can help with the motion sickness some people feel if they’re “dragged” from scene to scene): you can move your head to peep in to any room and witness the conversations going on, but the murder itself happens during a power cut or otherwise out-of-sight and the surviving characters are left to deduce the clues. In such a (non-interactive) experience the spectator has the option to follow the action in whatever way they like, and perhaps even differently on different playthroughs, putting the focus on the rooms and characters and clues that interest them most… which might affect whether or not they agree with the detective’s assertions at the end…

What new storytelling mechanisms can this medium provide?

As I mentioned in the previous blog post, we’ve already seen the evolution of storytelling media on several occasions, such as the jump from theatre to cinema and the opportunities that this change eventually provided. Early screenwriters couldn’t have conceived of some of the tools used in modern films, like the use of long flowing takes for establishing shots or the use of fragmented hand-held shots to add an excited energy to fight scenes. It wasn’t for lack of imagination (Georges Méliès realised back in the nineteenth century that timelapse photography could be used to produce special effects not possible in theatre) but rather a lack of the technology and, more importantly, a lack of the maturity of the field. There’s an ongoing artistic process whereby storytellers learn new ways to manage their medium from one another: Romeo Must Die may have made clever use of a “zoom-to-X-ray” when a combatant’s bones were broken, but it wouldn’t have been possible if The Matrix hadn’t shown the potential for “bullet time” the previous year. And if we’re going down that road: have you seen the bullet time scene in Zotz!, a film that’s older than the Wachowskis themselves?

Clearly, we’re going to discover new ways of telling stories that aren’t possible with traditional “flat screen” media nor with more-immersive traditional theatre: that’s what makes VR as a storytelling tool so exciting.

Perhaps the original cinematic use of bullet time, in Zotz!
The original use of bullet time still wasn’t entirely new, as the original bullet predates it by hundreds of years.

Of course, we don’t yet know what storytelling tools we’ll find in this medium, but some ideas I’ve been thinking about are:

  • Triggering empathetic responses by encouraging the audience to more-closely relate to the situation of characters by putting them more-directly “in their shoes”. That Dragon, Cancer, an autobiographical game about the experience of a child’s terminal cancer, is an incredibly emotive experience… but only begins to touch upon the emotional journeys possible through virtual reality: what’s it really like to be close to somebody who’s terminally ill?
  • Allowing spectators to spectate a story in their own way, or from a perspective that they choose and control. We’ve already begun to explore this as a concept with the (little-used) multi-angle feature on DVDs: for example, if you’ve got the special edition of Die Hard then you can rewatch certain scenes and flick between different cameras as you watch. But that’s nothing on the potential for future animated films to allow you to walk or fly around and watch from any angle… or in the case of interactive experiences, to influence the direction that the story takes by your actions or even just by your presence: how about a heist story in which the burglars will only carry out their plan if they can’t tell that you’re watching them, forcing you to be surreptitious in your glances over to see what they’re up to?
"Fly" motion simulator
There’s no need to build a rollercoaster at all: a good motion simulator plus a VR headset can probably provide a similar experience.
  • Combining VR with motion simulation: Alton Towers is leading the way here, with their announcement that they’re going to re-engineer the Air rollercoaster into Galactica, upon which the ride gives the sensation of motion while a Samsung Gear VR headset simulates an otherwise-impossible spacefaring experience, and I’m hugely excited about the prospect. But a more-adaptable and economical way to achieve a similar result would be to repurpose a motion simulator: the good ones can provide the sensation of g-forces on almost any vector for an extended period of time; the really good ones can provide short bursts of g-forces at levels other than that provided by Earth’s gravity (usually by flinging the carriage around on a programmable shuttle arm, although weightlessness is still unfeasible while you remain on the ground). If you didn’t think that 2013’s Gravity was nauseating enough when it was merely in 3D, wait until you try a similar experience in motion-assisted virtual reality.
  • Point-of-view framing: this paradigm has always been at least a little unsatisfying in regular movies. I mean, it might have been the best moment in Doom, but that’s more to do with how appalling that film was than how good the technique is! But stepping into the viewpoint of another human and being able to look around has great potential for immersion-building without allowing the participant to stray too far from the main storyline. Something that people who haven’t yet experienced VR don’t often appreciate is that a few little things can really improve the experience of immersion… things like being able to move your head, even if just by a few degrees, make you feel like you’re “there”. There are some big challenges to overcome with this, of course, such as how to make the movement of the camera not make the watcher feel ‘dragged along’, especially if their experience is of moving sideways… but these are challenges that will probably be solved for us quickly by the porn industry, who’re working very hard on making this kind of experience seamless. Just like the leaps and bounds we took with streaming video, yet again technology will get to thank people’s love of porn for advancing what home computers are capable of.
GIF showing a variety people watching VR porn. [SFW]
Nothing in this GIF reflects how people will genuinely watch VR porn. There’ll be a lot more lube and a lot fewer clothes, I guarantee it.
  • Exploring therapeutic experiences: until I really started trying out different VR gear, I didn’t think that it would be sufficiently engaging to be able to trigger a strong enough response to be useful in a therapeutic capacity. But after the first time I came out of a 10-minute game of Caaaaardboard! feeling genuinely wobbly at the knees in the same way as after my first parachute jump, I realised that modern VR really can produce an experience that results in a psychosomatic response. And that’s really important, because it provides a whole new medium in which we can treat (and, I suppose, study), for example, phobias in a controlled and ‘escapable’ environment. Of course, that raises other questions too, such as: is it possible to cause disorders like PTSD with virtual reality? If it’s simply the case that optimally-made VR is more-immersive than the best possible “flat screen” experiences and that it’s this that can improve its therapeutic potential, then surely it can be more-traumatic, too: I know enough people that were emotionally-scarred by Bambi‘s mother’s death, E.T.‘s almost-death, or that one scene from Watership Down that gave me nightmares for years: how much more (potentially)-damaging could a VR experience be? Whether or not it’s truly the case, it’ll only take one or two media circuses about murderous psychopaths who are unable to differentiate their virtual reality from the real kind before people start getting asked these kinds of questions.
  • Oh, and this webcomic, of course.

As I’m sure I’ve given away over these last three blog posts, I’m really interested in the storytelling potential of VR, and you can bet I’ll be bothering you all again with updates of the things I get to play with later this year (and, in fact, some of the cool technologies I’ve managed to get access to just while I’ve been writing up these blog posts).

If you haven’t had a chance to play with contemporary VR, get yourself a cardboard. It’s dirt-cheap and it’s (relatively) low-tech and it’s nowhere near as awesome as “real” hardware solutions… but it’s still a great introduction to what I’m talking about and it’s absolutely worth doing. And if you have, I’d love to hear your thoughts on storytelling using virtual reality, too.


Immersive Storytelling – Thoughts on Virtual Reality, part 2

This is the second in a three-part blog post about telling stories using virtual reality. Read all of the parts here.

I’m still waiting to get in on the Oculus Rift and HTC Vive magic when they’re made generally-available, later this year. But for the meantime, I’m enjoying quite how hackable VR technologies are. I chucked my Samsung Galaxy S6 edge into an I Am Cardboard DSCVR, paired it with a gaming PC using TrinusVR, used GlovePIE to hook up a Wii remote (playing games with a keyboard or even a gamepad is challenging if your headset doesn’t have a headstrap, so a one-handed control is needed), and played a game of Gone Home. It’s a cheap and simple way to jump into VR gaming, especially if – like me – you already own the electronic components: the phone, PC, and Wiimote.
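GlovePIE’s role in that chain is essentially declarative remapping: each tick, read the Wiimote’s buttons and emit the keyboard or mouse events the game expects. Here’s a Python sketch of the same idea; the button names and bindings below are hypothetical illustrations of my own, not GlovePIE syntax or my actual configuration.

```python
# Sketch of the remapping a tool like GlovePIE performs: translate held
# Wiimote buttons into the key events a game expects each tick.
# All names here are hypothetical, chosen for illustration only.

WIIMOTE_TO_KEY = {
    "dpad_up": "w",            # walk forward
    "dpad_down": "s",          # walk backward
    "dpad_left": "a",          # strafe left
    "dpad_right": "d",         # strafe right
    "button_a": "e",           # interact / pick up
    "button_b": "mouse_left",  # examine
}

def events_for(pressed_buttons: set[str]) -> list[str]:
    """Translate a set of held Wiimote buttons into the key events to
    send to the game, in a stable (sorted) order; unmapped buttons are
    ignored."""
    return [WIIMOTE_TO_KEY[b] for b in sorted(pressed_buttons)
            if b in WIIMOTE_TO_KEY]

print(events_for({"dpad_up", "button_a"}))  # ['e', 'w']
```

The point of the one-handed controller becomes obvious when you see how few bindings a game like Gone Home actually needs: movement, interact, examine, and little else.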

An I Am Cardboard, a Samsung Galaxy S6, and a Nintendo Wii Remote
My VR system is more-ghetto than yours.

While the media seems to mostly fixate on the value of VR in “action” gaming – shoot-’em-ups, flight simulators, etc. – I actually think there’s possibly greater value in more story-driven genres. I chose Gone Home for my experiment, above, because it’s an adventure that you play at your own pace, where the amount you get out of it as a story depends on your level of attention to detail, not how quickly you can pull a trigger. Especially on this kind of highly-affordable VR gear, “twitchy” experiences that require rapid head turning are particularly unsatisfying, not least because the response time of even the fastest screens is always going to be significantly slower than that of real life. But as a storytelling medium (especially in an affordable form) it’s got incredible potential.

Screengrab from Hong Kong Unrest - a 360° Virtual Reality Documentary.
Nothing quite gives you a feel of the human scale of the Hong Kong protests like being able to look around you, as if you’re stood in the middle of them.

I was really pleased to discover that some content creators are already experimenting with the storytelling potential of immersive VR experiences. An example would be the video Hong Kong Unrest – a 360° Virtual Reality Documentary, freely-available on YouTube. Standing his camera (presumably a Jump camera rig, or something similar) amongst the crowds of the 2014 Hong Kong protests, the creator of this documentary gives us a great opportunity to feel as though we’re standing right there with the protesters. The sense of immersion of being “with” the protesters is, in itself, a storytelling statement that shows the filmmaker’s bias: you’re encouraged to empathise with the disenfranchised Hong Kong voters, to feel like you’re not only with them in a virtual sense, but emotionally with them in support of their situation. I’m afraid that watching the click-and-drag version of the video doesn’t do it justice: strap a Cardboard to your head to get the full experience.

An augmented reality shoot-'em-up game via Microsoft Hololens
Don’t go thinking that I’m not paying attention to the development of the Hololens, too: I am, because it looks amazing. I just don’t know… what it’s for. And, I suspect, neither does Microsoft.

But aside from the opportunities it presents, Virtual Reality brings huge new challenges for content creators, too. Consider that iconic spaghetti western The Good, The Bad, And The Ugly. The opening scene drops us right into one of the artistic themes of the film – the balance of wide and close-up shots – when it initially shows us a wide open expanse but then quickly fills the frame with the face of Tuco (“The Ugly”), giving us the experience of feeling suddenly cornered and trapped by this dangerous man. That’s a hugely valuable shot (and a director’s wet dream), but it represents something that we simply don’t have a way of translating into an immersive VR setting! Aside from the obvious fact that the viewer could simply turn their head and ruin the surprise of the shot, it’s just not possible to fill the frame with the actor’s face in this kind of way without forcing the focal depth to shift uncomfortably.

Opening scene of The Good, The Bad, And The Ugly.
Sergio Leone’s masterpiece makes strategic use of alternating close and wide shots (and shots like the opening, which initially feels open but rapidly becomes claustrophobic).

That’s not to say that there exist stories that we can’t tell using virtual reality… just that we’re only just beginning to find our feet with this new medium. When stage directors took their first steps into filmmaking in the early years of the 20th century, they originally tried to shoot films “as if” they were theatre (albeit, initially, silent theatre): static cameras shooting an entire production from a single angle. Later, they discovered ways in which this new medium could provide new ways to tell stories: using title cards to set the scene, close-ups to show actors’ faces more-clearly, panning shots, and so on.

Similarly: so long as we treat the current generation of VR as something different from the faltering steps we took two and a half decades ago, we’re in frontier territory and feeling our way in VR, too. Do you remember when smartphone gaming first became a thing and nobody knew how to make proper user interfaces for it? Often your tiny mobile screen would simply try to emulate classic controllers, with a “d-pad” and “buttons” in the corners of the screen, and it was awful… but nowadays, we better-understand the relationship that people have with their phones and have adapted accordingly (perhaps the ultimate example of this, in my opinion, is the addictive One More Line, a minimalist game with a single-action “press anywhere” interface).

Dan plays Back To Dinosaur Island on an Oculus Rift.
A few seconds after this photograph was taken, a T-rex came bounding out from the treeline and I literally jumped.

I borrowed an Oculus Rift DK2 from a co-worker’s partner (have I mentioned lately that I have the most awesome co-workers?) to get a little experience with it, and it’s honestly one of the coolest bits of technology I’ve ever had the privilege of playing with: the graphics, comfort, and responsiveness blow Cardboard out of the water. One of my first adventures – Crytek’s tech demo Back to Dinosaur Island – was a visual spectacle despite my apparently-underpowered computer (I’d hooked the kit up to Gina, my two-month-old 4K-capable media centre/gaming PC: I suspect that Cosmo, my multi-GPU watercooled beast, might have fared better). But I’ll have more to say about that – and the lessons I’ve learned – in the final part of this blog post.


Immersive Storytelling – Thoughts on Virtual Reality, part 1

This is the first in a three-part blog post about telling stories using virtual reality. Read all of the parts here.

As part of my work at the Bodleian… but to a greater extent “just for fun”… I’ve spent the last few weeks playing with virtual reality. But first, a history lesson.

Dan stomps around his office wearing a Google Cardboard.
Virtual Reality’s biggest failing is that its sheer coolness is equally offset by what an idiot you look like when you’re using it.

This isn’t the first time I’ve used virtual reality. The first time, for me, was in the early 1990s, at the Future Entertainment Show, where I queued for a shot at Grid Busters on a Virtuality 1000-CS. The Virtuality 1000 was powered by an “Expality”: functionally an Amiga 3000 with specially-written software for reading the (electromagnetically-sensed) facing of the headset and the accompanying “space joystick”… and providing output via a pair of graphics cards (one for each eye) to LCD screens. The screens were embedded in chunky bits on the sides of the helmet and projected towards mirrors and lenses at the far end – this apparently being an effort to reduce how “front-heavy” it felt, but I can tell you that in practice a Virtuality headset felt weighty on your neck, even for its era!

Nonetheless, the experience stuck with me: I returned to school and became the envy of my friends (the nerdy ones, at least) when I told them about my VR adventure, and – not least thanks to programs like Tomorrow’s World and, of course, the episode of Bad Influence that reminded me quite how badly I wanted to get myself down to Nottingham for a go at Legend Quest – I was genuinely filled with optimism that within the decade, playing a VR game would have gone from the fringes of science fiction to being something where everybody-knew-somebody who did it routinely.

A Virtuality 1000 CS system.
A modern computer and VR headset combined probably weighs less than this reconditioned Virtuality 1000 headset.

I never managed to get to play Legend Quest, and that first “VR revolution” swiftly fell flat. My generation was promised all of the hi-tech science, immersion, and magical experience of The Lawnmower Man, but all we were left with was the overblown promises, expensive effects, and ill-considered user experience of, well… The Lawnmower Man. I discovered Virtuality machines in arcades once or twice, but they seemed to be out-of-order more often than not, and they quickly disappeared. You can’t really blame the owners of arcades: if a machine costs you in the region of £40,000 to buy and you can charge, say, £1 for a 3-minute go on it (bear in mind that even the most-expensive digital arcade machines tended to charge only around 30p, at this time, and most were 10p or 20p), and it needs supervision, and it can’t be maintained by your regular guy… well, that swiftly begins to feel like a bad investment.
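To put some entirely illustrative numbers on that: here’s a back-of-envelope sketch using the figures above (a ~£40,000 machine charging £1 per 3-minute go), plus my own generous assumptions about opening hours and how busy the machine stays. Even before you pay the specialist supervision and maintenance, the payback period is most of a year:

```python
# Back-of-envelope arcade economics. The machine cost and price-per-go
# come from the text above; opening hours and utilisation are my own
# (generous) illustrative assumptions.
machine_cost = 40_000      # purchase price, £
price_per_go = 1.00        # £ per session
minutes_per_go = 3

opening_minutes = 12 * 60  # assume a 12-hour opening day
utilisation = 0.5          # assume the machine is occupied half the time

goes_per_day = (opening_minutes * utilisation) / minutes_per_go
revenue_per_day = goes_per_day * price_per_go
days_to_break_even = machine_cost / revenue_per_day

print(f"{goes_per_day:.0f} goes/day, £{revenue_per_day:.0f}/day, "
      f"{days_to_break_even:.0f} days to recoup the machine")
# → 120 goes/day, £120/day, 333 days to recoup the machine
```

And that 333-day figure is revenue, not profit: subtract staffing, maintenance, and the arcade’s cut, and it’s easy to see why the machines vanished.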

Jobe's first experience of virtual reality, in 1992's The Lawnmower Man.
The Lawnmower Man has a lot to answer for.

Plus, the fifth generation of games consoles came along: the (original) Sony PlayStation, the Nintendo N64, and – if you really wanted the highest-technology system (with the absolute least imaginative developers) – the Sega Saturn. These consoles came at price points that made them suitable Christmas gifts for the good boys and girls of middle-class parents and sported 3D polygon graphics of the type that had previously only been seen in arcades, and the slow decline of the video arcade accelerated dramatically. But home buyers couldn’t afford five-figure (still moderately-experimental) VR systems, and the market for VR dried up in a matter of years. Nowadays, if you want to play on a Virtuality machine like the one I did, you need to find a collector (you might start with this guy from Leicester, whose website was so useful in jogging my memory while I wrote this blog post).

The Dean's VR machine, in Season 6 of Community, was clearly inspired by Virtuality.
And Jesus wept, for there were no more VR machines anywhere for, like, two decades.

2016 is the year in which this might change. The need for ubiquitous cheap computing has made RAM and even processors so economical that we throw them away when we’re done with them. The demands of modern gaming computers and consoles have given us fast but affordable graphics rendering hardware. And the battle for the hottest new smartphones each year has helped to produce light, bright, high-resolution screens no bigger than the palm of your hand.

In fact, smartphones are now the simplest and cheapest way to play with VR. Under the assumption that you’ve already got a smartphone, you’re only a couple of cheap plastic lenses and a bit of cardboard away from doing it for yourself. So that’s how my team and I started out playing: with the wonderfully-named Google Cardboard. I know that Google Cardboard is old-hat and all the early adopters have even got their grandmothers using it now, but it’s still a beautiful example of how economical VR threatens to become if this second “VR revolution” takes hold. Even if you didn’t already own a compatible smartphone, you could buy a second-hand one on eBay for as little as £30: that’s an enormous difference from the £40K Virtuality machines of my youth, which had only a fraction of the power.

Liz plays with a Google Cardboard.
An original-style Google Cardboard makes you look as much of a fool as any VR headset does. But more-specifically like a fool with a box on their head.

I’m going somewhere with this, I promise: but I wanted to have a jumping-off point from which to talk about virtual reality more-broadly first and it felt like I’d be overstretching if I jumped right in at the middle. Y’know, like the second act of The Lawnmower Man. In the next part of this series, I’d like to talk about the storytelling opportunities that modern VR offers us, and some of the challenges that come with it, and share my experience of playing with some “proper” modern hardware – an Oculus Rift.
