Once again narrowly beaten to an FTF by Go Catch! I don’t live a million miles from here, so I took a diversion on my cycle home. Knew exactly where to start looking, and spotted the
container before I’d even dismounted from my bike! My pen’s leaking a bit, though: sorry for the inky fingerprints on the back of the log!
This is the (long-overdue) last part of a three-part series of blog posts about telling stories using virtual reality. Read all of the
parts here.
For the first time in two decades, I’ve been playing with virtual reality. This time around, I’ve been using new and upcoming technologies like Google Cardboard and the Oculus Rift. I’m particularly interested in how these new experiences can be
used as a storytelling medium by content creators, and the lessons we’ll learn about immersive storytelling by experimenting with them.
It seems to me that the biggest questions that VR content creators will need to start thinking about as we collectively begin to explore this new (or newly-accessible) medium are:
How do we make intuitive user interfaces?
This question mostly relates to creators making “interactive” experiences. Superficially, VR gives user experience designers a running start because there’s little that’s as intuitive
as “turning your head to look around” (and, in fact, trying the technology out on a toddler convinced me that it’s only adults – who already have an expectation of what a computer interface ought to be – who’ll find this challenging). On the other hand, most interactive experiences demand more user interaction than simply looking around, and
therein lies the challenge. Using a keyboard while you’re wearing a headset is close to impossible (trust me, I’ve tried), although the augmented-reality approach of the Hololens and potentially even the front-facing webcam that’s been added to the HTC Vive PRE
might be used to mitigate this. A gamepad is workable, but it’s slightly immersion-breaking in some experiences to hold your hands in a conventional “gamer pose”, as I discovered while
playing my Gone Home hackalong: this was the major reason I switched to using a Wiimote.
So far, I’ve seen a few attempts that don’t seem to work, though. The (otherwise) excellent educational solar system exploration tool Titans of Space makes players stare at on-screen buttons for a few seconds to “press” them, which is clunky and unintuitive: in the
real world, we don’t press buttons with our eyes! I understand why they’ve done this: they’re ensuring that their software has the absolute minimum interface requirement that’s shared
between the platforms that it supports, but that lowest-common-denominator approach is a concern in itself! If content creators plan to target two or more of the competing systems that will launch this year alone, will they
have to make usability compromises?
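To make that gaze-to-press pattern concrete, here’s a minimal sketch of how a dwell-to-activate button might be implemented – purely an assumption about the general technique, not a description of how Titans of Space actually works – using three.js: cast a ray from the centre of the view each frame and only “press” a button once it’s been stared at for a couple of seconds.

```typescript
import * as THREE from 'three';

const DWELL_TIME_MS = 2000;                    // assumed dwell time before a "press" registers
const raycaster = new THREE.Raycaster();
const centreOfView = new THREE.Vector2(0, 0);  // "gaze" = whatever sits in the middle of the viewport

let gazeTarget: THREE.Object3D | null = null;
let gazeStartedAt = 0;

// Call once per frame from the render loop, passing performance.now() as `now`.
function updateGaze(camera: THREE.Camera, buttons: THREE.Object3D[], now: number): void {
  raycaster.setFromCamera(centreOfView, camera);
  const hit = raycaster.intersectObjects(buttons)[0];

  if (!hit) {
    gazeTarget = null;                         // looked away: reset the dwell timer
    return;
  }

  if (hit.object !== gazeTarget) {
    gazeTarget = hit.object;                   // started staring at a different button
    gazeStartedAt = now;
  } else if (now - gazeStartedAt >= DWELL_TIME_MS) {
    console.log(`Pressed ${hit.object.name}`); // stand-in for whatever the "press" should trigger
    gazeTarget = null;
  }
}
```

In practice you’d also want to draw a slowly-filling reticle so the user knows their stare is doing something, which only underlines how much ceremony this interaction needs compared to a physical button.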
There’s also the question of how we provide ancillary information to players: the long-established paradigms of “health in the bottom left, ammo in the bottom right” don’t work so
obviously when they’re hidden in your peripheral vision. Games like Elite Dangerous have tackled this problem from their inception
by making a virtualised “real” user interface composed of the “screens” in the spaceship around you, but it’s an ongoing challenge for titles that target both VR and conventional
platforms in future. Wareable made some great observations about these kinds of concerns, too.
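By way of illustration, that “virtualised HUD” approach boils down to parenting the readouts to the cockpit (or camera rig) as ordinary objects rather than drawing them in screen space. This is only a sketch – makeHudPanel is a hypothetical helper returning a textured plane, and the numbers are guesses – not a description of how Elite Dangerous is actually built:

```typescript
import * as THREE from 'three';

// Attach HUD panels to the cockpit so they exist in the world, a glance away,
// rather than being painted over the player's peripheral vision.
function buildCockpitHud(
  cockpit: THREE.Object3D,
  makeHudPanel: (label: string) => THREE.Mesh  // hypothetical: returns a small textured plane
): void {
  const health = makeHudPanel('health');
  health.position.set(-0.4, -0.25, -1);        // low and to the left, about a metre away
  health.rotation.y = 0.3;                     // angled in towards the pilot

  const ammo = makeHudPanel('ammo');
  ammo.position.set(0.4, -0.25, -1);           // low and to the right
  ammo.rotation.y = -0.3;

  cockpit.add(health);
  cockpit.add(ammo);
}
```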
How do we tell stories without forced visual framing?
In my previous blog post, I talked about a documentary that used 360° cameras to “place” the viewer among the protesters that formed the subject of the documentary. In order to provide
some context and to reduce the disorientation experienced by “jumping” from location to location, the creator opted to insert “title slides” between scenes with text explaining what
would be seen next. But title slides necessitate that the viewer is looking in a particular direction! In the case of this documentary and several other similar projects I’ve seen, the
solution was to put the title in four places – at each of the four cardinal directions – so that no matter which way you were looking you’d probably be able to find one. But
title slides are only a small part of the picture.
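Incidentally, the four-cardinal-points trick is easy to reproduce in a rendered scene as well. Here’s a tiny sketch – makeTitleCard is a hypothetical helper that returns a textured plane bearing the title – that rings the viewer with copies of the same card so that at least one is always roughly in front of them:

```typescript
import * as THREE from 'three';

function addTitleSlides(scene: THREE.Scene, makeTitleCard: () => THREE.Mesh): void {
  const distance = 3;    // metres from the viewer (an arbitrary, comfortable reading distance)
  const height = 1.6;    // roughly eye level for a standing viewer

  const positions = [
    new THREE.Vector3(0, height, -distance),   // "north"
    new THREE.Vector3(distance, height, 0),    // "east"
    new THREE.Vector3(0, height, distance),    // "south"
    new THREE.Vector3(-distance, height, 0),   // "west"
  ];

  for (const position of positions) {
    const card = makeTitleCard();
    card.position.copy(position);
    card.lookAt(new THREE.Vector3(0, height, 0)); // turn each copy to face the viewer
    scene.add(card);
  }
}
```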
Directors producing content – whether interactive or not – for virtual reality will have to think hard about the implications of the fact that their camera (whether a physical camera or
– slightly easier and indeed more-controllable – a simulated camera in a 3D-rendered world) can look in any direction. Sets must be designed to be all-encompassing, which poses
huge challenges for the traditional methods of producing film and television programmes. Characters’ exits and entrances must be through believable portals: they can’t simply walk off to the left and stop, because the viewer can turn and see them standing there. And, of course, the content creator must find a way to get the audience’s attention when they need it: watching the first few minutes of Backstage with an Elite Ballerina, for example, puts you in a spacious dance studio with a sprightly ballerina to follow… but
there’s nothing to stop you looking the other way (perhaps by accident), and – if you do – you might miss some of the action or find it difficult to work out where you’re
supposed to be looking. Expand that to a complex, busy scene like, say… the ballroom scene in Labyrinth… and you might find yourself feeling completely lost within a matter of minutes (of course, a feeling of being
lost might be the emotional response that the director intends, and hey – VR is great for that!).
The potential for VR in some kinds of stories is immense, though. How about a murder mystery played out in front of you in a dollhouse (showing VR content “in miniature” can help with the motion sickness some people feel if they’re “dragged” from scene to scene): you can move your head to peep into any room and witness the conversations going on, but the murder itself happens during a power cut or otherwise out-of-sight, and the surviving characters are left to piece together the clues. In such a (non-interactive) experience the spectator has the
option to follow the action in whatever way they like, and perhaps even differently on different playthroughs, putting the focus on the rooms and characters and clues that interest them
most… which might affect whether or not they agree with the detective’s assertions at the end…
What new storytelling mechanisms can this medium provide?
As I mentioned in the previous blog post, we’ve already seen the evolution of storytelling media on several occasions, such as the jump from theatre to cinema and the opportunities that
this change eventually provided. Early screenwriters couldn’t have conceived of some of the tools used in modern films, like the use of long flowing takes for establishing shots or the
use of fragmented hand-held shots to add an excited energy to fight scenes. It wasn’t for lack of imagination (Georges
Méliès realised back in the nineteenth century that stop-trick photography could be used to produce special effects not possible in theatre) but rather a lack of the technology and, more importantly, of the maturity of the field. There’s an ongoing artistic process whereby storytellers learn new ways to use their medium from one another:
Romeo Must Die may have made clever use of a “zoom-to-X-ray” when a combatant’s bones were broken, but it wouldn’t
have been possible if The Matrix hadn’t shown the potential for “bullet time” the previous year. And if we’re going down
that road: have you seen the bullet time scene in Zotz!, a film
that’s older than the Wachowskis themselves?
Clearly, we’re going to discover new ways of telling stories that aren’t possible with traditional “flat screen” media or with more-immersive traditional theatre: that’s what
makes VR as a storytelling tool so exciting.
Of course, we don’t yet know what storytelling tools we’ll find in this medium, but some ideas I’ve been thinking about are:
Triggering empathetic responses by putting the audience more-directly “in the shoes” of characters, encouraging them to relate more-closely to those characters’ situations.
That Dragon, Cancer, an autobiographical game about the experience of a child’s terminal cancer, is an incredibly emotive
experience… but only begins to touch upon the emotional journeys possible through virtual reality: what’s it really like to be close to somebody who’s terminally ill?
Allowing spectators to experience a story in their own way, or from a perspective that they choose and control. We’ve already begun to explore this as a concept with
the (little-used) multi-angle feature on DVDs: for example, if you’ve got the special edition of Die Hard then you can
rewatch certain scenes and flick between different cameras as you watch. But that’s nothing on the potential for future animated films to allow you to walk or fly around and watch
from any angle… or in the case of interactive experiences, to influence the direction that the story takes by your actions or even just by your presence: how about a heist story in
which the burglars will only carry out their plan if they can’t tell that you’re watching them, forcing you to be surreptitious in your glances over to see what they’re up to? (There’s a rough sketch of that kind of “are you watching?” check after this list.)
Combining VR with motion simulation: Alton Towers is leading the way here with their announcement that they’re re-engineering the Air rollercoaster into Galactica, on which the ride provides the sensation of motion while a Samsung Gear VR headset simulates an otherwise-impossible spacefaring experience. I’m hugely excited about
the prospect. But a more-adaptable and economical way to achieve a similar result would be to repurpose a motion simulator: the good ones can provide the sensation of g-forces on
almost any vector for an extended period of time; the really good ones can provide short bursts of g-forces at levels other than that provided by Earth’s gravity (usually by flinging
the carriage around on a programmable shuttle arm, although weightlessness is still unfeasible while you remain on the ground). If you didn’t think that 2013’s Gravity was nauseating enough when it was merely in 3D, wait until you try a similar experience in motion-assisted virtual
reality.
Point-of-view framing: this paradigm has always been at least a little unsatisfying in regular movies. I mean, it might have been the best moment in Doom, but that’s more to do with how appalling that film was than how good the technique is! But stepping into the viewpoint of another human and being able to look around has great potential for immersion-building without allowing the participant to stray too far from the main storyline. Something that people who haven’t yet experienced VR don’t often appreciate is that a few little things can really improve the sense of immersion… things like being able to move your head, even just by a few degrees, make you feel like you’re “there”. There are some big challenges to overcome with this, of course, such as how to stop the movement of the camera making the watcher feel ‘dragged along’, especially if their experience is of moving sideways… but these are challenges that will probably be solved for us
quickly by the porn industry, who’re working very hard on making this kind of experience seamless.
Just like the leaps and bounds we took with streaming video, yet again technology will get to thank people’s love of porn for advancing what home computers are capable of.
Exploring therapeutic experiences: until I really started trying out different VR gear, I didn’t think that it would be sufficiently engaging to be able to trigger a
strong enough response to be useful in a therapeutic capacity. But after the first time I came out of a 10-minute game of Caaaaardboard! feeling genuinely wobbly at the knees in the same way
as after my first parachute jump, I realised that modern VR really can produce an experience that results in a
psychosomatic response. And that’s really important, because it provides a whole new medium in which we can treat (and, I suppose, study), for example, phobias in a controlled and
‘escapable’ environment. Of course, that raises other questions too, such as: is it possible to cause disorders like PTSD with virtual reality? If it’s simply the case that optimally-made VR is more-immersive than the best possible “flat screen” experiences
and that it’s this that can improve its therapeutic potential, then surely it can be more-traumatic, too: I know enough people who were emotionally-scarred by Bambi’s mother’s death, E.T.’s almost-death, or that one scene from Watership Down that gave me nightmares for years: how much more damaging could a VR experience potentially be? Whether or not it’s truly the case, it’ll only take one or two media circuses about murderous psychopaths who are unable to differentiate their virtual reality from the real kind before people start getting asked these kinds of questions.
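And here’s the heist sketch I promised above. It’s a naive assumption about how “are they being watched?” might be approximated – compare the camera’s forward vector with the direction to each burglar and treat a narrow cone as observation – rather than a description of any real title:

```typescript
import * as THREE from 'three';

// Hypothetical type: for our purposes a burglar is just an object in the scene.
interface Burglar { mesh: THREE.Object3D; }

// Treat anything within ~30° of the centre of the player's view as "being watched".
const WATCHED_CONE = Math.cos(THREE.MathUtils.degToRad(30));

function isBeingWatched(camera: THREE.Camera, burglar: Burglar): boolean {
  const forward = new THREE.Vector3();
  camera.getWorldDirection(forward);           // unit vector: where the player is looking

  const toBurglar = burglar.mesh.position
    .clone()
    .sub(camera.position)
    .normalize();                              // unit vector: from the camera to the burglar

  // The dot product of two unit vectors is the cosine of the angle between them, so a
  // larger value means the burglar sits closer to the centre of the player's gaze.
  // (A fuller version would also raycast to check the line of sight isn't blocked.)
  return forward.dot(toBurglar) > WATCHED_CONE;
}

// Called each frame: the burglars only advance their plan while nobody is being watched.
function heistMayProceed(camera: THREE.Camera, burglars: Burglar[]): boolean {
  return !burglars.some((burglar) => isBeingWatched(camera, burglar));
}
```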
As I’m sure I’ve given away over these last three blog posts, I’m really interested in the storytelling potential of VR, and you can bet I’ll be bothering you all again with updates on the things I get to play with later this year (and, in fact, on some of the cool technologies I’ve managed to get access to just while I’ve been writing up these blog posts).
If you haven’t had a chance to play with contemporary VR, get yourself a Google Cardboard. It’s dirt-cheap and it’s (relatively) low-tech and it’s nowhere near as awesome as “real” hardware
solutions… but it’s still a great introduction to what I’m talking about and it’s absolutely worth doing. And if you have, I’d love to hear your thoughts on storytelling using
virtual reality, too.
An Easter Sunday adventure for my 2 y/o niece and me brought us out here to find this cache. Yesterday we found the one on the other side of the river and had talked about the ruins we
could see in the distance, so this seemed like an obvious outing!
Wellie’d up, we hit the trail, although the little monster’s wellies were mostly unnecessary as she insisted on riding atop my shoulders all the way to the GZ. On the way, she observed
that the old manor house – with fences and warning signs all around but a small agricultural shelter within the grounds – “is for pigs, not for people”.
Fix at GZ was a little shaky so it took us a few minutes to find, but we got there in the end. No sign of the World Travel Geocoin or AEC Routemaster travel bugs, but a different bug in
situ. TNLN, SL, and saw what might well have been another ‘cacher dismount his bike as we headed back to the road: good luck, possibly-fellow-cacher! TFTC.
Visited for an afternoon walk with my two-year-old niece. Easy find, and a nice cache in a nice location. Let the little ‘un choose where we signed the log (ultimately opting for one of
the pictures of a lady): never seen a logbook like this one before! TNLN, TFTC.
When I first saw this appear I thought that I’d have the FTF in the bag this morning. After all: nobody would come out last thing on a Tuesday night to start a multicache series, would
they? Hah.
Well, come the morning I saw that I’d been beaten and opted to, instead of tackling the rush hour traffic, come along for a leisurely lunchtime look-see. The first waypoint was easy:
I’m already equipped for wardriving so I just drove my car past the place (I didn’t even have to stop!) and then checked my laptop later – sure enough, only one detected SSID looked
anything like GPS coordinates. The second waypoint slowed me down: for some reason, it took my GPSr a good long time to latch on to the Chirp and get the data I needed. Next up was the
NFC tag, which was an easy find in the third place I looked. And finally, on to the cache itself, which the coordinates led me straight to.
Fantastic to see people doing things a little beyond the norm: things that take a little effort. Thanks for a fab little adventure this lunchtime, and for a nice little cache. FP awarded.
One of the most common pieces of advice you’ll get as a startup is this: Only hire the best. The quality of the people that work at your company will be one of the biggest factors in
your success – or failure. I’ve heard this advice over and over and over at startup events, to…