Coder wants to grow the speech-to-text coding community, uses his fun game to advocate.
Dig Dog is a pretty fun little video game. Call it “Spelunky for kids”—and don’t think of that as a backhanded compliment, either. Dig Dog, which launched
Thursday on iOS, Xbox, Windows, and Mac, shaves away some of the
genre’s complications, controls smoothly, and has depth. It’s as if the modern wave of randomly generated, dig-for-surprises adventures had existed in early ’80s arcades. (And all for
only $3!)
I liked Dig Dog enough when I stumbled upon it at last year’s Fantastic Arcade event in Austin, Texas. But my
interest in the game spiked when its creator reached out ahead of this week’s launch to confirm something I’m not sure any other video game creator has done: coding an entire game by
himself… without using his hands.
Just want to play my game without reading this whole post? Play the game here – press a key, mouse button, or touch the screen to fire the
thrusters, and try to land at less than 4 m/s with as much fuel left over as possible.
In 1969, when all the nerds were still excited by sending humans to the moon instead of flinging cars around the sun, the hottest video game was Rocket (or Lunar) for the PDP-8. Originally implemented in FOCAL by high school student Jim Storer and soon afterwards ported to BASIC (the other dominant language to come as
standard with microcomputers), Rocket became the precursor to an entire genre of video games called “Lunar Lander games”.
Like many pieces of microcomputer software of the time, Rocket was distributed as printed source code that you’d need to carefully type in at the other end.
The aim of these games was to land a spacecraft on the moon or similar body by controlling the thrust (and in some advanced versions, the rotation) of the engine. The spacecraft begins
in freefall towards the surface and will accelerate under gravity: this can be counteracted with thrust, but engaging the engine burns through the player’s limited supply of fuel.
Furthermore, using fuel lowers the total mass of the vessel (a large proportion of the mass of the Apollo landers was fuel for use in the descent stage) which reduces its inertia,
giving the engine more “kick” which must be compensated for during the critical final stages. It sounds dry and maths-y, but I promise that graphical versions can usually be played
entirely “by eye”.
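For the curious, the heart of such a game is only a handful of lines of simulation. Here’s a minimal JavaScript sketch of the physics described above, not the code of any particular version, and with purely illustrative constants for gravity, thrust, burn rate, and mass:

```javascript
// Minimal sketch of a "straight descent" lander physics step.
// All constants are illustrative, not taken from any real implementation.
const GRAVITY   = 1.62;   // lunar surface gravity, m/s²
const THRUST    = 15000;  // engine thrust, N
const BURN_RATE = 5;      // fuel burned per second of thrust, kg/s
const DRY_MASS  = 2000;   // mass of the lander with empty tanks, kg

let altitude = 1000;      // metres above the surface
let velocity = 0;         // m/s, positive = falling
let fuel     = 500;       // kg remaining

function step(dt, engineOn) {
  const burning = engineOn && fuel > 0;
  const mass = DRY_MASS + fuel;

  // Gravity accelerates the craft downwards; thrust opposes it. Because the
  // mass falls as fuel is burned, the same thrust gives more "kick" later on.
  const acceleration = GRAVITY - (burning ? THRUST / mass : 0);

  velocity += acceleration * dt;
  altitude -= velocity * dt;
  if (burning) fuel = Math.max(0, fuel - BURN_RATE * dt);

  if (altitude <= 0) {
    console.log(velocity <= 4 ? "Landed!" : "Crashed!", `${velocity.toFixed(1)} m/s`);
  }
}
```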
Atari’s 1979 adaptation is perhaps the classic version you’d recognise, although its release was somewhat overshadowed by their other vector-graphics space-themed release in 1979:
Asteroids.
Let’s fast-forward a little. In 1997 I enrolled to do my A-levels at what was then called Preston College, where my Computing tutor was a chap
called Kevin Geldard: you can see him at 49 seconds into this hilariously low-fi video which I guess must have been originally shot on
VHS despite being uploaded to YouTube in 2009. He’s an interesting chap in his own right whose contributions to my career in computing deserve their own blog post, but for the time
being all you need to know is that he was the kind of geek who, like me, writes software “for fun” more often than not. Kevin owned a Psion 3 palmtop – part of a series of devices with
which I also have a long history and interest – and he taught himself to program in OPL by reimplementing a favourite game of his younger years on it: his take on the classic mid-70s-style graphical Lunar Lander.
I never owned a Psion Series 3 (pictured), but I bought a Series 5mx in early 2000 out of my second student loan cheque, ultimately wrote most of my undergraduate dissertation using
it, and eventually sold it to a collector in about 2009 for almost as much as I originally paid for it. The 5mx was an amazing bit of kit. But I’ll blog about that another day, I
guess.
My A-level computing class consisted of a competitive group of geeky lads, and we made a sort-of personal extracurricular challenge of re-implementing Kevin’s take on
Lunar Lander using Turbo Pascal, the primary language in which our class was taught. Many hours out-of-class were spent
in the computer lab, tweaking and comparing our various implementations (with only occasional breaks to play Spacy, CivNet, or my adaptation of LORD2): later, some of us would extend our competition by
going on to re-re-implement in Delphi, Visual Basic, or Java, or by adding additional levels relating to orbital rendezvous or landing on other planetary bodies. I was quite
proud of mine at the time: it was highly-playable, fun, and – at least on your first few goes – moderately challenging.
I sometimes wonder what it would have looked like if I’d implemented my 1997 Lunar Lander today. So I did.
Always game to try old new things, and occasionally finding time between the many things that I do to code, I decided to expand upon my recently-discovered
interest in canvas coding to bring back my extracurricular Lunar Lander game of two decades ago in a modern format. My goals were:
A one-button version of a classic “straight descent only” lunar lander game (unlike my 1997 version, which had 10 engine power levels, this remake has just “on” and “off”)
An implementation based initially on real physics (although not necessarily graphically to scale)… and then adapted as necessary to give a fun/playability balance that feels good
Runs in a standards-compliant browser without need for plugins: HTML5, Canvas, Javascript
Adapts gracefully to any device, screen resolution, and orientation with graceful degradation/progressive enhancement
You can have a go at my game right here in your web browser! The aim is to reach the ground travelling at a velocity of no more than 4 m/s
with the maximum amount of fuel left over: this, if anything, is your “score”. My record is 52% of fuel remaining, but honestly anything in the 40%+ range is very good. Touch the screen
(if it’s a touchscreen) or press a mouse button or any key to engage your thrusters and slow your descent.
“Houston, the Eagle has landed.” Kerbal Space Program, it isn’t. Here’s a very good landing: 3 m/s with 48% of the fuel tank remaining.
And of course it’s all open-source, so you’re more than welcome to take it, rip it apart, learn from it, or make something better out
of it.
Official Post from The Video Game History Foundation:
Something pretty fun happened yesterday that I wanted to share with you all: a bot on Twitter accidentally provided the clue that finally solved a 28-year-old mystery about a DOS game
that never shipped.
Yesterday, the VGHF Twitter account was tagged in a thread by
@awesomonster, who was frantically trying to figure out the origins of a screenshot:
The year was 1995, and CompuServe’s online service cost $4.95 per hour. Yet thousands of people logged into this virtual world daily.
WorldsAway was born 20 years ago, when Fujitsu Cultural Technologies, a subsidiary of Japanese electronics giant Fujitsu, released this online experiment in multiplayer communities.
It debuted as part of the CompuServe online service in September, 1995. Users needed a special client to connect; once online, they could chat with others while represented onscreen
as a graphical avatar.
I was already a veteran of BBSes (I even started my own), Prodigy, CompuServe, and the Internet when I saw an advertisement for WorldsAway in CompuServe magazine (one of my favorite
magazines at the time). It promised a technicolor online world where you could be anything you wanted, and share a virtual city with people all over the globe. I signed up to receive
the client software CD. Right after its launch in September, I was up and running in the new world. It blew my young mind.
Sid Meier’s Alpha Centauri[1] (which we fondly refer to here as SMAC,
both as an acronym and in reference to its potent addictive properties) opens in an odd way for a science fiction game. Most such games open with spaceships, star travel,
or some futuristic technology. They seek to hook the imagination. But our game begins much more humbly.
SMAC begins with a largely static image of the stars as a woman reads a passage from the book of Genesis, telling the story of man’s final and irrevocable expulsion from the Garden of
Eden. The reading goes on for about twenty seconds, which is long enough for the lack of action to be quite noticeable. The effect is that we, the players, are being
invited to join the woman in literary contemplation. This, in and of itself, is a strange thing to find in a game – and a strategy game, no less!
The attentive viewer will notice that as the woman ends her quotation, she cites her source as “The Conclave Bible, Datalinks”. Odd … one would normally expect chapter and verse
from a bible quote. What are the Datalinks? And which edition is the Conclave Bible?
There isn’t much time to dwell on those questions, though. As the woman finishes, the music strikes up and we are treated to a series of disjointed images from the Earth we
know. The context isn’t clear, but the message certainly is. These are scenes of chaos: fire; military equipment; rioting crowds; nuclear explosions; escalating debt –
each one flashes by just after it has time to register. The world is out of control. It’s literally on fire. And it’s hurtling toward calamity…
As you may know, I’ve lately found an excuse to play with some new web technologies, and I’ve also taken the opportunity to try to gain a deeper
understanding of some less bleeding-edge technologies that I think have some interesting potential. And so it was that, while I was staffing the Three Rings stall at last week’s NCVO conference, I made use of the
time that the conference delegates were all off listening to a presentation to throw together a tech demo I call Steer!
A player uses their mobile phone to steer a car on a desktop computer, using nothing more than a web browser.
As you can see from the GIF above, Steer! is a driving game. The track and your car are displayed in a web browser on a large screen,
for example a desktop or laptop computer, television, or tablet, and your mobile phone is used to steer the car by tilting it to swerve around a gradually-narrowing weaving road. It’s
pretty fun, but what really makes it interesting to me is the combination of moderately-new technologies I’ve woven together to make it possible, specifically:
The Device Orientation API, which enables a web application to detect the angle at which
you’re holding your mobile phone
Websockets as a mechanism to send that data in near-real-time from the phone to the browser, via a web server: for the
fastest, laziest possible development, I used Firebase for this, but I’m aware that I could probably get better performance by running a
local server on the LAN shared by both devices
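To give a flavour of quite how little code the phone-side “controller” needs, here’s a rough sketch, not Steer!’s actual source, assuming the namespaced Firebase Realtime Database API and a made-up “steering” key:

```javascript
// Phone-side sketch: read the device's tilt and push it to Firebase.
// The databaseURL and the "steering" key are hypothetical examples.
firebase.initializeApp({ databaseURL: "https://example-project.firebaseio.com" });
const steeringRef = firebase.database().ref("steering");

// gamma is the left/right tilt of the phone in degrees (-90 to 90). Throttle
// updates a little so we're not writing on every single sensor event.
let lastSent = 0;
window.addEventListener("deviceorientation", (event) => {
  const now = Date.now();
  if (now - lastSent > 50) {          // roughly 20 updates per second
    steeringRef.set({ tilt: event.gamma });
    lastSent = now;
  }
});

// On the desktop "game" page, the same key is subscribed to instead:
// steeringRef.on("value", (snapshot) => { currentTilt = snapshot.val().tilt; });
```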
The desktop browser does all of the real work: it takes the orientation of the device and uses that, and the car’s current speed, to determine how its position changes over the time
that’s elapsed since the screen was last refreshed: we’re aiming for 60 frames a second, of course, but we don’t want the car to travel slower when the game is played on a
slower computer, so we use requestAnimationFrame to get the fastest rate possible and calculate the time between renderings to work out how much of a change has
occurred this ‘tick’. We leave the car’s sprite close to the bottom of the screen at all times but change how much it rotates from side to side, and we use its rotation to decide how
much of its motion is lateral versus the amount that’s “along the track”. The latter value determines how much track we move down the screen “behind” it.
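In sketch form (with a made-up speed and a hypothetical drawScene call standing in for the real rendering), that frame-rate-independent loop looks something like this:

```javascript
// Frame-rate-independent game loop: advance the world by the real elapsed
// time between frames rather than assuming a fixed 60fps.
let lastTimestamp = null;
let trackPosition = 0;     // how far along the track the car has travelled
let carSpeed = 200;        // illustrative value, in pixels per second

function tick(timestamp) {
  if (lastTimestamp !== null) {
    const dt = (timestamp - lastTimestamp) / 1000;  // seconds since last frame
    // speed × dt means the car covers the same distance per real second
    // whether the browser manages 60fps or only 30fps.
    trackPosition += carSpeed * dt;
    // drawScene(trackPosition);  // hypothetical rendering call
  }
  lastTimestamp = timestamp;
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
```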
The track is generated very simply by the addition of three sine waves of different offset and frequency – a form of very basic procedural generation. Despite the predictability of mathematical curves, this results in a moderately organic-feeling road
because the player only sees a fraction of the resulting curve at any given time: the illustration below shows how these three curves combine to make the resulting road. The difficulty
is ramped up the further the player has travelled by increasing the amplitude of the resulting wave (i.e. making the curves gradually more-aggressive) and by making the road itself
gradually narrower. The same mathematics are used to determine whether the car is mostly on the tarmac or mostly on the grass and adjust its maximum speed accordingly.
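Something along these lines, with entirely made-up constants, captures the idea: three sine waves of different frequency and offset are summed to give the road’s centre line, and the same distance value is used to ramp up the amplitude and squeeze the width:

```javascript
// Illustrative road generator: not Steer!'s actual numbers.
function roadAt(distance) {
  const difficulty = Math.min(1, distance / 10000);  // ramps from 0 to 1

  // Three sine waves of differing frequency and phase offset.
  const centre =
    Math.sin(distance / 211 + 1.7) * 40 +
    Math.sin(distance /  97 + 0.3) * 25 +
    Math.sin(distance / 359 + 4.1) * 60;

  return {
    centre: centre * (1 + difficulty),  // curves become more aggressive
    width:  300 - 150 * difficulty,     // road becomes narrower
  };
}

// The same maths tells us whether the car is on tarmac or grass:
function onTarmac(distance, carX) {
  const { centre, width } = roadAt(distance);
  return Math.abs(carX - centre) < width / 2;
}
```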
In order to help provide a visual sense of the player’s speed, I added dashed lines down the road (dividing it into three lanes to begin with and two later on) which zip past the car
and provide a sense of acceleration, deceleration, overall speed, and the impact of turning ‘sideways’ (which of course reduces the forward momentum to nothing).
This isn’t meant to be a finished game: it’s an experimental prototype to help explore some technologies that I’d not had time to look seriously at before now. However, you’re welcome
to take a copy – it’s all open source – and adapt or expand it. Particular ways in which it’d be fun to improve it might include:
Allowing the player more control, e.g. over their accelerator and brakes
Adding hazards (trees, lamp posts, and others cars) which must be avoided
Adding bonuses like speed boosts
Making it challenging, e.g. giving time limits to get through checkpoints
Day and night cycles (with headlights!)
Multiplayer capability, like a real race?
Smarter handling of multiple simultaneous users: right now they’d share control of the car (which is the major reason I haven’t given you a live online version to play
with and you have to download it yourself!), but it’d be better if they could “queue” until it was their turn, or else each play in their own split-screen view or something
Improving the graphics with textures
Increasing the entropy of the curves used to generate the road, and perhaps adding pre-scripted scenery or points of interest on a mathematically-different procedural generation
algorithm
Switching to a local LAN websocket server, allowing better performance than the dog-leg via Firebase
Greater compatibility: I haven’t tried it on an iPhone, but I gather that iOS devices report their orientation differently from Android ones… and I’ve done nothing to try to make
Steer! handle more-unusual screen sizes and shapes
Anything else? (Don’t expect me to have time to enhance it, though: but if you do so, I’d love to hear about it!)
What do you expect for free? Well whatever you expect, you should expect more. This game takes the engine and content you know and love from Half-Life 2, updates it, and dumps you into
a whole new narrative with some fun new concepts (like using lights and darkness to manipulate enemies) and battles that should challenge even the most-hardened Half-Life player.
And did I mention it’s free? Go play it, and then go make a donation to the charity that the author recommends on their website. Worth every penny.
The most intense and engaging VR experience I’ve ever had.
Whether you’re dodging and diving behind cover while you fire your pistol or you’re getting up-close with the androids as you swing your laser sword, you’re always on the move in this
immersive, high-energy VR shooter. The teleport mechanic minimises motion sickness even for those who suffer badly, the graphics are
nothing short of beautiful, and there’s nothing quite so terrifying as the moment that you realise that THERE’S ONE OF THEM BEHIND YOU! MOVE!
Oh… Sir! The Insult Simulator is a light-hearted, (quite literally) Monty Python-silly timed-turns insult-spitting game for one or two players. It’s the perfect casual
luck-heavy puzzler for anybody whose hovercraft is full of eels, whose parrot is pining for the fjords, or who would like to learn The Meaning of Life. There are fun unlockables to keep
you playing for a couple of hours, and it’s worth every penny of the £1.43 I paid for it (I’d have loved it at £1.50, too, except that I wouldn’t have seen it in the first place were it
not on sale).
So the next time somebody tells you that you have a silly walk or you decide that you’d like to have an argument, just remember to tell them: “Your mother secretly admires your liver,
and will soon be dead.” That ought to put them in their place! But until that day, give Oh… Sir! The Insult Simulator a go.
I’ve been happy with my 2016 HTPC, but the situation has changed, largely because of something I mentioned in passing back in November: The Xbox One and PS4 are effectively plain old PCs, built on:
Intel Atom class (aka slow) AMD 8-core x86 CPU
8 GB RAM
AMD Radeon 77xx / 78xx GPUs
cheap commodity…
This is the (long-overdue) last in a three-part blog post about telling stories using virtual reality. Read all of the
parts here.
For the first time in two decades, I’ve been playing with virtual reality. This time around, I’ve been using new and upcoming technologies like Google Cardboard and the Oculus Rift. I’m particularly interested in how these new experiences can be
used as a storytelling medium by content creators, and the lessons we’ll learn about immersive storytelling by experimenting with them.
There are few user interfaces as simple as moving your own head. Even Annabel – who struggles with the idea that some screens aren’t touchscreens – manages.
It seems to me that the biggest questions that VR content creators will need to start thinking about as we collectively begin to explore this new (or newly-accessible) medium are:
How do we make intuitive user interfaces?
This question mostly relates to creators making “interactive” experiences. Superficially, VR gives user experience designers a running start because there’s little that’s as intuitive
as “turning your head to look around” (and, in fact, trying the technology out on a toddler convinced me that it’s adults – who already have an anticipation of what a computer interface
ought to be – who are the only ones who’ll find this challenging). On the other hand, most interactive experiences demand more user interaction than simply looking around, and
therein lies the challenge. Using a keyboard while you’re wearing a headset is close to impossible (trust me, I’ve tried), although the augmented-reality approach of the Hololens and potentially even the front-facing webcam that’s been added to the HTC Vive PRE
might be used to mitigate this. A gamepad is workable, but it’s slightly immersion-breaking in some experiences to hold your hands in a conventional “gamer pose”, as I discovered while
playing my Gone Home hackalong: this was the major reason I switched to using a Wiimote.
All of the major VR manufacturers are working on single-handed controllers with spatial awareness and accessible buttons. Some also support haptic feedback so that you can “feel” UI
components.
So far, I’ve seen a few attempts that don’t seem to work, though. The (otherwise) excellent educational solar system exploration tool Titans of Space makes players stare at on-screen buttons for a few seconds to “press” them, which is clunky and unintuitive: in the
real world, we don’t press buttons with our eyes! I understand why they’ve done this: they’re ensuring that their software has the absolute minimum interface requirement that’s shared
between the platforms that it supports, but that’s a concern too! If content creators plan to target two or more of the competing systems that will launch this year alone, will they
have to make usability compromises?
There’s also the question of how we provide ancillary information to players: the long-established paradigms of “health in the bottom left, ammo in the bottom right” don’t work so
obviously when they’re hidden in your peripheral vision. Games like Elite Dangerous have tackled this problem from their inception
by making a virtualised “real” user interface comprised of the “screens” in the spaceship around you, but it’s an ongoing challenge for titles that target both VR and conventional
platforms in future. Wareable made some great observations about these kinds of concerns, too.
How do we tell stories without forced visual framing?
In my previous blog post, I talked about a documentary that used 360° cameras to “place” the viewer among the protesters that formed the subject of the documentary. In order to provide
some context and to reduce the disorientation experienced by “jumping” from location to location, the creator opted to insert “title slides” between scenes with text explaining what
would be seen next. But title slides necessitate that the viewer is looking in a particular direction! In the case of this documentary and several other similar projects I’ve seen, the
solution was to put the title in four places – at each of the four cardinal directions – so that no matter which way you were looking you’d probably be able to find one. But
title slides are only a small part of the picture.
Does anybody else see photos like this and get reminded of the pictures of hooded captives at interrogation camps?
Directors producing content – whether interactive or not – for virtual reality will have to think hard about the implications of the fact that their camera (whether a physical camera or
– slightly easier and indeed more-controllable – a simulated camera in a 3D-rendered world) can look in any direction. Sets must be designed to be all-encompassing, which poses
huge challenges for the traditional methods of producing film and television programmes. Characters’ exits and entrances must be through believable portals: they can’t simply walk off
to the left and stop. And, of course, the content creator must find a way to get the audience’s attention when they need it: watching the first few minutes of Backstage with an Elite Ballerina, for example, puts you in a spacious dance studio with a spritely ballerina to follow… but
there’s nothing to stop you looking the other way (perhaps by accident), and – if you do – you might miss some of the action or find it difficult to work out where you’re
supposed to be looking. Expand that to a complex, busy scene like, say… the ballroom scene in Labyrinth… and you might find yourself feeling completely lost within a matter of minutes (of course, a feeling of being
lost might be the emotional response that the director intends, and hey – VR is great for that!).
You’re looking the wrong way. Turn around, and you’ll see the best part of the movie.
The potential for VR in some kinds of stories is immense, though. How about a murder mystery story played out in front of you in a dollhouse (showing VR content “in miniature” can help
with the motion sickness some people feel if they’re “dragged” from scene to scene): you can move your head to peep in to any room and witness the conversations going on, but the murder
itself happens during a power cut or otherwise out-of-sight and the surviving characters are left to deduce the clues. In such a (non-interactive) experience the spectator has the
option to follow the action in whatever way they like, and perhaps even differently on different playthroughs, putting the focus on the rooms and characters and clues that interest them
most… which might affect whether or not they agree with the detective’s assertions at the end…
What new storytelling mechanisms can this medium provide?
As I mentioned in the previous blog post, we’ve already seen the evolution of storytelling media on several occasions, such as the jump from theatre to cinema and the opportunities that
this change eventually provided. Early screenwriters couldn’t have conceived of some of the tools used in modern films, like the use of long flowing takes for establishing shots or the
use of fragmented hand-held shots to add an excited energy to fight scenes. It wasn’t for lack of imagination (Georges
Méliès realised back in the nineteenth century that timelapse photography could be used to produce special effects not possible in theatre) but rather a lack of the
technology and more-importantly a lack of the maturity of the field. There’s an ongoing artistic process whereby storytellers learn new ways of managing their medium from one another:
Romeo Must Die may have made clever use of a “zoom-to-X-ray” when a combatant’s bones were broken, but it wouldn’t
have been possible if The Matrix hadn’t shown the potential for “bullet time” the previous year. And if we’re going down
that road: have you seen the bullet time scene in Zotz!, a film
that’s older than the Wachowskis themselves?
Clearly, we’re going to discover new ways of telling stories that aren’t possible with traditional “flat screen” media nor with more-immersive traditional theatre: that’s what
makes VR as a storytelling tool so exciting.
The original use of bullet time still wasn’t entirely new, as the original bullet predates it by hundreds of years.
Of course, we don’t yet know what storytelling tools we’ll find in this medium, but some ideas I’ve been thinking about are:
Triggering empathetic responses by encouraging the audience to more-closely relate to the situation of characters by putting them more-directly “in their shoes”.
That Dragon, Cancer, an autobiographical game about the experience of a child’s terminal cancer, is an incredibly emotive
experience… but only begins to touch upon the emotional journeys possible through virtual reality: what’s it really like to be close to somebody who’s terminally ill?
Allowing spectators to spectate a story in their own way, or from a perspective that they choose and control. We’ve already begun to explore this as a concept with
the (little-used) multi-angle feature on DVDs: for example, if you’ve got the special edition of Die Hard then you can
rewatch certain scenes and flick between different cameras as you watch. But that’s nothing on the potential for future animated films to allow you to walk or fly around and watch
from any angle… or in the case of interactive experiences, to influence the direction that the story takes by your actions or even just by your presence: how about a heist story in
which the burglars will only carry out their plan if they can’t tell that you’re watching them, forcing you to be surreptitious in your glances over to
see what they’re up to?
There’s no need to build a rollercoaster at all: a good motion simulator plus a VR headset can probably provide a similar experience.
Combining VR with motion simulation: Alton Towers is leading the way here, with their announcement that they’re going to re-engineer the Air rollercoaster into Galactica, upon which the ride gives the sensation of motion while a
Samsung Gear VR headset simulates an otherwise-impossible spacefaring experience, and I’m hugely excited about
the prospect. But a more-adaptable and economical way to achieve a similar result would be to repurpose a motion simulator: the good ones can provide the sensation of g-forces on
almost any vector for an extended period of time; the really good ones can provide short bursts of g-forces at levels other than that provided by Earth’s gravity (usually by flinging
the carriage around on a programmable shuttle arm, although weightlessness is still unfeasible while you remain on the ground). If you didn’t think that 2013’s Gravity was nauseating enough when it was merely in 3D, wait until you try a similar experience in motion-assisted virtual
reality.
Point-of-view framing: this paradigm has always been at least a little unsatisfying in regular movies. I mean, it might have been the best moment in Doom, but that’s more to do with how appalling that film was than how good the technique is! But stepping
into the viewpoint of another human and being able to look around has great potential for immersion-building without allowing the participant to stray too far from the main
storyline. Something that people who haven’t yet experienced VR don’t often appreciate is that a few little things can really improve the experience of immersion… things like being
able to move your head, even just by a few degrees, make you feel like you’re “there”. There are some big challenges to overcome with this, of course, such as how to stop the
movement of the camera making the watcher feel ‘dragged along’, especially if their experience is of moving sideways… but these are challenges that will probably be solved for us
quickly by the porn industry, who’re working very hard on making this kind of experience seamless.
Just like the leaps and bounds we took with streaming video, yet again technology will get to thank peoples’ love of porn for advancing what home computers are capable of.
Nothing in this GIF reflects how people will genuinely watch VR porn. There’ll be a lot more lube and a lot fewer clothes, I guarantee it.
Exploring therapeutic experiences: until I really started trying out different VR gear, I didn’t think that it would be sufficiently engaging to be able to trigger a
strong enough response to be useful in a therapeutic capacity. But after the first time I came out of a 10-minute game of Caaaaardboard! feeling genuinely wobbly at the knees in the same way
as after my first parachute jump, I realised that modern VR really can produce an experience that results in a
psychosomatic response. And that’s really important, because it provides a whole new medium in which we can treat (and, I suppose, study), for example, phobias in a controlled and
‘escapable’ environment. Of course, that raises other questions too, such as: is it possible to cause disorders like PTSD with virtual reality? If it’s simply the case that optimally-made VR is more-immersive than the best possible “flat screen” experiences
and that it’s this that can improve its therapeutic potential, then surely it can be more-traumatic, too: I know enough people that were emotionally-scarred by Bambi‘s
mother’s death, E.T.‘s almost-death, or that one scene from Watership Down that gave me nightmares for years: how much more (potentially)-damaging could a VR
experience be? Whether or not it’s truly the case, it’ll only take one or two media circuses about murderous psychopaths who are unable to differentiate their virtual reality from the
real kind before people start getting asked these kinds of questions.
As I’m sure I’ve given away these last three blog posts, I’m really interested in the storytelling potential of VR, and you can bet I’ll be bothering you all again with updates of the
things I get to play with later this year (and, in fact, some of the cool technologies I’ve managed to get access to just while I’ve been writing up these blog posts).
If you haven’t had a chance to play with contemporary VR, get yourself a cardboard. It’s dirt-cheap and it’s (relatively) low-tech and it’s nowhere near as awesome as “real” hardware
solutions… but it’s still a great introduction to what I’m talking about and it’s absolutely worth doing. And if you have, I’d love to hear your thoughts on storytelling using
virtual reality, too.
This is the second in a three-part blog post about telling stories using virtual reality. Read all of the parts
here.
I’m still waiting to get in on the Oculus Rift and HTC Vive magic when they’re made
generally-available, later this year. But for the meantime, I’m enjoying quite how hackable VR technologies are. I chucked my Samsung Galaxy S6 edge into an I Am Cardboard DSCVR, paired it with a gaming PC using TrinusVR, used GlovePIE to hook up a Wii remote (playing games with a keyboard or even a gamepad is challenging if your headset doesn’t have a
headstrap, so a one-handed control is needed), and played a game of Gone Home. It’s a cheap and simple way to jump into VR
gaming, especially if – like me – you already own the electronic components: the phone, PC, and Wiimote.
My VR system is more-ghetto than yours.
While the media seems to mostly fixate on the value of VR in “action” gaming – shoot-’em-ups, flight simulators, etc. – I actually think there’s possibly greater value in it for more
story-driven genres. I chose Gone Home for my experiment, above, because it’s an adventure that you play at your own pace, where the amount you get out of it as a story depends
on your level of attention to detail, not how quickly you can pull a trigger. Especially on this kind of highly-affordable VR gear, “twitchy” experiences that require rapid head turning
are particularly unsatisfying, not-least because the response time of even the fastest screens is always going to be significantly slower than that of real life. But as a storytelling
medium (especially in an affordable form) it’s got incredible potential.
Nothing quite gives you a feel of the human scale of the Hong Kong protests like being able to look around you, as if you’re stood in the middle of them.
I was really pleased to discover that some content creators are already experimenting with the storytelling potential of immersive VR experiences. An example would be the video
Hong Kong Unrest – a 360° Virtual Reality Documentary, freely-available on YouTube. Standing his camera (presumably a
Jump camera rig, or something similar) amongst the crowds of the 2014 Hong Kong protests, the creator of this documentary gives us a great opportunity to feel as though we’re standing
right there with the protesters. The sense of immersion of being “with” the protesters is, in itself, a storytelling statement that shows the filmmaker’s bias: you’re encouraged to
empathise with the disenfranchised Hong Kong voters, to feel like you’re not only with them in a virtual sense, but emotionally with them in support of their situation. I’m afraid that
watching the click-and-drag version of the video doesn’t do it justice: strap a Cardboard to your head to get the full experience.
Don’t go thinking that I’m not paying attention to the development of the Hololens, too: I am, because it looks amazing. I just don’t know… what it’s for. And, I suspect, neither does
Microsoft.
But aside from the opportunities it presents, Virtual Reality brings huge new challenges for content creators, too. Consider that iconic spaghetti western The Good, The Bad, And The Ugly. The opening scene drops us right into one of the artistic themes of the film –
the balance of wide and close-up shots – when it initially shows us a wide open expanse but then quickly fills the frame with the face of Tuco (“The Ugly”), giving us the experience of
feeling suddenly cornered and trapped by this dangerous man. That’s a hugely valuable shot (and a director’s wet dream), but it represents something that we simply don’t have a way of
translating into an immersive VR setting! Aside from the obvious fact that the viewer could simply turn their head and ruin the surprise of the shot, it’s just not possible to fill the
frame with the actor’s face in this kind of way without forcing the focal depth to shift uncomfortably.
Sergio Leone’s masterpiece makes strategic use of alternating close and wide shots (and shots like the opening, which initially feels open but rapidly becomes claustrophobic).
That’s not to say that there exist stories that we can’t tell using virtual reality… just that we’re only just beginning to find our feet with this new medium. When stage directors took
their first steps into filmography in the early years of the 20th century, they originally tried to shoot films “as if” they were theatre (albeit, initially, silent theatre): static
cameras shooting an entire production from a single angle. Later, they discovered ways in which this new medium could provide new ways to tell stories: using title cards to set the
scene, close-ups to show actors’ faces more-clearly, panning shots, and so on.
Similarly: so long as we treat the current generation of VR as something different from the faltering steps we took two and a half decades ago, we’re in frontier territory and feeling
our way in VR, too. Do you remember when smartphone gaming first became a thing and nobody knew how to make proper user interfaces for it? Often your tiny mobile screen would simply try
to emulate classic controllers, with a “d-pad” and “buttons” in the corners of the screen, and it was awful… but nowadays, we better-understand the relationship that people have with
their phones and have adapted accordingly (perhaps the ultimate example of this, in my opinion, is the addictive One More Line, a minimalist game with a single-action “press anywhere” interface).
A few seconds after this photograph was taken, a T-rex came bounding out from the treeline and I literally jumped.
I borrowed an Oculus Rift DK2 from a co-worker’s partner (have I mentioned lately that I have the most awesome co-workers?) to get a little experience with it, and it’s honestly one of
the coolest bits of technology I’ve ever had the privilege of playing with: the graphics, comfort, and responsiveness blow Cardboard out of the water. One of my first adventures –
Crytek’s tech demo Back to Dinosaur Island – was a visual spectacle even despite my apparently-underpowered
computer (I’d hooked the kit up to Gina, my two-month-old 4K-capable media centre/gaming PC: I suspect that Cosmo, my multi-GPU watercooled beast might have fared
better). But I’ll have more to say about that – and the lessons I’ve learned – in the final part of this blog post.