Sid Meier’s Alpha Centauri (which we fondly refer to here as SMAC, both as an acronym and in reference to its potent addictive properties) opens in an odd way for a science fiction game. Most such games open with spaceships, star travel, or some futuristic technology. They seek to hook the imagination. But our game begins much more humbly.
SMAC begins with a largely static image of the stars as a woman reads a passage from the book of Genesis, telling the story of man’s final and irrevocable expulsion from the Garden of Eden. The reading goes on for about twenty seconds, which is long enough for the lack of action to be quite noticeable. The effect is that we, the players, are being invited to join the woman in literary contemplation. This, in and of itself, is a strange thing to find in a game – and a strategy game, no less!
The attentive viewer will notice that as the woman ends her quotation, she cites her source as “The Conclave Bible, Datalinks”. Odd … one would normally expect chapter and verse from a bible quote. What are the Datalinks? And which edition is the Conclave Bible?
There isn’t much time to dwell on those questions, though. As the woman finishes, the music strikes up and we are treated to a series of disjointed images from the Earth we know. The context isn’t clear, but the message certainly is. These are scenes of chaos: fire; military equipment; rioting crowds; nuclear explosions; escalating debt – each one flashes by just after it has time to register. The world is out of control. It’s literally on fire. And it’s hurtling toward calamity…
As you may know, I’ve lately found an excuse to play with some new web technologies, and I’ve also taken the opportunity to try to gain a deeper understanding of some less bleeding-edge technologies that I think have some interesting potential. And so it was that, while I was staffing the Three Rings stall at last week’s NCVO conference, I made use of the time that the conference delegates were all off listening to a presentation to throw together a tech demo I call Steer!
As you can see from the GIF above, Steer! is a driving game. The track and your car are displayed in a web browser on a large screen, for example a desktop or laptop computer, television, or tablet, and your mobile phone is used to steer the car by tilting it to swerve around a gradually-narrowing weaving road. It’s pretty fun, but what really makes it interesting to me is the combination of moderately-new technologies I’ve woven together to make it possible, specifically:
- The Device Orientation API, which enables a web application to detect the angle at which you’re holding your mobile phone
- Websockets as a mechanism to send that data in near-real-time from the phone to the browser, via a web server: for the fastest, laziest possible development, I used Firebase for this, but I’m aware that I could probably get better performance by running a local server on the LAN shared by both devices
- The Canvas API to draw the output to the screen
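A minimal sketch of how the phone-side page might turn Device Orientation readings into a steering value. Note the assumptions here: the phone is held in portrait (so left/right tilt shows up on the event's `gamma` axis), 45° is the maximum lock, and `sendSteering` is a placeholder for whatever pushes the value over the websocket; none of these are necessarily how Steer! itself does it.

```javascript
// Map a device-orientation reading to a steering value in [-1, 1].
// gamma is the left/right tilt in degrees (roughly -90..90 in portrait).
function tiltToSteering(gamma, maxTiltDegrees = 45) {
  const clamped = Math.max(-maxTiltDegrees, Math.min(maxTiltDegrees, gamma));
  return clamped / maxTiltDegrees;
}

// Browser-only wiring: forward each reading to the server.
// `sendSteering` is a hypothetical stand-in for the websocket/Firebase push.
if (typeof window !== "undefined" && "DeviceOrientationEvent" in window) {
  window.addEventListener("deviceorientation", (event) => {
    sendSteering(tiltToSteering(event.gamma ?? 0));
  });
}
```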
The desktop browser does all of the real work: it takes the orientation of the device and uses that, along with the car’s current speed, to determine how its position changes over the time that’s elapsed since the screen was last refreshed. We’re aiming for 60 frames a second, of course, but we don’t want the car to travel slower when the game is played on a slower computer, so we use requestAnimationFrame to get the fastest rate possible and calculate the time between renderings to work out how much of a change has occurred this ‘tick’. We leave the car’s sprite close to the bottom of the screen at all times but change how much it rotates from side to side, and we use its rotation to decide how much of its motion is lateral versus the amount that’s “along the track”. The latter value determines how much track we move down the screen “behind” it.
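That frame-rate-independent update could be sketched like this. The `step` function, the 45° maximum steering angle, and the `currentSteering` placeholder are illustrative assumptions rather than Steer!'s actual code:

```javascript
// Advance the car by elapsed time rather than by frame count, so the game
// runs at the same speed regardless of the display's refresh rate.
// steering is in [-1, 1]; speed is in "track units" per second.
function step(car, steering, dtSeconds) {
  const angle = steering * Math.PI / 4; // assumed maximum lock of 45°
  return {
    x: car.x + Math.sin(angle) * car.speed * dtSeconds,               // lateral motion
    distance: car.distance + Math.cos(angle) * car.speed * dtSeconds, // motion "along the track"
    speed: car.speed,
  };
}

// Browser-only loop: requestAnimationFrame hands us a timestamp, and the
// difference between successive timestamps is this tick's dt.
function startLoop(render) {
  let car = { x: 0, distance: 0, speed: 10 };
  let last = null;
  function tick(now) {
    if (last !== null) {
      car = step(car, currentSteering(), (now - last) / 1000);
    }
    last = now;
    render(car);
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```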
The track is generated very simply by the addition of three sine waves of different offset and frequency – a form of very basic procedural generation. Despite the predictability of mathematical curves, this results in a moderately organic-feeling road because the player only sees a fraction of the resulting curve at any given time: the illustration below shows how these three curves combine to make the resulting road. The difficulty is ramped up the further the player has travelled by increasing the amplitude of the resulting wave (i.e. making the curves gradually more-aggressive) and by making the road itself gradually narrower. The same mathematics are used to determine whether the car is mostly on the tarmac or mostly on the grass, and to adjust its maximum speed accordingly.
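The track maths might look something like the sketch below. The particular frequencies, phases, amplitudes, and ramp rates are invented for illustration; only the technique (summing three sine waves, ramping amplitude up and width down with distance) comes from the description above:

```javascript
// Centre of the road at a given distance down the track: the sum of three
// sine waves of different frequency and phase.
function roadCentre(distance) {
  const difficulty = 1 + distance / 5000; // amplitude ramps up with distance...
  return difficulty * (
    Math.sin(distance / 100) +
    Math.sin(distance / 37 + 1.3) * 0.6 +
    Math.sin(distance / 13 + 2.7) * 0.3
  );
}

// ...while the road itself gradually narrows, down to some minimum width.
function roadWidth(distance) {
  return Math.max(1.0, 3.0 - distance / 2000);
}

// The same maths decides whether the car is on the tarmac or on the grass.
function onTarmac(carX, distance) {
  return Math.abs(carX - roadCentre(distance)) <= roadWidth(distance) / 2;
}
```

Because only a screenful of the curve is visible at once, the player never sees enough of any one sine wave to spot its period, which is what keeps the road feeling organic.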
In order to help provide a visual sense of the player’s speed, I added dashed lines down the road (dividing it into three lanes to begin with and two later on) which zip past the car and provide a sense of acceleration, deceleration, overall speed, and the impact of turning ‘sideways’ (which of course reduces the forward momentum to nothing).
This isn’t meant to be a finished game: it’s an experimental prototype to help explore some technologies that I’d not had time to look seriously at before now. However, you’re welcome to take a copy – it’s all open source – and adapt or expand it. Particular ways in which it’d be fun to improve it might include:
- Allowing the player more control, e.g. over their accelerator and brakes
- Adding hazards (trees, lamp posts, and others cars) which must be avoided
- Adding bonuses like speed boosts
- Making it challenging, e.g. giving time limits to get through checkpoints
- Day and night cycles (with headlights!)
- Multiplayer capability, like a real race?
- Smarter handling of multiple simultaneous users: right now they’d share control of the car (which is the major reason I haven’t given you a live online version to play with and you have to download it yourself!), but it’d be better if they could “queue” until it was their turn, or else each play in their own split-screen view or something
- Improving the graphics with textures
- Increasing the entropy of the curves used to generate the road, and perhaps adding pre-scripted scenery or points of interest on a mathematically-different procedural generation algorithm
- Switching to a local LAN websocket server, allowing better performance than the dog-leg via Firebase
- Greater compatibility: I haven’t tried it on an iPhone, but I gather that iOS devices report their orientation differently from Android ones… and I’ve done nothing to try to make Steer! handle more-unusual screen sizes and shapes
- Anything else? (Don’t expect me to have time to enhance it, though: but if you do so, I’d love to hear about it!)
I’ve been happy with my 2016 HTPC, but the situation has changed, largely because of something I mentioned in passing back in November: the Xbox One and PS4 are effectively plain old PCs, built on an Intel Atom-class (aka slow) AMD 8-core x86 CPU, 8 GB RAM, AMD Radeon 77xx / 78xx GPUs, and cheap commodity…
This is the (long-overdue) last in a three-part blog post about telling stories using virtual reality. Read all of the parts here.
For the first time in two decades, I’ve been playing with virtual reality. This time around, I’ve been using new and upcoming technologies like Google Cardboard and the Oculus Rift. I’m particularly interested in how these new experiences can be used as a storytelling medium by content creators, and the lessons we’ll learn about immersive storytelling by experimenting with them.
It seems to me that the biggest questions that VR content creators will need to start thinking about as we collectively begin to explore this new (or newly-accessible) medium are:
How do we make intuitive user interfaces?
This question mostly relates to creators making “interactive” experiences. Superficially, VR gives user experience designers a running start because there’s little that’s as intuitive as “turning your head to look around” (and, in fact, trying the technology out on a toddler convinced me that it’s adults – who already have an anticipation of what a computer interface ought to be – who are the only ones who’ll find this challenging). On the other hand, most interactive experiences demand more user interaction than simply looking around, and therein lies the challenge. Using a keyboard while you’re wearing a headset is close to impossible (trust me, I’ve tried), although the augmented-reality approach of the Hololens and potentially even the front-facing webcam that’s been added to the HTC Vive PRE might be used to mitigate this. A gamepad is workable, but it’s slightly immersion-breaking in some experiences to hold your hands in a conventional “gamer pose”, as I discovered while playing my Gone Home hackalong: this was the major reason I switched to using a Wiimote.
So far, I’ve seen a few attempts that don’t seem to work, though. The (otherwise) excellent educational solar system exploration tool Titans of Space makes players stare at on-screen buttons for a few seconds to “press” them, which is clunky and unintuitive: in the real world, we don’t press buttons with our eyes! I understand why they’ve done this: they’re ensuring that their software has the absolute minimum interface requirement that’s shared between the platforms that it supports, but that’s a concern too! If content creators plan to target two or more of the competing systems that will launch this year alone, will they have to make usability compromises?
There’s also the question of how we provide ancillary information to players: the long-established paradigms of “health in the bottom left, ammo in the bottom right” don’t work so obviously when they’re hidden in your peripheral vision. Games like Elite Dangerous have tackled this problem from their inception by making a virtualised “real” user interface comprised of the “screens” in the spaceship around you, but it’s an ongoing challenge for titles that target both VR and conventional platforms in future. Wareable made some great observations about these kinds of concerns, too.
How do we tell stories without forced visual framing?
In my previous blog post, I talked about a documentary that used 360° cameras to “place” the viewer among the protesters who formed the subject of the documentary. In order to provide some context and to reduce the disorientation experienced by “jumping” from location to location, the creator opted to insert “title slides” between scenes with text explaining what would be seen next. But title slides necessitate that the viewer is looking in a particular direction! In the case of this documentary and several other similar projects I’ve seen, the solution was to put the title in four places – at each of the four cardinal directions – so that no matter which way you were looking you’d probably be able to find one. But title slides are only a small part of the picture.
Directors producing content – whether interactive or not – for virtual reality will have to think hard about the implications of the fact that their camera (whether a physical camera or – slightly easier and indeed more-controllable – a simulated camera in a 3D-rendered world) can look in any direction. Sets must be designed to be all-encompassing, which poses huge challenges for the traditional methods of producing film and television programmes. Characters’ exits and entrances must be through believable portals: they can’t simply walk off to the left and stop. And, of course, the content creator must find a way to get the audience’s attention when they need it: watching the first few minutes of Backstage with an Elite Ballerina, for example, puts you in a spacious dance studio with a spritely ballerina to follow… but there’s nothing to stop you looking the other way (perhaps by accident), and – if you do – you might miss some of the action or find it difficult to work out where you’re supposed to be looking. Expand that to a complex, busy scene like, say… the ballroom scene in Labyrinth… and you might find yourself feeling completely lost within a matter of minutes (of course, a feeling of being lost might be the emotional response that the director intends, and hey – VR is great for that!).
The potential for VR in some kinds of stories is immense, though. How about a murder mystery story played out in front of you in a dollhouse (showing VR content “in miniature” can help with the motion sickness some people feel if they’re “dragged” from scene to scene): you can move your head to peep into any room and witness the conversations going on, but the murder itself happens during a power cut or otherwise out-of-sight and the surviving characters are left to deduce the clues. In such a (non-interactive) experience the spectator has the option to follow the action in whatever way they like, and perhaps even differently on different playthroughs, putting the focus on the rooms and characters and clues that interest them most… which might affect whether or not they agree with the detective’s assertions at the end…
What new storytelling mechanisms can this medium provide?
As I mentioned in the previous blog post, we’ve already seen the evolution of storytelling media on several occasions, such as the jump from theatre to cinema and the opportunities that this change eventually provided. Early screenwriters couldn’t have conceived of some of the tools used in modern films, like the use of long flowing takes for establishing shots or the use of fragmented hand-held shots to add an excited energy to fight scenes. It wasn’t for lack of imagination (Georges Méliès realised back in the nineteenth century that timelapse photography could be used to produce special effects not possible in theatre) but rather a lack of the technology and more-importantly a lack of the maturity of the field. There’s an ongoing artistic process whereby storytellers learn new ways to manage their medium from one another: Romeo Must Die may have made clever use of a “zoom-to-X-ray” when a combatant’s bones were broken, but it wouldn’t have been possible if The Matrix hadn’t shown the potential for “bullet time” the previous year. And if we’re going down that road: have you seen the bullet time scene in Zotz!, a film that’s older than the Wachowskis themselves?
Clearly, we’re going to discover new ways of telling stories that aren’t possible with traditional “flat screen” media nor with more-immersive traditional theatre: that’s what makes VR as a storytelling tool so exciting.
Of course, we don’t yet know what storytelling tools we’ll find in this medium, but some ideas I’ve been thinking about are:
- Triggering empathetic responses by encouraging the audience to more-closely relate to the situation of characters by putting them more-directly “in their shoes”. That Dragon, Cancer, an autobiographical game about the experience of a child’s terminal cancer, is an incredibly emotive experience… but only begins to touch upon the emotional journeys possible through virtual reality: what’s it really like to be close to somebody who’s terminally ill?
- Allowing spectators to spectate a story in their own way, or from a perspective that they choose and control. We’ve already begun to explore this as a concept with the (little-used) multi-angle feature on DVDs: for example, if you’ve got the special edition of Die Hard then you can rewatch certain scenes and flick between different cameras as you watch. But that’s nothing on the potential for future animated films to allow you to walk or fly around and watch from any angle… or in the case of interactive experiences, to influence the direction that the story takes by your actions or even just by your presence: how about a heist story in which the burglars will only carry out their plan if they can’t tell that you’re watching them, forcing you to be surreptitious in your glances over to see what they’re up to?
- Combining VR with motion simulation: Alton Towers is leading the way here, with their announcement that they’re going to re-engineer the Air rollercoaster into Galactica, upon which the ride gives the sensation of motion while a Samsung Gear VR headset simulates an otherwise-impossible spacefaring experience, and I’m hugely excited about the prospect. But a more-adaptable and economical way to achieve a similar result would be to repurpose a motion simulator: the good ones can provide the sensation of g-forces on almost any vector for an extended period of time; the really good ones can provide short bursts of g-forces at levels other than that provided by Earth’s gravity (usually by flinging the carriage around on a programmable shuttle arm, although weightlessness is still unfeasible while you remain on the ground). If you didn’t think that 2013’s Gravity was nauseating enough when it was merely in 3D, wait until you try a similar experience in motion-assisted virtual reality.
- Point-of-view framing: this paradigm has always been at least a little unsatisfying in regular movies. I mean, it might have been the best moment in Doom, but that’s more to do with how appalling that film was than how good the technique is! But stepping into the viewpoint of another human and being able to look around has great potential for immersion-building without allowing the participant to stray too-far from the main storyline. Something that people who haven’t yet experienced VR don’t often appreciate is that a few little things can really improve the experience of immersion… things like being able to move your head, even just by a few degrees, make you feel like you’re “there”. There are some big challenges to overcome with this, of course, such as how to make the movement of the camera not make the watcher feel ‘dragged along’, especially if their experience is of moving sideways… but these are challenges that will probably be solved for us quickly by the porn industry, who’re working very hard on making this kind of experience seamless. Just like the leaps and bounds we took with streaming video, yet again technology will get to thank people’s love of porn for advancing what home computers are capable of.
- Exploring therapeutic experiences: until I really started trying out different VR gear, I didn’t think that it would be sufficiently engaging to be able to trigger a strong enough response to be useful in a therapeutic capacity. But after the first time I came out of a 10-minute game of Caaaaardboard! feeling genuinely wobbly at the knees in the same way as after my first parachute jump, I realised that modern VR really can produce an experience that results in a psychosomatic response. And that’s really important, because it provides a whole new medium in which we can treat (and, I suppose, study), for example, phobias in a controlled and ‘escapable’ environment. Of course, that raises other questions too, such as: is it possible to cause disorders like PTSD with virtual reality? If it’s simply the case that optimally-made VR is more-immersive than the best possible “flat screen” experiences and that it’s this that can improve its therapeutic potential, then surely it can be more-traumatic, too: I know enough people who were emotionally-scarred by Bambi‘s mother’s death, E.T.‘s almost-death, or that one scene from Watership Down that gave me nightmares for years: how much more (potentially)-damaging could a VR experience be? Whether or not it’s truly the case, it’ll only take one or two media circuses about murderous psychopaths who are unable to differentiate their virtual reality from the real kind before people start getting asked these kinds of questions.
- Oh, and this webcomic, of course.
As I’m sure I’ve given away these last three blog posts, I’m really interested in the storytelling potential of VR, and you can bet I’ll be bothering you all again with updates of the things I get to play with later this year (and, in fact, some of the cool technologies I’ve managed to get access to just while I’ve been writing up these blog posts).
If you haven’t had a chance to play with contemporary VR, get yourself a Cardboard. It’s dirt-cheap and it’s (relatively) low-tech and it’s nowhere near as awesome as “real” hardware solutions… but it’s still a great introduction to what I’m talking about and it’s absolutely worth doing. And if you have, I’d love to hear your thoughts on storytelling using virtual reality, too.
This is the second in a three-part blog post about telling stories using virtual reality. Read all of the parts here.
I’m still waiting to get in on the Oculus Rift and HTC Vive magic when they’re made generally-available, later this year. But for the meantime, I’m enjoying quite how hackable VR technologies are. I chucked my Samsung Galaxy S6 edge into an I Am Cardboard DSCVR, paired it with a gaming PC using TrinusVR, used GlovePIE to hook up a Wii remote (playing games with a keyboard or even a gamepad is challenging if your headset doesn’t have a headstrap, so a one-handed control is needed), and played a game of Gone Home. It’s a cheap and simple way to jump into VR gaming, especially if – like me – you already own the electronic components: the phone, PC, and Wiimote.
While the media seems to mostly fixate on the value of VR in “action” gaming – shoot-’em-ups, flight simulators, etc. – I actually think there’s possibly greater value in it for more story-driven genres. I chose Gone Home for my experiment, above, because it’s an adventure that you play at your own pace, where the amount you get out of it as a story depends on your level of attention to detail, not how quickly you can pull a trigger. Especially on this kind of highly-affordable VR gear, “twitchy” experiences that require rapid head turning are particularly unsatisfying, not least because the response time of even the fastest screens is always going to be significantly slower than that of real life. But as a storytelling medium (especially in an affordable form) it’s got incredible potential.
I was really pleased to discover that some content creators are already experimenting with the storytelling potential of immersive VR experiences. An example would be the video Hong Kong Unrest – a 360° Virtual Reality Documentary, freely-available on YouTube. Standing his camera (presumably a Jump camera rig, or something similar) amongst the crowds of the 2014 Hong Kong protests, the creator of this documentary gives us a great opportunity to feel as though we’re standing right there with the protesters. The sense of immersion of being “with” the protesters is, in itself, a storytelling statement that shows the filmmaker’s bias: you’re encouraged to empathise with the disenfranchised Hong Kong voters, to feel like you’re not only with them in a virtual sense, but emotionally with them in support of their situation. I’m afraid that watching the click-and-drag version of the video doesn’t do it justice: strap a Cardboard to your head to get the full experience.
But aside from the opportunities it presents, Virtual Reality brings huge new challenges for content creators, too. Consider that iconic spaghetti western The Good, The Bad, And The Ugly. The opening scene drops us right into one of the artistic themes of the film – the balance of wide and close-up shots – when it initially shows us a wide open expanse but then quickly fills the frame with the face of Tuco (“The Ugly”), giving us the experience of feeling suddenly cornered and trapped by this dangerous man. That’s a hugely valuable shot (and a director’s wet dream), but it represents something that we simply don’t have a way of translating into an immersive VR setting! Aside from the obvious fact that the viewer could simply turn their head and ruin the surprise of the shot, it’s just not possible to fill the frame with the actor’s face in this kind of way without forcing the focal depth to shift uncomfortably.
That’s not to say that there exist stories that we can’t tell using virtual reality… just that we’re only just beginning to find our feet with this new medium. When stage directors took their first steps into filmography in the early years of the 20th century, they originally tried to shoot films “as if” they were theatre (albeit, initially, silent theatre): static cameras shooting an entire production from a single angle. Later, they discovered ways in which this new medium could provide new ways to tell stories: using title cards to set the scene, close-ups to show actors’ faces more-clearly, panning shots, and so on.
Similarly: so long as we treat the current generation of VR as something different from the faltering steps we took two and a half decades ago, we’re in frontier territory and feeling our way in VR, too. Do you remember when smartphone gaming first became a thing and nobody knew how to make proper user interfaces for it? Often your tiny mobile screen would simply try to emulate classic controllers, with a “d-pad” and “buttons” in the corners of the screen, and it was awful… but nowadays, we better-understand the relationship that people have with their phones and have adapted accordingly (perhaps the ultimate example of this, in my opinion, is the addictive One More Line, a minimalist game with a single-action “press anywhere” interface).
I borrowed an Oculus Rift DK2 from a co-worker’s partner (have I mentioned lately that I have the most awesome co-workers?) to get a little experience with it, and it’s honestly one of the coolest bits of technology I’ve ever had the privilege of playing with: the graphics, comfort, and responsiveness blow Cardboard out of the water. One of my first adventures – Crytek’s tech demo Back to Dinosaur Island – was a visual spectacle even despite my apparently-underpowered computer (I’d hooked the kit up to Gina, my two-month-old 4K-capable media centre/gaming PC: I suspect that Cosmo, my multi-GPU watercooled beast, might have fared better). But I’ll have more to say about that – and the lessons I’ve learned – in the final part of this blog post.
This is the first in a three-part blog post about telling stories using virtual reality. Read all of the parts here.
As part of my work at the Bodleian… but to a greater extent “just for fun”… I’ve spent the last few weeks playing with virtual reality. But first, a history lesson.
This isn’t the first time I’ve used virtual reality. The first time, for me, was in the early 1990s, at the Future Entertainment Show, where I queued for a shot at Grid Busters on a Virtuality 1000-CS. The Virtuality 1000 was powered by an “Expality”: functionally an Amiga 3000 with specially-written software for reading the (electromagnetically-sensed) facing of the headset and the accompanying “space joystick”… and providing output via a pair of graphics cards (one for each eye) to LCD screens. The screens were embedded in chunky bits on the sides of the helmet and projected towards mirrors and lenses at the far end – this apparently being an effort to reduce how “front-heavy” it felt, but I can tell you that in practice a Virtuality headset felt weighty on your neck, even for its era!
Nonetheless, the experience stuck with me: I returned to school and became the envy of my friends (the nerdy ones, at least) when I told them about my VR adventure, and – not least thanks to programs like Tomorrow’s World and, of course, the episode of Bad Influence that reminded me quite how badly I wanted to get myself down to Nottingham for a go at Legend Quest – I was genuinely filled with optimism that within the decade, playing a VR game would have gone from the fringes of science fiction to being something where everybody-knew-somebody who did it routinely.
I never managed to get to play Legend Quest, and that first “VR revolution” swiftly fell flat. My generation was promised all of the hi-tech science, immersion, and magical experience of The Lawnmower Man, but all we were left with was the overblown promises, expensive effects, and ill-considered user experience of, well… The Lawnmower Man. I discovered Virtuality machines in arcades once or twice, but they seemed to be out-of-order more often than not, and they quickly disappeared. You can’t really blame the owners of arcades: if a machine costs you in the region of £40,000 to buy and you can charge, say, £1 for a 3-minute go on it (bear in mind that even the most-expensive digital arcade machines tended to charge only around 30p, at this time, and most were 10p or 20p), and it needs supervision, and it can’t be maintained by your regular guy… well, that swiftly begins to feel like a bad investment.
Plus, the fifth generation of games consoles came along: the (original) Sony PlayStation, the Nintendo N64, and – if you really wanted the highest-technology system (with the absolute least imaginative developers) – the Sega Saturn. These consoles came at price points that made them suitable Christmas gifts for the good boys and girls of middle-class parents and sported 3D polygon graphics of the type that had previously only been seen in arcades, and the slow decline of the video arcade accelerated dramatically. But home buyers couldn’t afford five-figure (still moderately-experimental) VR systems, and the market for VR dried up in a matter of years. Nowadays, if you want to play on a Virtuality machine like the one I did, you need to find a collector (you might start with this guy from Leicester, whose website was so useful in jogging my memory while I wrote this blog post).
2016 is the year in which this might change. The need for ubiquitous cheap computing has made RAM and even processors so economical that we throw them away when we’re done with them. The demands of modern gaming computers and consoles has given us fast but affordable graphics rendering hardware. And the battle for the hottest new smartphones each year has helped to produce light, bright, high-resolution screens no bigger than the palm of your hand.
In fact, smartphones are now the simplest and cheapest way to play with VR. Under the assumption that you’ve already got a smartphone, you’re only a couple of cheap plastic lenses and a bit of cardboard away from doing it for yourself. So that’s how my team and I started out playing: with the wonderfully-named Google Cardboard. I know that Google Cardboard is old-hat now and all the early adopters have even got their grandmothers using it, but it’s still a beautiful example of how economical VR threatens to become if this second “VR revolution” takes hold. Even if you didn’t already own a compatible smartphone, you could buy a second-hand one on eBay for as little as £30: that’s an enormous difference from the £40K Virtuality machines of my youth, which had only a fraction of the power.
I’m going somewhere with this, I promise: but I wanted to have a jumping-off point from which to talk about virtual reality more-broadly first and it felt like I’d be overstretching if I jumped right in at the middle. Y’know, like the second act of The Lawnmower Man. In the next part of this series, I’d like to talk about the storytelling opportunities that modern VR offers us, and some of the challenges that come with it, and share my experience of playing with some “proper” modern hardware – an Oculus Rift.
Rust, a 2013 indie survival game from Facepunch Studios, plays like a cross between Minecraft and Grand Theft Auto. Players find themselves “born” into a mysterious wilderness, naked and alone, forced to forage for resources and to craft clothing, supplies and shelter for themselves. They must contend with starvation, hypothermia and animal attacks, but by far the most dangerous threat comes from other players who roam the island.
When the game was first opened up, all players were given the same default avatar: a bald white man. With the most recent update, Rust’s lead developer, Garry Newman, introduced different avatars of different racial origins into the mix. However, he did so with a twist — unlike typical massively multiplayer online role-playing games, Rust does not allow players to choose the race of their avatar. Instead, they are assigned one at random. Newman explained the change in a blog post…
Commissioned, a webcomic I’ve been reading for many years now, recently made a couple of observations on the nature of “fetch quests” in contemporary computer role-playing games. And naturally – because my brain works that way – I ended up taking this thought way beyond its natural conclusion.
Today’s children are presumably being saturated with “fetch quests” in RPGs all across the spectrum, from fantasy Skyrim-a-likes to modern-day Grand Theft Auto clones and science fiction Mass Effect-style video games. And the little devil on my left shoulder asks me how this can be manipulated for fun and profit.
The traditional “fetch quest” goes as follows: I’ll give you what you need (the sword that can kill the monster, the job that you need to impress your gang, the name of the star that the invasion fleet are orbiting, or whatever), in exchange for you doing a delivery for me. Either I want you to take something somewhere, or I want you to pick something up, or – in the most overused and thankfully falling out of fashion example – I want you to bring me X number of Y object… 9 shards of triforce, 5 orc skulls, $10,000, or whatever. Needless to say, about 50% of the time there’ll be some kind of challenge along the way (you need to steal the item from a locked safe, you’ll be offered a bribe to “lose” the item, or perhaps you’ll just be mobbed by ninja robots as you ride along on your hypercycle), which is probably for the best because it’s the only thing that adds fun to role-playing a postman. I wonder if being attacked by mage princes is something that real-life couriers dream about?
This really doesn’t tally with normality. When you want something in the real world, you pay for it, or you don’t get it. But somehow in computer RPGs – even ones which allegedly try to model the real world – you’ll find yourself acting as an over-armed deliveryman every ten to fifteen minutes. And who wants to be a Level 38 Dark Elf Florist and Dog Walker?
So perhaps… just perhaps… this will begin to shape the future of our reality. If the children of today start to see the “fetch quest” as a perfectly normal way to introduce yourself to somebody, then maybe someday it will be socially acceptable.
I’m going to try it. The next time that somebody significantly younger than me looks impatient in the queue for the self-service checkouts at Tesco, I’m going to offer to let them go in front of me… but only if they can bring me a tin of sweetcorn! “I can’t go myself, you see,” I’ll say, “Because I need to hold my place in the queue!” A tin of sweetcorn may not be as impressive-sounding as, say, the Staff of Fire Elemental Control, but it gets the job done. And it’s one of your five-a-day, too.
Or when somebody asks me for help fixing their broken website, I’ll say “Okay, I’ll help; but you have to do something for me. Bring me the bodies of five doughnuts, to prove yourself worthy of my assistance.”
It’s going to be a big thing, I promise.
Last week, I wrote about two of the big-name video games I’ve been playing since I suddenly discovered a window of free time in my life, again. Today, I’d like to tell you about some of the smaller independent titles that have captured my interest:
Suppose you’ve been living on another planet all year and so you haven’t heard of Minecraft. Here’s what you need to know: it’s a game, and it’s also a software toy, depending on how you choose to play it. Assuming you’re not playing in “creative mode” (which is a whole other story), then it’s a first-person game of exploration, resource gathering and management, construction, combat, and (if you’re playing multiplayer, which is completely optional) cooperation.
Your character is plunged at dawn into a landscape of rolling (well, stepped) hills, oceans, tundra, and deserts, with infinite blocks extending in every direction. It’s a reasonably safe place during the daytime, but at night zombies and skeletons and giant spiders roam the land, so your first task is to build a shelter. Wood or earth are common starting materials; stone if you’ve got time to start a mine; bricks later on if you’ve got clay close to hand; but seriously: you go build your house out of anything you’d like. Then begins your adventure: explore, mine, and find resources with which to build better tools, and unlock the mysteries of the world (and the worlds beyond). And if you get stuck, just remember that Minecraft backwards is the same as Skyrim forwards.
Parts of it remind me of NetHack, which is one of the computer games that consumed my life: the open world, the randomly-generated terrain, and the scope of the experience put me in mind of this classic Roguelike. Also perhaps Dwarf Fortress or Dungeon Keeper: there are plenty of opportunities for mining, construction, trap-making, and defensive structures, as well as for subterranean exploration. There are obvious similarities to Terraria, too.
I think that there’s something for everybody in Minecraft, although the learning curve might be steeper than some players are used to.
I first heard about Limbo when it appeared on the XBox last year, because it got a lot of press at the time for its dark stylistic imagery and “trial and death” style. But, of course, the developers had done a deal with the devil and made it an XBox-only release to begin with, putting off the versions for other consoles and desktop computers until 2011.
But now it’s out, as Paul was keen to advise me, and it’s awesome. You’ll die – a lot – when you play it, but the game auto-saves quietly at very-frequent strategic points, so it’s easy to “just keep playing” (a little like the equally-fabulous Super Meat Boy), but the real charm in this game comes from the sharp contrast between the light, simple platformer interface and the dark, oppressive environment of the levels. Truly, it’s the stuff that nightmares are made of, and it’s beautiful.
While at first it feels a little simplistic (how often nowadays do you get a game whose controls consist of the classic four-button “left”, “right”, “climb/jump”, and “action” options?), the game actually uses these controls to great effect. Sure, you’ll spend a fair amount of time just running to the right, in old-school platformer style, but all the while you’ll be getting drawn in to the shady world of the game, set on-edge by its atmospheric and gloomy soundtrack. And then, suddenly, right when you least expect it: snap!, and you’re dead again.
The puzzles are pretty good: they’re sometimes a little easy, but that’s better in a game like this than ones which might otherwise put you off having “one more go” at a level. There’s a good deal of variety in the puzzle types, stretching the interface as far as it will go. I’ve not quite finished it yet, but I certainly will: it’s a lot of fun, and it’s a nice bit of “lightweight” gaming for those 5-minute gaps between tasks that I seem to find so many of.
I know, I know… as an interactive fiction geek I really should have gotten around to finishing Blue Lacuna sooner. I first played it a few years ago, when it was released, but it was only recently that I found time to pick it up again and play it to, well, its logical conclusion.
What do you need to know to enjoy this game? Well: firstly, that it’s free. As in: really free – you don’t have to pay to get it, and anybody can download the complete source code (I’d recommend finishing the game first, because the source code is, of course, spoiler-heavy!) under a Creative Commons license and learn from or adapt it themselves. That’s pretty awesome, and something we don’t see enough of.
Secondly, it’s a text-based adventure. I’ve recommended a few of these before, because I’m a big fan of the medium. This one’s less-challenging for beginners than some that I’ve recommended: it uses an unusual user interface feature that the developer calls Wayfaring, to make it easy and intuitive to dive in. There isn’t an inventory (at least, not in the conventional adventure game sense – although there is one optional exception to this), and most players won’t feel the need to make a map (although keeping notes is advisable!). All-in-all, so far it just sounds like a modern, accessible piece of interactive fiction.
But what makes this particular piece so special is its sheer size and scope. The world of the game is nothing short of epic, and more than almost any text-based game I’ve played before, it feels alive: it’s as much fun to explore the world as it is to advance the story. The “simplified” interface (as described above) initially feels a little limiting to an experienced IFer like myself, but that quickly gives way as you realise how many other factors (other than what you’re carrying) can be used to solve problems. Time of day, tides, weather, who you’ve spoken to and about what, where you’ve been, when you last slept and what you dreamed about… all of these things can be factors in the way that your character experiences the world in Blue Lacuna, and it leads to an incredibly deep experience.
It describes itself as being “an explorable story in the tradition of interactive fiction and text adventures… a novel about discovery, loss, and choice… a game about words and emotions, not guns.” And that’s exactly right.
It’s available for MacOS, Windows, Linux, and just about every other platform, and you should totally give it a go.
As I previously indicated, I’ve recently found myself with a little free videogaming time, and I thought I’d share some of the things that have occupied my time, over the course of two blog posts:
Well; here’s the big one. This game eats time for breakfast. It’s like World Of Warcraft for people who don’t have friends. No, wait…
Seriously, though, Bethesda have really kicked arse with this one. I only played a little of the earlier games in the series, because they didn’t “click” with me (although I thoroughly enjoyed the entire Fallout series), but Skyrim goes a whole extra mile. The game world feels truly epic and “living”: you don’t have to squint more than a little to get the illusion that the whole world would carry on without you, with people eating and sleeping and going to work and gossiping about all the dragon attacks. The plot is solid, the engine is beautiful, and there’s so much content that it’s simply impossible to feel that you’re taking it all in at once.
It’s not perfect. It’s been designed with console controls in mind, and it shows (the user interface for skills upgrades is clunky as hell, even when I tried it on my XBox controller). The AI still does some damn stupid things (not standing-and-talking-to-walls stupid, but still bad enough that your so-called “friends” will get in your way, fire area-effect weapons at enemies you’re meleeing with, and so on). Dragons are glitchy (the first time I beat an Elder Dragon it was mostly only because it landed in a river and got its head stuck underwater, like it was seeing how long it could hold its breath while I gradually sliced its tail into salami).
But it’s still a huge and beautiful game that’s paid for itself in the 55+ hours of entertainment it’s provided so far. Recommended.
Update: between first drafting and actually publishing this list, I’ve finished the main questline of Skyrim, which was fun. 85 hours and counting.
Modern Warfare 3
I should confess, first, that I’m a Call Of Duty fanboy. Not one of these modern CoD fanboys, who rack up kills in multiplayer matchups orchestrated by ability-ranking machines in server farms, shouting “noob” as they teabag one another’s corpses. I mean I’m a purist CoD fanboy. When I got my copy of the first Call Of Duty game, broadband was just beginning to take off, and games with both single-player and multiplayer aspects still had to sell themselves on the strength of the single-player aspects, because most of their users would only ever play it that way.
And the Call of Duty series has always had something that’s been rare in action-heavy first-person shooters: a plot. A good plot. A plot that you can actually get behind and care about. Okay, so we all know how the World War II ones end (spoiler: the allies win), and if you’ve seen Enemy At The Gates then you also know how every single Russian mission goes, too, but they’ve still got a fun story and they work hard to get you emotionally-invested. The first time I finished Call of Duty 2, I cried. And then I started over and shot another thousand Nazis, like I was some form of human tank.
Modern Warfare was fantastic, bringing the franchise (complete with Captain Price) right into the era of nuclear threats and international terrorism. Modern Warfare 2 built on this and took it even further, somehow having a final boss fight that surpassed even the excellence of its predecessor (“boss fights” being notoriously difficult to do well in first-person shooters inspired by the real world). Modern Warfare 3… well…
It was okay. As a fanboy, I loved the fact that they finally closed the story arc started by the two previous MW games (and did so in a beautiful way: I maintain that Yuri is my favourite character, simply because of the way his story is woven into the arc). The chemical weapon attacks weren’t quite so impressive as the nuclear bomb in MW2, and the final fight wasn’t quite as good as the previous ones, but they’re all “good enough”. The big disappointment was the length of the campaign. The game finished downloading and unlocked at 11pm, and by 4am I was tucked up in bed, having finished it in a single sitting. “Was that it?” I asked.
Recommendation: play it if you’re a fan and want to see how the story ends, or else wait until it’s on sale and play it then.
Part Two will come when I find time, along with some games that you’re less-likely to have come across already.
It’s that time again, the highlight of the interactive fiction year (for me, at least), and IFComp 2011 is upon us. I’ve been playing my way through this year’s entries, and – as I have in previous years – I’ll be sharing with you any that leap out at me as “things you really ought to try.”
The first of which is The Play, by Deirdra Kiai. This entry stands out for a few reasons. Firstly, it’s one of those uncommon (but growing in popularity) pieces of hypertext IF. I remain not-completely convinced by hypertext IF: perhaps as a result of the medium, the games often feel shorter than they might otherwise be, and while I don’t miss playing “hunt the verb” to try to find exactly the word the designer hoped I would, having the option to click on any one of just a handful of links seems a little… simple.
That’s not to say that I don’t like the medium: hell, I feel like I was a pioneer in it, thanks to things like Troma Night Adventure (originally on the long-dead RockMonkey Wiki, and revived in its own engine last year). It’s just… different from most IF that I play, and that difference is stark.
If there’s one big advantage to hypertext interactive fiction, though, it’s that it lowers the barrier to entry. Everybody knows how to use a web browser, there’s nothing to install or set up, and they typically play really well on mobile devices, which is a growing market for this kind of game. I’m excited to see tools like Twine/Twee, ChoiceScript, and Undum (the latter of which powers The Play) appearing, which make creating this kind of game reasonably easy.
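If it helps to see why the barrier is so low, the core of a hypertext IF engine is little more than a graph of passages joined by named links, plus whatever state you care to track. Here’s a minimal sketch (entirely hypothetical: the passage names and text are made up, and this is not how Undum, Twine, or ChoiceScript actually work under the hood):

```python
# Minimal sketch of a hypertext IF engine: each passage is a node,
# and each choice is a named edge to another passage.
# (Hypothetical illustration only -- invented passages and text.)

passages = {
    "start": {
        "text": "The house lights dim. The curtain twitches.",
        "choices": {"Go backstage": "backstage", "Take a seat": "stalls"},
    },
    "backstage": {
        "text": "The cast are arguing about the script again.",
        "choices": {"Slip away to watch": "stalls"},
    },
    "stalls": {
        "text": "You watch the rehearsal unfold.",
        "choices": {},  # no outgoing links: the story ends here
    },
}

def step(current: str, choice: str) -> str:
    """Follow a named choice (link) from the current passage."""
    return passages[current]["choices"][choice]

# A short playthrough:
here = "start"
here = step(here, "Go backstage")
here = step(here, "Slip away to watch")
print(passages[here]["text"])  # prints "You watch the rehearsal unfold."
```

The whole “engine” is a dictionary lookup, which is exactly why these tools run happily in a web browser with nothing to install: the authoring tool’s real job is just to make writing that graph pleasant.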
Secondly, it’s unusual. And I do enjoy a bit of quirky fiction: something that takes the genre in a new direction. And The Play does that. You play as Ainsley M. Warrington, the director of a disaster-ridden play on the eve of the first night, orchestrating a last-second dress rehearsal. The story is told through alternating segments of your experience and “script”: segments of the play as they are performed (which may vary, depending on how lenient you allow the cast to be with the script and how much goes awry), and this is a wonderful use of the semi-graphical nature of the medium.
Mostly, you’re trying to balance and improve the moods of your cast members (and your stage manager), in order to gain a good review. This is made challenging by the fact that they all have quite different ideas and attitudes towards the nature of the play, how it should be performed, and so on. The only thing that they all seem to agree on is that you’re not doing a very good job.
But beyond that theoretical (and, frankly, self-imposed) goal, it’s actually a lot of fun just to play off the different actors against one another, to experiment with how much you can improvise the ending, and to see how things turn out if you try different choices. And that’s exactly how interactive fiction ought to be. Like a good book, I want to be able to read it again and again. But unlike traditional fiction, I can enjoy it in profoundly different ways based on my moods and whims.
It’s a little short, but quite beautiful for it. There’s certainly plenty of reasonably well-written text to amuse and entertain. I’d thoroughly recommend that you give it a go, whether you’re an IF veteran or if you’ve never played this kind of game before in your life: play The Play in your web browser. And then play it again to see how much of a difference you can make in this well-crafted and inspiring little world.