If you’ve ever applied for a job with my employer, the University of Oxford, you’ll have come across recruit.ox.ac.uk, one of the most-frustrating websites in the world. Of its many problems, the biggest (in my mind) is that it makes it really
hard to share or save the web address of a particular job listing. That’s because instead of using individual web addresses to correspond to individual jobs, like any
sanely-designed system would, it uses JavaScript hackery and black magic to undermine the way your web browser was designed to work (which is why, you’ll find, you can’t “open in new
tab” properly either), and instead provides its own, inferior, interface.
Some day I might get around to writing e.g. a userscript and/or browser plugin that “fixes” the site – from a user’s perspective, at least. But for the time being, because this morning
I needed to share via social media a link to a UX developer post we’ve just
advertised, I’ve come up with a little bookmarklet to fix this single problem:
recruit.ox.ac.uk Permalink Generator
This tool makes it easy to get permalinks (web addresses you can save or share) for job listings on recruit.ox.ac.uk. It might be adaptable to make it work with other CoreHR-powered
systems, if it turns out that this missing feature comes from the underlying software that powers the site: it could also form the basis of a future userscript that would automatically
fix the site “on the fly”. Here’s how to use it:
Drag the link below into your browser’s bookmarks (e.g. the bookmarks toolbar).
When you’re on a recruit.ox.ac.uk job page, click on the bookmark. A permalink will appear at the top of the page, for your convenience. If you’re using a modern browser, the
permalink will also appear in the address bar.
Copy the permalink and use it wherever you need it, e.g. to share the link to a job listing.
If you have any difficulty with it or want help adapting it for use with other CoreHR systems, give me a shout.
An annual tradition at Three Rings is DevCamp, an event that borrows from the “hackathon” concept and expands it to a week-long code-producing factory for the
volunteers of the Three Rings development team. Motivating volunteers is a very different game to motivating paid employees: you can’t offer to pay them more for working harder nor
threaten to stop paying them if they don’t work hard enough, so it’s necessary to tap in to whatever it is that drives them to be a volunteer, and help them get more of that out of
their volunteering.
At least part of what appeals to all of our developers is a sense of achievement – of producing something that has practical value – as well as of learning new things, applying
what they’ve learned, and having a degree of control over the parts of the project they contribute most-directly to. Incidentally, these are the same things that motivate paid
developers, too, if a Google search for studies on the subject is to be believed. It’s just that employers are rarely able or willing to offer all of those things (and even if they can,
you can’t use them to pay your mortgage), so they have to put money on the table too. With my team at Three Rings, I don’t have money to give them, so I have to make up for it with a
surplus of those things that developers actually want.
It seems strange to me in hindsight that for the last seven years I’ve spent a week of my year taking leave from my day job in order to work longer, harder, and unpaid for a
voluntary project… but that I haven’t yet blogged about it. Over the same timescale I’ve spent about twice as long at DevCamp as I have, for example, skiing, yet I’ve managed
to knock out several blog posts on that subject. Part of that might be borne out of the secretive nature of Three Rings, especially in its early days (when
involvement with Three Rings pretty-much fingered you as being a Nightline volunteer, which was frowned upon), but nowadays we’ve got a couple of
dozen volunteers with backgrounds in a variety of organisations: and many of those of us that ever were Nightliner volunteers have long since graduated and moved-on to other
volunteering work besides.
Part of the motivation – one of the perks of being a Three Rings developer – for me at least, is DevCamp itself. Because it’s an opportunity to drop all of my “day job” stuff
for a week, go to some beautiful far-flung corner of the country, and (between early-morning geocaching/hiking expeditions and late night drinking tomfoolery) get to spend long days
contributing to something awesome. And hanging out with like-minded people while I do so. I like a good hackathon of any variety, but I love me some Three Rings DevCamp!
So yeah: DevCamp is awesome. It’s more than a little different from those days back in 2003 when I wrote all the code and Kit worked hard
at distracting me with facts about the laws of Hawaii – for the majority of DevCamp 2016 we had half a dozen developers plus two documentation writers
in attendance! – but it’s still fundamentally about the same thing: producing a piece of software that helps about 25,000 volunteers do amazing things and make the world a better place.
We’ve collectively given tens, maybe hundreds of thousands of hours of time in developing and supporting it, but that in turn has helped to streamline the organisation of about 16
million person-hours of other volunteering.
So that’s nice.
Oh, and I was delighted that one of my contributions this DevCamp was that I’ve finally gotten around to expanding the functionality of the “gender” property so that there are now more
than three options. That’s almost more-exciting than the geocaches. Almost.
Edit: added a missing word in the sentence about how much time our volunteers had given, making it both more-believable and more-impressive.
This is the (long-overdue) last in a three-part blog post about telling stories using virtual reality. Read all of the
parts here.
For the first time in two decades, I’ve been playing with virtual reality. This time around, I’ve been using new and upcoming technologies like Google Cardboard and the Oculus Rift. I’m particularly interested in how these new experiences can be
used as a storytelling medium by content creators, and the lessons we’ll learn about immersive storytelling by experimenting with them.
It seems to me that the biggest questions that VR content creators will need to start thinking about as we collectively begin to explore this new (or newly-accessible) medium are:
How do we make intuitive user interfaces?
This question mostly relates to creators making “interactive” experiences. Superficially, VR gives user experience designers a running start because there’s little that’s as intuitive
as “turning your head to look around” (in fact, trying the technology out on a toddler convinced me that it’s only adults – who already have expectations of what a computer interface
ought to be – who’ll find this challenging). On the other hand, most interactive experiences demand more user interaction than simply looking around, and
therein lies the challenge. Using a keyboard while you’re wearing a headset is close to impossible (trust me, I’ve tried), although the augmented-reality approach of the HoloLens and potentially even the front-facing webcam that’s been added to the HTC Vive Pre
might be used to mitigate this. A gamepad is workable, but it’s slightly immersion-breaking in some experiences to hold your hands in a conventional “gamer pose”, as I discovered while
playing my Gone Home hackalong: this was the major reason I switched to using a Wiimote.
So far, I’ve seen a few attempts that don’t seem to work, though. The (otherwise) excellent educational solar system exploration tool Titans of Space makes players stare at on-screen buttons for a few seconds to “press” them, which is clunky and unintuitive: in the
real world, we don’t press buttons with our eyes! I understand why they’ve done this: they’re ensuring that their software has the absolute minimum interface requirement that’s shared
between the platforms that it supports, but that’s a concern too! If content creators plan to target two or more of the competing systems that will launch this year alone, will they
have to make usability compromises?
There’s also the question of how we provide ancillary information to players: the long-established paradigms of “health in the bottom left, ammo in the bottom right” don’t work so
obviously when they’re hidden in your peripheral vision. Games like Elite Dangerous have tackled this problem from their inception
by making a virtualised “real” user interface comprised of the “screens” in the spaceship around you, but it’s an ongoing challenge for titles that target both VR and conventional
platforms in future. Wareable made some great observations about these kinds of concerns, too.
How do we tell stories without forced visual framing?
In my previous blog post, I talked about a documentary that used 360° cameras to “place” the viewer among the protesters that formed the subject of the documentary. In order to provide
some context and to reduce the disorientation experienced by “jumping” from location to location, the creator opted to insert “title slides” between scenes with text explaining what
would be seen next. But title slides necessitate that the viewer is looking in a particular direction! In the case of this documentary and several other similar projects I’ve seen, the
solution was to put the title in four places – at each of the four cardinal directions – so that no matter which way you’re looking you’ll probably be able to find one. But
title slides are only a small part of the picture.
Directors producing content – whether interactive or not – for virtual reality will have to think hard about the implications of the fact that their camera (whether a physical camera or
– slightly easier and indeed more-controllable – a simulated camera in a 3D-rendered world) can look in any direction. Sets must be designed to be all-encompassing, which poses
huge challenges for the traditional methods of producing film and television programmes. Characters’ exits and entrances must be through believable portals: they can’t simply walk off
to the left and stop. And, of course, the content creator must find a way to get the audience’s attention when they need it: watching the first few minutes of Backstage with an Elite Ballerina, for example, puts you in a spacious dance studio with a sprightly ballerina to follow… but
there’s nothing to stop you looking the other way (perhaps by accident), and – if you do – you might miss some of the action or find it difficult to work out where you’re
supposed to be looking. Expand that to a complex, busy scene like, say… the ballroom scene in Labyrinth… and you might find yourself feeling completely lost within a matter of minutes (of course, a feeling of being
lost might be the emotional response that the director intends, and hey – VR is great for that!).
The potential for VR in some kinds of stories is immense, though. How about a murder mystery story played out in front of you in a dollhouse (showing VR content “in miniature” can help
with the motion sickness some people feel if they’re “dragged” from scene to scene): you can move your head to peep into any room and witness the conversations going on, but the murder
itself happens during a power cut or otherwise out-of-sight and the surviving characters are left to deduce the clues. In such a (non-interactive) experience the spectator has the
option to follow the action in whatever way they like, and perhaps even differently on different playthroughs, putting the focus on the rooms and characters and clues that interest them
most… which might affect whether or not they agree with the detective’s assertions at the end…
What new storytelling mechanisms can this medium provide?
As I mentioned in the previous blog post, we’ve already seen the evolution of storytelling media on several occasions, such as the jump from theatre to cinema and the opportunities that
this change eventually provided. Early screenwriters couldn’t have conceived of some of the tools used in modern films, like the use of long flowing takes for establishing shots or the
use of fragmented hand-held shots to add an excited energy to fight scenes. It wasn’t for lack of imagination (Georges
Méliès realised back in the nineteenth century that timelapse photography could be used to produce special effects not possible in theatre) but rather a lack of the
technology and more-importantly a lack of the maturity of the field. There’s an ongoing artistic process whereby storytellers learn new ways to manage their medium from one another:
Romeo Must Die may have made clever use of a “zoom-to-X-ray” when a combatant’s bones were broken, but it wouldn’t
have been possible if The Matrix hadn’t shown the potential for “bullet time” the previous year. And if we’re going down
that road: have you seen the bullet time scene in Zotz!, a film
that’s older than the Wachowskis themselves?
Clearly, we’re going to discover new ways of telling stories that aren’t possible with traditional “flat screen” media nor with more-immersive traditional theatre: that’s what
makes VR as a storytelling tool so exciting.
Of course, we don’t yet know what storytelling tools we’ll find in this medium, but some ideas I’ve been thinking about are:
Triggering empathetic responses by encouraging the audience to more-closely relate to the situation of characters by putting them more-directly “in their shoes”.
That Dragon, Cancer, an autobiographical game about the experience of a child’s terminal cancer, is an incredibly emotive
experience… but only begins to touch upon the emotional journeys possible through virtual reality: what’s it really like to be close to somebody who’s terminally ill?
Allowing spectators to experience a story in their own way, or from a perspective that they choose and control. We’ve already begun to explore this as a concept with
the (little-used) multi-angle feature on DVDs: for example, if you’ve got the special edition of Die Hard then you can
rewatch certain scenes and flick between different cameras as you watch. But that’s nothing on the potential for future animated films to allow you to walk or fly around and watch
from any angle… or in the case of interactive experiences, to influence the direction that the story takes by your actions or even just by your presence: how about a heist story in
which the burglars will only carry out their plan if they can’t tell that you’re watching them, forcing you to be surreptitious in your glances over to
see what they’re up to?
Combining VR with motion simulation: Alton Towers is leading the way here, with their announcement that they’re going to re-engineer the Air rollercoaster into Galactica, upon which the ride gives the sensation of motion while a
Samsung Gear VR headset simulates an otherwise-impossible spacefaring experience, and I’m hugely excited about
the prospect. But a more-adaptable and economical way to achieve a similar result would be to repurpose a motion simulator: the good ones can provide the sensation of g-forces on
almost any vector for an extended period of time; the really good ones can provide short bursts of g-forces at levels other than that provided by Earth’s gravity (usually by flinging
the carriage around on a programmable shuttle arm, although weightlessness is still unfeasible while you remain on the ground). If you didn’t think that 2013’s Gravity was nauseating enough when it was merely in 3D, wait until you try a similar experience in motion-assisted virtual
reality.
Point-of-view framing: this paradigm has always been at least a little unsatisfying in regular movies. I mean, it might have been the best moment in Doom, but that’s more to do with how appalling that film was than how good the technique is! But the potential for stepping
in to the viewpoint of another human and being able to look around has great potential for immersion-building without allowing the participant to stray too-far from the main
storyline. Something that people who haven’t yet experienced VR don’t often appreciate is that a few little things can really improve the experience of immersion… things like being
able to move your head, even just by a few degrees, make you feel like you’re “there”. There are some big challenges to overcome with this, of course, such as how to make the
movement of the camera not make the watcher feel ‘dragged along’, especially if their experience is of moving sideways… but these are challenges that will probably be solved for us
quickly by the porn industry, who’re working very hard on making this kind of experience seamless.
Just like the leaps and bounds we took with streaming video, yet again technology will get to thank people’s love of porn for advancing what home computers are capable of.
Exploring therapeutic experiences: until I really started trying out different VR gear, I didn’t think that it would be sufficiently engaging to be able to trigger a
strong enough response to be useful in a therapeutic capacity. But after the first time I came out of a 10-minute game of Caaaaardboard! feeling genuinely wobbly at the knees in the same way
as after my first parachute jump, I realised that modern VR really can produce an experience that results in a
psychosomatic response. And that’s really important, because it provides a whole new medium in which we can treat (and, I suppose, study), for example, phobias in a controlled and
‘escapable’ environment. Of course, that raises other questions too, such as: is it possible to cause disorders like PTSD with virtual reality? If it’s simply the case that optimally-made VR is more-immersive than the best possible “flat screen” experiences
and that it’s this that can improve its therapeutic potential, then surely it can be more-traumatic, too: I know enough people that were emotionally-scarred by Bambi‘s
mother’s death, E.T.‘s almost-death, or that one scene from Watership Down that gave me nightmares for years: how much more (potentially)-damaging could a VR
experience be? Whether or not it’s truly the case, it’ll only take one or two media circuses about murderous psychopaths who are unable to differentiate their virtual reality from the
real kind before people start getting asked these kinds of questions.
As I’m sure I’ve given away these last three blog posts, I’m really interested in the storytelling potential of VR, and you can bet I’ll be bothering you all again with updates of the
things I get to play with later this year (and, in fact, some of the cool technologies I’ve managed to get access to just while I’ve been writing up these blog posts).
If you haven’t had a chance to play with contemporary VR, get yourself a Cardboard. It’s dirt-cheap and it’s (relatively) low-tech and it’s nowhere near as awesome as “real” hardware
solutions… but it’s still a great introduction to what I’m talking about and it’s absolutely worth doing. And if you have, I’d love to hear your thoughts on storytelling using
virtual reality, too.
This is the second in a three-part blog post about telling stories using virtual reality. Read all of the parts
here.
I’m still waiting to get in on the Oculus Rift and HTC Vive magic when they’re made
generally-available, later this year. But for the meantime, I’m enjoying quite how hackable VR technologies are. I chucked my Samsung Galaxy S6 edge into an I Am Cardboard DSCVR, paired it with a gaming PC using TrinusVR, used GlovePIE to hook up a Wii remote (playing games with a keyboard or even a gamepad is challenging if your headset doesn’t have a
headstrap, so a one-handed control is needed), and played a game of Gone Home. It’s a cheap and simple way to jump into VR
gaming, especially if – like me – you already own the electronic components: the phone, PC, and Wiimote.
While the media seems to mostly fixate on the value of VR in “action” gaming – shoot-’em-ups, flight simulators, etc. – I actually think there’s possibly greater value in it for more
story-driven genres. I chose Gone Home for my experiment, above, because it’s an adventure that you play at your own pace, where the amount you get out of it as a story depends
on your level of attention to detail, not how quickly you can pull a trigger. Especially on this kind of highly-affordable VR gear, “twitchy” experiences that require rapid head turning
are particularly unsatisfying, not-least because the response time of even the fastest screens is always going to be significantly slower than that of real life. But as a storytelling
medium (especially in an affordable form) it’s got incredible potential.
I was really pleased to discover that some content creators are already experimenting with the storytelling potential of immersive VR experiences. An example would be the video
Hong Kong Unrest – a 360° Virtual Reality Documentary, freely-available on YouTube. Standing his camera (presumably a
Jump camera rig, or something similar) amongst the crowds of the 2014 Hong Kong protests, the creator of this documentary gives us a great opportunity to feel as though we’re standing
right there with the protesters. The sense of immersion of being “with” the protesters is, in itself, a storytelling statement that shows the filmmaker’s bias: you’re encouraged to
empathise with the disenfranchised Hong Kong voters, to feel like you’re not only with them in a virtual sense, but emotionally with them in support of their situation. I’m afraid that
watching the click-and-drag version of the video doesn’t do it justice: strap a Cardboard to your head to get the full experience.
But aside from the opportunities it presents, Virtual Reality brings huge new challenges for content creators, too. Consider that iconic spaghetti western The Good, The Bad, And The Ugly. The opening scene drops us right into one of the artistic themes of the film –
the balance of wide and close-up shots – when it initially shows us a wide open expanse but then quickly fills the frame with the face of Tuco (“The Ugly”), giving us the experience of
feeling suddenly cornered and trapped by this dangerous man. That’s a hugely valuable shot (and a director’s wet dream), but it represents something that we simply don’t have a way of
translating into an immersive VR setting! Aside from the obvious fact that the viewer could simply turn their head and ruin the surprise of the shot, it’s just not possible to fill the
frame with the actor’s face in this kind of way without forcing the focal depth to shift uncomfortably.
That’s not to say that there exist stories that we can’t tell using virtual reality… just that we’re only just beginning to find our feet with this new medium. When stage directors took
their first steps into film-making in the early years of the 20th century, they originally tried to shoot films “as if” they were theatre (albeit, initially, silent theatre): static
cameras shooting an entire production from a single angle. Later, they discovered ways in which this new medium could provide new ways to tell stories: using title cards to set the
scene, close-ups to show actors’ faces more-clearly, panning shots, and so on.
Similarly: so long as we treat the current generation of VR as something different from the faltering steps we took two and a half decades ago, we’re in frontier territory and feeling
our way in VR, too. Do you remember when smartphone gaming first became a thing and nobody knew how to make proper user interfaces for it? Often your tiny mobile screen would simply try
to emulate classic controllers, with a “d-pad” and “buttons” in the corners of the screen, and it was awful… but nowadays, we better-understand the relationship that people have with
their phones and have adapted accordingly (perhaps the ultimate example of this, in my opinion, is the addictive One More Line, a minimalist game with a single-action “press anywhere” interface).
I borrowed an Oculus Rift DK2 from a co-worker’s partner (have I mentioned lately that I have the most awesome co-workers?) to get a little experience with it, and it’s honestly one of
the coolest bits of technology I’ve ever had the privilege of playing with: the graphics, comfort, and responsiveness blow Cardboard out of the water. One of my first adventures –
Crytek’s tech demo Back to Dinosaur Island – was a visual spectacle even despite my apparently-underpowered
computer (I’d hooked the kit up to Gina, my two-month-old 4K-capable media centre/gaming PC: I suspect that Cosmo, my multi-GPU watercooled beast, might have fared
better). But I’ll have more to say about that – and the lessons I’ve learned – in the final part of this blog post.
This is the first in a three-part blog post about telling stories using virtual reality. Read all of the parts
here.
As part of my work at the Bodleian… but to a greater extent “just for fun”… I’ve spent the last few weeks playing with virtual reality. But
first, a history lesson.
This isn’t the first time I’ve used virtual reality. The first time, for me, was in the early 1990s, at the Future Entertainment
Show, where I queued for a shot at Grid Busters on a Virtuality 1000-CS. The Virtuality 1000 was powered by an
“Expality”: functionally an Amiga 3000 with specially-written software for reading the (electromagnetically-sensed) facing of the
headset and the accompanying “space joystick”… and providing output via a pair of graphics cards (one for each eye) to LCD screens. The screens were embedded in chunky bits on the sides
of the helmet and projected towards mirrors and lenses at the far end – this apparently being an effort to reduce how “front-heavy” it felt, but I can tell you that in practice a
Virtuality headset felt weighty on your neck, even for its era!
Nonetheless, the experience stuck with me: I returned to school and became the envy of my friends (the nerdy ones, at least) when I told them about my VR adventure, and – not least
thanks to programs like Tomorrow’s World and, of course, the episode of Bad Influence that reminded
me quite how badly I wanted to get myself down to Nottingham for a go at Legend Quest – I was genuinely filled with optimism that within the decade, playing a VR game would
have gone from the fringes of science fiction to being something where everybody-knew-somebody who did it routinely.
I never managed to get to play Legend Quest, and that first “VR revolution” swiftly fell flat. My generation was promised all of the hi-tech science, immersion, and magical
experience of The Lawnmower Man, but all we were left with was the overblown promises, expensive effects, and ill-considered user experience of, well… The Lawnmower
Man. I discovered Virtuality machines in arcades once or twice, but they seemed to be out-of-order more often than not, and they quickly disappeared. You can’t really blame the
owners of arcades: if a machine costs you in the region of £40,000 to buy and you can charge, say, £1 for a 3-minute go on it (bear in mind that even the most-expensive digital arcade
machines tended to charge only around 30p, at this time, and most were 10p or 20p), and it needs supervision, and it can’t be maintained by your regular guy… well, that swiftly begins
to feel like a bad investment.
Plus, the fifth generation of games consoles came along: the (original) Sony PlayStation, the
Nintendo N64, and – if you really wanted the highest-technology system (with the absolute least imaginative developers) – the Sega Saturn. These consoles came at price points that made
them suitable Christmas gifts for the good boys and girls of middle-class parents and sported 3D polygon graphics of the type that had previously only been seen in arcades, and the slow
decline of the video arcade accelerated dramatically. But home buyers couldn’t afford five-figure (still moderately-experimental) VR systems, and the market for VR dried up in a matter
of years. Nowadays, if you want to play on a Virtuality machine like the one I did, you need to find a collector (you might start with this guy from
Leicester, whose website was so useful in jogging my memory while I wrote this blog post).
2016 is the year in which this might change. The need for ubiquitous cheap computing has made RAM and even processors so economical that we throw them away when we’re done with
them. The demands of modern gaming computers and consoles have given us fast but affordable graphics rendering hardware. And the battle for the hottest new smartphones each year has
helped to produce light, bright, high-resolution screens no bigger than the palm of your hand.
In fact, smartphones are now the simplest and cheapest way to play with VR. Under the assumption that you’ve already got a smartphone, you’re only a couple of cheap
plastic lenses and a bit of cardboard away from doing it for yourself. So that’s how my team and I started out playing: with the wonderfully-named Google Cardboard. I know that Google Cardboard is old-hat and all the early adopters have even got their grandmothers using it by now, but
it’s still a beautiful example of how economical VR threatens to become if this second “VR revolution” takes hold. Even if you didn’t already own a compatible
smartphone, you could buy a second-hand one on eBay for as little as £30: that’s an enormous difference from the £40K Virtuality machines of my youth, which had only a fraction of the
power.
I’m going somewhere with this, I promise: but I wanted to have a jumping-off point from which to talk about virtual reality more-broadly first and it felt like I’d be overstretching if
I jumped right in at the middle. Y’know, like the second act of The Lawnmower Man. In the next part of this series, I’d like to talk about the storytelling opportunities that
modern VR offers us, and some of the challenges that come with it, and share my experience of playing with some “proper” modern hardware – an Oculus Rift.
Despite a full workload and a backlog of work, personal, volunteering and study emails to deal with, 2016 is off to a pretty good start so far. Here are some highlights:
In Sainsburys at the weekend, I got carded. Less than a week before my thirty-fifth birthday and for the first time in well over a decade, somebody asked me to prove my age when I
was trying to buy alcohol*. It’s even more-impressive when you consider that I was buying about £90 worth of shopping and a
single small bottle of kirsch… oh, and I had a toddler with me. That would have been an incredible amount of effort for somebody who very-definitely looks like he’s in his thirties.
Delighted.
This week, I’ve been mostly working on a project to make interactive digital content to support an exhibition on board games that we’re about to launch at my workplace. When my head
of department first mentioned the upcoming exhibition, you couldn’t have held me back.
Annabel has recently decided that she deserves a beard like her father and her Uncle Dan. Her new game is encouraging people to draw
them on her with washable pens. Aww.
I hope everybody else’s year is kicking off just as well.
* With one possible exception: the other year, an overenthusiastic bouncer insisted that I join a queue of one in turn
to show him my ID before he let me into a nightclub at 9:30pm on a Wednesday night. Like I said, overenthusiastic.
As you’re no-doubt aware, Home Secretary Theresa May is probably going to get her way with her “snooper’s
charter” by capitalising on events in Paris (even though that makes no sense), and before long, people working for
law enforcement will be able to read your Internet usage history without so much as a warrant (or, as the UN’s privacy chief put it, it’s “worse than scary”).
In a revelation that we should be thankful for as much as we’re terrified by, our government does not understand how the Internet works. And that’s why it’s really easy for
somebody with only a modicum of geekery to almost-completely hide their online activities from observation by their government and simultaneously from hackers. Here’s a device that I
built the other weekend, and below I’ll tell you how to do it yourself (and how it keeps you safe online from a variety of threats, as well as potentially giving you certain other
advantages online):
I call it “Iceland”, for reasons that will become clear later. But a more-descriptive name would be a “Raspberry Pi VPN Hotspot”. Here’s what you’ll need if you want to build one:
A Raspberry Pi Model B (or later) – you can get these for less than £30 online and it’ll come with an SD card that’ll let it boot Raspbian, which is the Linux
distribution I’ve used in my example: there’s no reason you couldn’t use another one if you’re familiar with it
A USB WiFi dongle that supports “access point” mode – I’m using an Edimax one that cost me under a fiver – but it took a little hacking to make it work – I’ve heard
that Panda and RALink dongles are easier
A subscription to a VPN with OpenVPN support and at least one endpoint outside of the UK – I’m using VyprVPN because
I have a special offer, but there are lots of cheaper options: here’s a great article about
choosing one
A basic familiarity with a *nix command line, an elementary understanding of IP networking, and a spare 20 minutes.
From here on, this post gets pretty geeky. Unless you plan on building your own little box to encrypt all of your home’s WiFi traffic until it’s well out of the UK and
close-to-impossible to link to you personally (which you should!), then you probably ought to come back to it another time.
Here’s how it’s done:
1. Plug in, boot, and install some prerequisites
Plug the WiFi dongle into a USB port and connect the Ethernet port to your Internet router. Boot your Raspberry Pi into Raspbian (as described in the helpsheet that comes with
it), and run:
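Assuming a stock Raspbian image, the prerequisites used in the steps below (OpenVPN, hostapd, udhcpd, and bind) could be installed with something like:

sudo apt-get update
sudo apt-get install openvpn hostapd udhcpd bind9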
2. Make your WiFi dongle work as an access point
If, like me, you’re using an Edimax dongle, you need to do an extra couple of steps to make it work as an access point. Skip this bit if you’re using one of the other dongles I listed
or if you know better.
3. Set up your VPN connection
Get OpenVPN configuration files from your VPN provider: often these will be available under the iOS downloads. There’ll probably be one for each available endpoint. I chose the one for
Reykjavik, because Iceland’s got moderately sensible privacy laws and I’m pretty confident that it would take judicial oversight for British law enforcement to collaborate with
Icelandic authorities on getting a wiretap in place, which is the kind of level of privacy I’m happy with. Copy your file to /etc/openvpn/openvpn.conf and edit it: you may find that you
need to put your VPN username and password into it to make it work.
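The exact directives depend on your provider, but as one hedged example: OpenVPN can read credentials from a file via its auth-user-pass directive, so you could add a line like this to openvpn.conf:

auth-user-pass /etc/openvpn/credentials

…then create /etc/openvpn/credentials containing your username on the first line and your password on the second, and make sure it isn’t world-readable:

sudo chmod 600 /etc/openvpn/credentials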
sudo service openvpn start
You can now test your VPN’s working, if you like. I suggest connecting to the awesome icanhazip.com and asking it where you are (you can use your
favourite GeoIP website to tell you what country it thinks you’re in, based on that):
curl -4 icanhazip.com
Another option would be to check with a GeoIP service directly:
curl freegeoip.net/json/
4. Set up your firewall and restart the VPN connection
Unless your VPN provider gives you DNAT (and even if they do, if you’re paranoid), you should set up a firewall to allow only outgoing connections to be established, and then restart
your VPN connection:
sudo iptables -A INPUT -i tun0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -i tun0 -j DROP
sudo sh -c "iptables-save > /etc/iptables.nat.vpn.secure"
sudo sh -c "echo 'up iptables-restore < /etc/iptables.nat.vpn.secure' >> /etc/network/interfaces"
sudo service openvpn restart
5. Configure your WiFi hotspot
Configure bind as your DNS server, caching responses on behalf of Google’s DNS servers, or another DNS server that you trust. Alternatively, you can just configure your DHCP clients to
use Google’s DNS servers directly, but caching will probably improve your performance overall. To do this, add a forwarder to /etc/bind/named.conf.options:
forwarders {
8.8.8.8;
8.8.4.4;
};
Restart bind, and make sure it loads on boot:
sudo service bind9 restart
sudo update-rc.d bind9 enable
Edit /etc/udhcpd.conf. As a minimum, you should have a configuration along these lines (you might need to tweak your IP address assignments to fit with your local network – the “router”
and “dns” settings should be set to the IP address you’ll give to your Raspberry Pi):
start 192.168.0.2
end 192.168.0.254
interface wlan0
remaining yes
opt dns 192.168.0.1
option subnet 255.255.255.0
opt router 192.168.0.1
option lease 864000 # 10 days
Enable DHCP by uncommenting (remove the hash!) the following line in /etc/default/udhcpd:
#DHCPD_ENABLED="yes"
Set a static IP address on your Raspberry Pi in the same subnet as you configured above (but not between the start and end of the DHCP list):
sudo ifconfig wlan0 192.168.0.1
And edit your /etc/network/interfaces file to configure it to retain this on reboot (you’ll need to use tabs, not spaces, for indentation):
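A minimal stanza matching the addressing used above might look like this (treat it as a sketch and adapt it to your own network; remember the warning about tabs):

allow-hotplug wlan0
iface wlan0 inet static
	address 192.168.0.1
	netmask 255.255.255.0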
Right – onto hostapd, the fiddliest of the tools you’ll have to configure. Create or edit /etc/hostapd/hostapd.conf as follows, but substitute in your own SSID, hotspot password, and
channel (to minimise interference, which can slow your network down, I recommend using a WiFi scanner tool on your mobile to find which channels your neighbours aren’t using, and
use one of those – you should probably avoid the channel your normal WiFi uses, too, so you don’t slow your own connection down with crosstalk):
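As an illustrative sketch only (the SSID, passphrase, and channel below are placeholders to change, and if you’re using the hacked Edimax build mentioned earlier its driver name will differ from the usual nl80211):

interface=wlan0
driver=nl80211
ssid=Iceland
hw_mode=g
channel=6
auth_algs=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=SomethingSecret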
Hook up this configuration by editing /etc/default/hostapd:
DAEMON_CONF="/etc/hostapd/hostapd.conf"
Fire up the hotspot, and make sure it runs on reboot:
sudo service hostapd start
sudo service udhcpd start
sudo update-rc.d hostapd enable
sudo update-rc.d udhcpd enable
Finally, set up NAT so that people connecting to your new hotspot are forwarded through the IP tunnel of your VPN connection:
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo sh -c "echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf"
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo sh -c "iptables-save > /etc/iptables.nat.vpn.secure"
6. Give it a go!
Connect to your new WiFi hotspot, and go to your favourite GeoIP service. Or, if your VPN endpoint gives you access to geographically-limited services, give those a go (you’d be amazed
how different the Netflix catalogues are in different parts of the world). And give me a shout if you need any help or if you have any clever ideas about how this magic little box can
be improved.
There’s a wonderful tool for making web-based “choose your own adventure”-type games, called Twine. One of the best things about it is that it’s so
accessible: if you wanted to, you could be underway writing your first ever story with it in about 5 minutes from now, without installing anything at all, and when it was done you could
publish it on the web and it would just work.
But the problem with Twine is that, in its latest and best versions, you’re trapped into using the Twine IDE. The Twine IDE
is an easy-to-use, highly visual, ‘drag-and-drop’ interface for making interactive stories. Which is probably great if you’re into IDEs or if you don’t “know better”… but for those of us who prefer to do our writing in a nice clean, empty text editor like Sublime or TextMate or to script/automate our builds, it’s just frustrating to lose access to the tools we love.
Plus, highly-visual IDEs make it notoriously hard to collaborate with other authors on the same work without simply passing
it back and forwards between you: unless they’ve been built with this goal in mind, you generally can’t have two people working in the same file at the same time.
Earlier versions of Twine had a command-line tool called Twee that perfectly filled this gap. But the shiny new versions don’t. That’s where I came in.
In that way that people who know me are probably used to by now, I was very-slightly unsatisfied with one aspect of an otherwise fantastic product and decided that the
correct course of action was to reimplement it myself. So that’s how, a few weeks ago, I came to release Twee2.
If you’re interested in writing your own “Choose Your Own Adventure”-type interactive fiction, whether for the world or
just for friends, but you find user-friendly IDEs like Twine limiting (or you just prefer a good old-fashioned text editor), then give Twee2 a go. I’ve written a simple 2-minute tutorial to get you
started, it works on Windows, MacOS, Linux, and just-about everything else, and it’s completely open-source if you’d like to expand or
change it yourself.
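For a flavour of what that looks like, here’s a tiny story in the Twee-style markup that Twee2 understands (a hedged illustration from memory; see the 2-minute tutorial for the exact syntax):

::StoryTitle
The Dark Room

::Start
You wake up in a dark room. Do you [[light a match|Match]] or [[go back to sleep|Sleep]]?

::Match
The match flares, revealing a door. The end.

::Sleep
You drift back off. The end.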
(there are further discussions about the concept and my tool on Reddit here, here, here and here, and on the Twinery forums here, here and here)
Warning: this blog post contains pictures of urine, invasive equipment, and the inside of a bladder. It’s probably safe for all audiences, but you might like to put
your glass of apple juice down for a minute or two. The short of it all is that I’m probably healthy.
Since my hospitalisation the other month with a renal system infection, I’ve undergone a series of investigations to try to determine if
there’s an underlying reason that I fell ill. As my doctor explained to me, it’s quite possible that what I’d experienced was a random opportunistic infection (perhaps aided by
a course of unrelated antibiotics I’d been on earlier this year or by certain lifestyle habits), but if that wasn’t the case – if there were some deeper explanation for my health
problems – it was important to find out sooner, rather than later.
Early on I had several ultrasound scans of my bladder (at a number of different times and at a variety of levels of fullness) and one of my kidneys, the latter of which revealed some
“minor scarring” of one of them, which apparently isn’t something I should be worried about… although I wish they’d started the two-page letter I got with that, rather than
opening with, effectively, “Contrary to what we told you at the hospital, we did later see something wrong with you…” But still, good to be reassured that this is probably not
an issue.
More recently, I went to the hospital to have a “flow rate test” and a cystoscopy. The flow rate test involved the most-ghetto looking piece of NHS equipment I’ve ever seen:
functionally, it seemed to be little more than a funnel on top of a large measuring beaker, in turn on top of a pressure-sensitive digital scale. The scale was connected up to the only
fancy-looking bit of equipment in the room, a graphing printer that output the calculated volume (based on its weight) of the sample and, more-importantly, the rate of change: the “flow
rate” of the stream of urine.
I suppose one advantage of using equipment like this is that it basically operates itself. Which meant that the nurse was able to give me five seconds worth of instruction and then
leave the room, which saved us from our own Britishness forcing us to make small-talk while I urinated in front of her or something. Ultimately, I turned out to be within the range of
normalcy here, too, although I was a little disappointed to find that the ward didn’t maintain a daily “score board” of flow rates, as sort-of a science-backed literal pissing contest.
Finally came the cystoscopy, and this was the bit that I’d been most-nervous about. This procedure involves the insertion of a long flexible tube into the urethra at the tip of the
penis, under local anaesthetic, and pushing it all the way down, through the sphincter, down through the prostate and then back up into the bladder. It’s then used as a channel to pump
water into the bladder, filling it to capacity and stretching out the sides, after which the fibreoptic cord (and light) that runs along its length is used to look around inside the
bladder to inspect for any of a plethora of different problems.
The doctor invited me to watch with him on the monitor, which I initially assumed was because I was clearly interested in everything and kept asking questions, but in hindsight I wonder
if it’s just that he – quite rightly – assumed that I might have panicked if I’d have been looking in the direction of the piece of equipment he brought in and jabbed at my penis with.
I only looked at it while it was on its way out, and my god it’s a scary-looking thing: sort of like a cross between a tyre pressure gauge and a blowtorch. The first few inches were
painless – the local anaesthetic had made me completely numb right up to and including the external sphincter, which is at the base of the penis. However, what I can only assume was the
second sphincter complained of the discomfort, and it stung pretty sharply any time the doctor would twist the cystoscope to change the angle of the picture.
Seeing the inside of your own body is an amazing experience. I mean: it’s not amazing enough to even be worth the experience of a cystoscopy, never mind the illness that in my case
preceded it… but it’s still pretty cool. The ultrasounds were interesting, but there’s nothing quite so immersive as seeing a picture of the inside of your own bladder, gritting your
teeth while the doctor points to an indentation and explains that it’s the opening to the ureter that connects to your own left kidney!
Unfortunately I neglected to take my phone into the operating room, having put it into a locker when I changed into a gown, and so I wasn’t able to (as I’d hoped) take photos of the
inside of my own bladder. So you’ll have to make do with this video I found, which approximates the experience pretty well. The
good news is that there’s probably nothing wrong with me, now that the infection from earlier this year has passed: nothing to suggest that there’s any deeper underlying issue
that caused me to get sick, anyway!
The bad news is that while the procedure itself was shorter and more-bearable than I’d expected, the recovery’s been a real drag. A week later, it still hurts a lot to urinate (although
I’ve stopped yelping out loud when I do so) and my crotch is still too sore for me to be able to cycle. I’ve also discovered that an erection can be painful enough to wake me up, which
is definitely not the most-pleasant way I’ve been roused by a penis. But it’s getting better, day by day, and at least I know for sure that I’m more-or-less “right” in the renal system,
now.
My geek-crush Ben Foxall posted on Twitter on Monday morning to share
that he’d had a moment of fun nostalgia when he’d come into the office to discover that somebody in his team had covered his monitor with two layers of Post-It notes. The bottom layer
contained numbers – and bombs! – to represent the result of a Minesweeper board, and the upper layer ‘covered’ them so that individual Post-Its could be removed to reveal what lay
beneath. Awesome.
Not to be outdone, I hunted around my office and found some mini-Post-Its. Being smaller meant that I could fit more of them onto a monitor and thus make a more-sophisticated (and
more-challenging!) play space. But how to generate the board? Sure: I could do it by hand, but that doesn’t seem very elegant at all – plus, humans make really bad random number generators! I didn’t need quantum-tunnelling-seeded Minesweeper (yes, that’s a thing) levels of entropy, sure, but it’d still be nice to outsource the heavy lifting
to a computer, right?
So naturally, I wrote a program to do it for me. Want to see? It’s at danq.me/minesweeper. Just line up some Post-Its on a co-worker’s monitor to work out
how many you can fit across it in each dimension (I found that I could get 6 × 4 standard-sized Post-Its but 7 × 5 or even 8 × 5 mini-sized Post-Its very comfortably onto one of the typical widescreen monitors in my office), decide how many mines you want, and click
Generate. Don’t like the board you get? Click it again!
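If you’re curious quite how little code a generator like that needs, here’s a rough sketch of the same idea in Python (an illustration of the technique, not the actual code behind danq.me/minesweeper):

import random

def generate_board(width, height, mine_count):
    # Pick distinct random cells to be mines...
    mines = {(i % width, i // width)
             for i in random.sample(range(width * height), mine_count)}
    # ...then label every other cell with its count of adjacent mines.
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            if (x, y) in mines:
                row.append("*")
            else:
                row.append(str(sum((nx, ny) in mines
                                   for nx in (x - 1, x, x + 1)
                                   for ny in (y - 1, y, y + 1))))
        rows.append(" ".join(row))
    return "\n".join(rows)

print(generate_board(8, 5, 10))  # e.g. 8 × 5 mini Post-Its with 10 mines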
And because I was looking for a fresh excuse to play with Periscope, I broadcast the first game I set up live to the Internet. In the end, 66
people ended up watching some or all of a paper-based game of Minesweeper played by my colleague Liz, including moments of cheering her on
and, in one weird moment, despair at the revelation that she was married. The internet’s strange, yo.
Anyway: in case you missed the Periscope broadcast, I’ve put it on YouTube. Sorry about the portrait-orientation filming: I
think it’s awful, too, but it’s a Periscope thing and I haven’t installed the new update that
fixes it yet.
Now go set up a game of Post-It Minesweeper for a friend or co-worker.
Three weeks ago was (give or take a few weeks because we’ve never bothered with accuracy) the end of Ruth’s and my 8th year together, and
we marked the occasion with a mini-break away for a few nights. We spent the first two nights in a ‘showman’-style gypsy caravan in Herefordshire, and it was amazing enough that I
wanted to share it with you:
The place we went was Wriggles Brook, a ‘glamping’-style site in the shadow of the Forest of Dean. In a long field that twists its way
alongside a babbling brook, the owners have set up a trio of traditional horse-drawn caravans, each in a wooded clearing that isolates it from the others. Two of the caravans are
smaller, designed just for couples (who are clearly the target market for this romantic getaway spot), but we took the third, larger, (centenarian!) one, which sported a separate living
room and bedroom.
The bedroom was set up so that children could be accommodated in a bunk under the adults (with their own string of fairy lights and teeny-tiny windows), but after she bumped her head on
the underside of the beams Annabel decided that she didn’t want to sleep there, so we set up her travel cot in the living room.
So yeah: a beautiful setting, imaginative and ecologically-friendly accommodation, and about a billion activities on your doorstep. Even the almost-complete lack of phone signal into
the valley was pretty delightful, although it did make consulting Google Maps difficult when we got lost about 20 minutes out from the place! But if there’s one thing that really does
deserve extra-special mention, it’s the food!
Our hosts were able to put on a spectacular breakfast and evening meal for us each night, including a variety of freshly-grown produce from their own land. We generally ate in their
mini dining room – itself a greenhouse for their grapevines – but it was equally-nice to have pancakes delivered to the picnic table right outside our caravan. And speaking as somebody
who’s had their fair share of second-rate veggie breakfasts over the last… what, four and a half
years?… it was a great relief to enjoy a quite-brilliant variety of vegetarian cuisine from a clearly-talented chef.
So yeah – five stars for Wriggles Brook in Herefordshire if you’re looking for an awesome romantic getaway, with or without an accompanying toddler. Ruth and I later palmed the little
one off on JTA so that we could have a night away without her, too, which – while fun (even if we didn’t get to try all 280+ gins at the restaurant we ate at) – wasn’t quite so worthy of mention as the unusual gypsy-caravan-escape that
had preceded it. I’m hoping that we’ll get out to Wriggles Brook again.
As I mentioned last week, I’ve been ill. For those who wanted the grisly details, well: here you go.
Warning: this blog post contains frank subjective descriptions of the symptoms of slightly-icky medical conditions including photographs. It’s probably safe for all
audiences, but you might not want to be drinking anything while you read it.
The trouble began, I suppose, about a month and a half ago, when I contracted what seemed to be a minor urinary tract
infection. If you’re a woman then, statistically-speaking, I probably don’t need to tell you what that is, but for the benefit of the men: it’s what happens when bacteria (or,
sometimes, a fungus or virus) infects the renal system: the kidneys, bladder, ureters, and especially the urethra. It’s not pleasant: it gives you the feeling of needing to pee the
whole time, makes it harder to pee, and – when you do – it feels a little bit like your piss is made of lava.
Despite it not being common for men (more on that later), I’ve had mild UTIs on a couple of occasions in my life, and I’d always found
that ensuring that I got plenty of water and a full RDA of vitamin C was more than enough to make it clear up all by itself
within a couple of days. So that’s what I started doing. But then things took a turn for the worse: I started getting a stabbing pain in my left kidney. Recognising this as being
pyelonephritis, I went to the doctor who prescribed me a course of the antibiotic ciprofloxacin. Within a couple of days I was
feeling right as rain (of course I continued to finish the course of drugs, although I was interested to see that that advice is
starting to become controversial).
Naturally I was a little disappointed when, the week before last, I started getting UTI-like pain again, followed very swiftly
this time by pain in my bladder that constantly felt a little like I was recovering from being punched. Back to the doctor I went, where (after the usual tests to work out what the
most-likely best-antibiotic to use was) I was prescribed a course of nitrofurantoin. I’d never had this particular drug
before, and it wasn’t initially clear which of the escalating ill-effects I was experiencing were symptoms of the infection and which were side-effects of the medication: it started
with joint pain, then nausea, then diarrhoea, then a full-on fever. It was at the point that I was fully-clothed in bed, running a temperature and soaked in sweat but still feeling cold
and shivering that Ruth called 111, who told her to take me to A&E.
(Which, like her care for me in general, she did fabulously well, except for a little bit where she sort-of ran me over in the car park of the hospital: thankfully some
friendly paramedics were standing around and were able to drag me into the building. Anyway, I don’t have much memory of that bit and I certainly don’t have any amusing photos, so I’ll
skip over it.)
A few tests later, the medical staff seemed confident that what I was experiencing was not an allergic reaction to the antibiotic (however, I see that they still made a note of it as a
risk on my notes!) but was a progression of the infection, which seemed to have crossed over from the tissues of my renal system and into my bloodstream and was now generally causing
havoc by attacking the rest of my body. They hooked me up to a drip of an intravenous antibiotic and kept me stocked with painkillers, then sent me up to the urology ward and set me up
with a “bed” (urology mostly deals with outpatients, and so my ‘bed’ was actually a trolley, but they wanted to keep me close to the urologists in case of any complications).
A consultant switched me to a week’s course of yet-another different antibiotic – co-amoxiclav – and recommended
keeping me in for another night. Now, I think that co-amoxiclav is a really interesting drug, so I’m going to be a bit of a nerd and tell you about that for a bit (I promise we’ll get
back to my health in a moment: if you don’t want the science bit, just scroll past the diagrams to the next photo).
Co-amoxiclav is a mixture of two drugs. The first is the antibiotic amoxicillin. Amoxicillin belongs to a class of antibiotics
(which includes penicillin) called β-lactams, which is the most-commonly used family of antibiotics. These antibiotics contain a four-membered lactam ‘ring’ (highlighted in blue above),
and the way that they work is that this part of the molecule bonds with a particular protein common to all gram-positive
bacteria. Normally this protein is responsible for producing peptidoglycan, which is an essential ingredient in the cell walls
of these kinds of bacteria, but when it gets locked to a β-lactam ring it stops working. As a result, when the bacterium reproduces the new child doesn’t have a proper cell wall, and
can’t survive long in even the least-hostile environments.
Of course, we’re in a medical arms race right now because some of the bacteria which we’re targeting with antibiotics are becoming resistant. And here’s one way that they’re
doing so: some of these bacteria have evolved to produce beta-lactamase, which also bonds with β-lactam rings, adding an
OH to them and making them useless. Bummer, eh?
The second drug in co-amoxiclav, then, is clavulanic acid, which was discovered in the 1970s and started being added to drugs
in the 1980s. Despite having a β-lactam ring (as you’ll see in blue above), clavulanic acid by itself isn’t an effective antibiotic (for reasons I can’t quite get my head
around – anyone want to help me?). But what it’s great at is bonding that lactam ring to beta-lactamase, thereby deactivating the bacterial counter-offensive and allowing the
amoxicillin to carry on working, combating resistance.
So what you’ve got in co-amoxiclav is an antibiotic and a chemical that counteracts the effects of a chemical that deactivates that antibiotic. Wow! It’s things like this
that really make me wish I had a brain for biology!
I was eventually discharged from hospital and released to go home for lots of bed rest and water, along with a further week’s course of co-amoxiclav. Unfortunately it turns out that I’m
one of the unlucky folks in whom amoxicillin causes dizziness, so I spent most of that week lying down in-between wobbly, vertigo-filled trips to and from the bathroom. But it worked!
Within a few days I was feeling much better and by the end of last week I was able to work from home (and actually feel like I was useful again!).
Free of symptoms and off the drugs, I returned to work properly on Monday morning and everything seemed fine. Until, late in the morning, I went to the bathroom and started pissing
blood.
Now apparently blood in your urine, while horrifying when it happens to you unexpectedly, isn’t actually a
sign of a medical emergency. I was starting to get bladder pain again, quite intensely, so I excused myself from work and called the urology ward, who decided that I wasn’t in bad
enough a condition to go and see them but sent me straight to my GP, who gave me another fortnight’s worth of co-amoxiclav. They’re monitoring my progress with urine and blood samples
and if by Friday it’s not having an impact, they’re going to want to send me back to hospital (hopefully only as an outpatient) and pump me full of the intravenous stuff again. So…
fingers crossed for a good result out of these drugs.
I was hoping that by this point I’d be writing this blog post and telling you all about how I’d fought the bladder monster and won. But it looks like I won’t be able to claim that
victory for another week or two, yet. All I know is that I searched for “bladder monster” and found
this. Yeah: that feels about right.
So, I’ve not been well lately. And because a few days lying on my back with insufficient mental stimulation is a quick route to insanity for me, I’ve been trying to spend my
most-conscious moments doing things that keep my brain ticking over. And that’s how I ended up calculating pi.
Pi (or π) is, of course, the ratio of the circumference of a circle to its diameter, for every circle. You’ll probably have learned it in school as 3.14, 3.142, or 3.14159, unless you
were one of those creepy kids who tried to memorise a lot more digits. Over the years, we’ve been able to calculate it to ever-greater precision, and although there’s no practical reason that we need to know it beyond the 35 digits worked out by
Ludolph van Ceulen around the turn of the 17th century, it’s still a fascinating topic that attracts research and debate.
Most of the computer-based methods used to calculate pi today are hard to explain, but there’s a really fun computer-based
experimental method that can be used to estimate its value, which I’m going to share with you. As I’ve been stuck in bed (and often asleep) for the last few days, I’ve not
been able to do much productive work, but I have found myself able to implement an example of how to calculate pi. Recovering like a nerd, am I right?
Remember how in school you learned that the formula to describe a circle (of radius 1) on a Cartesian coordinate system is x² + y² = 1? Well, you can work
this backwards, too: if you have a point on a grid, (x,y), then you can tell whether it’s inside or outside that circle. If x² + y² < 1, it’s inside, and if
x² + y² > 1, it’s outside. Meanwhile, the ratio of the area of a circle of radius 1 (π × 1² = π) to the area of the square that exactly contains it (2 × 2 = 4) is π/4.
Take those two facts together and you can develop an experimental way to determine pi, called a Monte Carlo
method. Take a circle of radius 1 inside a square that exactly contains it. Then randomly choose points within the square. Statistically speaking, these random points have a
π/4 chance of occurring within the circle (rather than outside it). So if we take the number of points that lie within the circle, divide that by the total number of
points, and then multiply by 4, we should get something that approaches the value of pi. You could even do it by hand!
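If you fancy seeing that in code, here’s a minimal sketch of the method in plain JavaScript. To be clear, this isn’t the code behind the demo linked below, just the bare arithmetic: it samples the positive quadrant, comparing a quarter-circle to a unit square, which gives the same π/4 ratio.

function estimatePi(samples) {
  let inside = 0;
  for (let i = 0; i < samples; i++) {
    // Pick a random point in the unit square [0,1) × [0,1).
    const x = Math.random();
    const y = Math.random();
    // x² + y² < 1 means the point falls inside the circle of radius 1.
    if (x * x + y * y < 1) inside++;
  }
  // Points land inside with probability π/4, so multiply back up by 4.
  return (4 * inside) / samples;
}

console.log(estimatePi(1000000)); // prints something like 3.1424 – it varies from run to run!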
The software illustration I’ve written is raw JavaScript, HTML, and SVG, and should work in any modern web browser (though it can get a little slow once it’s drawn a few thousand
points!). Give it a go, here! When you go to that page, your browser will start drawing dots at random points, colouring them red if
the sum of the squares of their coordinates is less than 1 – that is, if they fall inside the circle of radius 1. As it goes along, it uses the formula I
described above to approximate the value of pi. You’ll probably get as far as 3.14 before you get bored, but there’s no reason that this method couldn’t be used to go as far as
you like: it’s not the best tool for the job, but it’s super-easy to understand and explain.
Oh, and it’s all completely open-source, so you’re welcome to take it and do with it what you wish. Turn off the graphical output
to make it run faster, and see if you can get an approximation of pi that’s accurate to 5 digits! Or slow it down so you can see how the appearance of each and every point affects the
calculation. Or adapt it into a teaching tool and show your maths students one way that pi can be derived experimentally. It’s all yours: have fun.
And I’ll update you on my health at some other point.
It’s been almost five years since Sainsbury’s supermarkets pioneered the “brand match” idea, which rivals Tesco and Asda later adopted into their own schemes, and I maintain that it’s
one of the cleverest pieces of marketing that I’ve ever seen. In case you’ve not come across it before, the principle is this: if your shopping would have been cheaper at one of their
major competitors, these supermarkets will give you a voucher for the difference right there at the checkout. Properly advertised (e.g. not in ways that get banned for being misleading), these schemes are an
incredibly-compelling tool: no consumer should say no to getting the best possible prices without having to shop around, right?
But it’s nowhere near as simple as that. For a start, the terms and conditions (Asda, Sainsbury’s, Tesco) put
significant limitations on how the schemes work. You need to buy at least a certain number of items (8 at Asda, 10 at the other two). Those items must be directly-comparable to
competitors’ items: which basically means that only branded products count, but even among them, the competitor must stock the exact same size or else it doesn’t count, even if it would
have been cheaper to buy two half-sized products there. There are upper limits to the value of the vouchers (usually £10) and the number that you can use per transaction or per month.
“Buy X get Y free” offers are excluded. And there’s a huge list of not-compared products which may include batteries, toys, DVDs, some alcoholic drinks, cosmetics, homeware, flowers,
baby formula, light bulbs, books, and anything (even non-medicines) from the pharmacy aisle.
But even if it only applies to some of your shopping – the stuff that’s easy to directly compare – it’s still a good deal, right? You’re getting money back towards what you
would have saved if you’d gone up the road? Not necessarily. Let us assume that, on average, the prices of these three supermarket giants are pretty much the same.
Individual products might each be a little more expensive here and a little cheaper there, but if you buy a large enough trolley-load you’re not going to notice the difference.
Following me so far? Here’s what this means for the voucher: it no longer remotely represents what you would have saved if you’d actually been “shopping around”. Let’s take a
concrete example:
Suppose that this is my somewhat-eccentric shopping list (I wanted to select a variety of comparable branded products), and I’m considering shopping at either Sainsbury’s or Tesco:
Mozzarella
Fish fingers
Clover spread
Whole milk
Crunch corner yoghurts
Fromage frais
Cadbury Mini Rolls
Frozen chips
Frozen petit pois
Goodfella’s deep pan pizza
Dough balls
Chocolate-dipped flapjacks
Dry white wine
Bagels
Multigrain wraps
Red Bull multipack
Angel Slices
Cheerios
Windolene
Cornettos
Not too unreasonable, right? I’ve made a spreadsheet showing my
working, where you’ll see today’s prices for each of these items (along with the actual brands and package sizes I’ve selected), if you’d like to check my maths, because here comes
the clever bit.
Based on my calculations, taking my imaginary shopping list to Sainsbury’s will ultimately cost me £52.85. Taking it to Tesco will cost me £54.13. Pretty close, right? And I’m not likely to care about
the difference, because Tesco would give me a £1.28 voucher off my next shop, which makes up for it (note that Sainsbury’s wouldn’t reciprocate in kind if it were the other
way around, after a policy change they made late last year). But that’s not
actually a true representation of the value of ‘shopping around’. As my spreadsheet shows, if I were to buy each item on my list at the supermarket that was cheapest, it’d only cost me
a total of £43.75: that’s a saving of £9.10 (or about 17% off my entire shop) compared to the cheapest of these supermarkets. These schemes don’t give you a
real “best of all worlds”. Instead, they give you, at most, a “best of all worlds, assuming that you’re still going to be lazy enough to only shop in one place”.
If you’re particularly devious of mind, you can exploit this. For example, suppose I went to Tesco but when I reached the checkout I split my shopping into two transactions.
The first transaction contains the frozen goods, milk, wine, dough balls, flapjacks, and mini rolls. This comes out at £33.73, which is £10.38 more than Sainsbury’s would charge me for the same goods. Tesco therefore gives me a £10 voucher, which I
immediately use on the second batch of shopping: the one which contains goods that are cheaper than their Sainsbury’s equivalents. The total price of my shopping:
£44.13 – only 38p more than if I’d gone to both supermarkets and bought only the best-value
goods from each (the 38p discrepancy comes from the fact that Tesco won’t ever give you a voucher worth more than £10, no matter how much you’re losing out).
It’s not even that hard to do. Obviously, somebody’s probably written an app for it, but even if you’re just doing it by guesswork you can get a better result than just piling all of
your shopping onto the conveyor belt together. Simply put the things which seem like a good deal (all of the discounted products, plus anything that feels like it’s good value) at one
end of your trolley, and unload those things last. Making sure that the first batch contains at least ten items, ‘split’ your shopping roughly where the good-value items begin. Then take any voucher you get from your first load, and apply it to the second.
It’s pretty easy, so long as you don’t mind looking like a bit of a tool at the checkout.
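If you wanted to automate the guesswork, the partitioning logic is simple enough to sketch in JavaScript. To be clear, this is a hypothetical illustration: the item names and prices below are made up, and a real version would also need to respect the minimum-items rule and the exclusion lists.

// Items that are pricier here (e.g. at Tesco) than at the comparison store go
// into the first transaction, to maximise the brand-match voucher; everything
// that’s good value here goes into the second, where the voucher gets spent.
const trolley = [
  { name: "Frozen chips", here: 2.50, there: 1.75 },
  { name: "Bagels",       here: 1.00, there: 1.40 },
  { name: "Mozzarella",   here: 1.10, there: 0.90 },
  // ...and so on for the rest of the list...
];

const first  = trolley.filter(item => item.here >= item.there);
const second = trolley.filter(item => item.here <  item.there);

const sum = items => items.reduce((total, item) => total + item.here, 0);
const lostOut = first.reduce((total, item) => total + (item.here - item.there), 0);
const voucher = Math.min(10, lostOut); // vouchers are capped at £10

console.log(`First transaction:  £${sum(first).toFixed(2)}, earning a £${voucher.toFixed(2)} voucher`);
console.log(`Second transaction: £${(sum(second) - voucher).toFixed(2)} after the voucher`);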
But to most people, most of the time, this is nothing more than a strong and compelling piece of marketing. Either you get reminded that you allegedly “saved money”, on a piece of paper
that probably goes into your wallet and helps to combat buyer’s remorse, or else you get told that you paid a particular
amount more than you needed to, and are offered the difference back so long as you return to the same store within the next fortnight. Either way, the supermarket wins your loyalty, which
– for a couple of pence on each transaction (assuming that the customer doesn’t lose the voucher or otherwise fail to get an opportunity to use it) – is a minuscule price to pay.
At the very end of last year, right before the subsidy rate
dropped in January, I had solar panels installed: you may remember that I blogged about it at the time. I thought you might be interested to
know how that’s working out for us.
Because I’m a data nerd, I decided to monitor our energy usage, production, and total cost in order to fully understand the economic impact of our tiny power station. I appreciate that
many of you won’t see how cool this kind of data is, but that’s only because you don’t yet realise how fun statistics can be… it is cool,
damn it!
If you look at the chart above, for example (click for a bigger version), you’ll notice a few things:
We use a lot more kWh of gas than electricity (note that’s not units of gas: our gas meter measures in cubic feet, which means we have to multiply by around… 31.5936106… to
get the kWh… yes, really – more information here, and see the conversion sketch after this list), but electricity is correspondingly 3.2 times more expensive per kWh – I have a separate chart to measure our daily energy costs, and it is, if
anything, even more exciting (can you imagine!) than this one.
Our gas usage grows dramatically in the winter – that’s what the big pink “lump” is. That’s sort-of what you’d expect on account of our gas central heating.
Our electricity usage has trended downwards since the beginning of the year, when the solar panels were installed. It’s hard to see with the gas scale throwing it off (but again,
the “cost per day” chart makes it very clear). There’s also a bit near the end where the electricity usage seems to fall off the bottom of the chart… more on that in a moment.
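For the curious, that mysterious multiplier isn’t arbitrary: it’s the standard conversion for imperial gas meters, which count in hundreds of cubic feet. Here’s a quick sketch – note that the calorific value varies by region and over time, so the 39.3 MJ/m³ below is just the assumed figure that happens to reproduce my multiplier:

// Convert an imperial gas meter reading (hundreds of cubic feet) to kWh.
function gasKWh(hundredsOfCubicFeet, calorificValue = 39.3 /* MJ/m³, assumed */) {
  const cubicMetres = hundredsOfCubicFeet * 2.83; // imperial volume -> metric
  const corrected   = cubicMetres * 1.02264;      // standard volume correction
  return (corrected * calorificValue) / 3.6;      // megajoules -> kWh
}

console.log(gasKWh(1)); // ≈ 31.5936 – there’s that multiplier!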
What got me sold on the idea of installing solar panels, though, was their long-term investment potential. I had the money sitting around anyway, and by my calculations we’ll get a
significantly better return-on-investment out of our little roof-mounted power station than I would out of a high-interest savings account or bond. And that’s because of the
oft-forgotten “third way” in which solar panelling pays for itself. Allow me to explain:
Powering appliances: the first and most-obvious way in which solar power makes economic sense is that it powers your appliances. Right now, we generate
almost as much electricity as we use (although because we use significantly more power in the evenings, only about a third of what we generate goes directly into making our
plethora of computers hum away).
Selling back to the grid (export tariff): as you’re probably aware, it’s possible for a household solar array to feed power back into the National Grid: so
the daylight that we’re collecting at times when we don’t need the electricity is being sold back to our energy company (who in turn is selling it,
most-likely, to our neighbours). Because they’re of an inclination to make a profit, though (and more-importantly, because we can’t commit to making electricity for them when they
need it: only during the day, and dependent upon sunlight), they only buy units from us at about a third of the rate that they sell them to consumers. As a result, it’s worth our
while trying to use the power we generate (e.g. to charge batteries and to run things that can be run “at any point” during the day like the dishwasher, etc.)
rather than to sell it only to have to buy it back.
From a government subsidy (feed-in tariff): here’s the pleasant surprise – as part of government efforts to increase the proportion of the country’s energy that
is produced from renewable sources, they subsidise renewable microgeneration. So if you install a wind turbine in your garden or a solar array on your roof, you’ll get a kickback for
each unit of electricity that you generate. And that’s true whether you use it to power appliances or sell it back to the grid – in the latter case, you’re basically
being paid twice for it! The rate that you get paid as a subsidy gets locked-in for ~20 years after you build your array, but the rate offered to new installations is gradually
decreasing. We’re getting paid a little over 14.5p per unit of electricity generated.
As the seasons have changed from Winter through Spring we’ve steadily seen our generation levels climbing. On a typical day, we now make more electricity than we use. We’re still having
to buy power from the grid, of course, because we use more electricity in the evening than we’re able to generate when the sun is low in the sky: however, if (one day) technology like
Tesla’s PowerWall becomes widely-available at reasonable prices, there’s no reason that a house like ours couldn’t be totally
independent of the grid for 6-8 months of the year.
So: what are we saving/making? Well, looking at the last week of April and the first week of May, and comparing them to the same period last year:
Powering appliances: we’re saving about 60p per day on electricity costs (down to about £1.30 per day).
Selling back to the grid: we’re earning about 50p per day in exports.
From a government subsidy: we’re earning about £2.37 per day in subsidies.
As I’m sure you can see: this isn’t peanuts. When you include the subsidy, it’s possible to consider our energy as being functionally “free”, even after you compensate for the
shorter days of the winter. Of course, there’s a significant up-front cost in installing solar panels! It’s hard to say exactly when, at this point, I expect them to have paid for
themselves (from which point I’ll be able to use the expected life of the equipment to more-accurately predict the total return-on-investment): I’m planning to monitor the situation for
at least a year, to cover the variance of the seasons, but I will of course report back when I have more data.
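In the meantime, here’s the shape of the sum I’ll eventually be doing, as a back-of-the-envelope JavaScript sketch. The installation cost below is a deliberately-made-up placeholder (not our actual figure), and annualising a sunny fortnight obviously flatters the winter months:

// Rough “simple payback” estimate from the daily figures above.
const savingsPerDay = 0.60; // £ saved on electricity we no longer buy
const exportPerDay  = 0.50; // £ earned from the export tariff
const subsidyPerDay = 2.37; // £ earned from the feed-in tariff

const benefitPerDay  = savingsPerDay + exportPerDay + subsidyPerDay; // £3.47
const benefitPerYear = benefitPerDay * 365; // ≈ £1,267, optimistically

const installationCost = 6000; // hypothetical placeholder – not our real cost!
console.log(`Simple payback: about ${(installationCost / benefitPerYear).toFixed(1)} years`);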
I mentioned that the first graph wasn’t accurate? Yeah: so it turns out that our house’s original electricity meter was of an older design that would run backwards when
electricity was being exported to the grid. Which was great to see, but not something that our electricity company approved of, on account of the fact that they were then paying us for
the electricity we sold back to the grid, twice: for a couple of days of April sunshine, our electricity meter consistently ran backwards throughout the day. So they sent a couple of
engineers out to replace it with a more-modern one, pictured above (which has a different problem: its “fraud light” comes on whenever we’re sending power back to the grid, but
apparently that’s “to be expected”).
In any case, this quirk of our old meter has made some of my numbers from earlier this year more-optimistic than they might otherwise be, and while I’ve tried to compensate
for this it’s hard to be certain that my estimates prior to its replacement are accurate. So it’s probably going to take me a little longer than I’d planned to have an accurate
baseline of exactly how much money solar is making for us.