Have you noticed how the titles printed on the spines of your books are all, for the most part, oriented the same way? That’s not a coincidence.
ISO 6357 defines the standard positioning of titles on the spines of printed books (it’s also codified as British Standard BS6738). If you assume that your book is stood “upright”, the question is one of which way you tilt your head to read the title printed on the spine. If you tilt your head to the right, that’s a descending title (as you read left-to-right, your gaze moves down, towards the surface on which the book stands). If you tilt your head to the left, that’s an ascending title. If you don’t need to tilt your head in either direction, that’s a transverse title.
The standard goes on to dictate that descending titles are to be favoured: this places the title in a readable orientation when the book lies flat on a surface with the cover face-up. Grab the nearest book to you right now and you’ll probably find that it has a descending title.
But if the book is lying on a surface, I can usually read the cover of the book. Only if a book is in a stack am I unable to do so, and stacks are usually relatively short and so it’s easy enough to lift one or more books from the top to see what lies beneath. What really matters when considering the orientation of a spine title is, in my mind, how it appears when it’s shelved.
It feels to me like this standard’s got things backwards. If a shelf of anglophone books is organised into any kind of order (e.g. alphabetically) then it’ll usually be from left to right. If I’m reading the titles from left to right, and the spines are printed descending, then – from the perspective of my eyes – I’m reading from bottom to top: i.e. backwards!
It’s possible that this is one of those things that I overthink.
If you’re reading this post via my blog and using a desktop computer, try opening your browser’s debug console (don’t worry; I’ll wait). If you don’t know how, here are instructions for Firefox and instructions for Chrome. Other browsers may vary. You ought to see something like this in your debugger:
What sorcery is this?
Principally, though, the console is designed for textual content. That said, both Firefox and Chrome’s consoles permit the use of CSS to style blocks of debug output by using the %c escape sequence. For example, I could style some of a message with italic text:
>> console.log('I have some %citalic %ctext', 'font-style: italic;', '');
I have some italic text
Using CSS directives like background, then, it’s easy to see how one could embed an image into the console, and that’s been done before. Instead, though, I wanted to use the lessons I’d learned developing PicInHTML 8¾ years ago to use text and CSS (only) to render a colour picture to the console. First, I created my template image – a hackergotchi of me and an accompanying speech bubble, shrunk to a tiny size and posterised to reduce the number of colours used and saved as a PNG.
Somehow in the intervening years I’ve gotten way out of practice and even more out of shape because our expedition was hard. Partly that was our fault for choosing to climb on one of the shortest days of the year, requiring that we maintain a better-than-par pace throughout to allow us to get up and down before the sun set (which we actually managed with further time in-hand), but mostly it’s the fact that I’ve neglected my climbing: just about the only routine exercise I get these days is cycling, and with changes in my work/life balance I’m now only doing that for about 40 miles in a typical week.
For the longest time my primary mountaineering-buddy was my dad, who was – prior to his death in a hillwalking accident – a bigger climber and hiker than I’ll ever be. Indeed, I’ve been “pushed on” by trying to keep up with my father enough times that fighting to keep up with Robin at the weekend was second nature. If I want to get back to the point where I’m fit enough for ice climbing again, I probably need to start by finding an excuse to get up a hill rather more often than I currently do. Perhaps I can lay some of the blame for my being out of practice on the flat, gentle plains of Oxfordshire?
Noticing that our bagel supply was running low and with two kids who’d happily fight to the death for the last one if it came down to it, I decided this weekend to dust off an old recipe and restock using the ingredients in our cupboard. For a festive spin, I opted to make cranberry and cinnamon bagels, and served a few at my family’s regular Sunday brunch. Little did I know that they would turn out to be such a hit that not one from the resupply would survive to the end of the day, and I’ve been pressed into making them again in time for breakfast on Christmas Day (or, as Ruth suggested as she and Robin fought for the last one in a manner more-childish than the children ever would, I could “make them any time I feel like it; every week maybe?”).
If you’d like to make your own, and you totally should, the recipe’s below. I prefer volumetric measurements to weight for bread-making: if you’re not used to doing so, be sure to give your dry ingredients a stir/shake to help them settle when measuring.
Whisk the yeast into the water and set aside for a few minutes to activate.
Combine the flour, one quarter of the sugar, and salt.
Make a well, and gradually introduce the water/yeast, mixing thoroughly to integrate all the flour into a sticky wet dough.
Add the vanilla extract and mix through.
Knead thoroughly: I used a mixer with a dough hook, but you could do it by hand if you prefer. After 5-10 minutes, when the dough becomes stretchy, introduce the dried fruit and continue to knead until well integrated. The dough will be very wet.
Mix the cinnamon into the remaining sugar and scatter over a clean surface. Using well-floured fingers, form the dough into a ball and press into the sugar/cinnamon mixture. Fold and knead against the mixture until it’s all picked-up by the dough: this approach forms attractive pockets and rivulets of cinnamon throughout the dough.
Rub a large bowl with oil. Ball the dough and put it into the bowl, cover tightly, and leave at room temperature for up to two hours until doubled in size.
When it’s ready, fill a large pan about 6cm deep with water, add the honey, and bring to a simmer. Pre-heat a hot oven (gas mark 7, 220°C).
On a lightly-floured surface and with well-floured fingertips, extract the ball of dough and divide into eight (halve, halve, halve again). Shape each ball into a bagel by pushing-through the middle with your thumb and stretching out the hole as you rotate it.
Submerge each bagel into the hot water for about a minute on each side, then transfer to a baking sheet lined with greaseproof paper.
Thin the egg white with a few drops of water, stir, then brush each bagel with the egg mix.
Bake for about 25 minutes until golden brown. Cool on a wire rack.
Mostly this recipe’s here for my own reference, but if you make some then let me know how they turn out for you. (Oh, and for those of you who prefer when my blog posts are technical, this page is marked up in h-recipe.)
An increasing number of people are reportedly suffering from an allergy to the meat and other products of nonhuman mammals, reports Mosaic Science this week, and we’re increasingly confident that the cause is a sensitivity to alpha-gal (Galactose-alpha-1,3-galactose), a carbohydrate produced in the bodies of virtually all mammals except for us and our simian cousins, the apes and monkeys (it’s also one of the reasons you can’t transplant tissue from pigs to humans, for example).
The interesting thing is that the most-common cause of alpha-gal sensitivity appears to be the bite of one of a small number of species of tick. The most-likely hypothesis seems to be that being bitten by such a tick after it’s bitten e.g. deer or cattle may introduce that species’ alpha-gal directly to your bloodstream. This exposure primes an immune response that’s then triggered by all future exposures, even far more-minor ones, e.g. consuming milk products or even skin contact with an animal.
That’s nuts, isn’t it? The Mosaic Science article describes the reaction of Tami McGraw, whose symptoms began in 2010:
[She] asked her doctor to order a little-known blood test that would show if her immune system was reacting to a component of mammal meat. The test result was so strongly positive, her doctor called her at home to tell her to step away from the stove.
That should have been the end of her problems. Instead it launched her on an odyssey of discovering just how much mammal material is present in everyday life. One time, she took capsules of liquid painkiller and woke up in the middle of the night, itching and covered in hives provoked by the drug’s gelatine covering.
When she bought an unfamiliar lip balm, the lanolin in it made her mouth peel and blister. She planned to spend an afternoon gardening, spreading fertiliser and planting flowers, but passed out on the grass and had to be revived with an EpiPen. She had reacted to manure and bone meal that were enrichments in bagged compost she had bought.
Of course, this isn’t the only nor even the most-unusual (or most-severe) animal-induced allergy-to-a-different-animal we’re aware of. The hilariously-named but terribly-dangerous Pork-Cat syndrome is caused, though we’re not sure how, by exposure to cats and results in a severe allergy to pork. But what makes alpha-gal sensitivity really interesting is that it’s increasing in frequency at quite a dramatic rate. The culprit? Climate change. Probably.
It’s impossible to talk to physicians encountering alpha-gal cases without hearing that something has changed to make the tick that transmits it more common – even though they don’t know what that something might be.
“Climate change is likely playing a role in the northward expansion,” Ostfeld adds, but acknowledges that we don’t know what else could also be contributing.
A little dated, perhaps: I’m sure that nobody needs to be told nowadays that one of the biggest things a Westerner can do to reduce their personal carbon footprint (after breeding less or not at all, which I maintain is the biggest, or avoiding air travel, which Statto argues for) is to reduce or refrain from consumption of meat (especially pork and beef) and dairy products.
Indeed, environmental impact was the biggest factor in my vegetarianism (now weekday-vegetarianism) for the last eight years, and it’s an outlook that I’ve seen continue to grow in others over the same period.
Seeing these two stories side-by-side in my RSS reader put the Gaia hypothesis in my mind.
If you’re not familiar with the Gaia hypothesis, the basic idea is this: by some mechanism, the Earth and all of the life on it act in synergy to maintain homeostasis. Organisms not only co-evolve with one another but also with the planet itself, affecting their environment in a way that in turn affects their future evolution in a perpetual symbiotic relationship of life and its habitat.
Its advocates point to negative feedback loops in nature such as plankton blooms affecting the weather in ways that inhibit plankton blooms and to simplistic theoretical models like the Daisyworld Simulation (cute video). A minority of its proponents go a step further and describe the Earth’s changes teleologically, implying a conscious Earth with an intention to protect its ecosystems (yes, these hypotheses were born out of the late 1960s, why do you ask?). Regardless, the essence is the same: life’s effect on its environment affects the environment’s hospitality to life, and vice-versa.
There’s an attractive symmetry to it, isn’t there, in light of the growth in alpha-gal allergies? Like:
Today – climate change causes ticks to spread more-widely and bite more humans.
Tomorrow – tick bites cause humans to consume fewer products farmed from mammals?
That’s not to say that I buy it, mind. The Gaia hypothesis has a number of problems, and – almost as bad – it encourages a complacent “it’ll all be okay, the Earth will fix itself” attitude towards climate change (which, even if it’s true, doesn’t bode well for the humans residing on the planet).
But it was a fun parallel to land in my news reader this morning, so I thought I’d share it with you. And, by proxy, make you just a little bit warier of ticks than you might have been already. /shudders/
I’m a huge fan of multifactor authentication. If you’re using it, you’re probably familiar with using an app on your phone (or receiving a text or email) in addition to a username and password when logging in to a service like your email, social network, or a bank. If you’re not using it then, well, you should be.
Ruth recently had a problem when she lost her phone and couldn’t connect to a service for which she usually used an authenticator app like the one pictured above, so I thought I’d share with you my personal strategy for managing multifactor authentication, in case it’s of any use to anybody else. After all: the issue of not-having-the-right-second-factor-to-hand has happened to me before, it’s certainly now happened to Ruth, and it’s probably something that’s happened to other people I know by now, too.
Here’s my strategy:
Favour fewer different multifactor solutions. Instead of using e.g. text messaging for one, an app for another, a different app for a third, a hardware token for a fourth, and so on, try to find the fewest number of different solutions that work for your personal digital life. This makes backing up and maintenance easier.
I use RFC6238/TOTP (better known as “Google Authenticator”) for almost all second factor purposes: the only exceptions are my online bank (who use a proprietary variant of RFC6238 that I’ve not finished reverse-engineering) and Steam (who use a proprietary implementation of RFC6238 with a larger character set, for some reason, in their Steam Guard app).
Have a backup plan. Here’s the important bit. If you use your phone to authenticate, and you lose access to your phone for a period of time (broken, lost, stolen, out of battery, in use by a small child playing a game), you can’t authenticate. That’s why it’s important that you have a backup plan.
Some suggested backup strategies to consider (slightly biased towards TOTP):
Multiple devices: (Assuming you’re using TOTP or something like it) there’s nothing to stop you setting up multiple devices to access the same account. Depending on how the service you’re accessing provides the code you need to set it up, you might feel like you have to set them all up at the same time, but that’s not strictly true: there’s another way…
Consider setting up a backdoor: Some systems will allow you to print e.g. a set of “backup codes” and store them in a safe place for later use should you lose access to your second factor. Depending on the other strategies you employ, you should consider doing this: for most (normal) people, this could be the single safest way to retain access to your account in the event that you lose access to your second factor. Either way, you should understand the backdoors available: if your online bank’s policy is to email you replacement credentials on-demand then your online bank account’s security is only as good as your email account’s security: follow the chain to work out where the weak links are.
Retain a copy of the code: The code you’re given to configure your device remains valid forever: indeed, the way that it works is that the service provider retains a copy of the same code so they can generate numbers at the same time as you, and thus check that you’re generating the same numbers as them. If you keep a copy of the backup code (somewhere very safe!) you can set up any device you want, whenever you want. Personally, I keep copies of all TOTP configuration codes in my password safe (you’re using a password safe, right?).
Set up the infrastructure that works for you: To maximise my logging-on convenience, I have my password safe enter my TOTP numbers for me: I’m using KeeOTP for KeePass, but since 2016 LastPass users can do basically the same thing. I’ve also implemented my own TOTP client in Ruby to run on desktop computers I control (just be careful to protect the secrets file), because sometimes you just want a command-line solution. The code’s below, and I think you’ll agree that it’s simple enough that you can audit it for your own safety too.
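As an aside, the whole of RFC 6238 really is small enough to audit at a glance. Here’s an illustrative equivalent in Python (not the Ruby client mentioned above; this is my own minimal sketch, built only on the standard library):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    # Re-pad the secret: many providers hand it out with the '=' padding stripped.
    secret = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The "moving factor" is the number of whole periods since the Unix epoch.
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

You can check a sketch like this against RFC 6238’s published test vectors: the base32 encoding of the ASCII secret 12345678901234567890, asked for an eight-digit code at time 59, should produce 94287082.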
I’ve occasionally been asked whether my approach actually yields me any of the benefits of two-factor authentication. After all, people say, aren’t I weakening its benefits by storing the TOTP generation key in the same place as my usernames and passwords, rather than restricting it to my mobile device? This is true, and it is weaker to do this than to keep the two separately, but it’s not true to say that all of the benefits are negated: replay attacks by an attacker who intercepts a password are mitigated by this approach, for example, and these are a far more-common vector for identity theft than the theft and decryption of password safes.
Everybody has to make their own decisions on the balance of their convenience versus their security, but for me the sweet spot comes here: in preventing many of the most-common attacks against the kinds of accounts that I use and reinforcing my existing username/strong-unique-passwords approach without preventing me from getting stuff done. You’ll have to make your own decisions, but if you take one thing away from this, let it be that there’s nothing to stop you having multiple ways to produce TOTP/Google Authenticator credentials, and you should consider doing so.
Our eldest, 4, started school this year and this week saw her first parents’ evening. This provided an opportunity for us, her parents, to “come out” to her teacher about our slightly-unconventional relationship structure. And everything was fine, which is nice.
I’m sure the first few months of every child’s school life are a time that’s interesting and full of change, but it’s been particularly fascinating to see the ways in which our young academic’s language has adapted to fit in with and be understood by her peers.
I first became aware of these changes, I think, when I overheard her describing me to one of her school friends as her “dad”: previously she’d always referred to me as her “Uncle Dan”. I asked her about it afterwards and she explained that I was like a dad, and that her friend didn’t have an “Uncle Dan” so she used words that her friend would know. I’m not sure whether I was prouder about the fact that she’d independently come to think of me as being like a bonus father figure, or the fact that she demonstrated such astute audience management.
I don’t object to being assigned this (on-again, off-again, since then) nickname. My moniker of Uncle Dan came about as a combination of an effort to limit ambiguity (“wait… which dad?”) and an attempt not to tread on the toes of actual-father JTA: the kids themselves are welcome to call me pretty-much whatever they’re comfortable with. Indeed, they’d be carrying on a family tradition if they chose-for-themselves what to call me: Ruth and her brothers Robin and Owen address their father not by a paternal noun but by his first name, Tom, and the kids have followed suit by adopting “Grand-Tom” as their identifier for him.
Knowing that we were unusual, though, we’d taken the time to do some groundwork before our eldest started school. For example we shared a book about, and spent a while talking about, how families differ from one another: we figure that an understanding that families come in all kinds of shapes and sizes is a useful concept in general from a perspective of diversity and acceptance. In fact, you can hear how this teaching pays-off in the language she uses to describe other aspects of the differences she sees in her friends and their families, too.
Still, it was a little bit of a surprise to find myself referred to as a “dad” after four years of “Uncle Dan”.
Nonetheless: in light of the fact that she’d clearly been talking about her family at school and might have caused her teacher some confusion, when all three of us “parents” turned up to parents’ evening we opted to introduce ourselves and our relationship. Which was all fine (as you’d hope: as I mentioned the other day, our unusual relationship structure is pretty boring, really), and the only awkwardness was in having to find an extra chair, beyond those the teacher had been expecting to need, with which to sit at the table.
There’s sometimes a shortage of happy “we did a thing, and it went basically the same as it would for a family with monogamous parents” poly-family stories online, so I thought this one was worth sharing.
And better yet: apparently she’s doing admirably at school. So we all celebrated with an after-school trip to one of our favourite local soft play centres.
As each door is opened, a different part of a (distinctly-Bodleian/Oxford) winter scene unfolds, complete with an array of fascinating characters connected to the history, tradition, mythology and literature of the area. It’s pretty cool, and you should give it a go.
If you want to make one of your own – for next year, presumably, unless you’ve an inclination to count-down in this fashion to something else that you’re celebrating 25 days hence – I’ve shared a version of the code that you can adapt for yourself.
Features that make this implementation a good starting point if you want to make your own digital advent calendar include:
Secure: your server’s clock dictates which doors are eligible to be opened, and only content legitimately visible on a given date can be obtained (no path-traversal, URL-guessing, or traffic inspection holes).
Responsive: calendar adapts all the way down to tiny mobiles and all the way up to 4K fullscreen along with optimised images for key resolutions.
Debuggable: a password-protected debug mode makes it easy for you to test, even on a production server, without exposing the secret messages behind each door.
Expandable: lots of scope for the future, e.g. a progressive web app version that you can keep “on you” and which notifies you when a new door’s ready to be opened, was one of the things I’d hoped to add in time for this year but didn’t quite get around to.
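The security feature above boils down to letting only the server’s clock decide which doors exist yet. A hypothetical sketch of that gating logic in Python (the function name, year constant, and 25-door assumption are mine, not taken from the shared code):

```python
from datetime import date

ADVENT_YEAR = 2018  # assumed year; a real calendar would configure this
DOORS = 25          # assumed door count

def openable_doors(today=None):
    """Return the door numbers whose content the server may reveal today."""
    today = today or date.today()
    if today < date(ADVENT_YEAR, 12, 1):
        return []  # nothing opens before the 1st of December
    in_december = today.year == ADVENT_YEAR and today.month == 12
    day = today.day if in_december else DOORS
    # One new door unlocks per day; every already-opened door stays available.
    return list(range(1, min(day, DOORS) + 1))
```

Because the server never serves content for a door outside this list, a visitor can’t open tomorrow’s door early by fiddling with their own clock or guessing URLs.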
The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or something similar (and certainly not Edge) for development and debugging anyway, etc. The problem comes perhaps because they’re too young to remember the First Browser War and its aftermath. Let me summarise:
Once upon a time – let’s call it the mid-1990s – there were several web browsers: Netscape Navigator, Internet Explorer, Opera, etc. They all used different rendering engines and so development was sometimes a bit of a pain, but only if you wanted to use the latest most cutting-edge features: if you were happy with the standard, established features of the Web then your site would work anywhere, as has always been the case.
Then, everybody started using just one browser: following some shady dealings and monopoly abuse, 90%+ of Web users ended up using a single web browser, Internet Explorer. By the time anybody took notice, its rivals had been economically crippled beyond any reasonable chance of recovery, but the worst had yet to come…
Developers started targeting only that one browser: instead of making websites, developers started making “Internet Explorer sites” which were only tested in that one browser or, worse yet, only worked at all in that browser, actively undermining the Web’s position as an open platform. As the grip of the monopoly grew tighter, technological innovation was centred around this single platform, leading to decade-long knock-on effects.
The Web ceased to grow new features: from the release of Internet Explorer 6 there were no significant developments in the technology of the Web for many years. The lack of competition pushed us into a period of stagnation. A decade and a half later, we’re only just (finally) finishing shaking off this unpleasant bit of our history.
History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too. Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites. More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the switch from Chrome to e.g. Firefox (and you absolutely should) you might find that YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.
So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.
If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.
And it’s not likely to be replaced.
Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today. So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?
We have already lost one.
We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.
Three left. Spend them wisely.
As much as I don’t like having to work around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web that occupied most of the first decade of this century. Please help keep browsers diverse: nobody wants to start seeing this shit –
The team responsible for digital archiving had plans to spend World Digital Preservation Day running a stand in Blackwell Hall for some time before I got involved. They’d asked my department about using the Heritage Window – the Bodleian’s 15-screen video wall – to show a carousel of slides with relevant content over the course of the day. Or, they added, half-jokingly, “perhaps we could have Pong up there as it’ll be its 46th birthday?”
But I didn’t take it as a joke. I took it as a challenge.
Emulating Pong is pretty easy. Emulating Pong perfectly is pretty hard. Indeed, a lot of the challenge in the preservation of (especially digital) archives in general is in finding the best possible compromise in situations where perfect preservation is not possible. If these 8″ disks are degrading, is it acceptable to copy them onto a different medium? If this video file is unreadable in modern devices, is it acceptable to re-encode it in a contemporary format? These are the kinds of questions that digital preservation specialists have to ask themselves all the damn time.
Emulating Pong in a way that would work on the Heritage Window but be true to the original raised all kinds of complications. (Original) Pong’s aspect ratio doesn’t fit nicely on a 16:9 widescreen, much less on a 27:80 ultrawide. Like most games of its era, the speed is tied to the clock rate of the processor. And of course, it should be controlled using a “dial”.
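The usual modern workaround for clock-rate-dependent speed is to advance the simulation by elapsed wall-clock time in fixed steps, rather than by one increment per frame. A minimal sketch of the idea in Python (illustrative only; not code from this project, which runs in the browser):

```python
def integrate(position, velocity, elapsed, timestep=1 / 120):
    """Advance a moving object (e.g. the ball) by fixed timesteps, so gameplay
    speed depends on elapsed wall-clock time, not on how fast frames render."""
    while elapsed >= timestep:
        position += velocity * timestep
        elapsed -= timestep
    # Leftover time shorter than one step carries over into the next frame.
    return position, elapsed
```

A fast machine simply calls this with smaller `elapsed` values more often; the ball covers the same distance per second either way, which is exactly what Pong’s original processor-locked timing couldn’t guarantee.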
By the time I realised that there was no way that I could thoroughly replicate the experience of the original game, I decided to take a different tack. Instead, I opted to reimplement Pong. A reimplementation could stay true to the idea of Pong but serve as a jumping-off point for discussion about how the experience of playing the game may be superficially “like Pong” but that this still wasn’t an example of digital preservation.
Here’s the skinny:
A web page, displayed full-screen, contains both a <canvas> (for the game, sized appropriately for a 3 × 3 section of the video wall) and a <div> full of “slides” of static content to carousel alongside (filling a 2 × 3 section).
A pair of SNES controllers adapted for use as USB controllers which I happened to own already.
I felt that the day, event, and game were a success. A few dozen people played Pong and explored the other technology on display. Some got nostalgic about punch tape, huge floppy disks, and even mechanical calculators. Many more talked to the digital archives folks and I about the challenges and importance of digital archiving. And a good time was had by all.
I’ve open-sourced the entire thing with a super-permissive license so you can deploy it yourself (you know, on your ultrawide video wall) or adapt it as you see fit. Or if you’d just like to see it for yourself on your own computer, you can (but unless you’re using a 4K monitor you’ll probably need to use your browser’s mobile/responsive design simulator set to 3200 × 1080 to make it fit your screen). If you don’t have controllers attached, use W/S to control player 1 and the cursor keys for player 2 in a 2-player game.
It’s always been a bit of an inconvenience to have to do these things, but it’s never been a terrible burden: even when I fly internationally – which is probably the hardest part of having my name – I’ve learned the tricks I need to minimise how often I’m selected for an excessive amount of unwanted “special treatment”.
This year, though, for the very first time, my (stupid bloody) unusual name paid for itself. And not just in the trivial ways I’m used to, like being able to spot my badge instantly on the registration table at conferences I go to or being able to fill out paper forms way faster than normal people. I mean in a concrete, financially-measurable way. Wanna hear?
So: I’ve a routine of checking my credit report with the major credit reference agencies every few years. I’ve been doing so since long before doing so became free (thanks GDPR); long even before I changed my name: it just feels like good personal data housekeeping, and it’s interesting to see what shows up.
And so I noticed that my credit report with Equifax said that I wasn’t on the electoral roll. Which I clearly am. Given that my credit report’s pretty glowing, I wasn’t too worried, but I thought I’d drop them an email and ask them to get it fixed: after all, sometimes lenders take this kind of thing into account. I wasn’t in any hurry, but then, it seems: neither were they –
2 February 2016 – I originally contacted them
18 February 2016 – they emailed to say that they were looking into it and that it was taking a while
22 February 2016 – they emailed to say that they were still looking into it
13 July 2016 – they emailed to say that they were still looking into it (which was a bit of a surprise, because after so long I’d almost forgotten that I’d even asked)
14 July 2016 – they marked the issue as “closed”… wait, what?
I wasn’t in a hurry, and 2017 was a bit of a crazy year for me (for Equifax too, as it happens), so I ignored it for a bit, and then picked up the trail right after the GDPR came into force. After all, they were storing personal information about me which was demonstrably incorrect and continued to store and process it even after they’d been told that it was incorrect (it’d have been a violation of principle 4 of the DPA 1998, too, but the GDPR‘s got bigger teeth: if you’re going to sic the law on somebody, it’s better that it has bark and bite).
My anticipation was that my message of 13 July 2018 would get them to sit up and fix the issue. I’d assumed that it was probably related to my unusual name and that bugs in their software were preventing them from joining-the-dots between my credit report and the Electoral Roll. I’d also assumed that this nudge would have them either fix their software… or failing that, manually fix my data: that can’t be too hard, can it?
Apparently it can:
Equifax’s suggested solution to the problem on my credit report? Change my name on the Electoral Roll to match the (incorrect) name they store in their systems (to work around a limitation that prevents them from entering single-character surnames)!
At this point, they turned my send-a-complaint-once-every-few-years project into a full-blown rage. It’s one thing if you need me to be understanding of the time it can take to fix the problems in your computer systems – I routinely develop software for large and bureaucratic organisations, I know the drill! – but telling me that your bugs are my problems and that I should lie to the government to work around them definitely isn’t okay.
At this point, I was still expecting them to just fix the problem: if not the underlying technical issue then instead just hack a correction into my report. But clearly they considered this, worked out what it’d cost them to do so, and decided that it was probably cheaper to negotiate with me to pay me to go away.
Which it was.
This week, I accepted a three-figure sum from Equifax as compensation for the inconvenience of the problem with my credit report (which now also has a note of correction, not that my alleged absence from the Electoral Roll has ever caused my otherwise-fine report any trouble in the past anyway). Curiously, they didn’t attach any strings to the deal, such as not courting publicity, so it’s perfectly okay for me to tell you about the experience. Maybe you know somebody who’s similarly afflicted: that their “unusual” name means that a credit reference company can’t accurately report on all of their data. If so, perhaps you’d like to suggest that they take a look at their credit report too… just saying.
Apparently Equifax think it’s cheaper to pay each individual they annoy than it is to fix their database problems. I’ll bet that, in the long run, that isn’t true. But in the meantime, if they want to fund my recent trip to Cornwall, that’s fine by me.
After the success of Challenge Robin this summer – where Ruth and I blindfolded her brother Robin and drove him out to the middle of nowhere without his phone or money and challenged him to find his way home – Robin’s been angling for a sequel. He even went so far as to suffix the title of his blog post on the subject “(part 1)”, in the hopes perhaps that we’d find a way to repeat the experience.
In response to an email sent a week in advance of the challenge, Robin quickly prepared and returned a “permission slip” from his parents, although I’m skeptical that either of them actually saw this document, let alone signed it.
With about a day to go before the challenge began, Robin’s phone will have received a number of instructional messages from a sender-ID called “Control”, instructing him of his first actions and kicking off his adventure. He’d already committed to going to work on Friday with a bag fully-packed for whatever we might have in store for him, but he doubtless wouldn’t have guessed quite how much work had been put into this operation.
By 18:06 he still hadn’t arrived to meet his contact. Had this adventure fallen at the first hurdle? Only time would tell…
Update – Friday 9 November 18:45: Robin arrived late and apologetic to find his contact, “Frowny” – played by JTA – at the GBK at that address.
The pair ate and drank and “Frowny” handed Robin his first clue: a map pointing to Cornwall Gardens in Kensington and instructions on where to find the next clue when he got there. The game was afoot!
Clearly he’d taken the idea of being prepared for anything to a level beyond what we’d expected. Among his other provisions, he was carrying a tent, sleeping bag, and passport! “Clearly my mistake,” he told his contact, “was giving intelligent people a challenge and then leaving them three months to plan.”
Update – Friday 9 November 19:53: In Cornwall Gardens, Robin found the note (delayed somewhat, perhaps by the growing dark) and began his treasure trail.
Soon after, though, things started to go very wrong…
Update – Friday 9 November 20:40: Let’s take a diversion, and I’ll rely on JTA to keep Robin’s eyes away from this post for a bit. Here’s what was supposed to happen:
Robin would follow a trail of clues around London which would give him key words alluding to literature about Paddington (station) and Penzance. Eventually he’d find a puzzle box and, upon solving it, discover inside tickets for the Paddington-to-Penzance overnight sleeper train.
Meanwhile, I’ve been rushing around the countryside near Penzance setting up an epic extension to the previous trail complete with puzzles, mixed-terrain hikes, highlands, islands, lions, tigers and bears (oh my). Some of those might not really have been in the plan.
So now we’re working out what to do next. Right now I’m holed-up in an undisclosed location near Penzance (the ultimate target of the challenge) and Robin’s all the way over in London. We’re working on it, but this hasn’t been so successful as we might have liked.
Update – Saturday 10 November 07:58: We’ve managed to get Robin onto a series of different trains rather than the sleeper, so he’ll still get to Penzance… eventually! Meanwhile, I’m adjusting the planned order of stages at this end to ensure that he can still get a decent hike in (weather permitting).
Update – Saturday 10 November 10:45: Originally, when Robin had been expected to arrive in Penzance via a sleeper train about three hours ago, he’d have received his first instruction via The Gadget, which JTA gave him in London:
The Gadget’s primary purpose is to send realtime updates on Robin’s position so that I can plot it on a map (and report it to you guys!) and to issue him with hints so that he knows where he’s headed next, without giving him access to a phone, Internet, navigation, maps, etc. The first instruction would be to head to Sullivan’s Diner for breakfast (where I’ve asked staff to issue him with his first clue): cool eh? But now he’s only going to be arriving in the afternoon so I’m going to have to adapt on-the-fly. Luckily I’ve got a plan.
I’m going to meet Robin off his train and suggest he skips this first leg of the challenge, because the second leg is… sort-of time-critical…
Update – Saturday 10 November 13:29: Robin finally arrives in Penzance on a (further-delayed) train. I’ve given him a sausage sandwich at Sullivan’s Diner (who then gave him the clue above), turned on The Gadget (so I’ve got live tracking of his location), and given him the next clue (the one he’d have gotten at Roger’s Tower) and its accompanying prop.
Armed with the clue, Robin quickly saw the challenge that faced him…
After all of these delays, there’s only about an hour and a half until the tide comes in enough to submerge the causeway to St. Michael’s Mount: the island he’s being sent to. And he’s got to get there (about an hour’s walk away), across the causeway, find the next clue, and back across the causeway to avoid being stranded. The race is on.
Luckily, he’d been able to open the puzzle box and knows broadly where to look on the island for the next clue. How will he do? We’ll have to wait and see…
Update – Saturday 10 November 14:18: Robin made spectacular time sprinting along the coast to Longrock (although his route might have been suboptimal). At 14:18 he began to cross to the island, but with only a little over half an hour until the tide began to cover the causeway, he’d have to hunt quickly for the password he needed.
At 14:22 he retrieved the clue and put the password into The Gadget: now he had a new set of instructions – to get to a particular location without being told what it was… only a real-time display of how far away it was. 7.5km and counting…
Update – Saturday 10 November 14:57: Robin’s making his way along the coast (at a more-reasonable pace now!). He’s sticking to the road rather than the more-attractive coast path, so I’ve had “Control” give him a nudge via The Gadget to see if he’d prefer to try the more-scenic route: he’ll need to, at some point, if he’s to find his way to the box I’ve hidden.
Update – Saturday 10 November 16:50: No idea where Robin is; The Gadget’s GPS has gone all screwy.
But it looks like he probably made it to Cudden Point, where the final clue was hidden. And then kept moving, suggesting that he retrieved it without plunging over the cliff and into the sea.
In it, he’ll have found a clue as to broadly where his bed is tonight, plus a final (very devious) puzzle box with the exact location.
The sun is setting and Robin’s into the final stretch. Will he make it?
Update – Saturday 10 November 17:25: He’s going to make it! He’s actually going to make it! Looks like he’s under a mile away and heading in the right direction!
Update – Saturday 10 November 17:55: He made it!
We’ve both got lots to say, so a full debrief will take place in a separate blog post.
You may recall that on Halloween I mentioned that the Bodleian had released a mini choose-your-own-adventure-like adventure game book, available freely online. I decided that this didn’t go quite far enough and I’ve adapted it into a hypertext game, below. (This was also an excuse for me to play with Chapbook, Chris Klimas‘s new under-development story format for Twine.)
If the thing you were waiting for before you experienced Shadows Out of Time was it to be playable in your browser, wait no longer: click here to play the game…
Unless they happened to bump into each other at QParty, the first time Ruth and JTA met my school friend Gary was at my dad’s funeral. Gary had seen mention of the death in the local paper and came to the wake. About 30 seconds later, Gary and I were reminiscing, exchanging anecdotes about our misspent youths, when suddenly JTA blurted out: “Oh my God… you’re Sc… Sc-gary?”
Ever since then, my internal monologue has referred to Gary by the new nickname “Scgary”, but to understand why requires a little bit of history…
Despite having been close for over a decade, Gary and I drifted apart somewhat after I moved to Aberystwyth in 1999, especially as I became more and more deeply involved with volunteering at Aberystwyth Nightline and the resulting change in my social circle, which was soon 90% composed of fellow volunteers (ultimately resulting in JTA’s “What, Everyone?” moment). We still kept in touch, but our once more-intense relationship – which started in a primary school playground! – was put on a backburner as we tackled the next big things in our lives.
Something I was always particularly interested in, both at Nightline and at the helplines I volunteered with subsequently, was training. At Nightline, I proposed and pushed forward a reimplementation of their traditional training programme that put a far greater focus on experience and practical skills and less on topical presentations. My experience as a trainee and as a helpline volunteer had given me an appreciation of the fundamentals of listening and I wanted future trainees to be able to benefit from this by giving them less time talking about listening and more time practising listening.
The primary mechanism by which helplines facilitate such practical training is through roleplaying. A trainer will pretend to be a caller and will talk to a trainee, after which the pair (along with any other trainers or trainees who are observing) will debrief and talk about how it went. The only problem with switching wholesale to a roleplay/skills-driven approach to training at Aberystwyth Nightline, as I saw it, was the approach that was historically taken to the generation of roleplay material, which favoured the use of anonymised adaptations of real or imagined calls.
Roleplay scenarios must be realistic (so that they simulate the experience of genuine calls with sufficient accuracy that they are meaningful) but they must also be effective (at promoting the growth of the skills that are needed to best-support callers). Those two criteria often come into conflict in roleplay scenarios: a caller who sits in near-silence for 20 minutes may well be realistic, but there’s a limit to how much you can learn from sitting in silence; a roleplay which tests every facet of a trainee’s practical knowledge provides efficiency, but does not reflect the content of any call that has ever really happened.
I spent some time outlining the characteristics of best-practice roleplays and providing guidelines to help “train the trainers”. These included ideas, some of which were (then) a little radical, like:
A roleplay should be based upon a character, not a story: if the trainer knows how the call is going to end, this constrains the opportunity for the trainee to explore the space and experiment with listening concepts. A roleplay is necessarily improvisational: get into your character, let go of your preconceptions.
Avoid using emotionally-charged experiences from your own life: use your own experience, certainly, but put your own emotional baggage aside. Not only is it unfair to your trainee (they’re not your therapist!) but it can be a can of worms in its own right – I’ve seen a (great) trainee help a trainer to make a personal breakthrough for which they were perhaps not yet ready.
Don’t be afraid to make mistakes: you’re not infallible, and you neither need to be nor to present yourself as a perfect example of a volunteer. Be willing to learn from the trainees (I’ve definitely made use of things I’ve learned from trainees in real calls I’ve taken at Samaritans) and create a space in which you can collectively discuss how roleplays went, rather than simply critiquing them.
In order to demonstrate the concepts I was promoting, I wrote and demonstrated a significant number of sample roleplay ideas, many of which I (or others) would then go on to flesh-out into full roleplays at training sessions. One of these for which I became well-known was entitled My Friend Scott.
The caller in this roleplay presents with suicidal ideation fuelled by feelings of guilt and loneliness following the accidental death, about six months prior, of his best friend Scott, for which he feels responsible. Scott had been the caller’s best friend since childhood, and he’s fixated on the adventures that they’d had together. He clearly has a huge admiration for his dead friend, bordering on infatuation, and blames himself not only for the death but for the resulting fracturing of their shared friendship group and his subsequent isolation.
(We’re close to getting back to the “Scgary story”, I promise. Hang in there.)
When I would perform this roleplay as the caller, I’d routinely flesh out Scott and the caller’s backstory with anecdotes from my own childhood and early-adulthood: it seemed important to be able to fill in these kinds of details in order to demonstrate how important Scott was to the caller’s life. Things that I really did with one or another of several childhood friends found their way, with or without embellishment, into the roleplay, like:
Building a raft on the local duck pond and paddling out to an island, only to have the raft disintegrate and have to swim back
An effort to dye a friend’s hair bright red which didn’t produce a terribly satisfactory result but did stain many parts of a bathroom
Camping in the garden, dragging out a desktop computer and extension cable to fully replicate the “in the wild” experience
Flooding my mother’s garden (which at that time was a long slope on clay soil) in order to make a muddy waterslide
Generating fake credit card numbers to facilitate repeated month-long free trials of an ISP‘s services
Riding on the bonnet of a friend’s first car, hanging on to the windscreen wipers, eventually (unsurprisingly) falling off and getting run over
Of course: none of the new Nightliners I trained knew which, if any, of these stories were real – that was never a part of the experience. But many were real, or had a morsel of truth. And a reasonable number of them – four of those in the list above – were things that Gary and I had done together in our youth.
JTA’s surprise came from that strange feeling that occurs when two very different parts of your life that you thought were completely separate suddenly and unexpectedly collide with one another (I’m familiar with it). The anecdote that Gary had just shared about our teen years was one that exactly mirrored something he’d heard me say during the My Friend Scott roleplay, and it briefly crashed his brain. Suddenly, this was Scott standing in front of him, and he’d been able to get far enough through his sentence to begin saying that name (“Sc…”) before the crash stopped him in his tracks and he finished off with “…gary”.
I’m not sure whether or not Gary realises that, in my house at least, he’s to this day been called “Scgary”.
I bumped into him, completely by chance, while visiting my family in Preston this weekend. That reminded me that I’d long planned to tell this story: the story of Scgary, the imaginary person who exists only in the minds of the tiny intersection of people who’ve both (a) met my friend Gary and know about some of the crazy shit we got up to together when we were young and foolish and (b) trained as a volunteer at Aberystwyth Nightline during the window between me overhauling how training was provided and ceasing to be involved with the training programme (as far as I’m aware, nobody is performing My Friend Scott in my absence, but it’s possible…).
Gary asked me to give him a shout and meet up for a beer next time I’m in his neck of the woods, but it only occurred to me after I said goodbye that I’ve no idea what the best way to reach him is, these days. Like many children of the 80s, I’ve still got the landline phone numbers of all of my childhood friends memorised, but even if that number is still valid, it’d be his parents’ house!
I guess that I’ll let the Internet do the work for me: perhaps if I write this, here, he’ll find it, somehow. Hi, Scgary!
Here’s a thought: what’s the minimum number of votes your party would need to attract in order to be able to secure a majority of seats in the House of Commons and form a government? Let’s try to work it out.
The 2017 general election reportedly enjoyed a 68.8% turnout. If we assume for simplicity’s sake that each constituency had the same turnout and that votes for candidates other than yours are equally-divided amongst your opposition, that means that the number of votes you need to attract in a given constituency is:
68.8% × the size of its electorate ÷ the number of candidates (rounded up)
For example, in a constituency of 1,000 people, 688 (68.8%) would have voted. If there were 3 candidates in that constituency you’d need 688 ÷ 3 = 229⅓ votes, which rounds up to 230 (because you need a plurality of the ballots cast), to secure the seat. If there are only 2 candidates, you need 345.
The minimum number of votes you’d need would therefore be this number for each of the smallest 326 constituencies (326 is the smallest number of seats you can hold in the 650-seat House of Commons and guarantee a strict majority; in reality a party can sometimes form a minority government, but let’s not get into that right now). Constituencies vary significantly in size, from only 21,769 registered voters in Na h-Eileanan an Iar (the Western Isles of Scotland, an SNP/Labour marginal) to 110,697 in the Isle of Wight (which flip-flops between the Conservatives and the Liberals), but each is awarded exactly one seat, so if we’re talking about the minimum number of votes you need we can take the smallest 326.
By my calculation, with a voter turnout of 68.8% and assuming two parties field candidates, one can win a general election with only 7,375,016 votes; that’s 15.76% of the electorate (or 11.23% of the total population). That’s right: you could win a general election with the support of a little over 1 in 10 of the population, so long as it’s the right 1 in 10.
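If you fancy checking my working, the arithmetic above fits in a few lines of Python. This is just a sketch of the same simplifying assumptions (uniform 68.8% turnout, opposition votes split evenly); the constituency sizes in the example at the bottom are made-up, not real electorate figures:

```python
import math

TURNOUT = 0.688        # 2017 general election turnout
SEATS_NEEDED = 326     # strict majority of the 650-seat House of Commons

def votes_to_win_seat(electorate: int, candidates: int = 2,
                      turnout: float = TURNOUT) -> int:
    """Ballots needed for a plurality in one constituency, assuming the
    remaining ballots split evenly between your opponents."""
    ballots_cast = math.floor(electorate * turnout)
    # One more than an even share of the ballots guarantees you come out ahead.
    return ballots_cast // candidates + 1

def votes_to_win_election(electorates: list[int],
                          candidates: int = 2) -> int:
    """Sum the per-seat requirement over the 326 smallest constituencies:
    the cheapest possible route to a majority."""
    cheapest = sorted(electorates)[:SEATS_NEEDED]
    return sum(votes_to_win_seat(e, candidates) for e in cheapest)

# The worked example from the text: a 1,000-voter constituency.
print(votes_to_win_seat(1000, candidates=3))  # 230
print(votes_to_win_seat(1000, candidates=2))  # 345
```

Feed `votes_to_win_election` the real electorate sizes of all 650 constituencies and it’ll spit out the headline figure.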
I’ll leave you to decide how you feel about that. In the meantime, here’s my working (and you can tweak the turnout and number-of-parties fields to see how that affects things). My data comes from several Wikipedia/Wikidata sources, mostly because the Office of National Statistics’ search engine is terrible.