tl;dr: TRRTL.COM is my reimplementation of a Logo on-screen turtle as a CoffeeScript-backed web application
For many children growing up in the 1970s and 1980s, their first exposure to computer programming may have come in the form of Logo, a general-purpose educational programming language best-known for its “turtle graphics” capabilities. By issuing
commands to an on-screen – or, if they were really lucky, robotic – cursor known as a turtle, the student could draw lines and curves all over the screen (or in the case of
robotic turtles: a large sheet of paper on the floor).
While our eldest and I were experimenting with programming (because, well…) a small robotic toy of hers, inspired by a book, it occurred to me that this was an experience that she might miss out on. That’s fine, of course: she doesn’t have to find the same joy
in playing with Logo on an Amstrad CPC or a BBC Micro that I did… but I’d like her to be able to have the option. In fact, I figured, there’s probably a whole generation of folks who
played with Logo in their childhood but haven’t really had the opportunity to use something as an adult that gives the same kind of satisfaction. And that’s the kind of thing I can fix.
TRRTL.COM is my attempt to produce a modern, web-based (progressive,
offline-first) re-imagining of Logo. It uses CoffeeScript as its base language because it provides all of the power of JavaScript but supports a
syntax that’s more-similar to that of traditional Logo implementations (with e.g. optional semicolons and unparenthesised parameters).
If you’ve not used Logo before, give it a go. Try typing simple commands like forward 100 (steps), right 90 (degrees), and so
on and you’ll find it’s a bit like an etch-a-sketch. Click the “help” icon in the corner for more commands (and shorter forms of them) as well as instructions on writing longer programs
and sharing your work with the world.
And of course the whole thing is open source in the most permissive way imaginable, so if you’re of an
inclination to do your own experiments with <canvas>, Progressive Web Apps, and the like, you’re welcome to borrow from me. Or if anybody wants to tag-team on making
a version that uses the Web Bluetooth API to talk to a robotic turtle or to use WebRTC to make LAN “multiplayer” turtle art, I’m totally game for that.
My volunteering and academic workload for the rest of this year is likely to reduce the amount of random/weird stuff I put online, so it might get boring here for a while. Hope this
tides you over in the meantime.
Hello, friendly insurance salesman I spoke to earlier today! I’ve been expecting you. Also: sorry.
I’ve been expecting you because you seemed so keen to finish your shift and search for me and, with my name, I’m pretty easy to find. I knew that you planned to search for me because
after I caused so much trouble for your computer systems, well, I probably deserved it.
I’m sorry that what should have been a click-click-done exercise came down to a live chat session and then a phone call. I don’t mean to be more work for people.
But thank you for being friendly. And useful. And generally awesome. I expected a painful process, perhaps because that’s what I’d had from my last insurer. You, on the other hand (and
your Live Chat colleague who I spoke to beforehand) were fantastic. Somehow you were more-pleasant, more-competent, and represent better value than the insurer we’re coming
from, so thank you. And that’s the real reason that I hope you’ll follow through on the suggestion that you search for me by name: because you deserve a pat on the back.
Have you noticed how the titles printed on the spines of your books are all, for the most part, oriented the same way? That’s not a coincidence.
ISO 6357 defines the standard positioning of titles on the spines of printed books (it’s also
codified as British Standard BS6738). If you assume that your book is stood “upright”, the question is one of which way you tilt your head to read the title printed on the spine. If you
tilt your head to the right, that’s a descending title (as you read left-to-right, your gaze moves down, towards the surface on which the book stands). If you tilt your head to
the left, that’s an ascending title. If you don’t need to tilt your head in either direction, that’s a transverse title.
The standard goes on to dictate that descending titles are to be favoured: this places the title in a readable orientation when the book lies flat on a surface with the cover
face-up. Grab the nearest book to you right now and you’ll probably find that it has a descending title.
But if the book is lying on a surface, I can usually read the cover of the book. Only if a book is in a stack am I unable to do so, and stacks are usually relatively short and
so it’s easy enough to lift one or more books from the top to see what lies beneath. What really matters when considering the orientation of a spine title is, in my mind, how it appears
when it’s shelved.
It feels to me like this standard’s got things backwards. If a shelf of anglophone books is organised into any kind of order (e.g. alphabetically) then it’ll usually be from left to
right. If I’m reading the titles from left to right, and the spines are printed descending, then – from the perspective of my eyes – I’m reading from bottom to top:
i.e. backwards!
It’s possible that this is one of those things that I overthink.
If you’re reading this post via my blog and using a desktop computer, try opening your browser’s debug console (don’t worry; I’ll wait). If you don’t know how, here are instructions for Firefox and instructions for Chrome. Other browsers may vary. You ought to see something like this in your
debugger:
What sorcery is this?
The debug console is designed to be used by web developers so that they can write Javascript code right in their browser as well as to investigate any problems with the code run by a
web page. The web page itself can also output to the console, which is usually used for what I call “hello-based debugging”: printing out messages throughout a process so that the flow
and progress can be monitored by the developer without having to do “proper” debugging. And it gets used by some web pages to deliver secret messages to any of the site users who open
their debugger.
Principally, though, the console is designed for textual content and nothing else. That said, both Firefox and Chrome’s consoles permit the use of CSS to style blocks of debug output by using the %c escape sequence. For example, I could style some of a message with italic text:
>> console.log('I have some %citalic %ctext', 'font-style: italic;', '');
I have some italic text
Using CSS directives like background, then, it’s easy
to see how one could embed an image into the console, and that’s been done before. Instead, though, I wanted to use
the lessons I’d learned developing PicInHTML 8¾ years ago to use text and CSS
(only) to render a colour picture to the console. First, I created my template image – a hackergotchi of me and an accompanying
speech bubble, shrunk to a tiny size and posterised to reduce the number of colours used and saved as a PNG.
Next, I wrote a quick Ruby program, consolepic.rb, to do the hard work. It analyses each pixel of the image
and for each distinct colour assigns to a variable the CSS code used to set the background colour to that colour. It looks for
“strings” of like pixels and combines them into one, and then outputs the Javascript necessary to write out all of the above. Finally, I made a few hand-tweaks to insert the text into
the speech bubble.
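The core run-length trick can be sketched like this (a simplified illustration of the approach, not the actual consolepic.rb):

```ruby
# Run-length encode each row of pixel colours, then emit a single
# console.log call with one %c escape per run of like-coloured pixels,
# each run styled with a CSS background colour.
def console_pic(rows)
  format = ''
  styles = []
  rows.each do |row|
    row.chunk { |colour| colour }.each do |colour, run|
      format << '%c' << (' ' * (run.length * 2))  # two spaces per "pixel"
      styles << "background: #{colour};"
    end
    format << '\n'  # a literal \n escape for the generated Javascript string
  end
  args = ["'#{format}'"] + styles.map { |style| "'#{style}'" }
  "console.log(#{args.join(', ')});"
end

# A 3x2 "image": two red pixels and a blue one, above a green stripe.
puts console_pic([
  %w[#f00 #f00 #00f],
  %w[#0f0 #0f0 #0f0],
])
```

Combining runs matters: without it, every pixel needs its own `%c` and style argument, which is where most of the output size comes from.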
The resulting output weighs in at 31.6kB – about a quarter of the size of the custom Javascript on the frontend of my site and so quite a
bit larger than I’d have liked and significantly less-efficient than the image itself, even base64-encoded for embedding directly into the code, but that really wasn’t the
point of the exercise, was it? (I’m pretty sure there’s significant room for improvement from a performance perspective…)
What it achieved was an interesting experiment into what can be achieved with Javascript, CSS, the browser console, and a little
imagination. An experiment that can live here on my site, for anybody who looks in the direction of their debugger, for the foreseeable future (or until I get bored of it). Anybody with
any more-exotic/silly ideas about what this technique could be used for is welcome to let me know!
Update: 17 April 2019 – fun though this was, it wasn’t worth continuing to deliver an additional 25% Javascript payload to every
visitor just for this, so I’ve stopped it for now. You can still read the source code (and even manually run it in the console) if you like.
And I have other ideas for fun things to do with the console, so keep an eye out for that…
Somehow in the intervening years I’ve gotten way out of practice and even more out of shape, and so our expedition was hard. Partly that was our fault for choosing to climb on
one of the shortest days of the year, requiring that we maintain a better-than-par pace throughout to allow us to get up and down before the sun set (which we actually managed with
further time in-hand), but mostly it’s the fact that I’ve neglected my climbing: just about the only routine exercise I get these days is cycling, and with changes in my work/life
balance I’m now only doing that for about 40 miles in a typical week.
For the longest time my primary mountaineering-buddy was my dad, who was – prior to his death during a hillwalking accident – a bigger climber and
hiker than I’ll ever be. Indeed, I’ve been “pushed on” by trying to keep up with my father enough times that fighting to keep up with Robin at the weekend was second nature. If I want to get back to the point where I’m fit enough for ice climbing again, I probably need to start by finding excuses to get up a hill rather more often than I do. Perhaps I can lay some of the blame for my being out of practice on the flat, gentle plains of Oxfordshire?
I’d have loved to have gotten a shot of me actually managing to get some use out of my crampons, but by that point visibility wasn’t great and we were rather too cold and wet to be
stopping in a wind to take photographs. So this rocky stretch will have to do.
In any case, it was a worthwhile and enjoyable treat to be able to be part of Robin’s final reflection as well as to end the year somewhat-literally “on a high” by seeing off 2018 in
the Scottish Highlands. If you’ve not read his blog about his adventures of the last 52 weekends, you should: whether taking a Boris Bike from Brixton to Brighton (within the rental window) or hitching a ride on an aeroplane, he’s provided a year’s worth of fantastic stories accompanied by some great photography.
Noticing that our bagel supply was running low and with two kids who’d happily fight to the death for the last one if it came down to it, I decided this weekend to dust off an old
recipe and restock using the ingredients in our cupboard. For a festive spin, I opted to make cranberry and cinnamon bagels, and served a few at my family’s regular Sunday brunch.
Little did I know that they would turn out to be such a hit that not one from the resupply would survive to the end of the day, and I’ve been pressed into making them again in time for
breakfast on Christmas Day (or, as Ruth suggested as she and Robin fought for the last one in a manner
more-childish than the children ever would, I could “make them any time I feel like it; every week maybe?”).
Even the slightly-charred one turned out to be flipping delicious.
If you’d like to make your own, and you totally should, the recipe’s below. I prefer volumetric measurements to weight for bread-making: if you’re not used to doing so, be sure to give
your dry ingredients a stir/shake to help them settle when measuring.
Festive Cranberry & Cinnamon Bagels
Yield: 8 bagels
Duration:
When my dough is unevenly shaped I call it “rustic”. These are rustic bagels, ready to go into the oven.
Eyes on the prize: this is what you’re ultimately aiming for. You might even make a less-“rustic” one.
Directions
Whisk the yeast into the water and set aside for a few minutes to activate.
Combine the flour, one quarter of the sugar, and salt.
Make a well, and gradually introduce the water/yeast, mixing thoroughly to integrate all the flour into a sticky wet dough.
Add the vanilla extract and mix through.
Knead thoroughly: I used a mixer with a dough hook, but you could do it by hand if you prefer. After 5-10 minutes, when the dough becomes stretchy, introduce the dried fruit and
continue to knead until well integrated. The dough will be very wet.
Mix the cinnamon into the remaining sugar and scatter over a clean surface. Using well-floured fingers, form the dough into a ball and press into the sugar/cinnamon mixture. Fold
and knead against the mixture until it’s all picked-up by the dough: this approach forms attractive pockets and rivulets of cinnamon throughout the dough.
Rub a large bowl with oil. Ball the dough and put it into the bowl, cover tightly, and leave at room temperature for up to two hours until doubled in size.
When it’s ready, fill a large pan about 6cm deep with water, add the honey, and bring to a simmer. Pre-heat a hot oven (gas mark 7, 220°C).
On a lightly-floured surface and with well-floured fingertips, extract the ball of dough and divide into eight (halve, halve, halve again). Shape each ball into a bagel by
pushing-through the middle with your thumb and stretching out the hole as you rotate it.
Submerge each bagel into the hot water for about a minute on each side, then transfer to a baking sheet lined with greaseproof paper.
Thin the egg white with a few drops of water, stir, then brush each bagel with the egg mix.
Bake for about 25 minutes until golden brown. Cool on a wire rack.
Most bagel recipes I’ve seen claim that they freeze well. I can make no such claim, because ours barely cool before they’re eaten.
Mostly this recipe’s here for my own reference, but if you make some then let me know how they turn out for you. (Oh, and for those of you who prefer when my blog posts are technical,
this page is marked up in h-recipe.)
An increasing number of people are suffering from an allergy to the meat and other products of nonhuman mammals, reports Mosaic Science this week, and we’re increasingly confident that the cause is
a sensitivity to alpha-gal (Galactose-alpha-1,3-galactose), a carbohydrate produced in the bodies of virtually
all mammals except for us and our cousin apes and Old World monkeys (and it’s one of the reasons you can’t transplant tissue from pigs to humans, for example).
The lone star tick (You call that a star, tick? Looks like a blob to me!), one of several vectors for alpha-gal sensitivity.
The interesting thing is that the most-common cause of alpha-gal sensitivity appears to be the bite of one of a small number of species of tick. The most-likely hypothesis seems to be
that being bitten by such a tick after it’s bitten e.g. deer or cattle may introduce that species’ alpha-gal directly to your bloodstream. This exposure triggers an immune response
through all future exposure, even if that exposure is far more minor, e.g. consuming milk products or even skin contact with an animal.
That’s nuts, isn’t it? The Mosaic Science article describes the reaction of Tami McGraw, whose symptoms began in 2010:
[She] asked her doctor to order a little-known blood test that would show if her immune system was reacting to a component of mammal meat. The test result was so strongly positive,
her doctor called her at home to tell her to step away from the stove.
That should have been the end of her problems. Instead it launched her on an odyssey of discovering just how much mammal material is present in everyday life. One time, she took
capsules of liquid painkiller and woke up in the middle of the night, itching and covered in hives provoked by the drug’s gelatine covering.
When she bought an unfamiliar lip balm, the lanolin in it made her mouth peel and blister. She planned to spend an afternoon gardening, spreading fertiliser and planting flowers, but
passed out on the grass and had to be revived with an EpiPen. She had reacted to manure and bone meal that were enrichments in bagged compost she had bought.
Cats can eat bacon. But some cat owners can’t. More bacon for the cats? The plot thickens. Also: haven’t used this picture in
a while, have I?
Of course, this isn’t the only nor even the most-unusual (or most-severe) animal-induced allergy-to-a-different-animal we’re aware of. The hilariously-named but terribly-dangerous
Pork-Cat syndrome is caused, though we’re not sure how, by exposure to cats and
results in a severe allergy to pork. But what makes alpha-gal sensitivity really interesting is that it’s increasing in frequency at quite a dramatic rate. The culprit? Climate change.
Probably.
It’s impossible to talk to physicians encountering alpha-gal cases without hearing that something has changed to make the tick that transmits it more common – even though they don’t
know what that something might be.
…
“Climate change is likely playing a role in the northward expansion,” Ostfeld adds, but acknowledges that we don’t know what else could also be contributing.
An average serving of beef contributes almost 8kg of greenhouse gases, compared to around 1kg for chicken. Thanks, Beeb (click through for full article).
A little dated, perhaps: I’m sure that nobody needs to be told nowadays that one of the biggest things a Westerner can do to reduce their personal carbon footprint (after breeding less or not at all, which I maintain is the biggest, or avoiding air
travel, which Statto argues for) is to reduce or refrain from consumption of meat (especially pork and beef) and dairy products.
Indeed, environmental impact was the biggest factor in my vegetarianism (now weekday-vegetarianism) for
the last eight years, and it’s an outlook that I’ve seen continue to grow in others over the same period.
Seeing these two stories side-by-side in my RSS reader put the Gaia hypothesis in my mind.
If you want a pop-culture-grade introduction to the Gaia hypothesis in the context of climate change, this SMBC comic does the job, and does so almost with fewer words than this
caption explaining that it does so.
If you’re not familiar with the Gaia hypothesis, the basic idea is this: by some mechanism, the Earth and all of the life on it act in synergy to maintain homeostasis. Organisms not
only co-evolve with one another but also with the planet itself, affecting their environment in a way that in turn affects their future evolution in a perpetual symbiotic relationship
of life and its habitat.
Its advocates point to negative feedback loops in nature such as plankton blooms affecting the weather in ways that inhibit
plankton blooms and to simplistic theoretical models like the Daisyworld Simulation
(cute video). A minority of its proponents go a step further and describe the Earth’s changes teleologically, implying a conscious Earth with an intention to
protect its ecosystems (yes, these hypotheses were born out of the late 1960s, why do you ask?). Regardless, the essence is the same: life’s effect on its environment affects the
environment’s hospitality to life, and vice-versa.
There’s an attractive symmetry to it, isn’t there, in light of the growth in alpha-gal allergies? Like:
Today – climate change causes ticks to spread more-widely and bite more humans.
Tomorrow – tick bites cause humans to consume fewer products farmed from mammals?
Both my appreciation and my rejection of the Gaia hypothesis can probably be traced to me playing way too much SimEarth as a teenager. Here’s my Daisyworld in a state of equilibrium, because I haven’t yet gotten bored and spawned dinosaurs to eat all of the daisies.
That’s not to say that I buy it, mind. The Gaia hypothesis has a number of problems, and – almost as bad – it encourages a complacent “it’ll all be okay, the Earth will fix itself”
mindset to climate change (which, even if it’s true, doesn’t bode well for the humans residing on it).
But it was a fun parallel to land in my news reader this morning, so I thought I’d share it with you. And, by proxy, make you just a little bit warier of ticks than you might have been
already. /shudders/
I’m a huge fan of multifactor authentication. If you’re using it, you’re probably familiar with using an app on your phone (or receiving a text or email) in addition to a username and
password when logging in to a service like your email, social network, or a bank. If you’re not using it then, well, you should be.
Using an app in addition to your username and password, wherever you can, may be the single biggest step you can make to improving your personal digital security.
Ruth recently had a problem when she lost her phone and couldn’t connect to a service for which she usually used an authenticator app like the
one pictured above, so I thought I’d share with you my personal strategy for managing multifactor authentication, in case it’s of any use to anybody else. After all: the issue of
not-having-the-right-second-factor-to-hand has happened to me before, it’s certainly now happened to Ruth, and it’s probably
something that’s happened to other people I know by now, too.
It could happen to anybody. What’s your authentication plan?
Here’s my strategy:
Favour fewer different multifactor solutions. Instead of using e.g. text messaging for one, an app for another, a different app for a third, a hardware token
for a fourth, and so on, try to find the fewest number of different solutions that work for your personal digital life. This makes backing up and maintenance easier.
I use RFC6238/TOTP (better known as “Google Authenticator”) for
almost all second factor purposes: the only exceptions are my online bank (who use a proprietary variant of RFC6238 that I’ve not finished reverse-engineering) and Steam (who use a
proprietary implementation of RFC6238 with a larger character set, for some reason, in their Steam Guard app).
Have a backup plan. Here’s the important bit. If you use your phone to authenticate, and you lose access to your phone for a period of time (broken, lost, stolen, out
of battery, in use by a small child playing a game), you can’t authenticate. That’s why it’s important that you have a backup plan.
That’s probably more backup devices than you need, but YMMV.
Some suggested backup strategies to consider (slightly biased towards TOTP):
Multiple devices: (Assuming you’re using TOTP or something like it) there’s nothing to stop you
setting up multiple devices to access the same account. Depending on how the service you’re accessing provides the code you need to set it up, you might feel like you have to
set them all up at the same time, but that’s not strictly true: there’s another way…
Consider setting up a backdoor: Some systems will allow you to print e.g. a set of “backup codes” and store them in a safe place for later use should you lose access
to your second factor. Depending on the other strategies you employ, you should consider doing this: for most (normal) people, this could be the single safest way to retain access to
your account in the event that you lose access to your second factor. Either way, you should understand the backdoors available: if your online bank’s policy is to email you
replacement credentials on-demand then your online bank account’s security is only as good as your email account’s security: follow the chain to work out where the weak links are.
Retain a copy of the code: The code you’re given to configure your device remains valid forever: indeed, the way that it works is that the
service provider retains a copy of the same code so they can generate numbers at the same time as you, and thus check that you’re generating the same numbers as them. If you keep a
copy of the backup code (somewhere very safe!) you can set up any device you want, whenever you want. Personally, I keep copies of all TOTP configuration codes in my password safe (you’re using a password safe, right?).
Set up the infrastructure that works for you: To maximise my logging-on convenience, I have my password safe enter my TOTP numbers for me: I’m using KeeOTP for KeePass, but since 2016 LastPass users can do basically the
same thing. I’ve also implemented my own TOTP client in Ruby to run on desktop computers I control (just be careful to protect the secrets file), because sometimes you just want a
command-line solution. The code’s below, and I think you’ll agree that it’s simple enough that you can audit it for your own safety too.
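For illustration, a minimal RFC 6238 client in Ruby can look like this (treat it as a sketch to audit and adapt, not as audited production code):

```ruby
require 'openssl'

# Minimal RFC 6238 (TOTP) sketch: base32-decode the shared secret,
# HMAC-SHA1 it with the current 30-second counter, then dynamically
# truncate (per RFC 4226) to a six-digit code.
BASE32 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'

def base32_decode(secret)
  bits = secret.delete('= ').upcase.chars
               .map { |c| BASE32.index(c).to_s(2).rjust(5, '0') }.join
  bits.scan(/.{8}/).map { |byte| byte.to_i(2).chr }.join
end

def totp(secret, time: Time.now.to_i, period: 30, digits: 6)
  counter = [time / period].pack('Q>')  # 8-byte big-endian counter
  hmac = OpenSSL::HMAC.digest('SHA1', base32_decode(secret), counter)
  offset = hmac.bytes.last & 0x0f       # dynamic truncation offset
  code = (hmac[offset, 4].unpack1('N') & 0x7fffffff) % 10**digits
  code.to_s.rjust(digits, '0')
end

# e.g. puts totp('YOUR-BASE32-SECRET-HERE')
```

You can check an implementation like this against the test vectors in RFC 6238’s Appendix B before trusting it with a real secret. And, of course, protect the secrets file as carefully as you would a password.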
I’ve occasionally been asked whether my approach actually yields me any of the benefits of two-factor authentication. After all, people say, aren’t I weakening its benefits by storing the TOTP generation key in the same place as my usernames and passwords rather than restricting it to my mobile device? This is true, and it is weaker to do this than to keep the two separately, but it’s not true to say that all of the benefits are negated: replay attacks by an
attacker who intercepts a password are mitigated by this approach, for example, and these are a far more-common vector for identity theft than the theft and decryption of password
safes.
Everybody has to make their own decisions on the balance of their convenience versus their security, but for me the sweet spot comes here: in preventing many of the most-common attacks
against the kinds of accounts that I use and reinforcing my existing username/strong-unique-passwords approach without preventing me from getting stuff done. You’ll have to make your
own decisions, but if you take one thing away from this, let it be that there’s nothing to stop you having multiple ways to produce TOTP/Google Authenticator credentials, and you should consider doing so.
Our eldest, 4, started school this year and this week saw her first parents’ evening. This provided an opportunity for us, her parents, to “come out” to her teacher about our slightly-unconventional relationship structure. And everything was fine, which is nice.
We’re an unusual shape for a family. But three of us are an unusual shape for being in a kids’ soft play area, too, I suppose.
I’m sure the first few months of every child’s school life are a time that’s interesting and full of change, but it’s been particularly fascinating to see the ways in which our young
academic’s language has adapted to fit in with and be understood by her peers.
I first became aware of these changes, I think, when I overheard her describing me to one of her school friends as her “dad”: previously she’d always referred to me as her “Uncle Dan”.
I asked her about it afterwards and she explained that I was like a dad, and that her friend didn’t have an “Uncle Dan” so she used words that her friend would know. I’m not
sure whether I was prouder about the fact that she’d independently come to think of me as being like a bonus father figure, or the fact that she demonstrated such astute audience
management.
She’s since gotten better at writing on the lines (and getting “b” and “d” the right way around), but you can make out “I have two dads”.
I don’t object to being assigned this (on-again, off-again, since then) nickname. My moniker of Uncle Dan came about as a combination of an effort to limit ambiguity
(“wait… which dad?”) and an attempt not to tread on the toes of actual-father JTA: the kids themselves are welcome to call me pretty-much whatever they’re comfortable with. Indeed, they’d be carrying on a family tradition if they
chose-for-themselves what to call me: Ruth and her brothers Robin and Owen address their father not by a
paternal noun but by his first name, Tom, and the kids have followed suit by adopting “Grand-Tom” as their identifier for him.
Knowing that we were unusual, though, we’d taken the time to do some groundwork before our eldest started school. For example we shared a book about and spent a while talking about how
families differ from one another: we figure that an understanding that families come in all kinds of shapes and sizes is a useful concept in general from a perspective of
diversity and acceptance. In fact, you can hear how this teaching pays-off in the language she uses to describe other aspects of the differences she sees in her friends and
their families, too.
Still, it was a little bit of a surprise to find myself referred to as a “dad” after four years of “Uncle Dan”.
I’ve no idea what the littler one – pictured here with his father – will call me when he’s older, but this week has been a “terrible 2s” week in which he’s mostly called me “stop it”
and “go away”.
Nonetheless: in light of the fact that she’d clearly been talking about her family at school and might have caused her teacher some confusion, when all three of us “parents” turned up
to parents’ evening we opted to introduce ourselves and our relationship. Which was all fine (as you’d hope: as I mentioned the other day, our unusual relationship structure is pretty boring, really), and the only
awkwardness was in having to find one more chair than the teacher had been expecting so that we could all sit at the table.
There’s sometimes a shortage of happy “we did a thing, and it went basically the same as it would for a family with monogamous parents” poly-family stories online, so I thought this one
was worth sharing.
And better yet: apparently she’s doing admirably at school. So we all celebrated with an after-school trip to one of our favourite local soft play centres.
Run run run run run run run STOP. Eat snack. Run run run run run run…
If you look closely, you’ll find I’ve shown you a sneak-peek at some of what’s behind tomorrow’s door. Shh. Don’t tell our social media officer.
As each door is opened, a different part of a (distinctly-Bodleian/Oxford) winter scene unfolds, complete with an array of fascinating characters connected to the history, tradition,
mythology and literature of the area. It’s pretty cool, and you should give it a go.
If you want to make one of your own – for next year, presumably, unless you’ve an inclination to count-down in this fashion to something else that you’re celebrating 25 days
hence – I’ve shared a version of the code that you can adapt for yourself.
The open-source version doesn’t include the beautiful picture that the Bodleian’s does, so you’ll have to supply your own.
Features that make this implementation a good starting point if you want to make your own digital advent calendar include:
Secure: your server’s clock dictates which doors are eligible to be opened, and only content legitimately visible on a given date can be obtained (no path-traversal,
URL-guessing, or traffic inspection holes).
Responsive: calendar adapts all the way down to tiny mobiles and all the way up to 4K fullscreen along with optimised images for key resolutions.
Friendly: accepts clicks and touches, uses cookies to remember the current state so you don’t have to re-open doors you opened yesterday (unless you forgot to open
one yesterday), “just works”.
Debuggable: a password-protected debug mode makes it easy for you to test, even on a production server, without exposing the secret messages behind each door.
Expandable: lots of scope for the future, e.g. a progressive web app version that you can keep “on you” and which notifies you when a new door’s ready to be opened,
was one of the things I’d hoped to add in time for this year but didn’t quite get around to.
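To illustrate the “secure” point above: the heart of the date-gating is a tiny server-side check along these lines (a Ruby sketch of the idea, not the actual code from the shared version):

```ruby
require 'date'

# Content for door N is only served once the *server's* clock says
# N December has arrived: no URL-guessing or client clock-changing
# can open a door early, because the decision never leaves the server.
def door_openable?(door, today: Date.today)
  (1..25).cover?(door) && today.month == 12 && today.day >= door
end
```

Everything else (responsive layout, cookies, debug mode) hangs off that one decision being made server-side rather than in the browser.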
Microsoft engineers have been spotted committing code to Chromium, the backend of Google Chrome
and many other web browsers. This, among other things, has led to speculation that Microsoft’s browser, Edge, might be planned to switch from its
current rendering engine (EdgeHTML) to Blink (Chromium’s). This is bad news.
This post, as it would appear if you were looking at it in Edge. Which you might be, I suppose.
The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or
something similar (and certainly not Edge) for development and debugging anyway, etc. The problem, perhaps, is that they’re too young to remember the First Browser War and its aftermath. Let me summarise:
Once upon a time – let’s call it the mid-1990s – there were several web browsers: Netscape Navigator, Internet Explorer, Opera, etc. They all used different rendering
engines and so development was sometimes a bit of a pain, but only if you wanted to use the latest most cutting-edge features: if you were happy with the standard, established
features of the Web then your site would work anywhere, as has always been the case.
Then, everybody started using just one browser: following some shady dealings and monopoly abuse, 90%+ of Web users ended up using just one web browser, Internet Explorer. By the time anybody took
notice, their rivals had been economically crippled beyond any reasonable chance of recovery, but the worst had yet to come…
Developers started targeting only that one browser: instead of making websites, developers started making “Internet Explorer sites” which were only tested in that one
browser or, worse yet, only worked at all in that browser, actively undermining the Web’s position as an open platform. As the grip of the monopoly grew tighter,
technological innovation was centred around this single platform, leading to decade-long knock-on
effects.
The Web ceased to grow new features: from the release of Internet Explorer 6 there were no significant developments in the technology of the Web for many years.
The lack of competition pushed us into a period of stagnation. A
decade and a half later, we’re only just (finally) finishing shaking off this unpleasant bit of our history.
History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too.
Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites.
More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the
switch from Chrome to e.g. Firefox (and you absolutely should) you might find that
YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.
So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.
If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.
And it’s not likely to be replaced.
Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that
could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even
ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today.
So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?
We have already lost one.
We used to have four major rendering engines, but Opera halted development of its own rendering engine, Presto, before adopting Blink.
Three left. Spend them wisely.
As much as I don’t like having to work-around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web circa
most of the first decade of this century. Please help keep browsers diverse: nobody wants to start seeing this shit –
The Bodleian has a specific remit for digital archiving… but sometimes they just like collecting stuff, too, I’m sure.
The team responsible for digital archiving had plans to spend World Digital Preservation Day running a stand in Blackwell Hall for some
time before I got involved. They’d asked my department about using the Heritage Window – the Bodleian’s 15-screen video wall – to show a carousel of slides with relevant content over
the course of the day. Or, they added, half-jokingly, “perhaps we could have Pong up there as it’ll be its 46th birthday?”
Free rein to play about with the Heritage Window while smarter people talk to the public about digital archives? Sure, sign me up.
But I didn’t take it as a joke. I took it as a challenge.
Emulating Pong is pretty easy. Emulating Pong perfectly is pretty hard. Indeed, a lot of the challenge in the preservation of (especially digital) archives in general is in
finding the best possible compromise in situations where perfect preservation is not possible. If these 8″ disks are degrading, is it acceptable to copy them onto a different medium? If this video file is unreadable in
modern devices, is it acceptable to re-encode it in a contemporary format? These are the kinds of questions that digital preservation specialists have to ask themselves all the damn
time.
The JS Gamepad API lets your web browser talk to controller devices.
Emulating Pong in a way that would work on the Heritage Window but be true to the original raised all kinds of complications. (Original) Pong’s aspect ratio doesn’t fit nicely on a 16:9
widescreen, much less on a 27:80 ultrawide. Like most games of its era, the speed is tied to the clock rate of the processor. And of course, it should be controlled using a
“dial”.
By the time I realised that there was no way that I could thoroughly replicate the experience of the original game, I decided to take a different tack. Instead, I opted to
reimplement Pong. A reimplementation could stay true to the idea of Pong while serving as a jumping-off point for discussion about how the experience of playing the game
might be superficially “like Pong” and yet still not be an example of digital preservation.
Bip… boop… boop… bip… boop… bip…
Here’s the skinny:
A web page, displayed full-screen, contains both a <canvas> (for the game, sized appropriately for a 3 × 3 section of the video wall) and a
<div> full of “slides” of static content to carousel alongside (filling a 2 × 3 section).
Javascript writes to the canvas, simulates the movement of the ball and paddles, and accepts input from the JS
Gamepad API (which is awesome, by the way). If there’s only one player, a (tough! – only three people managed to beat it over the course of the day!) AI plays the other paddle.
A pair of SNES controllers adapted for use as USB
controllers which I happened to own already.
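For the curious, the heart of the simulation is simple enough to sketch. Here’s a hypothetical, minimal version of a single step of the ball’s movement, with reflection off the top and bottom edges of the playfield – the names are illustrative, not taken from the actual source:

```javascript
// Hypothetical sketch (not the actual Bodleian Pong code): advance the ball
// by one simulation step, bouncing it off the top and bottom walls.
// `ball` is { x, y, dx, dy }; `height` is the playfield height in pixels.
function stepBall(ball, height) {
  let { x, y, dx, dy } = ball;
  x += dx;
  y += dy;
  if (y < 0) {
    y = -y;          // reflect back inside the playfield...
    dy = -dy;        // ...and reverse the vertical velocity
  } else if (y > height) {
    y = 2 * height - y;
    dy = -dy;
  }
  return { x, y, dx, dy };
}
```

Running a step like this on a fixed timer (rather than as fast as the processor allows) is also how a reimplementation sidesteps the original’s clock-rate-tied speed, mentioned above.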
Increasingly, the Bodleian’s spaces seem to be full of screens running Javascript applications I’ve written.
I felt that the day, event, and game were a success. A few dozen people played Pong and explored the other technology on display. Some got nostalgic about punch tape, huge floppy disks,
and even mechanical calculators. Many more talked to the digital archives folks and me about the challenges and importance of digital archiving. And a good time was had by all.
I’ve open-sourced the entire thing with a super-permissive license so you can deploy it yourself (you know, on your ultrawide
video wall) or adapt it as you see fit. Or if you’d just like to see it for yourself on your own computer, you can (but unless
you’re using a 4K monitor you’ll probably need to use your browser’s mobile/responsive design simulator set to 3200 × 1080 to make it fit your screen). If you don’t have
controllers attached, use W/S to control player 1 and the cursor keys for player 2 in a 2-player game.
It’s always been a bit of an inconvenience to have to do these things, but it’s never been a terrible burden: even when I fly internationally – which is probably the hardest
part of having my name – I’ve learned the tricks I need to minimise how often I’m selected for an excessive amount of unwanted “special treatment”.
I plan to make my first trip to the USA since my name change, next year. Place bets now on how that’ll go.
This year, though, for the very first time, my (stupid bloody) unusual name paid for itself. And not just in the trivial ways I’m used to, like being able to spot my badge instantly on
the registration table at conferences I go to or being able to fill out paper forms way faster than normal people. I mean in a concrete, financially-measurable way. Wanna hear?
So: I’ve a routine of checking my credit report with the major credit reference agencies every few years. I’ve been doing so since long before it became free (thanks, GDPR); long even before
I changed my name: it just feels like good personal data housekeeping, and it’s interesting to see what shows up.
It started out with the electoral roll. How did it end up like this? It was only the electoral roll. It was only the electoral roll.
And so I noticed that my credit report with Equifax said that I wasn’t on the electoral roll. Which I clearly am. Given that my credit report’s pretty glowing, I wasn’t too worried, but
I thought I’d drop them an email and ask them to get it fixed: after all, sometimes lenders take this kind of thing into account. I wasn’t in any hurry, but then, it seems: neither were
they –
2 February 2016 – I originally contacted them
18 February 2016 – they emailed to say that they were looking into it and that it was taking a while
22 February 2016 – they emailed to say that they were still looking into it
13 July 2016 – they emailed to say that they were still looking into it (which was a bit of a surprise, because after so long I’d almost forgotten that I’d even asked)
14 July 2016 – they marked the issue as “closed”… wait, what?
Given that all they’d done for six months was email me occasionally to say that it was taking a while, it was a little insulting to then be told they’d solved it.
I wasn’t in a hurry, and 2017 was a bit of a crazy year for me (for Equifax too, as it happens), so I ignored it for a bit, and
then picked up the trail right after the GDPR came into force. After all, they were storing personal information
about me which was demonstrably incorrect and continued to store and process it even after they’d been told that it was incorrect (it’d have been a violation of principle 4 of the DPA 1998, too, but the GDPR’s got bigger teeth: if you’re going to sic the law on somebody, it’s better that it has both bark and bite).
Throwing-the-book tip-of-the-day: don’t threaten; just explain what you require and the legal basis under which you’re entitled to require it. Let lawyers do the tough stuff.
My anticipation was that my message of 13 July 2018 would get them to sit up and fix the issue. I’d assumed that it was probably related to my unusual name and that bugs in
their software were preventing them from joining-the-dots between my credit report and the Electoral Roll. I’d also assumed that this nudge would have them either fix their software… or
failing that, manually fix my data: that can’t be too hard, can it?
Apparently it can:
You want me to make it my problem, Equifax, and you want me to change my name on the Electoral Roll to match the incorrect name you use to refer to me in your systems?
Equifax’s suggested solution to the problem on my credit report? Change my name on the Electoral Roll to match the (incorrect) name they store in their systems (to work around
a limitation that prevents them from entering single-character surnames)!
At this point, they turned my send-a-complaint-once-every-few-years project into a full-blown rage. It’s one thing if you need me to be understanding of the time it can take to fix
the problems in your computer systems – I routinely develop software for large and bureaucratic organisations, I know the drill! – but telling me that your bugs are my problems
and telling me that I should lie to the government to work around them definitely isn’t okay.
Dear Equifax: No. No no no. No. Also, no. Now try again. Love Dan.
At this point, I was still expecting them to just fix the problem: if not the underlying technical issue then instead just hack a correction into my report. But clearly they considered
this, worked out what it’d cost them to do so, and decided that it was probably cheaper to negotiate with me to pay me to go away.
Which it was.
This week, I accepted a three-figure sum from Equifax as compensation for the inconvenience of the problem with my credit report (which now also has a note of correction, not that my
alleged absence from the Electoral Roll has ever caused my otherwise-fine report any trouble in the past anyway). Curiously, they didn’t attach any strings to the deal, such as not
courting publicity, so it’s perfectly okay for me to tell you about the experience. Maybe you know somebody who’s similarly afflicted: that their “unusual” name means that a
credit reference company can’t accurately report on all of their data. If so, perhaps you’d like to suggest that they take a look at their credit report too… just saying.
You can pay for me to go away, but it takes more for me to shut up. (A lesson my parents learned early on.)
Apparently Equifax think it’s cheaper to pay each individual they annoy than it is to fix their database problems. I’ll bet that, in the long run, that isn’t true. But in the meantime,
if they want to fund my recent trip to Cornwall, that’s fine by me.
After the success of Challenge Robin this summer – where Ruth and I blindfolded her brother
Robin and drove him out to the middle of nowhere without his phone or money and challenged him to find his way home – Robin’s been angling for a sequel. He even went so far as to suffix the title
of his blog post on the subject “(part 1)”, in the hopes perhaps that we’d find a way to repeat the experience.
I gather that Robin was particularly concerned by the combination of the recommendation to bring swimwear and the warning that he may have to leave mainland UK… especially given that
the challenge takes place in November.
In response to an email sent a week in advance of the challenge, Robin quickly prepared and returned a “permission slip” from his parents, although I’m sceptical that either of them
actually saw this document, let alone signed it.
I’m told that the “signatures” on this document accurately replicate the signature styles of Robin’s parents. I’m guessing he forged a lot of these kinds of notes.
With about a day to go before the challenge began, Robin’s phone received a number of instructional messages from a sender-ID called “Control”, instructing him of his first
actions and kicking off his adventure. He’d already committed to going to work on Friday with a bag fully-packed for whatever we might have in store for him, but he doubtless wouldn’t
have guessed quite how much work had been put into this operation.
We considered giving Robin’s contact the alias ‘Smiler’, but ‘Frowny’ seemed more-fitting for the part.
By 18:06 he still hadn’t arrived to meet his contact. Had this adventure fallen at the first hurdle? Only time would tell…
Update – Friday 9 November 18:45: Robin arrived, late and apologetic, to find his contact, “Frowny” (played by JTA), at the GBK at that address.
After sufficient apologies he was granted the clue we’d expected to give him earlier…
The pair ate and drank and “Frowny” handed Robin his first clue: a map pointing to Cornwall Gardens in Kensington and instructions on where to find the next clue when he got there. The
game was afoot!
I love the little “Frowny” face; it really makes this prop.
Clearly he’d taken the idea of being prepared for anything to a level beyond what we’d expected. Among his other provisions, he was carrying a tent, sleeping bag, and passport!
“Clearly my mistake,” he told his contact, “Was giving intelligent people a challenge and then leaving them three months to plan.”
Update – Friday 9 November 19:53: In Cornwall Gardens, Robin found the note (delayed somewhat, perhaps by the growing dark) and began his treasure
trail.
The sign at Cornwall Gardens kickstarted a journey visiting a thematic series of blue plaques around London before eventually leading to the Paddington Bear statue at Paddington
Station…
Soon after, though, things started to go very wrong…
I’m not sure that this was even one of the ones he was supposed to photograph, but anyway…
Update – Friday 9 November 20:40: Let’s take a diversion, and I’ll rely on JTA to keep Robin’s eyes away from this post for a bit. Here’s what was
supposed to happen:
Robin would follow a trail of clues around London which would give him key words alluding to literature about Paddington (station) and Penzance. Eventually he’d find a
puzzle box and, upon solving it, discover inside tickets for the Paddington-to-Penzance overnight sleeper train.
Meanwhile, I’ve been rushing around the countryside near Penzance setting up an epic extension to the previous trail complete with puzzles, mixed-terrain hikes, highlands, islands,
lions, tigers and bears (oh my). Some of those might not really have been in the plan.
The storm was just starting as I climbed up to a cliff edge for an as-yet-undisclosed reason; it’s already looking pretty wild and getting wilder all the time!
So now we’re working out what to do next. Right now I’m holed-up in an undisclosed location near Penzance (the ultimate target of the challenge) and Robin’s all the way over in London.
We’re working on it, but this hasn’t been so successful as we might have liked.
Update – Saturday 10 November 07:58: We’ve managed to get Robin onto a series of different trains rather than the sleeper, so he’ll still get to
Penzance… eventually! Meanwhile, I’m adjusting the planned order of stages at this end to ensure that he can still get a decent hike in (weather permitting).
Update – Saturday 10 November 10:45: Originally, when Robin had been expected to arrive in Penzance via a sleeper train about three hours ago, he’d
have received his first instruction via The Gadget, which JTA gave him in London:
The Gadget is a cheap Android smartphone coupled to a beefy battery pack and running custom software I wrote for the challenge.
The Gadget’s primary purpose is to send realtime updates on Robin’s position so that I can plot it on a map (and report it to you guys!) and to issue him with hints so that he knows
where he’s headed next, without giving him access to a phone, Internet, navigation, maps, etc. The first instruction would be to head to Sullivan’s Diner for breakfast (where
I’ve asked staff to issue him with his first clue): cool eh? But now he’s only going to be arriving in the afternoon so I’m going to have to adapt on-the-fly. Luckily I’ve got a plan.
The first clue has a picture of the ruin of a hill fort a few miles away and vague instructions on how to find it.
I’m going to meet Robin off his train and suggest he skips this first leg of the challenge, because the second leg is… sort-of time-critical…
Update – Saturday 10 November 13:29: Robin finally arrives in Penzance on a (further-delayed) train. I’ve given him a sausage sandwich at
Sullivan’s Diner (who then gave him the clue above), turned on The Gadget (so I’ve got live tracking of his location), and given him the next clue (the one he’d have gotten at
Roger’s Tower) and its accompanying prop.
Rushing off the train, carrying his tent, sleeping bag, clothes…
Armed with the clue, Robin quickly saw the challenge that faced him…
The second clue SHOULD have been read at Roger’s Tower and so the photo would have made more sense.
After all of these delays, there’s only about an hour and a half until the tide comes in enough to submerge the causeway to St. Michael’s Mount: the island he’s being sent to. And he’s
got to get there (about an hour’s walk away), across the causeway, find the next clue, and back across the causeway to avoid being stranded. The race is on.
He WOULD have found this at Roger’s Tower. Why yes, I am a geocacher: why do you ask?
Luckily, he’d been able to open the puzzle box and knows broadly where to look on the island for the next clue. How will he do? We’ll have to wait and see…
Update – Saturday 10 November 14:18: Robin made spectacular time sprinting along the coast to Longrock (although his route might have been
suboptimal). At 14:18 he began to cross to the island, but with only a little over half an hour until the tide began to cover the causeway, he’d have to hunt quickly for the password he
needed.
Each pin is 3 minutes apart. You do the maths.
At 14:22 he retrieved the clue and put the password into The Gadget: now he had a new set of instructions – to get to a particular location without being told what it was… only a
real-time display of how far away it was. 7.5km and counting…
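For the curious: a distance-only readout like this just needs the great-circle distance between The Gadget’s latest GPS fix and the (hidden) target. Here’s a minimal sketch using the standard haversine formula – the names are mine, not from The Gadget’s actual code:

```javascript
// Hypothetical sketch: The Gadget knows the target's coordinates but shows
// Robin only how far away he is. Haversine great-circle distance, in metres,
// between two latitude/longitude points given in degrees.
function distanceMetres(lat1, lng1, lat2, lng2) {
  const R = 6371000; // mean Earth radius in metres
  const rad = (deg) => (deg * Math.PI) / 180;
  const dLat = rad(lat2 - lat1);
  const dLng = rad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

Recompute that on every GPS update, round it, and you’ve got a “7.5km and counting…” display without ever revealing a bearing or a map.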
I THINK he’s the rightmost blob. I wish I’d brought a telephoto lens.
Update – Saturday 10 November 14:57: Robin’s making his way along the coast (at a more-reasonable pace now!). He’s sticking to the road rather
than the more-attractive coast path, so I’ve had “Control” give him a nudge via The Gadget to see if he’d prefer to try the more-scenic route: he’ll need to, at some point, if he’s to
find his way to the box I’ve hidden.
While not as steep as the gradient to Roger’s Tower would’ve been, the coast path isn’t without its steep bits, too.
Update – Saturday 10 November 16:50: No idea where Robin is; The Gadget’s GPS has gone all screwy.
One of these pins is probably right, right?
But it looks like he probably made it to Cudden Point, where the final clue was hidden. And then kept moving, suggesting that he retrieved it without plunging over the cliff
and into the sea.
It’s not VERY well hidden, but it only had to survive a day. A stormy day, mind…
In it, he’ll have found a clue as to broadly where his bed is tonight, plus a final (very devious) puzzle box with the exact location.
The fourth clue basically says “your bed is THAT way, but you need to open THIS puzzle box to know exactly where”. Evil.
The sun is setting and Robin’s into the final stretch. Will he make it?
Update – Saturday 10 November 17:25: He’s going to make it! He’s actually going to make it! Looks like he’s under a mile away and heading in the
right direction!
It might not be clear to him yet that there’s a river in the way, but I’m sure he’ll find a bridge. Or swim.
Update – Saturday 10 November 17:55: He made it!
Success!
We’ve both got lots to say, so a full debrief will take place in a separate blog post.
You may recall that on Halloween I mentioned that the Bodleian had released a mini choose-your-own-adventure-style gamebook, available freely online. I decided that this didn’t go quite far
enough and I’ve adapted it into a hypertext game, below. (This was also an excuse for me to play with Chapbook, Chris Klimas’s new under-development story format for Twine.)
If the thing you were waiting for before you experienced Shadows Out of Time was for it to be playable in your browser, wait no longer: click here to play the game…