Axe Feather 2021

tl;dr?

I recreated a 16-year-old interactive ad. Experience it here. Get the source code here. Or keep reading for the full story.

What?

Back in 2005 I reblogged a Flash-based interactive advert I’d discovered via del.icio.us. And if that sentence wasn’t early-naughties enough for you, buckle up…

A woman lies on a bed with her legs crossed, playfully wagging her finger.
This screenshot isn’t from the original site but from my homage to it. More on that later.

At the end of 2004, Unilever brand Axe (Lynx here in the UK) continued their strategy of marketing their deodorant as magically transforming young men into hyper-attractive sex gods. This is, of course, an endless battle, pitting increasingly sexually-charged advertisements against the fundamental experience of their product, which smells distinctly like locker rooms and school discos. To launch 2005’s new fragrance Feather, they teamed up with London-based design agency Dare Digital to create a game at domain AxeFeather.com (long since occupied by domain squatters).

In the game, the player’s mouse pointer becomes a feather which they can use to tickle an attractive young woman lying on a bed. The woman’s movements – which vary based on where she’s tickled – were captured in digital video, then aggressively compressed using the then-new H.263-ish Sorenson Spark codec to make a download just-about small enough to be tolerable for people still on dial-up Internet access (which was, at the time, almost as popular as broadband). The ad became a viral hit. I can’t tell you whether it paid for itself in sales, but it must have paid for itself in brand awareness: on Valentine’s Day 2005 it felt like it was all the Internet wanted to talk about.

Axe Feather logo visible via Archive.org, circa August 2005, in a Firefox browser window.
The site was archived by the Wayback Machine… but it doesn’t work in a modern browser.

I suspect its success also did wonders for the career of its creative consultant Olivier Rabenschlag, who left Dare a few years later, hopped around Silicon Valley for a bit, then landed himself a job as Head of Creative (now Chief Creative Officer) with Google. Kudos.

Why?

I told you about the site 16 years ago: why am I telling you again? Because this site, which made headlines at the time, is gone.

And not just a little bit gone, like a television ad no longer broadcast but which might still exist on YouTube somewhere (and here it is – you’re welcome for the earworm). The website went down in 2009, and because it was implemented in Flash the content was locked away in a compiled, proprietary format, which has ceased to be meaningfully usable on the modern web.

IE-specific CSS with a comment "Ok, so the scrollbar is IE specific...but I like it, ok?? :)"
The parts of AxeFeather.com’s code that are openly readable don’t help much, but I love this comment, which carries the scent of the adolescent web in the same way that Lynx deodorant carries the scent of an adolescent human.

The ad was pioneering. Flash had only recently gained video support (this would be used the following year for the first version of YouTube), and it had so far been used mostly for non-interactive linear video. This ad was groundbreaking… but now it’s disappeared like so much other Flash work. And for all that Flash might have been bad for the web, it’s an important part of our digital history [recommended reading].

Ruffle window showing an empty bed.
Third-party Flash emulation is imperfect. I tried to make Axe Feather work in Ruffle and got… an empty bed? What is this, a metaphor for being a lonely nerd?

So on a whim… I decided to see if I could recreate the ad.

Call it lockdown fever if you like, because it’s certainly not the work of a sane mind to attempt to resurrect a 16-year-old Internet advertisement. But that’s what I did.

How?

My plan: to reverse-engineer the digital assets (video, audio, cursor etc.) out of the original Flash file, and use them to construct a moderately-faithful recreation of the ad, suitable for use on the modern web. My version must:

  • Work in any modern browser, without Flash of course.
  • Work on mobile devices/with touchscreens, with all of the original functionality available without a keyboard (the original had secret content hidden behind keyboard keypresses). Nowadays, Rabenschlag knows to think mobile-first, but I think we can forgive him for not doing so twelve months before Flash Lite 2.0 would bring .flv support to mobile devices…
  • Indicate how much of the video content you’ve seen, because we live in an era of completionists who want to know they’ve seen it all.
  • Depend on no third-party frameworks/libraries: just vanilla HTML, CSS, and JavaScript.

Let’s get started.

Reverse-engineering

Handbrake converting 19.flv to MP4 format.
At this point I noticed that the videos had no audio tracks: the giggling and other sound effects must be stored separately.

I grabbed the compiled .swf file from archive.org and ran it through SWFExtract and an online decompiler: neither was individually able to extract all of the assets, but together they gave me a full set. I ran the .flv files through Handbrake to get myself a set of .mp4 files instead.

Two starting frames from the videos, annotated to show that they are not aligned to the same point.
In what appears to have been an exercise in size optimisation, the original authors cropped the videos differently depending on how much space was needed (e.g. if the subject stretched her arms above her head, more space would be required). Clearly, some re-alignment would be needed.

Seeing that the extracted video files were clearly designed to be carefully positioned on a static background, and were not all aligned to the same point, I decided to make my job easier by compositing them all, along with the background layer (the picture of the bed), into a single video. Integrating the background with the subject meant that I could use video editing software to tweak the positioning, which I imagined would be much easier than doing so in code. Combining all of the clips into a single file provides compression benefits, as well as making it easier to encourage a browser to precache the entire video from the start.

Four layer design. From bottom to top: web page, video (showing woman on bed), (transparent) canvas, cursor (shaped like a feather).
My design called for three “layers” above my web page: the video, a transparent (and usually hidden) canvas showing the hit areas for debugging purposes, and the feather-shaped cursor.

The longest clip was a little over 6 seconds long, so I split my timeline into blocks of 7 seconds, padding each clip to exactly that length with a freeze-frame of its final image. This meant that calculating the position in the finished video to which I wanted to jump was as simple as multiplying the (0-indexed) clip number by 7 and seeking to that position. The additional “frozen” frames acted as a safety buffer in case my JavaScript code was delayed by a few milliseconds in jumping to the “next” block.
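In code, that jump is a one-liner. Here’s a minimal sketch (the constant and function names are mine, not from the finished project):

const BLOCK_LENGTH = 7; // seconds per block, freeze-frame padding included

function jumpToBlock(video, blockNumber) {
  // Blocks are 0-indexed, so block n starts at n * 7 seconds:
  video.currentTime = blockNumber * BLOCK_LENGTH;
}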

Davinci Resolve showing composition of the actress onto the bed in a timeline.
I used onion-skinning to help “line up” the actress with herself as I composited her onto the bed in a single unified video of 7-second blocks.

An additional challenge was that in the original binary, the audio files were stored separately from the video clips… and slightly longer than them! A little experimentation revealed that the ends of each clip lined up, presumably something to do with how Flash preloads and synchronises media streams. Luckily for me, the audio clips were numbered such that they mostly mapped to the order in which the videos appeared.

Once I had a video file suitable for use on the web (you can watch the entire clip here, if you really want to), it was time to write some code.

Video timeline showing that each 7-second block is comprised of the original clip plus padding, atop a background layer of the bed and each clip's associated audio.
It feels slightly wasteful that over 50% of the resulting video clip is a freeze-frame, but modern video compression algorithms like H.264 reduce the impact considerably and the resulting video file is about the same size as its more-optimised predecessor.

Regular old engineering

The theory was simple: web page, video; loop the first seven seconds until you click on it, then animate the cursor (a feather) and jump to another seven-second block before jumping back or, in some cases, on to a completely new seven-second block. Simple!

Of course, any serious web development is always a little more complex than you first anticipate.

Game map illustrating transition between the states of Axe Feather 2021.
I extracted 34 distinct animated clips from the .swf, which I numbered 0 through 33. Clips 6 and 30 appeared to be duplicates of others. Clips 0 and 33 are the two “idling” states from which interaction can lead to other states. Note that my interpretation of the order and relationship of animation sequences differs from the original.

For example: nowadays, putting a video on a web page is as easy as a <video> tag. But, in an effort to prevent background web pages from annoying you with unexpected audio, modern browsers won’t let a video play sound unless user interaction is the reason that the video starts playing (or unmutes, if it was playing-but-muted to begin with). Broadly-speaking, that means that a definitive user action like a “click” event has to be in the call stack when your code makes the video play/unmute.

But changing the .currentTime of a video to force it into a loop: that’s fine! So I set the video to autoplay muted on page load, with a script to make it loop within its first seven-second block. The actress doesn’t make any sound in block 0 (position A) anyway, so I can unmute the video when the user interacts with a hotspot.
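A sketch of that looping logic looks something like this (my own variable names; not verbatim from the project):

const video = document.querySelector('video');
video.muted = true;
video.play(); // muted autoplay is permitted without a user gesture

let currentBlock = 0; // block 0 = the idle loop, "position A"

video.addEventListener('timeupdate', () => {
  // Rewind to the start of the current block before playback escapes it:
  if (video.currentTime >= (currentBlock + 1) * 7) {
    video.currentTime = currentBlock * 7;
  }
});

// Later, inside a 'click'/'touchstart' handler (so a user gesture is in
// the call stack), it's safe to run: video.muted = false;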

For best performance, I used window.requestAnimationFrame to synchronise my non-interactive events (video loops, virtual cursor repositioning). This posed a slight problem: animation frames aren’t triggered while a tab is in the background, so the video would play through each seven-second block and into the next! Fortunately the visibilitychange event came to the rescue, and I was able to pause the video when it wasn’t being actively watched.
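That handler is about as simple as you’d hope (again, an assumed sketch rather than the project’s verbatim code, reusing the video reference from above):

document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    video.pause(); // don't let playback run on while animation frames are throttled
  } else {
    video.play();
  }
});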

I originally hoped to use the cursor: CSS property to make the “feather” cursor, but there’d be no nice way to animate it. Comet Cursor may have been able to use animated GIFs as cursors back in 1997 (when it wasn’t busy selling all your personal information to advertisers, back when that kind of thing used to attract widespread controversy), but modern browsers don’t… presumably because it would be super annoying. They also don’t all respect cursor: none, so I used the old trick of cursor: url(null.png), none (where null.png is an almost-entirely transparent 1×1 pixel image) to hide the original cursor, then positioned an image dynamically. I use getBoundingClientRect() to allow the video to resize dynamically in CSS and to convert coordinates on it between percentages and actual pixel values: this allows it to react responsively to any screen size without breakpoints or excessive code.
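For illustration, the percentage↔pixel conversion needs nothing more than this (the helper names are hypothetical):

function percentToPixels(video, xPct, yPct) {
  const rect = video.getBoundingClientRect();
  return {
    x: rect.left + (xPct / 100) * rect.width,
    y: rect.top + (yPct / 100) * rect.height,
  };
}

function pixelsToPercent(video, x, y) {
  const rect = video.getBoundingClientRect();
  return {
    xPct: ((x - rect.left) / rect.width) * 100,
    yPct: ((y - rect.top) / rect.height) * 100,
  };
}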

Once I’d gone that far I was able to drop the GIF idea entirely and use a CSS animation for the “tickling” motion.

Woman on bed in idle position B, with hotspots highlighted on each arm, her head, her chest, her stomach, her hips, the top of her legs, and the bottom of the leg that's extended straight below her.
The hotspot overlay was added as a debugging feature but I left it in the final version. Hold the space bar to highlight hit areas.

I added a transparent <canvas> element on top of the <video> on which the hit areas are dynamically drawn to help me test the “hotspots” and tweak their position. I briefly considered implementing a visual tool to help me draw the hotspots, but figured it wasn’t quite worth the time it would take.
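Drawing the overlay is straightforward if the hotspots are stored as percentage-based rectangles (an assumption on my part – the real project’s data format may differ):

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function drawHotspots(hotspots) {
  // Assumes the canvas is sized to overlay the video exactly:
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = 'red';
  for (const h of hotspots) {
    // Percentage coordinates scale with the canvas, so resizing "just works":
    ctx.strokeRect(
      (h.xPct / 100) * canvas.width,
      (h.yPct / 100) * canvas.height,
      (h.wPct / 100) * canvas.width,
      (h.hPct / 100) * canvas.height
    );
  }
}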

As I implemented more and more of the game, I remembered one feature from the original that I’d missed: the “blowaway”. If you trigger block 31 – a result of tickling the woman’s nose – she’ll blow your cursor off the screen. It’s particularly fun because it subverts the player’s expectations of their user interface: once you’ve got past the surprise of your cursor being a feather, you quickly settle into it moving like a regular cursor… but then control’s stolen from you and the cursor vanishes! (Well, I thought it was cool… 16 years ago.)

A woman blows a feather away from her face.
Sometimes tickling her nose will make her blow your feather off the screen. That’ll show you.

So yeah: that was my project this weekend.

I can’t even begin to explain why anybody would do this. But I did it. If you haven’t already: go have a play. And if you’re interested in how it works, the source code’s free for you to explore.


Further Lessons in Wood-Fired Pizza

Earlier this month, I made my first attempt at cooking pizza in an outdoor wood-fired oven. I’ve been making pizza for years: how hard can it be?

Charred non-stick coating on a tray.
Do you know what temperature Teflon (PTFE) burns at? I do. Now. (About 470ºC.)

It turned out: pretty hard. The oven was way hotter than I’d appreciated and I burned a few crusts. My dough was too wet to slide nicely off my metal peel (my wooden peel disappeared, possibly during my house move last year), and my efforts to work around this by transplanting cookware in and out of the oven quickly led to flaming Teflon and a shattered pizza stone. I set up the oven outside the front door and spent all my time running between the kitchen (at the back of the house) and the front door, carrying hot tools, while hungry children snapped at my ankles. In short: mistakes were made.

Pizza dough balls under cling film on a black marble surface.
I roped in Robin to help make dough, because he’s damn good at it. And because I had a mountain of work to do today.

I suspect that cooking pizza in a wood-fired oven is challenging in the same way that driving a steam locomotive is. I’ve not driven one, except in simulators, but it seems like you’ve got a lot of things to monitor at the same time. How fast am I going? How hot is the fire? How much fuel is in it? How much fuel is left? How fast is it burning through it? How far to the next station? How’s the water pressure? Oh fuck, I forgot to check on the fire while I was checking the speed…

Digital infared thermometer display reading 246ºC.
“How hot is it?” is a question I’m now much better at answering. Thanks, technology! This oven’s still warming up. (!)

So it is with a wood-fired pizza oven. If you spend too long preparing a pizza, you’re not tending the fire. If you put more fuel on the fire, the temperature drops before it climbs again. If you run several pizzas through the oven back-to-back, you leech heat out of the stone (my oven’s not super-thick, so it only retains heat for about four consecutive pizzas then it needs a few minutes break to get back to an even temperature). If you put a pizza in and then go and prepare another, you’ve got to remember to come back 40 seconds later to turn the first pizza. Some day I’ll be able to manage all of those jobs alone, but for now I was glad to have a sous-chef to hand.

Pizza in an oven, fire raging behind and jumping onto the floor of the oven.
Also, if too much of the flour you use to keep your peel slick falls into the oven, it catches fire. Now you have two fires to attend to.

Today I was cooking out amongst the snow, in a gusty crosswind, and I learned something else new. Something that perhaps I should have thought of already: the angle of the pizza oven relative to the wind matters! As the cold wind picked up speed, its angle meant that it was blowing right across the air intake for my fire, and it was sucking all of the heat out of the back of the oven rather than feeding the flame and allowing the plasma and smoke to pass through the top of the oven. I rotated the pizza oven so that the air blew into rather than across the oven, but this fanned the flames and increased fuel consumption, so I needed to increase my refuelling rate… there are just so many variables!

A salami and basil pizza with olives on half.
You know what, though? Everything turned out pretty-much okay.

The worst moment of the evening was probably when I took a bite out of a pizza that, it turned out, I’d shunted too-deep into the oven and it had collided with the fire. How do I know? Because I bit into a large chunk of partially-burned wood. Not the kind of smoky flavour I was looking for.

But apart from that, tonight’s pizza-making was a success. Cooking in a sub-zero wind was hard, but with the help of my excellent sous-chef we churned out half a dozen good pizzas (and a handful of just-okay pizzas), and more importantly: I learned a lot about the art of cooking pizza in a box full of burning wood. Nice.


We Are The Martians

This week our usual Dungeons & Dragons group took a week off while our DM recovered from a long and tiring week. As a “filler”, I offered to facilitate a game of Dialect: A Game About Language and How It Dies, from Thorny Games, who I discovered through a Metafilter post about their latest free print-and-play game, Sign: A Game about Being Understood. Yes, all of their games are about language and communication; what of it?

Dialect

Dialect could be described as a rules-light, GM-less (it has a “facilitator” role, but they have no more authority than any other player), narrative-driven/storytelling roleplaying game. It’s based on the concept of isolated groups developing their own unique dialect, using the words they develop as a vehicle to tell their stories.

Dialect's rulebook and card deck.
It’s also super-pretty to leaf through and hold.

This might not be the kind of RPG that everybody likes to play – if you like your rules more-structured, for example, or you’re not a fan of “one-shot”/”beer and pretzels” gaming – but I was able to grab a subset of our usual roleplayers – Alec, Matt R, Penny, and me – and have a game (with thanks to Google Meet for videoconferencing and Roll20 for the virtual tabletop: I’d have used Foundry but its card support is still pretty terrible!).

The Outpost

A game of Dialect begins with a backdrop – what other games might call a scenario or adventure – to set the scene. We opted for The Outpost, which put the four of us among the first two thousand humans to colonise Mars, landing in 2045. With help from some prompts provided by the backdrop we expanded our situation in order to declare the “aspects” that would underpin our story, and then expanded on these to gain a shared understanding of our world and society:

  • Refugees from plague: Our expedition left Earth to escape from a series of devastating plagues that were ravaging the planet, to try to get a fresh start on another world.
  • Hostile environment: Life on Mars is dominated by the ongoing struggle for sufficient food and water; we get by, but only thanks to ongoing effort and discipline and we lack some industries that we haven’t been able to bootstrap in the five years we’ve been here (we had originally thought that others would follow).
  • Functionalist, duty-driven society: The combination of these two factors led us to form a society based on supporting its own needs; somewhat short of a caste system, our culture is one of utilitarianism and unity.
Finished game board from The Outpost backdrop of our game of Dialect.
Our finished game board, or tableau.

It soon became apparent that communication with Earth had been severed, at least initially, from our end: radicals, seeing the successes of our new social and economic systems, wanted to cement our differences by severing ties with the old world. And so our society lives in a hub-and-spoke cave system beneath the Martian desert, self-sustaining except for the need to send rovers patrolling the surface to scout for and collect valuable surface minerals.

In this world, and prompted by our cards, we each developed a character. I was Jeramiah, the self-appointed “father” of the expedition and of this unusual new social order, who remembers the last disasters and wars of old Earth and has revolutionary plans for a better world here on Mars, based on controlled growth and a planned economy. Alec played Sandy – “Tyres” to their friends – a rover-driving explorer with one eye always on the horizon and fresh stories for the colony brought back from behind every new crater and mountain. Penny played Susie, acting not only as the senior medic to the expedition but something more: sort-of the “mechanic” of our people-driven underground machine, working to keep alive the genetic records we’d brought from Earth and keep them up-to-date as our society eventually grew, in order to prevent the same kinds of catastrophe happening here. “Picker” Ben was our artist, for even a functionalist society needs somebody to record its stories, celebrate its accomplishments, and inspire its people. It’s possible that the existence of his position was Jeramiah’s doing: the two share a respect for the stark, barren, undeveloped beauty of the Martian surface.

We developed our language using prompt cards, improvised dialogue, and the needs of our society. But the decades that followed brought great change. More probes began to land from Earth, more sophisticated than the ones that had delivered us here. They brought automated terraforming equipment, great machines that began to transform Mars from a barren wasteland into a place for humans to thrive. These changes fractured our society: there were those that saw opportunity in this change – a chance to go above ground and live in the sun, to expand across the planet, to make easier the struggle of our day-to-day lives. But others saw it as a threat: to our way of life, which had been shaped by our challenging environment; to our great social experiment, which could be ruined by the promise of an excessive lifestyle; to our independence, as these probes were clearly the harbingers of the long-promised second wave from Earth.

Even as new colonies were founded, the Martians of the Hub (the true Martians, who’d been here for yams time, lived and defibed here, not these tanning desert-dwellers that followed) resisted the change, but it was always going to be a losing battle. Jeramiah took his last breath in an environment suit atop a dusty Martian mountain a day’s drive from the Hub, watching the last of the nearby deserts that was still untouched by the new green plants that had begun to spread across the surface. He was with his friend Sandy, for despite all of the culture’s efforts to paint them as diametrically opposed leaders with different ideas of the future, they remained friends until the end. As the years went by and more and more colonists arrived, Sandy left for Phobos, always looking for a new horizon to explore. Sick of the growing number of people who couldn’t understand his language or his art, Ben pioneered an expedition to the far side of the planet where he lived alone, running a self-sustaining agri-home and exploring the hills until his dying day. We were never sure where Susie ended up, but it wasn’t Mars: she’d talked about joining humanity’s next big jump, to the moons of Jupiter, so perhaps she’s out there on one of the colonies of Titan or Europa. Maybe, low clicks, she’s even keeping our language alive out there.

Retrospective

The whole event was a lot of fun and I’m keen to repeat it, perhaps with a different group and a different backdrop. The usual folks know who they are, but if you’re not one of those and you want in next time we play, drop me a message of some kind.


Confused.com Confuses Me

It’s that time of year again when I comparison-shop for car insurance, and every time I come across a new set of reasons to hate the developers at Confused.com. How do you confuse me? Let me count the ways.

No means yes

I was planning to enumerate my concerns to them directly, via their contact form, but when I went to do so I spotted this bit of genius, which clinched it and made me write a blog post instead:

Animated GIF showing how clicking on "No" on Confused.com's contact form checks the "Yes" box.
Clicking the word “Yes” means “Yes”. Clicking the word “No” means “Yes” as well.

Turns out that there’s a bit of the old sloppy-paste going on there:

<input type="radio" value="Yes" id="ContactByPhoneYes" name="contactByPhone" />
<label for="ContactByPhoneYes" class="label">Yes</label>
<input type="radio" value="No" id="ContactByPhoneNo" name="contactByPhone" />
<label for="ContactByPhoneYes" class="label">No</label>

I guess nobody had the “consent talk” with Confused.com?
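For what it’s worth, the fix is a one-attribute change – the second label just needs its for to point at the “No” input’s id:

<input type="radio" value="Yes" id="ContactByPhoneYes" name="contactByPhone" />
<label for="ContactByPhoneYes" class="label">Yes</label>
<input type="radio" value="No" id="ContactByPhoneNo" name="contactByPhone" />
<label for="ContactByPhoneNo" class="label">No</label>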

That’s not my name!

Error message "Please enter a name between 2 and 30 letters long..." when Dan enters "Q" as his surname.
Somebody needs to brush up on their falsehoods programmers believe about names.

Honestly, I’m used to my unusual name causing trouble by now and I know how to work around it in the way that breaks the fewest systems (I can even usually get airline tickets without too much difficulty nowadays). But these kinds of (arbitrary) restrictions must frustrate folks like Janice Keihanaikukauakahihulihe’ekahaunaele.

I guess their developers didn’t realise that this blog post was parody?

Also, that’s not my title!

This one, though, pisses me off:

Animation showing title selector with options "Mr", "Mrs", "Miss", and "More...". Clicking "More..." reveals three more: "Ms", "Dr (Male)" and "Dr (Female)"
As everybody knows, there are only six titles, and two of them are “Dr”.

This is a perfect example of why your forms should ask for what you actually want to know, not for what you think people want to tell you. Just ask!

  1. If you want to know my gender, ask for my gender! (I’m a man, by the way.)
    I don’t understand why you want to know – after all, it’s been illegal since 2012 to risk-assess/price car insurance differently on the grounds of gender – but maybe you’ve got a valid reason, which hopefully you’ll tell me in a tooltip. Like if you’re using it as a (terrible) checksum when you check my driving licence details: that’s fine!
  2. If you want to know my title, ask for my title! (I prefer not to use one, but if you must use one I’d prefer Mx.)
    This ought to be an optional field, of course, and ideally you want a free text input or else you’ll always have missed somebody (Lord, Reverend, Prince, Wing Commander…). It’s in your interests because I’m totally going to pick at random otherwise. Today I’m a Ms.

Consistency? Never heard of it.

It’s not a big thing, but if you come up with a user interface paradigm like “clicking More… shows more buttons”, you ought to stick to it.

Animation of marital statuses: clicking "More..." shows a dropdown instead of more buttons.
Maybe their internal style guide says “a More… button with three additional options should use buttons, but four additional options should be a drop-down”. But it seems more-likely that they just don’t have one.

Again, I’m not sure exactly what all of this data is used for, nor why there’s a need to differentiate between married couples and civil partnerships, but let’s just assume this is all necessary and legitimate and just ask ourselves: why are we using drop-downs now for “More…”? We were using buttons just a second ago!

"How many cars are at your home?" has a "More..." box that shows more buttons.
This was just crying out for a type-in field. But I guess the same developer who did the “Title” question did this one too, and wanted to show off the fancy “more buttons” control they’d written. (Imaginary style guide be damned!)

What’s my occupation again?

There’s so much to unpack in the “occupation” part of the form that I’m not even sure where to begin. Let’s just pick out a few things:

What type of student are you? List of options, many of which intersect.
I never answered a question this hard even in the exams I did when I was a student. Why do we care where students live… except if they’re postgrads? If I’m a mature student studying a postgraduate course in medicine while living at home with my parents… which of the five possible options should I pick? And, again: what difference could it conceivably make?

The student thing is just the beginning, though. You can declare up to two jobs, but if the first one is “house person/parent” you can’t have a second one. If you’re self-employed, that has to be your first job even though the guidance says that the one you spend most time on must be the first one (this kind of thing infuriated me when I used to spend 60% of my work time employed, 20% self-employed, and 20% studying).

I’m not saying it’s easy to make a form like this. I know from experience that it’s not. I am saying that Confused.com make it look a lot harder than it is.

Tooltip reading "Please choose the employment status that reflects the majority of the work you do. For example if you are a house person and have a part time job of 5 hours a week, you should select 'House person/parent' as your primary job."
Well that clears everything up. Also, I think you mean “houseperson”, unless you’re referring to somebody who is half-house/half-person, like some kind of architectural werewolf.

What do you mean, you live with your partner?

At a glance, this sounds like a “poly world problem”, but hear me out:

Relationship to policy holder: Living together (couple) results in the error "The driver's marital status must be Living With Partner" if their relationship to the proposer is "Living Together (Couple)".
What you’re seeing here is a reference-identity error. I can’t possibly be living together with somebody as a couple if their marital status isn’t “Living With Partner”.

I put Ruth’s marital status as married, because she’s married to JTA. But then when it asked how she was related to me, it wouldn’t accept “Living together (couple)”.

Relationship to proposer question with 'spouse' option but not 'living with partner'.
If I put Ruth as the primary policyholder (proposer) though, I don’t even get the option of “living together (couple)” to describe her relationship with me. ‘Cos it’s physically impossible to have a partner and be married, right?

Even if you don’t think it’s odd that they hide the “living with partner” option for describing a married person’s relationship to somebody other than their spouse… you’ve still got to agree that it’s a little bit odd that they don’t hide the “spouse” button. In other words, this user interface is more-okay with you having multiple spouses than it is with you having a spouse and an unmarried partner!

And of course this isn’t just about polyamorous folks: there are perfectly “normal” reasons that a person might end up confused by this interface, too. For example a separated (but not yet divorced) couple, one of whom has a new partner (it’s not even inconceivable that such a pair might share custody of a car). Also interesting is the fact that the form doesn’t care about the gender of your spouse (it doesn’t ask for “husband” or “wife”) but does care about the gender of your parent, child, or sibling. What gives?

Half a dozen easy fixes. Go for it, Confused.com.

Given that their entire marketing plan for most of the last two decades has been that they reduce customer confusion, Confused.com’s user interface leaves a lot to be desired. As I’ve mentioned before – and speaking as a web developer that’s been in the game for longer than their company has – it’s not necessarily easy to get this kind of thing right. But you can improve a form like this, a little at a time. And every little win counts for something: a more-satisfied returning customer, perhaps, or a new word-of-mouth recommendation.

Or you can just let it languish and continue to have the kind of form that people mock on the public Internet.

It’ll be a year before I expect to comparison-shop for car insurance again: let’s see how they get on, shall we?

Update (21 January 2021): Confused.com Respond!

I didn’t expect to receive any response to this post: most organisations don’t when I call-out the problems with their websites (not least because I’m more than a little bit sarcastic about it!). I never heard back from the Digital Climate Strike folks, for example, when I pointed out that their website was a great example of exactly the kind of problem they were protesting. But Confused.com passed on my thoughts to Product Manager Gareth who took a look at them and gave me a £20 Amazon gift card by way of thanks. Nice one, Confused.com!


The Diamonds, The Dagger and One Classy Dame

On account of the pandemic, I’d expected my fortieth birthday to be a somewhat more-muted affair than I’d hoped. I had a banner, I got trolled by bagels, and I received as a gift a pizza oven with which I immediately set fire to several pieces of cookware, but I hadn’t expected to be able to do anything like the “surprise” party of my thirtieth, and that saddened me a little. So imagine my surprise when I came back from an evening walk the day after my birthday to discover that an actual (remote) surprise party really had been arranged without my knowing!

Matt, Suz, Alec, Jen, Dermot and Doreen on a Google Meet screen.
“Hello, remote guests! What are you doing here?”

Not content with merely getting a few folks together for drinks, though, Ruth and team had gone to great trouble (involving lots of use of the postal service) arranging a “kit” murder mystery party in the Inspector McClue series – The Diamonds, The Dagger, and One Classy Dame – for us all to play. The story is sort-of a spiritual successor to The Brie, The Bullet, and The Black Cat, which we’d played fifteen years earlier. Minor spoilers follow.

JTA (wearing a string of pearls) and Robin (wearing sunglasses)
“Hello, local guests. Wait… why are you all in costumes…?”

Naturally, I immediately felt underdressed, having not been instructed that I might need a costume, and underprepared, having only just heard for the first time that I would be playing the part of German security sidekick Lieutenant Kurt Von Strohm minutes before I had to attempt my most outrageous German accent.

Dan with his tongue out holding a glass of champagne.
Fortunately I was able to imbibe a few glasses of champagne and quickly get into the spirit. Hic.

The plot gave me in particular a certain sense of deja vu. In The Brie, The Bullet, and The Black Cat, I played a French nightclub owner who later turned out to be an English secret agent supplying the French Resistance with information. But in The Diamonds, The Dagger, and One Classy Dame I played a Gestapo officer who… also later turned out to be an English secret agent infiltrating the regime and, you guessed it, supplying the French Resistance.

Jen drinking from the neck of a nearly-empty wine bottle.
As she had previously with Sour Grapes, Ruth had worked to ensure that a “care package” had reached each murder mystery guest. Why yes, it was a boozy care package.

It was not the smoothest nor the most-sophisticated “kit” murder mystery we’ve enjoyed: the technology made communication challenging, the reveal was less-satisfying than some others’, etc. But the company was excellent. (And the acting was pretty good too, especially by our murderer, whose character was exquisitely played.)

JTA downing a Jeroboam of champagne.
The largest bottle, though, was with us: we opened the Jeroboam of champagne Ruth and JTA had been saving from their anniversary (they have a tradition involving increasing sizes of bottle; it’s a whole thing; I’ll leave them to write about it someday).

And of course the whole thing quickly descended into a delightful shouting match with accusations flying left, right, and centre and nobody having a clue what was going on. Like all of our murder mystery parties!

Google Meet transcript with the words "You are the Jewish", which nobody said.
I’m not sure how I feel about Google Meet’s automatic transcription feature. It was generally pretty accurate, but it repeatedly thought that it heard the word “Jewish” being spoken by those of us who were putting on German accents, even though none of us said that.

In summary, the weekend of my fortieth birthday was made immeasurably better by getting to hang out with (and play a stupid game with) some of my friends despite the lockdown, and I’m ever so grateful that those closest to me were able to make such a thing happen (and without me even noticing in advance).


How Not to use A Pizza Oven

Clearly those closest to me know me well, because for my birthday today I received a beautiful (portable: it packs into a bag!) wood-fired pizza oven, which I immediately assembled, test-fired, cleaned, and prepped with the intention of feeding everybody some homemade pizza using some of Robin‘s fabulous bread dough, this evening.

Ooni Fyra portable wood-fired pizza oven.

Fuelled up with wood pellets the oven was a doddle to light and bring up to temperature. It’s got a solid stone slab in the base which looked like it’d quickly become ideal for some fast-cooked, thin-based pizzas. I was feeling good about the whole thing.

But then it all began to go wrong.

Animated GIF showing the fire blazing as seen through the viewing window on the front of the oven.
The confined space quickly heats up to a massive 400-500ºC.

If you’re going to slip pizzas onto hot stone – especially using a light, rich dough like this one – you really need a wooden peel. I own a wooden peel… somewhere: I haven’t seen it since I moved house last summer. I tried my aluminium peel, but it was too sticky, even with a dusting of semolina or a light layer of oil. This wasn’t going to work.

I’ve got some stone slabs I use for cooking fresh pizza in a conventional oven, so I figured I’d just preheat them, assemble pizzas directly on them, and shunt the slabs in. Easy as (pizza) pie, right?

Pizza on fire in oven.
Within 60 seconds the pizza was cooked and, in its elevated position atop a second layer of stone, the crust began to burn. The only-mildly-charred bits were delicious, though.

This oven is hot. Seriously hot. Hot enough to cook the pizza while I turned my back to assemble the next one, sure. But also hot enough to crack apart my old pizza stone. Right down the middle. That stone had never been hotter than the 240ºC of my regular kitchen oven, but I figured it’d cope with a hotter one. Apparently not.

So I changed plan. I pulled out some old round metal trays and assembled the next pizza on one of those. I slid it into the oven and it began to cook: brilliant! But no sooner had I turned my back than… the non-stick coating on the tray caught fire! I didn’t even know that was a thing that could happen.

Flames flickering at the back of a pizza oven.
Hello fire. I failed to respect you sufficiently when I started cooking. I’ve learned my lesson now.

Those first two pizzas may have each cost me a piece of cookware, but they tasted absolutely brilliant. Slightly coarse, thick, yeasty dough, crisped up nicely and with a hint of woodsmoke.

But I’m not sure that the experience was worth destroying a stone slab and the coating of a metal tray, so I’ll be waiting until I’ve found (or replaced) my wooden peel before I tangle with this wonderful beast again. Lesson learned.


The Fourth Way to Inject CSS

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Last week, Jeremy Keith wrote an excellent summary of the three ways to inject CSS into a web document. In short, he said:

There are three ways—that I know of—to associate styles with markup.

External CSS

<link rel="stylesheet" href="/path/to/styles.css">

Embedded CSS

<style>element { property: value; }</style>

Inline CSS

<element style="property: value"></element>

While talking about external CSS, he hinted at what I consider to be a distinct fourth way with its own unique use cases: using the Link: HTTP header. I’d like to share with you how it works and why I think it needs to be kept in people’s minds, even if it’s not suitable for widespread deployment today.

Injecting CSS using the Link: HTTP Header

Every one of Jeremy’s suggestions involves adding markup to the HTML document itself. Which makes sense; you almost always want to associate styles with a document regardless of where it’s stored or the medium over which it’s transmitted. The most popular approach to adding CSS to a page uses the <link> HTML element, but did you know… the <link> element has a semantically-equivalent HTTP header, Link:.

Bash shell running the command "curl -i https://hell.meiert.org/core/php/link.php". The response includes a HTTP Link: header referencing a CSS stylesheet; this is highlighted.
The only active examples I’ve been able to find are test pages like Jens Oliver Meiert’s (pictured), Louis Lazaris’s, and Anne van Kesteren’s, but it’s possible that others are hiding elsewhere on the Web.

According to the specifications, the following HTTP responses are equivalent in terms of the CSS that would be loaded and applied to the document:

Traditional external CSS injection:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

<!doctype html>
<html>
  <head>
    <title>My page</title>
    <link rel="stylesheet" href="/style/main.css">
  </head>
  <body>
    <h1>My page</h1>
  </body>
</html>

Link: header CSS injection:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Link: </style/main.css>; rel="stylesheet"

<!doctype html>
<html>
  <head>
    <title>My page</title>
  </head>
  <body>
    <h1>My page</h1>
  </body>
</html>
Diagram showing a web browser requesting a document from a web server, the web server finding the document and returning it after attaching HTTP headers.
A webserver adds headers when it serves a document anyway. Adding one more is no big deal.

Why is this important?

This isn’t something you should put on your website right now. This (21-year-old!) standard is still only really supported in Firefox and pre-Blink Opera, so you lose perhaps 95% of the Web (it could be argued that because CSS ought to be considered progressive enhancement, it’s tolerable so long as your HTML is properly-written).

If it were widely-supported, though, that would be a really good thing: HTTP headers beat meta/link tags for configurability, performance management, and separation of concerns. Need some specific examples? Sure: here’s what you could use HTTP stylesheet linking for:

Diagram showing a reverse proxy server modifying the headers set by an upstream web server in response to a request by a web browser.
You have no idea how many times in my career I’d have injected CSS Link: headers using a reverse proxy server if the standard were universally implemented. This technique would have made one of my final projects at the Bodleian so much easier…
  • Performance improvement using aggressively preloaded “top” stylesheets before the DOM parser even fires up.
  • Stylesheet injection by edge caches to provide regionalised/localised changes to brand identity.
  • Strong separation of content and design by hosting content and design elements in different systems.
  • Branding your staff intranet differently when it’s accessed from outside the network than inside it.
  • Rebranding proprietary services on your LAN without deep inspection, using reverse proxies (sketched after this list).
  • Less-destructive user stylesheet injection by plugins etc. that doesn’t risk breaking icky on-page Javascript (e.g. theme switchers).
  • Browser detection? 😂 You could use this technique today to detect Firefox. But you absolutely shouldn’t; if you think you need browser detection in CSS, use this instead.
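To make the reverse-proxy idea concrete, here’s roughly what such an injection might look like in nginx configuration (a hypothetical, untested sketch of my own – the hostname and stylesheet path are invented):

location / {
    proxy_pass http://intranet.internal;
    # Inject a stylesheet without touching the upstream application:
    add_header Link '</style/external-visitors.css>; rel="stylesheet"';
}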

Unfortunately right now though, stylesheet Link: headers remain consigned to the bin of “cool stylesheet standards that we could probably use if it weren’t for fucking Google”; see also alternate stylesheets.

Update: I later used this technique to make a seemingly-empty web page.


Downloading a YouTube Music Playlist for Offline Play

Now that Google Play Music has been replaced by YouTube Music, and inspired by the lampshading the RIAA did recently with youtube-dl, a friend asked me: “So does this mean I could download music from my Google Play Music/YouTube Music playlists?”

A Creative MuVo MP3 player (and FM radio), powered off, on a white surface.
My friend still uses a seriously retro digital music player, rather than his phone, to listen to music. It’s not a Walkman or a Minidisc player, I suppose, but it’s still pretty elderly. But it’s not one of these.

I’m not here to speak about the legality of retaining offline copies of music from streaming services. YouTube Music seems to permit you to do this using their app, but I’ll bet there’s something in their terms and conditions that specifically prohibits doing so any other way. Not least because Google’s arrangement with rights holders probably stipulates that they track how many times tracks are played, and using a different player (like my friend’s portable device) would throw that off.

But what I’m interested in is the feasibility. And in answering that question, in explaining how to work out that it’s feasible.

A "Your likes" playlist in the YouTube Music interface, with 10 songs showing.
The web interface to YouTube Music shows playlists of songs and streaming is just a click away.

Spoiler: I came up with an approach, and it looks like it works. My friend can fill up their Zune or whatever the hell it is with their tunes and bop away. But what I wanted to share with you was the underlying technique I used to develop this approach, because it involves skills that as a web developer I use most weeks. Hold on tight, you might learn something!

youtube-dl can download “playlists” already, but to download a personal playlist requires that you faff about with authentication and it’s a bit of a drag. Just extracting the relevant metadata from the page is probably faster, I figured: plus, it’s a valuable lesson in extracting data from web pages in general.

Here’s what I did:

Step 1. Load all the data

I noticed that YouTube Music playlists “lazy load”: you have to scroll down to see everything. So I scrolled to the bottom of the page until I reached the end of the playlist. Now that everything was in the DOM, I could investigate it with my inspector.
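If you’re feeling lazy yourself, you can make the browser do the scrolling with a throwaway console snippet (my own; not part of the original workflow):

let lastHeight = 0;
const scroller = setInterval(() => {
  window.scrollTo(0, document.body.scrollHeight);
  // Stop once the page stops growing, i.e. the lazy-loader has run dry:
  if (document.body.scrollHeight === lastHeight) clearInterval(scroller);
  lastHeight = document.body.scrollHeight;
}, 1000);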

Step 2. Find each track’s “row”

Using my browser’s debugger “inspect” tool, I found the highest unique-sounding element that seemed to represent each “row”/track. After a little investigation, it looked like a playlist always consists of a series of <ytmusic-responsive-list-item-renderer> elements wrapped in a <ytmusic-playlist-shelf-renderer>. I tested this by running document.querySelectorAll('ytmusic-playlist-shelf-renderer ytmusic-responsive-list-item-renderer') in my debug console and sure enough, it returned a number of elements equal to the length of the playlist, and hovering over each one in the debugger highlighted a different track in the list.

A browser debugger inspecting a "row" in a YouTube Music playlist. The selected row is "Baba Yeta" by Peter Hollens and Malukah, and has the element name "ytmusic-responsive-list-item-renderer" shown by the debugger.
The web application captured right-clicks, preventing the common right-click-then-inspect-element approach… so I just clicked the “pick an element” button in the debugger.

Step 3. Find the data for each track

I didn’t want to spend much time on this, so I looked for a quick and dirty solution: and there was one right in front of me. Looking at each track, I saw that it contained several <yt-formatted-string> elements (at different depths). The first corresponded to the title, the second to the artist, the third to the album title, and the fourth to the duration.

Better yet, the first contained an <a> element whose href was the URL of the piece of music. Extracting the URL and the text was as simple as a .querySelector('a').href on the first <yt-formatted-string> and a .innerText on the others, respectively, so I ran [...document.querySelectorAll('ytmusic-playlist-shelf-renderer ytmusic-responsive-list-item-renderer')].map(row=>row.querySelectorAll('yt-formatted-string')).map(track=>[track[0].querySelector('a').href, `${track[1].innerText} - ${track[0].innerText}`]) (note the use of [...*] to get an array) to check that I was able to get all the data I needed:

Debug console running on YouTube Music. The output shows an array of 256 items; items 200 through 212 are visible. Each item is an array containing a YouTube Music URL and a string showing the artist and track name separated by a hyphen.
Lots of URLs and the corresponding track names in my friend’s preferred format (me, I like to separate my music into folders by album, but I suppose I’ve got a music player with more than a floppy disk’s worth of space on it).

Step 4. Sanitise the data

We’re not quite good-to-go, because there’s some noise in the data. Sometimes the application’s renderer injects line feeds into the innerText (e.g. when escaping an ampersand). And of course some of these song titles aren’t suitable for use as filenames, if they’ve got e.g. question marks in them. Finally, where there are multiple spaces in a row it’d be good to coalesce them into one. I do some experiments and decide that .replace(/[\r\n]/g, '').replace(/[\\\/:><\*\?]/g, '-').replace(/\s{2,}/g, ' ') does a good job of cleaning up the song titles so they’re suitable for use as filenames.
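Applied to a made-up worst-case title, that chain behaves like this:

'What\'s Up? / B-side \n (Remix)'
  .replace(/[\r\n]/g, '')         // strip injected line feeds
  .replace(/[\\\/:><\*\?]/g, '-') // swap filename-hostile characters for '-'
  .replace(/\s{2,}/g, ' ');       // coalesce runs of whitespace
// => "What's Up- - B-side (Remix)"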

I probably should have it fix quotes too, but I’ll leave that as an exercise for the reader.

Step 5. Produce youtube-dl commands

Okay: now we’re ready to combine all of that output into commands suitable for running at a terminal. After a quick dig through the documentation, I decide that we needed the following switches:

  • -x to download/extract audio only: it defaults to the highest quality format available, which seems reasonable
  • -o "the filename.%(ext)s" to specify the output filename but accept the format provided by the quality requirement (transcoding to your preferred format is a separate job not described here)
  • --no-playlist to ensure that youtube-dl doesn’t see that we’re coming from a playlist and try to download it all (we have our own requirements of each song’s filename)
  • --download-archive downloaded.txt to log what’s been downloaded already so successive runs don’t re-download and the script is “resumable”

The final resulting code, then, looks like this:

console.log([...document.querySelectorAll('ytmusic-playlist-shelf-renderer ytmusic-responsive-list-item-renderer')].map(row=>row.querySelectorAll('yt-formatted-string')).map(track=>[track[0].querySelector('a').href, `${track[1].innerText} - ${track[0].innerText}`.replace(/[\r\n]/g, '').replace(/[\\\/:><\*\?]/g, '-').replace(/\s{2,}/g, ' ')]).map(trackdata=>`youtube-dl -x "${trackdata[0]}" -o "${trackdata[1]}.%(ext)s" --no-playlist --download-archive downloaded.txt`).join("\n"));

Code running in a debugger and producing a list of youtube-dl commands to download a playlist full of music.
The output isn’t pretty, but it’s suitable for copy-pasting into a terminal or command prompt where it ought to download a whole lot of music for offline play.
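If you’d rather read that than squint at it, here’s the same one-liner unrolled (reformatted by me; functionally equivalent):

const rows = [...document.querySelectorAll(
  'ytmusic-playlist-shelf-renderer ytmusic-responsive-list-item-renderer'
)];

const commands = rows
  .map(row => row.querySelectorAll('yt-formatted-string'))
  .map(track => [
    track[0].querySelector('a').href,               // URL of the track
    `${track[1].innerText} - ${track[0].innerText}` // "Artist - Title"
      .replace(/[\r\n]/g, '')
      .replace(/[\\\/:><\*\?]/g, '-')
      .replace(/\s{2,}/g, ' ')
  ])
  .map(([url, name]) =>
    `youtube-dl -x "${url}" -o "${name}.%(ext)s" --no-playlist --download-archive downloaded.txt`
  );

console.log(commands.join("\n"));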

This isn’t an approach that most people will ever need: part of the value of services like YouTube Music, Spotify and the like is that you pay a fixed fee to stream whatever you like, wherever you like, obviating the need for a large offline music collection. And people who want to maintain a traditional music collection offline are most-likely to want to do so while supporting the bands they care about, especially as (with DRM-free digital downloads commonplace) it’s never been easier to do so.

But for those minority of people who need to play music from their streaming services offline but don’t have or can’t use a device suitable for doing so on-the-go, this kind of approach works. (Although again: it’s probably not permitted, so be sure to read the rules before you use it in such a way!)

Step 6. Learn something

But more-importantly, the techniques of exploring and writing console Javascript demonstrated are really useful for extracting all kinds of data from web pages (data scraping), writing your own userscripts, and much more. If there’s one lesson to take from this blog post it’s not that you can steal music on the Internet (I’m pretty sure everybody who’s lived on this side of 1999 knows that by now), but that you can manipulate the web pages you see. Once you’re viewing it on your computer, a web page works for you: you don’t have to consume a page in the way that the author expected, and knowing how to extract the underlying information empowers you to choose for yourself a more-streamlined, more-personalised, more-powerful web.


New Blood Donation Rules Better, I Suppose

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

So the NHS blood donation rules are changing again. And while they’re certainly getting closer, they’re still not quite hitting the bullseye yet.

That’s great. Prior to 2011 men who’d ever had sex with men, as well as women who’d had sex with such a man within the last 6 months, were banned from donating blood. That rule clearly spun out of the AIDS hysteria of the 1980s and generally entrenched homophobia. It probably did little to protect the recipients of blood, and certainly did a lot to increase the stigma experienced by non-straight men.

A shooting target with a great many holes.
You throw enough policies at a problem, eventually one will get close-enough, right?

The 2011 change permitted donation by men who’d previously had sex with men… so long as they hadn’t done so within the last year. Which opened the doors to donation by a lot of men: e.g. bisexual men who’d been in relationships exclusively with women, gay men who’d been celibate for a period, etc. It still wasn’t great, but it was a step in the right direction.

So when I saw that the rules were changing to better target only risky behaviours, rather than behaviours that are so broad-brush as to target identities, I was initially delighted. Evidence-based medicine, you say? For the win.

A nurse wearing gloves uses a hypodermic needle to take a blood sample from a patient's arm, as seen from over the patient's shoulder.
Go on! Stick it in me! I’ll still be able to give blood, right?

But… it’s not all sunshine and rainbows. The new rules prohibit blood donation regardless of gender by people who’ve had sex with more than one person in the last three months.

Diagram showing a relationship between Andre and Brandon (married), and between Carlos and Brandon (partners). Andre and Carlos are now allowed to give blood, but Brandon still can't.
Sorry Brandon, we only want Andre and Carlos’ blood.

So if, for example, there’s a V-shaped relationship consisting of three men, who only have sex within their thruple… two of them are now allowed to give blood but the third isn’t? (This isn’t a contrived example. I know such a thruple.)

Stranger still: if you swap Brandon in the diagram above for a woman then you get a polycule that’s a lot like mine, but the woman in the middle used to be allowed to give blood… and now can’t! My partner Ruth is in exactly that position: her situation hasn’t changed, but because she’s been in a long-term relationship with exactly two people she’s now not allowed to give blood. Wot?

On the whole, this rule change is an improvement. We’re getting closer to a perfect answer. But it’s amusing to see where the policy misses again and excludes donors who would otherwise be perfectly viable.

Update: as this is attracting a lot of attention I just wanted to remind people that the whole discussion is, of course, a lot more complicated than can be summarised in a single, short, opinionated blog post. Take a look at the FAIR steering group’s recommendations and compare to the government’s press release.

Update #2: justifying choice of words – “AIDS hysteria” refers specifically to the media (and to a lesser extent the policy) reactions to the (very real, very devastating) pandemic. For a while there it was perfectly normal to see (often misguided, sometimes homophobic) scaremongering news coverage suggesting that everybody was at enormous risk from HIV.


COVID Ipsum

So I made a COVID conspiracy theory-themed lorem ipsum generator:

I blame my friend Bryn, who put the idea into my head while he was coming up with fake COVID conspiracy theories (I realise this sentence makes it sound like there are real COVID conspiracy theories) on a WhatsApp group we’re both in:

WhatsApp conversation: Bryn says that it's easy to come up with COVID conspiracy theories, Dan says somebody should make a Lorem Ipsum generator based on them.
This is about the minimum level of encouragement I need to do just about anything in tech.

It’s implemented using perchance, a platform for creating random text generators that I’ve been playing with – sometimes with the kids – lately. It’s really easy to use and provides a kind of instant-satisfaction that I think is important if you want to inspire the next generation of software engineers. This means, among other things, that you can clone, edit, and mashup my tool: perhaps you can make it better! Or perhaps you’ll use perchance to write some fiction, or poetry, or something else entirely. But regardless, I’d encourage you to have a play.
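
If you’ve never seen a generator like this before, the underlying idea is simple: pick random fragments from some lists and glue them together into sentences. Here’s that idea approximated in JavaScript (a sketch of my own, with made-up fragment lists: it’s not perchance’s syntax and not my actual generator):

// Pick a random element from an array.
const pick = (list) => list[Math.floor(Math.random() * list.length)];

// Tiny example fragment lists; a real generator has hundreds of entries.
const subjects = ["5G masts", "Big Pharma", "the deep state"];
const verbs    = ["activate", "suppress", "monetise"];
const objects  = ["your microchips", "natural immunity", "the truth"];

// Glue random fragments into grammatical-but-meaningless sentences.
function covidIpsum(sentenceCount = 3) {
  return Array.from({ length: sentenceCount },
    () => `${pick(subjects)} ${pick(verbs)} ${pick(objects)}.`
  ).join(" ");
}

console.log(covidIpsum());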

Mostly my generator comes up with meaningless gibberish, nonsense, and laughable claims. So it’s marginally more-trustworthy than your typical COVID conspiracy theorist.


Staying Sane with GEMSAW

I’ve been having a tough time these last few months. Thanks to COVID, I’m sure I’m not alone in that.

Times are strange, and even when you get a handle on how they’re strange they can still affect you: lockdown stress can quickly magnify anything else you’re already going through.

We’ve all come up with our own coping strategies; here’s part of mine.

JTA, Dan and Ruth shopping for a Christmas tree, wearing face masks
Only people who are highly-allergic to pine needles normally look like this when they’re shopping for a Christmas tree.

These last few months have occasionally seen me as emotionally low as… well, a particularly tough spell a decade ago. But this time around I’ve benefited from the self-awareness and experience to put some solid self-care into practice!

Partly by way of self-accountability and partly to share what works for me, let me tell you about the silly mnemonic that reminds me what I need to keep track of as part of each day: GEMSAW! (With thanks to Amy Blankson for, among other things, the idea of this kind of acronym.)

Because it’s me, I’ve cited a few relevant academic sources for you in my summary, below:

  • Gratitude
    Taking the time to stop and acknowledge the good things in your life, however small, is associated with lower stress levels (Taylor, Lyubomirsky & Stein, 2017) to a degree that can’t just be explained by the placebo effect (Cregg & Cheavens, 2020).
    Frankly, the placebo effect would be fine, but it’s nice to have my practice of trying to intentionally recognise something good in each day validated by the science too!
  • Exercise
    I don’t even need a citation; I’m sure everybody knows that aerobic exercise is associated with reduced risk and severity of depression: the biggest problem comes from the fact that it’s an exceptionally hard thing to motivate yourself to do if you’re already struggling mentally!
    But it turns out you don’t need much to start to see the benefits (Josefsson, Lindwall & Archer, 2014): I try to do enough to elevate my heart rate each day, but that’s usually nothing more than elevating my desk to standing height, putting some headphones on, and dancing while I work!
Dan dancing at his desk (animated GIF)
Warming up. Things only get nuts when the bass drops, but I’ll spare you having to watch that.
  • Meditation/Mindfulness
    Understandably a bit fuzzier, and tainted by being a “hip” concept. A short meditation break or mindfulness exercise might be verifiably therapeutic, but more (non-terrible) studies are needed (Vonderlin, Biermann, Bohus & Lyssenko 2020). For me, a 2-5 minute meditation break punctuates a day and feels like it contributes towards the goal of staying-sane-in-challenging-times, so it makes it into my wellbeing plan.
    Maybe it’s doing nothing. But I’m not losing much time over it so I’m not worried.
  • Sunlight
    During my 20s I gradually began to suffer more and more from “winter blues”. Nobody’s managed to make an argument for the underlying cause of seasonal affective disorder that hasn’t been equally-well debunked by some other study. Small-scale studies often justify light therapy (e.g. Lam, Levitan & Morehouse 2006) but it’s possibly no-more-effective than a placebo at scale (SBU 2007).
    Since my early 30s, I’ve always felt better for getting myself 30 minutes of lightbox on winter mornings (I use one of these bad boys). I admit it’s possible that the benefits are just the result of tricking my brain into waking-up more promptly and therefore feeling like I’m being more-productive with my waking hours! But either way, getting some sunlight – whether natural or artificial – makes me feel better, so it makes it onto my daily self-care checklist.
Bright sunlight in an almost-cloudless blue sky.
10 minutes of overhead, unoccluded sunlight is the minimum therapeutic dose. That translates to about 30 minutes of winter sun at my latitude, or a 10,000 lux full-spectrum sunlamp.
  • Acts of kindness
    It’s probably not surprising that a person’s overall happiness correlates with their propensity for kindness (Lyubomirsky, King & Diener 2005). But what’s more interesting is that the causal link can be “gamed”. That is: a deliberate effort to engage in acts of kindness results in increased happiness (Buchanan & Bardi 2010)!
    Beneficial acts of kindness can be as little as taking the time to acknowledge somebody’s contribution or compliment somebody’s efforts. The amount of effort it takes is far less-important for happiness than the novelty of the experience, so the type of kindness you show needs to be mixed-up a bit to get the best out of it. But demonstrating kindness helps to make the world a better place for other humans, so it pays off even if you’re coming from a fully utilitarian perspective.
  • Writing
    I write a lot anyway, often right here, and that’s very-definitely for my own benefit first and foremost. But off the back of some valuable “writing therapy” (Baikie & Wilhelm 2005) I undertook earlier this year, I’ve been continuing with the simpler, lighter approach of trying to write no more than three sentences about something that’s had an impact on me that day.
    As an approach, it doesn’t help everybody (Zachariae 2015), but writing a little about your day – not even about how you feel about it, just the facts will do (Koschwanez, Robinson, Beban, MacCormick, Hill, Windsor, Booth, Jüllig & Broadbent 2017; fuck me that’s a lot of co-authors) – helps to keep you content, and I’m loving it.

Despite the catchy acronym (Do I need to come up with a GEMSAW logo? I’m pretty sure real gemcutting is actually more of a grinding process…) and stack of references, I’m not actually writing a self-help book; it just sounds like I am.

I don’t claim to be an authority on anything beyond my own head, and I’m not very confident on that subject! I just wanted to share with you something that’s been working pretty well at keeping me sane for the last month or two, just in case it’s of any use to you. These are challenging times; do what you need to find the happiness you can, and hang in there.


Watching Films Together… Apart

This weekend I announced and then hosted Homa Night II, an effort to use technology to help bridge the chasms that’ve formed between my diaspora of friends as a result mostly of COVID. To a lesser extent we’ve been made to feel distant from one another for a while as a result of our very diverse locations and lifestyles, but the resulting isolation was certainly compounded by lockdowns and quarantines.

Mark, Sian, Alec, Paul, Kit, Adam, Dan and Claire at Troma Night V.
Long gone are the days when I could put up a blog post to say “Troma Night tonight?” and expect half a dozen friends to turn up at my house.

Back in the day we used to have a regular weekly film night called Troma Night, named after the studio who dominated our early events and whose… genre… influenced many of our choices thereafter. We had over 300 such film nights, by my count, before I eventually left our shared hometown of Aberystwyth ten years ago. I wasn’t the last one of the Troma Night regulars to leave town, but more left before me than after.

Sour Grapes: participants share "hearts" with Ruth
Observant readers will spot a previous effort I made this year at hosting a party online.

Earlier this year I hosted Sour Grapes, a murder mystery party (an irregular highlight of our Aberystwyth social calendar, with thanks to Ruth) run entirely online using a mixture of video chat and “second screen” technologies. In some ways that could be seen as the predecessor to Homa Night, although I’d come up with most of the underlying technology to make Homa Night possible on a whim much earlier in the year!

WhatsApp chat: Dan proposes "Troma Night Remote"; Matt suggests calling it "Troma at Homa"; Dan settles on "Homa Night".
The idea spun out of a few conversations on WhatsApp but the final name – Homa Night – wasn’t agreed until early in November.

How best to make such a thing happen? When I first started thinking about it, during the first of the UK’s lockdowns, I considered a few options:

  • Streaming video over a telemeeting service (Zoom, Google Meet, etc.)
    Very simple to set up, but the quality – as anybody who’s tried this before will attest – is appalling. Being optimised for speech rather than music and sound effects gives the audio a flat, scratchy sound; video compression artefacts that are tolerable when you’re chatting to your boss are really annoying when they stop you reading a crucial subtitle; audio and video often get desynchronised in a way that’s frankly infuriating; and everybody’s download speed is limited by the upload speed of the host, among other issues. The major benefit of these platforms – full-duplex audio – is destroyed by feedback so everybody needs to stay muted while watching anyway. No thanks!
  • Teleparty or a similar tool
    Teleparty (formerly Netflix Party, but it now supports more services) is a pretty clever way to get almost exactly what I want: synchronised video streaming plus chat alongside. But it only works on Chrome (and some related browsers) and doesn’t work on tablets, web-enabled TVs, etc., which would exclude some of my friends. Everybody requires an account on the service you’re streaming from, potentially further limiting usability, and that also means you’re strictly limited to the media available on those platforms (and further limited again if your party spans multiple geographic distribution regions for that service). There are definitely things I can learn from Teleparty, but it’s not the right tool for Homa Night.
  • “Press play… now!”
    The relatively low-tech solution might have been to distribute video files in advance, have people download them, and get everybody to press “play” at the same time! That’s at least slightly less-convenient because people can’t just “turn up”: they have to plan their attendance and set up in advance. But it would certainly have worked and I seriously considered it. There are other downsides, though: if anybody has a technical issue and needs to e.g. restart their player then they’re basically doomed in any attempt to get back in-sync again. We can do better…
  • A custom-made synchronised streaming service…?
Homa Night architecture: S3 delivers static content to browsers, browsers exchange real-time information via Firebase.
A custom solution that leveraged existing infrastructure for the “hard bits” proved to be the right answer.

So obviously I ended up implementing my own streaming service. It wasn’t even that hard. In case you want to try your own, here’s how I did it:

Media preparation

First, I used Adobe Premiere to create a video file containing both of the night’s films, bookended and separated by “filler” content to provide an introduction/lobby, an intermission, and a closing “you should have stopped watching by now” message. I made sure that the “intro” was a nice round duration (90s) and suitable for looping because I planned to hold people there until we were all ready to start the film. Thanks to Boris & Oliver for the background music!

Dan uses a green screen to add to the intermission.
Honestly, the intermission was just an excuse to keep my chroma key gear out following its most-recent use.

Next, I ran the output through Handbrake to produce “web optimized” versions in 1080p and 720p output sizes. “Web optimized” in this case means that metadata gets added to the start of the file to allow it to start playing without downloading the entire file (streaming) and to allow the calculation of what-part-of-the-file corresponds to what-part-of-the-timeline: the latter, when coupled with a webserver that supports HTTP “Range” requests, allows browsers to “skip” to any point in the video without having to download the intervening part. (It’s the same trick that ffmpeg calls “-movflags +faststart”.) Naturally I’m encoding with H.264 for the widest possible compatibility.

Handbrake preparing to transcode Premiere's output.
Even using my multi-GPU computer for the transcoding I had time to get up and walk around a bit.

Real-Time Synchronisation

To keep everybody’s viewing experience in-sync, I set up a Firebase account for the application: Firebase provides an easy-to-use Websockets platform with built-in data synchronisation. Ignoring the authentication and chat features, there wasn’t much shared here: just the currentTime of the video in seconds, whether or not introMode was engaged (i.e. everybody should loop the first 90 seconds, for now), and whether or not the video was paused:

Firebase database showing shared currentTime, introMode, and paused values.
Firebase makes schemaless real-time databases pretty easy.
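
Subscribing to that shared state takes very little code. Here’s a sketch, assuming the Firebase Realtime Database’s v8-style JavaScript SDK (it re-uses the variable names from the snippet further below, but it’s a reconstruction, not my production code):

// Connect to the shared state (the databaseURL is a placeholder).
firebase.initializeApp({ databaseURL: "https://example-project.firebaseio.com" });

let lastCurrentTime = 0;     // master playhead position, in seconds
let introMode       = true;  // should everybody loop the intro?
let lastPaused      = false; // is playback suspended?

// Firebase pushes every change to us over its Websockets connection.
firebase.database().ref("sync").on("value", (snapshot) => {
  const state = snapshot.val();
  lastCurrentTime = state.currentTime;
  introMode       = state.introMode;
  lastPaused      = state.paused;
  fixPlayStatus(); // react immediately (see the snippet below)
});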

To reduce development effort, I didn’t bother implementing an administrative front-end: I just manually went into the Firebase database and acknowledged “my” computer as being an administrator, after I’d connected to it, and then ran a little Javascript in my browser’s debugger to tell it to start pushing my video’s currentTime to the server every few seconds. Anything else I needed to edit I just edited directly from the Firebase interface.
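
From memory, that debugger-pasted “admin” code amounted to little more than the following (illustrative: the interval and database path are guesses at what I actually used):

// On the "admin" machine only: broadcast the master video's position.
const video = document.querySelector("video");
setInterval(() => {
  firebase.database().ref("sync").update({
    currentTime: video.currentTime,
    paused: video.paused,
  });
}, 3000); // every few seconds is plenty with a 5-second desync tolerance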

Other web clients had Javascript to instruct them to monitor these variables from the Firebase database and, if they were desynchronised by more than 5 seconds, “jump” to the correct point in the video file. The hard part of the code… wasn’t really that hard:

// Rewind if we're past the end of the intro loop
function introModeLoopCheck() {
  if (!introMode) return;
  if (video.currentTime > introDuration) video.currentTime = 0;
}

function fixPlayStatus() {
  // Handle "intro loop" mode
  if (remotelyControlled && introMode) {
    if (video.paused) video.play(); // always play
    introModeLoopCheck();
    return; // don't look at the rest
  }

  // Fix current time
  const desync = Math.abs(lastCurrentTime - video.currentTime);
  if (
    (video.paused && desync > DESYNC_TOLERANCE_WHEN_PAUSED) ||
    (!video.paused && desync > DESYNC_TOLERANCE_WHEN_PLAYING)
  ) {
    video.currentTime = lastCurrentTime;
  }
  // Fix play status
  if (remotelyControlled) {
    if (lastPaused && !video.paused) {
      video.pause();
    } else if (!lastPaused && video.paused) {
      video.play();
    }
  }
  // Show/hide paused notification
  updatePausedNotification();
}

Web front-end

Finally, there needed to be a web page everybody could visit to get access to all of this. As I was hosting the video on S3+CloudFront anyway, I put the HTML/CSS/JS there too.

Configuring a Homa Night video player.
I decided to carry the background theme of the video through to the web interface too.

I tested in Firefox, Edge, Chrome, and Safari on desktop, and (slightly less thoroughly) in Firefox, Chrome and Safari on mobile. There were a few quirks to work around, mostly to do with browsers not letting videos make sound until the page has been interacted with after the video element has been rendered, which I carefully worked-around by putting a popup “over” the video to “enable sync”, but mostly it “just worked”.
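
The workaround itself was only a few lines; a sketch of the approach (the element ID is made up, and my real popup was prettier, honest):

// Browsers permit un-muted playback in response to a user gesture, so the
// "enable sync" popup exists mostly to harvest a click.
const popup = document.querySelector("#enable-sync");
popup.addEventListener("click", () => {
  popup.remove();      // dismiss the overlay
  video.muted = false; // safe now: we're inside a user gesture
  video.play();
});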

Delivery

On the night I shared the web address and we kicked off! There were a few hiccups as some people’s browsers got disconnected early on and tried to start playing the film before it was time, and one of these, even when fixed, ran about a minute behind the others, leading to minor spoilers leaking via the rest of us riffing about them! But on the whole, it worked. I’ve had lots of useful feedback to improve on it for the next version, and I might even try to tidy up my code a bit and open-source the results in case this kind of thing might be useful to anybody else.


Endpoint Encabulator

(This video is also available on YouTube.)

I’ve been working as part of the team behind the new application framework called the Endpoint Encabulator and wanted to share with you what I think makes our project so exciting: I promise it’ll make for two minutes of your time you won’t soon forget!

Naturally, this project wouldn’t have been possible without the pioneering work that preceded it by John Hellins Quick, Bud Haggart, and others. Nothing’s invented in a vacuum. However, my fellow developers and I think that our work is the first viable encabulator implementation to provide inverse reactive data binding suitable for deployment in front of a blockchain-driven backend cache. I’m not saying that all digital content will one day be delivered through Endpoint Encabulator, but… well; maybe it will.

If the technical aspects go over your head, pass it on to a geeky friend who might be able to make use of my work. Sharing is caring!

<blink> and <marquee>

I was chatting with a fellow web developer recently and made a joke about the HTML <blink> and <marquee> tags, only to discover that he had no idea what I was talking about. They’re a part of web history that’s fallen off the radar and younger developers are unlikely to have ever come across them. But for a little while, back in the 90s, they were a big deal.

Macromedia Dreamweaver 3 code editor window showing a <h2> heading wrapped in <marquee> and <blink> tags, for emphasis.
Even Macromedia Dreamweaver, which embodied the essence of 1990s web design, seemed to treat wrapping <blink> in <marquee> as an antipattern.

Invention of the <blink> element is often credited to Lou Montulli, who wrote pioneering web browser Lynx before joining Netscape in 1994. He insists that he didn’t write any of the code that eventually became the first implementation of <blink>. Instead, he claims: while out at a bar (on the evening he’d first meet his wife!), he pointed out that many of the fancy new stylistic elements the other Netscape engineers were proposing wouldn’t work in Lynx, which is a text-only browser. The fanciest conceivable effect that would work across both browsers would be making the text flash on and off, he joked. Then another engineer – who he doesn’t identify – pulled a late-night hack session and added it.

And so it was that when Netscape Navigator 2.0 was released in 1995 it added support for the <blink> tag. Also animated GIFs and the first inklings of JavaScript, which collectively would go on to define the “personal website” experience for years to come. Here’s how you’d use it:

<BLINK>This is my blinking text!</BLINK>

With no attributes, it was clear from the outset that this tag was supposed to be a joke. By the time HTML4 was published as a recommendation two years later, it was documented as being a joke. But the Web of the late 1990s saw it used a lot. If you wanted somebody to notice the “latest updates” section on your personal home page, you’d wrap a <blink> tag around the title (or, if you were a sadist, the entire block).

Cameron's World website, screenshot, showing GIFs and a bright palette
If you missed this particular chapter of the Web’s history, you can simulate it at Cameron’s World.

In the same year as Netscape Navigator 2.0 was released, Microsoft released Internet Explorer 2.0. At this point, Internet Explorer was still very-much playing catch-up with the features the Netscape team had implemented, but clearly some senior Microsoft engineer took a look at the <blink> tag, refused to play along with the joke, but had an innovation of their own: the <marquee> tag! It had a whole suite of attributes to control the scroll direction, speed, and whether it looped or bounced backwards and forwards. While <blink> encouraged disgusting and inaccessible design as a joke, <marquee> did it on purpose.

<MARQUEE>Oh my god this still works in most modern browsers!</MARQUEE>

Oh my god this still works in most modern browsers!

If you see the text above moving… you’re looking at a living fossil in browser history.

But here’s the interesting bit: for a while in the late 1990s, it became a somewhat common practice to wrap content that you wanted to emphasise with animation in both a <blink> and a <marquee> tag. That way, the Netscape users would see it flash, the IE users would see it scroll or bounce. Like this:

<MARQUEE><BLINK>This is my really important message!</BLINK></MARQUEE>
Internet Explorer 5 showing a marquee effect.
Wrap a <blink> inside a <marquee> and IE users will see the marquee. Delightful.

The web has always been built on Postel’s Law: a web browser should assume that it won’t understand everything it reads, but it should provide a best-effort rendering for the benefit of its user anyway. Ever wondered why the modern <video> element is a block rather than a self-closing tag? It’s so you can embed within it code that an earlier browser – one that doesn’t understand <video> – can read (a browser’s default state when seeing a new element it doesn’t understand is to ignore it and carry on). So embedding a <blink> in a <marquee> gave you the best of both worlds, right? (welll…)
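
For example (a hypothetical snippet, not from any site in particular), a browser that predates <video> ignores the tags it doesn’t recognise and renders the fallback content between them:

<video src="film.mp4" controls>
  <!-- Only browsers that don't understand <video> render this: -->
  <p>Your browser can't play this film inline, but you can
  <a href="film.mp4">download it</a> instead.</p>
</video>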

Netscape Navigator 5 showing a blink effect.
Wrap a <blink> inside a <marquee> and Netscape users will see the blink. Joy.

Better yet, you were safe in the knowledge that anybody using a browser that didn’t understand either of these tags could still read your content. Used properly, the web is about progressive enhancement. Implement for everybody, enhance for those who support the shiny features. JavaScript and CSS can be applied with the same rules, and doing so pays dividends in maintainability and accessibility (though, sadly, that doesn’t stop people writing sites that needlessly require these technologies).

Opera 5 showing no blinking nor marquee text.
Personally, I was a (paying! – back when people used to pay for web browsers!) Opera user so I mostly saw neither <blink> nor <marquee> elements. I don’t feel like I missed out.

I remember, though, the first time I tried Netscape 7, in 2002. Netscape 7 and its close descendants are, as far as I can tell, the only web browsers to support both <blink> and <marquee>. Even then, it was picky about the order in which they were presented and the elements wrapped-within them. But support was good enough that some people’s personal web pages suddenly began to exhibit the most ugly effect imaginable: the combination of both scrolling and flashing text.

Netscape 7 showing text that both blinks and marquee-scrolls.
If Netscape 7’s UI didn’t already make your eyes bleed (I’ve toned it down here by installing the “classic skin”), its simultaneous rendering of <blink> and <marquee> would.

The <blink> tag is very-definitely dead (hurrah!), but you can bring it back with pure CSS if you must. <marquee>, amazingly, still survives, not only in polyfills but natively, as you might be able to see above. However, if you’re in any doubt as to whether or not you should use it: you shouldn’t. If you’re looking for digital nostalgia, there’s a whole rabbit hole to dive down, but you don’t need to inflict <marquee> on the rest of us.
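
For completeness, here’s the kind of pure-CSS resurrection I mean (a minimal sketch; please don’t):

/* Approximates <blink>: the text vanishes for half of every second. */
.blink {
  animation: blink 1s step-start infinite;
}
@keyframes blink {
  50% { visibility: hidden; }
}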


Accidental Geohashing

Over the last six years I’ve been on a handful of geohashing expeditions, setting out to functionally-random GPS coordinates to see if I can get there, and documenting what I find when I do. The comic that inspired the sport was already six years old by the time I embarked on my first outing, and I’m far from the most-active member of the ‘hasher community, but I’ve a certain closeness to them as a result of my work to resurrect and host the “official” website. Either way: I love the sport.

Dan, Ruth, and baby Annabel at geohashpoint 2014-04-21 51 -1
I even managed to drag-along Ruth and Annabel to a hashpoint (2014-04-21 51 -1) once.

But even when I’ve not been ‘hashing, it occurs to me that I’ve been tracking my location a lot. Three mechanisms in particular dominate:

  • Google’s somewhat-invasive monitoring of my phones’ locations (which can be exported via Google Takeout)
  • My personal GPSr logs (I carry the device moderately often, and it provides excellent precision)
  • The personal μlogger server I’ve been running for the last few years (it’s like Google’s system, but – y’know – self-hosted, tweakable, and less-creepy)

If I could mine all of that data, I might be able to answer the question… have I ever accidentally visited a geohashpoint?

Let’s find out.

KML from Google is converted into GPX, where it joins GPX from my GPSr and real-time position data from μLogger in a MySQL database. This is queried against historic hashpoints to produce a list of candidate accidental hashpoints.
There’s a lot to my process, but it’s technically quite simple.

Data mining my own movements

To begin with, I needed to get all of my data into μLogger. The Android app syncs to it automatically and uploading from my GPSr was simple. The data from Google Takeout was a little harder.

I found a setting in Google Takeout to export past location data in KML, rather than JSON, format. KML is understood by GPSBabel, which can convert it into GPX: a one-liner along the lines of “gpsbabel -i kml -f history.kml -o gpx -F history.gpx”. I can “cut up” the resulting GPX file using a little grep-fu (relevant xkcd?) to get month-long files and import them into μLogger. Easy!

Requesting KML rather than JSON from Google Takeout
It’s slightly hidden, but Google Takeout lets you choose your geoposition output format (from a limited selection).

Well… μLogger’s web interface sometimes times-out if you upload enormous files like a whole month of Google Takeout logs. So instead I wrote a Ruby script, using Nokogiri, to convert the GPX into SQL to inject directly into μLogger’s database.

Next, I got a set of hashpoint offsets. I only had personal positional data going back to around 2010, so I didn’t need to account for the pre-2008 absence of the 30W time zone rule. I’ve had only one trip to the Southern hemisphere in that period, and I checked that manually. A little rounding and grouping in SQL gave me each graticule I’d been in on every date. Unsurprisingly, I spend most of my time in the 51 -1 graticule. Adding the offset (or subtracting it, for the Western hemisphere) gave the coordinates of the hashpoint in each graticule I’d visited, on each date that I was in that graticule. Nice.

SQL retrieving hashpoints for every graticule I've been to in the last 10 years, grouped by date.
Preloading the offsets into a temporary table made light work of listing all the hashpoints in all the graticules I’d visited, by date. Note that some dates (e.g. 2011-08-04, above) saw me visit multiple graticules.
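
Expressed as code rather than SQL, the offset arithmetic is trivial. A JavaScript illustration (the function is my own invention, and it ignores edge cases like the -0 graticule):

// Combine a graticule (integer degrees) with the day's fractional offsets.
// Hashpoints extend away from zero, hence the subtraction in the West/South.
function hashpoint(gratLat, gratLon, offsetLat, offsetLon) {
  return {
    lat: gratLat >= 0 ? gratLat + offsetLat : gratLat - offsetLat,
    lon: gratLon >= 0 ? gratLon + offsetLon : gratLon - offsetLon,
  };
}

// E.g. in my home graticule, 51 -1, with offsets (0.234, 0.567):
hashpoint(51, -1, 0.234, 0.567); // => { lat: 51.234, lon: -1.567 }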

The correct way to find the proximity of my positions to each geohashpoint is, of course, to use WGS84. That’s an easy thing to do if you’re using a database that supports it. My database… doesn’t. So I just used Pythagoras’ theorem to find positions I’d visited that were within 0.15° of that day’s hashpoint.
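
In JavaScript terms (my actual implementation was SQL, so treat this as an illustration), the test was no more sophisticated than:

// Naive proximity test: plain Pythagoras on raw decimal degrees.
// See below for why this is technically wrong but good enough here.
function isNearMiss(pointLat, pointLon, hashLat, hashLon, tolerance = 0.15) {
  return Math.hypot(pointLat - hashLat, pointLon - hashLon) <= tolerance;
}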

Using Pythagoras for geopositional geometry is, of course, wrong. Why? Because the physical length of a “degree” varies dependent on latitude, and – more importantly – a degree of latitude is not the same distance as a degree of longitude. The ratio varies by latitude: at mine, a degree of longitude is only about two-thirds the length of a degree of latitude, and only an idealised equatorial graticule would be square!

But for this case, I don’t care: the data’s going to be fuzzy and require some interpretation anyway. Not least because Google’s positioning has the tendency to, for example, spot a passing train’s WiFi and assume I’ve briefly teleported to Euston Station, which is apparently where Google thinks that hotspot “lives”.

GIF animation comparing routes recorded by Google My Location with those recorded by my GPSr: they're almost identical
I overlaid randomly-selected Google My Location and GPSr routes to ensure that they coincided, as an accuracy-test. It’s interesting to note that my GPSr points cluster when I was moving slower, suggesting it polls on a timer. Conversely, Google’s points cluster when I was using data (can you see the bit where I used a chat app?), suggesting that Google Location Services ramps up the accuracy and poll frequency when you’re actively using your device.

I assumed that my algorithm would detect all of my actual geohash finds, and yes: all of these appeared as-expected in my results. This was a good confirmation that my approach worked.

And, crucially: about a dozen additional candidate points showed up in my search. Most of these – listed at the end of this post – were 50m+ away from the hashpoint and involved me driving or cycling past on a nearby road… but one hashpoint stuck out.

Hashing by accident

Annabel riding on Tom's shoulders in Edinburgh.
We all had our roles to play in our trip to Edinburgh. Tom… was our pack mule.

In August 2015 we took a trip up to Edinburgh to see one of Ruth‘s brother Robin‘s plays. I don’t remember much about the play because I was on keeping-the-toddler-entertained duty and so had to excuse myself pretty early on. After the play we drove South, dropping Tom off at Lanark station.

We exited Lanark via the Hyndford Bridge… which is – according to the map – tantalisingly-close to the 2015-08-22 55 -3 hashpoint: only about 23 metres away!

Map showing the route we drove past the 2015-08-22 55 -3 hashpoint.
Google puts the centre of the road I drove down only 23m from the 2015-08-22 55 -3 hashpoint (of course, I was actually driving on the near side of the road and may have been closer still).

That doesn’t feel quite close enough to justify retroactively claiming the geohash, tempting though it would be to use it as a vehicle to my easy geohash ribbon. Google doesn’t provide error bars for their exported location data so I can’t draw a circle of uncertainty, but it seems unlikely that I passed through this very close hashpoint.

Pity. But a fun exercise. This was the nearest of my near misses, but plenty more turned up in my search, too:

  1. 2013-09-28 54 -2 (9,000m)
    Near a campsite on the River Eden. I drove past on the M6 with Ruth on the way to Loch Lomond for a mini-break to celebrate our sixth anniversary. I was never more than 9,000 metres from the hashpoint, but Google clearly had a moment when it couldn’t get good satellite signal and tried to trilaterate my position from cell masts and coincidentally guessed, for a few seconds, that I was much closer. There are a few such erroneous points in my data but they’re pretty obvious and easy to spot, so my manual filtering process caught them.
  2. 2019-09-13 52 -0 (719m)
    A600, near Cardington Airstrip, south of Bedford. I drove past on the A421 on my way to Three Rings‘ “GDPR Camp”, which was more fun than it sounds, I promise.
  3. 2014-03-29 53 -1 (630m)
    Spen Farm, near Bramham Interchange on the A1(M). I drove past while heading to the Nightline Association Conference to talk about Three Rings. Curiously, I came much closer to the hashpoint the previous week when I drove a neighbouring road on my way to York for my friend Matt’s wedding.
  4. 2020-05-06 51 -1 (346m)
    Inside Kidlington Police Station! Short of getting arrested, I can’t imagine how I’d easily have gotten to this one, but it’s moot anyway because I didn’t try! I’d taken the day off work to help with child-wrangling (as our normal childcare provisions had been scrambled by COVID-19), and at some point during the day we took a walk and came somewhat near to the hashpoint.
  5. 2016-02-05 51 -1 (340m)
    Garden of a house on The Moors, Kidlington. I drove past (twice) on my way to and from the kids’ old nursery. Bonus fact: the house directly opposite the one whose garden contained the hashpoint is a house that I looked at buying (and visited), once, but didn’t think it was worth the asking price.
  6. 2017-08-30 51 -1 (318m)
    St. Frideswide Farm, between Oxford and Kidlington. I cycled past on Banbury Road twice – once on my way to and once on my way from work.
  7. 2015-01-25 51 -1 (314m)
    Templar Road, Cutteslowe, Oxford. I’ve cycled and driven along this road many times, but on the day in question the closest I came was cycling past on nearby Banbury Road while on the way to work.
  8. 2018-01-28 51 -1 (198m)
    Stratfield Brake, Kidlington. I took our youngest by bike trailer this morning to his Monkey Music class: normally at this point in history Ruth would have been the one to take him, but she had a work-related event that she couldn’t miss in the morning. I cycled right by the entrance to this nature reserve: it could have been an ideal location for a geohash!
  9. 2014-01-24 51 -1 (114m)
    On the Marston Cyclepath. I used to cycle along this route on the way to and from work most days back when I lived in Marston, but by 2014 I lived in Kidlington and so I’d only cycle past the end of it. So it was that I cycled past the Linacre College end of the path, around 114m away from the hashpoint, on this day.
  10. 2015-06-10 51 -1 (112m)
    Meadow near Peartree Interchange, Oxford. I stopped at the filling station on the opposite side of the roundabout, presumably to refuel a car.
  11. 2020-02-27 51 -1 (70m)
    This was a genuine attempt at a hashpoint that I failed to reach and was so sad about that I never bothered to finish writing up. The hashpoint was very close to (but, it turns out, just out of sight of) a geocache I’d hidden in the vicinity, and I was hopeful that I might be able to score the most-epic/demonstrable déjà vu/hash collision achievement ever, not least because I had pre-existing video evidence that I’d been at the coordinates before! Unfortunately it wasn’t to be: I had inadequate footwear for the heavy rains that had fallen in the days that preceded the expedition and I was in a hurry to get home, get changed, and go catch a train to go and see the Goo Goo Dolls in concert. So I gave up and quit the expedition. This turned out to be the right decision: going to the concert was one of the last “normal” activities I got to do before the COVID-19 lockdown made everybody’s lives weird.
  12. 2014-05-23 51 -1 (61m)
    White Way, Kidlington, near the Bicester Road to Green Road footpath. I passed close by while cycling to work, but I’ve since walked through this hashpoint many times: it’s on a route that our eldest sometimes used to take when walking home from her school! With the exception only of the very-near-miss in Lanark, this was my nearest “near miss”.
  13. 2015-08-22 55 -3 (23m)
    So near, yet so far. 🙄
Rain in Lanark, seen through a car window
No silly grin, but coincidentally – perhaps by accident – I took a picture out of the car window shortly after we passed the hashpoint. This is what Lanark looks like when you drive through it in the rain.