Logitech MX Master 2S

I’m a big believer in the idea that the hardware I lay my hands on all day, every day, needs to be the best for its purpose. On my primary desktop, I type on a Das Keyboard 4 Professional (with Cherry MX brown switches) because it looks, feels, and sounds spectacular. I use the Mac edition of the keyboard because it, coupled with a few tweaks, gives me the best combination of features and compatibility across all of the Windows, MacOS, and Linux (and occasionally other operating systems) I control with it. These things matter.

F13, Lower Brightness, and Raise Brightness keys on Dan's keyboard
I don’t know what you think these buttons do, but if you’re using my keyboard, you’re probably wrong. Also, they need a clean. I swear they don’t look this grimy when I’m not doing macro-photography.

I also care about the mouse I use. Mice are, for the most part, for the Web and for gaming and not for use in most other applications (that’s what keyboard shortcuts are for!) but nonetheless I spend plenty of time holding one and so I want the tool that feels right to me. That’s why I was delighted when, in replacing my four-year-old Logitech MX1000 in 2010 with my first Logitech Performance MX, I felt able to declare it the best mouse in the world. My Performance MX lived for about four years, too – that seems to be how long a mouse can stand the kind of use that I give it – before it started to fail and I opted to replace it with an identical make and model. I’d found “my” mouse, and I was sticking with it. It’s a great shape (if you’ve got larger hands), is full of features including highly-configurable buttons, vertical and horizontal scrolling (or whatever you want to map them to), and a cool “flywheel” mouse wheel that can be locked to regular operation or unlocked for controlled high-speed scrolling at the touch of a button: with practice, you can even use it as a speed control by gently depressing the switch like it was a brake pedal. Couple all of that with incredible accuracy on virtually any surface, long battery life, and charging “while you use” and you’ve a recipe for success, in my mind.

My second Performance MX stopped properly charging its battery this week, and it turns out that they don’t make them any more, so I bought its successor, the Logitech MX Master 2S.

(New) Logitech MX Master 2S and (old) Logitech Performance MX
On the left, the (new) Logitech MX Master 2S. On the right, my (old) Logitech Performance MX.

The MX Master 2S is… different… from its predecessor. Mostly in good ways, sometimes less-good. Here are the important differences:

  • Matte coating: only the buttons are made of smooth plastic; the body of the mouse is now a slightly coarser plastic: you’ll see in the photo above how much less light it reflects. It feels like it would dissipate heat less-well.
  • Horizontal wheel replaces rocker wheel: instead of the Performance MX’s “rocker” scroll wheel that can be pushed sideways for horizontal scroll, the MX Master 2S adds a dedicated horizontal scroll (or whatever you reconfigure it to) wheel in the thumb well. This is a welcome change: the rocker wheel in both my Performance MXes became less-effective over time and in older mice could even “jam on”, blocking the middle-click function. This seems like a far more-logical design.
  • New back/forward button shape: to accommodate the horizontal wheel, the “back” and “forward” buttons in the thumb well have been made smaller and pushed closer together. This is the single biggest failing of the MX Master 2S: it’s clearly a mouse designed for larger hands, and yet these new buttons are slightly, but noticeably, harder to accurately trigger with a large thumb! It’s tolerable, but slightly annoying.
  • Bluetooth support: one of my biggest gripes about the Performance MX was its dependence on Unifying, Logitech’s proprietary wireless protocol. The MX Master 2S supports Unifying but also supports Bluetooth, giving you the best of both worlds.
  • Digital flywheel: the most-noticeable change when using the mouse is the new flywheel and braking mechanism, which is comparable to the change in contemporary cars from a mechanical to a digital handbrake. The flywheel “lock” switch is now digital, turning on or off the brake in a single stroke and so depriving you of the satisfaction of using it to gradually “slow down” a long spin-scroll through an enormous log or source code file. But in exchange comes an awesome feature called SmartShift, which dynamically turns on or off the brake (y’know, like an automatic handbrake!) depending on the speed with which you throw the wheel. That’s clever and intuitive and “just works” far better than I’d have imagined: I can choose to scroll slowly or quickly, with or without the traditional ratchet “clicks” of a wheel mouse, with nothing more than the way I flick my finger (and all fully-configurable, of course). And I’ve still got the button to manually “toggle” the brake if I need it. It took some getting used to, but this change is actually really cool! (I’m yet to get used to the sound of the digital brake kicking in automatically, but that’s true of my car too).
  • Basic KVM/multi-computing capability: with a button on the underside to toggle between different paired Unifying/Bluetooth transceivers and software support for seamless edge-of-desktop multi-computer operation, Logitech are clearly trying to target folks who, like me, routinely run multiple computers simultaneously from a single keyboard and mouse. But it’s a pointless addition in my case because I’ve been quite happy using Synergy to do this for the last 7+ years, which does it better. Still, it’s a harmless “bonus” feature and it might be of value to others, I suppose.

All in all, the MX Master 2S isn’t such an innovative leap forward over the Performance MX as the Performance MX was over the MX1000, but it’s still great that this spectacular series of heavyweight workhorse feature-rich mice continues to innovate and, for the most part, improve upon the formula. This mouse isn’t cheap, and it isn’t for everybody, but if you’re a big-handed power user with a need to fine-tune all your hands-on hardware to get it just right, it’s definitely worth a look.


Modern CSS on DanQ.me

The current iteration of my blog diverges from an architectural principle common to most of its previous versions over the last 20 years. While each previous change in design and layout was intended to provide a single monolithic upgrade, this version tries to provide me with a platform for continuous ongoing experimentation and change.

Debug console on DanQ.me showing Dan's head and a speech bubble.
Earlier this year I added experimental console art, for example. Click through for more details.

I’ve been trying to make better use of my blog as a vehicle for experimenting with web technologies, as I used to with personal sites back in the 1990s and early 2000s; to see a vanity site like this one as a living playground rather than as something whose design – like that of most of the sites I’m paid to work on – remains, for the most part, static for long periods of time.

"Blog" dropdown menu on DanQ.me.
The “popular” flag and associated background colour in the “Blog” top-level menu became permanent after a period of A/B testing. Thanks, unwitting testers!

Among the things I’ve added prior to the initial launch of this version of the design are gracefully-degrading grids, reduced-motion support, and dark-mode support – three CSS features of increasing levels of “cutting edge”-ness but each of which is capable of being implemented in a way that does not break the site’s compatibility. This site’s pages are readable using (simulations of) ancient rendering engines or even in completely text-based browsers, and that’s just great.

Here’s how I’ve implemented those three features:

Gracefully-degrading grids

Grid of recent notes and shares on DanQ.me
I’m not entirely happy with the design of these boxes, but that’s a job for another day.

The grid of recent notes, shares, checkins and videos on my homepage is powered by the display: grid; CSS directive. The number of columns varies by screen width from six on the widest screens down to three or just one on increasingly small screens. Crucially, grid-auto-flow: dense; is used to ensure an even left-to-right filling of the available space even if one of the “larger” blocks (with grid-column: span 2; grid-row: span 2;) is forced for space reasons to run onto the next line. This means that content might occasionally be displayed in a different order from that in which it is written in the HTML (which is reverse order of publication), but in exchange the items are flush with both sides.

Grid sample showing impact of dense flow.
The large “5 Feb” item in this illustration should, reverse-chronologically, appear before the “3 Feb” item, but there isn’t room for it on the previous line. grid-auto-flow: dense; means that the “3 Feb” item is allowed to bubble-up and fill the gap, appearing out-of-order but flush with the edge.
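For the curious, a stripped-down sketch of the relevant CSS looks something like the following. The selectors, column counts and breakpoints here are illustrative placeholders rather than the actual rules from my stylesheet:

/* Illustrative sketch only: selector names and breakpoints are made up,
   not copied from DanQ.me's real stylesheet. */
.recent-content {
  display: grid;
  grid-template-columns: repeat(6, 1fr); /* six columns on the widest screens */
  grid-auto-flow: dense;                 /* let later items bubble up to fill gaps */
  grid-gap: 1em;
}

/* "Larger" blocks span a 2×2 area of the grid. */
.recent-content .large {
  grid-column: span 2;
  grid-row: span 2;
}

/* Fewer columns as the viewport narrows. */
@media (max-width: 60em) {
  .recent-content { grid-template-columns: repeat(3, 1fr); }
}
@media (max-width: 30em) {
  .recent-content { grid-template-columns: 1fr; }
}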

Not all web browsers support display: grid; and while that’s often only a problem of design and not of readability – because these browsers will fall back to usually-very-safe default display modes like block and inline, as appropriate – sometimes there are bigger problems. In Internet Explorer 11, for example, I found (with thanks to @_ignatg) a problem with my directives specifying the size of these cells (which are actually <li> elements because, well, semantics matter). Because it understood the directives that ought to impact the sizing of the list items but not the one that redeclared its display type, IE made… a bit of a mess of things…

Internet Explorer scrambles a list/grid combination.
Thanks, Internet Explorer. That’s totally what I was looking for.

Do websites need to look the same in every browser? No. But the content should be readable regardless, and here my CSS was rendering my content unreadable. Given that Internet Explorer users represent a little under 0.1% of visitors to my site I don’t feel the need to hack it to have the same look-and-feel: I just need it to have the same content readability. CSS Feature Queries to the rescue!

CSS Feature Queries – the @supports rule – make it possible to apply parts of your stylesheet if and only if the browser supports specific CSS features, for example grids. Better yet, using it in a positive manner (i.e. “apply these rules only if the browser supports this feature”) is progressive enhancement, because browsers that don’t understand the @supports rule act in the same way as those that understand it but don’t support the specified feature. Fencing off the relevant parts of my stylesheet in a @supports (display: grid) { ... } block instructed IE to fall back to displaying that content as a boring old list: exactly what I needed.
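In skeleton form, the pattern looks something like this (again, the selector is a placeholder rather than my real one):

/* Safe baseline that every browser understands: a plain list. */
.recent-content {
  list-style: none;
  margin: 0;
  padding: 0;
}

/* Grid layout, applied only where the browser actually supports grids.
   Browsers that don't understand @supports skip the whole block too,
   which is exactly the fallback behaviour we want. */
@supports (display: grid) {
  .recent-content {
    display: grid;
    grid-template-columns: repeat(6, 1fr);
    grid-auto-flow: dense;
  }
}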

Internet Explorer's view of the "grid" on the DanQ.me homepage.
It isn’t pretty, but it’s pretty usable!

Reduced-motion support

I like to put a few “fun” features into each design for my blog, and while it’s nowhere near as quirky as having my head play peek-a-boo when you hover your cursor over it, the current header’s animations are in the same ballpark: hover over or click on some of the items in the header menu to see for yourself.

Main menu with "Dan Q" title in its "bounced" position.
I’m most-pleased with the playful “bounce” of the letter Q when you hover over my name.

These kinds of animations are fun, but they can also be problematic. People with inner ear disorders (as well as people who’re just trying to maximise the battery life on their portable devices!) might prefer not to see them, and web designers ought to respect that choice where possible. Luckily, there’s an emerging standard to acknowledge that: prefers-reduced-motion. Alongside its cousins inverted-colors, prefers-reduced-transparency, prefers-contrast and prefers-color-scheme (see below for that last one!), these new CSS tools allow developers to optimise based on the accessibility features activated by the user within their operating system.

Motion-reducing controls in Windows 10 and MacOS X.
In Windows you turn off animations while in MacOS you turn on not-having animations, but the principle’s the same.

If you’ve tweaked your accessibility settings to reduce the amount of animation your operating system shows you, this website will respect that choice as well by not animating the contents of the title, menu, or the homepage “tiles” any more than is absolutely necessary… so long as you’re using a supported browser, which right now means Safari or Firefox (or the “next” version of Chrome). Making the change itself is pretty simple: I just added a @media screen and (prefers-reduced-motion: reduce) { ... } block to disable or otherwise cut-down on the relevant animations.
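The skeleton of that block is something like the following; the selectors and class names are placeholders standing in for the real ones:

/* Respect the visitor's operating system preference for less motion. */
@media screen and (prefers-reduced-motion: reduce) {
  .site-header .bouncy-q,
  .site-header .menu-item,
  .homepage-tile {
    animation: none;   /* no "bounce" or other decorative animations */
    transition: none;  /* jump straight to hover/focus states */
  }
}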

Dark-mode support

DanQ.me in dark mode.

Similarly, operating systems are beginning to support “dark mode”, designed for people trying to avoid eyestrain when using their computer at night. It’s possible for your browser to respect this and try to “fix” web pages for you, of course, but it’s better still if the developer of those pages has anticipated your need and designed them to acknowledge your choice for you. It’s only supported in Firefox and Safari so far and only on recent versions of Windows and MacOS, but it’s a start and a helpful touch for those nocturnal websurfers out there.

Enabling Dark Mode on Windows 10 and MacOS X
Come to the dark side, Luke. Or just get f.lux, I suppose.

It’s pretty simple to implement. In my case, I just stacked some overrides into a @media (prefers-color-scheme: dark) { ... } block, inverting the background and primary foreground colours, softening the contrast, removing a few “bright” borders, and darkening rather than lightening background images used on homepage tiles. And again, it’s an example of progressive enhancement: the majority (!) of users whose operating systems and/or browsers don’t yet support this feature won’t be impacted by its inclusion in my stylesheet, but those who can make use of it can appreciate its benefits.
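A minimal sketch of that approach – with placeholder selectors and colours rather than my actual palette – looks something like this:

/* Apply only when the visitor's OS/browser asks for a dark colour scheme. */
@media (prefers-color-scheme: dark) {
  body {
    background: #111;  /* invert the background... */
    color: #ccc;       /* ...and soften the foreground to reduce contrast */
  }
  .homepage-tile {
    border-color: transparent;        /* drop the "bright" borders */
    background-color: #222;           /* blend a dark colour into the tile's   */
    background-blend-mode: multiply;  /* background image to darken, not lighten, it */
  }
}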

This isn’t the end of the story of CSS experimentation on my blog, but it’s a part of it that I hope you’ve enjoyed.


Happy Metamour Appreciation Day, JTA

Apparently the NCSF (US) are trying to make 28 February into Metamour Day: a celebration of one’s lover’s lovers. While I’m not convinced that’ll ever get Hallmark’s interest, I thought it provided a good opportunity to sing the praises of my metamour, JTA.

JTA and Annabel looking into a cabinet at the British Museum.
This is a man who knows how to use Greek myths and legends to add magic to his daughter’s museum visit.

I first met JTA 15 years ago at Troma Night XX, when his girlfriend Ruth – an attendee of Troma Night since its earliest days the previous year – brought him along and we all mocked his three-letter initialism. Contrary to our previous experience, thanks to Liz, of people bringing boyfriends once but never again (we always assumed that we scared them off), JTA became a regular, even getting to relive some of the early nights that he’d missed in our nostalgic 50th event. Before long, I felt glad to count him among my friends.

We wouldn’t become metamours until 3½ years later when a double-date trip to the Edinburgh Fringe turned into a series of (alcohol-assisted) confessions of non-monogamous attractions between people present and the occasionally-controversial relationships that developed as a result. Polyamory has grown to get a lot more media coverage and general acceptance over the last couple of decades, but those of us in these kinds of relationships still face challenges, and during the times that bigots have made it hardest for us – and one period in 2017 in particular – I’ve been so very glad to have JTA in my corner.

JTA delivering the 2018 Three Rings Christmas Quiz.
Three Rings’ Quizmaster General at work.

Almost 13 years ago I described JTA thusly, and I stand by it:

You have a fantastic temper which you keep carefully bottled away and of which you draw out only a little at a time and only where it is genuinely justly deserved. Conversely, your devotion to the things you love and care about is equally inspiring.

But beyond that, he’s a resourceful jury-rigger, a competent oarsman, and a man who knows when it’s time to throw a hobbit into the darkness. He’s a man who’ll sit in the pub and talk My Little Pony with me and who’ll laugh it off when he gets mistaken for my father.

We’d be friends anyway, but having a partner-in-common has given us the opportunity for a closer relationship still. I love you, man: y’know, in the Greek way. Happy metamour appreciation day.


Debugging WorldWideWeb

Earlier this week, I mentioned the exciting hackathon that produced a moderately-faithful reimagining of the world’s first Web browser. I was sufficiently excited about it that I not only blogged here but I also posted about it to MetaFilter. Of course, the very first thing that everybody there did was try to load MetaFilter in it, which… didn’t work.

MetaFilter failing to load on the reimagined WorldWideWeb.
500? Really?

People were quick to point this out and assume that it was something to do with the modernity of MetaFilter:

honestly, the disheartening thing is that many metafilter pages don’t seem to work. Oh, the modern web.

Some even went so far as to speculate that the reason related to MetaFilter’s use of CSS and JS:

CSS and JS. They do things. Important things.

This is, of course, complete baloney, and it’s easy to prove to oneself. Firstly, simply using the View Source tool in your browser on a MetaFilter page reveals source code that’s quite comprehensible, even human-readable, without going anywhere near any CSS or JavaScript.

MetaFilter in Lynx: perfectly usable browsing experience
As late as the early 2000s I’d occasionally use Lynx for serious browsing, but any time I’ve used it since it’s been by necessity.

Secondly, it’s pretty simple to try browsing MetaFilter without CSS or JavaScript enabled! I tried in two ways: first, by using Lynx, a text-based browser that’s never supported either of those technologies. I also tried by using Firefox but with them disabled (honestly, I slightly miss when the Web used to look like this):

MetaFilter in Firefox (with CSS and JS disabled)
It only took me three clicks to disable stylesheets and JavaScript in my copy of Firefox… but I’ll be the first to admit that I don’t keep my browser configured like “normal people” probably do.

And thirdly: the error code being returned by the simulated WorldWideWeb browser is a HTTP code 500. Even if you don’t know your HTTP codes (I mean, what kind of weirdo would take the time to memorise them all anyway <ahem>), it’s worth learning this: the first digit of a HTTP response code tells you what happened:

  • 1xx means “everything’s fine, keep going”;
  • 2xx means “everything’s fine and we’re done”;
  • 3xx means “try over there”;
  • 4xx means “you did something wrong” (the infamous 404, for example, means you asked for a page that doesn’t exist);
  • 5xx means “the server did something wrong”.

Simple! The fact that the error code begins with a 5 strongly implies that the problem isn’t in the (client-side) reimplementation of WorldWideWeb: if this had been a CSS/JS problem, I’d expect to see a blank page, scrambled content, “filler” content, or incomplete content.

So I found myself wondering what the real problem was. This is, of course, where my geek flag becomes most-visible: what we’re talking about, let’s not forget, is a fringe problem in an incomplete simulation of an ancient computer program that nobody uses. Odds are incredibly good that nobody on Earth cares about this except, right now, for me.

Dan's proposed "Geek Flag"
I searched for a “Geek Flag” and didn’t like anything I saw, so I came up with this one based on… well, if you recognise what it’s based on, good for you, you’re certainly allowed to fly it. If not… well, you can too: there’s no geek-gatekeeping here.

Luckily, I spotted Jeremy’s note that the source code for the WorldWideWeb simulator was now available, so I downloaded a copy to take a look. Here’s what’s happening:

  1. The (simulated) copy of WorldWideWeb is asked to open a document by reference, e.g. “https://www.metafilter.com/”.
  2. To work around same-origin policy restrictions, the request is sent to an API which acts as a proxy server.
  3. The API makes a request using the Node package “request” with this line of code: request(url, (error, response, body) => { ... }). When the first parameter to request is a (string) URL, the module uses its default settings for all of the other options, which means that it doesn’t set the User-Agent header (an optional part of a Web request where the computer making the request identifies the software that’s asking).
  4. MetaFilter, for some reason, blocks requests whose User-Agent isn’t set. This is weird! And nonstandard: while web browsers should – in RFC2119 terms – set their User-Agent: header, web servers shouldn’t require that they do so. MetaFilter returns a 403 and a message to say “Forbidden”; usually a message you only see if you’re trying to access a resource that requires session authentication and you haven’t logged-in yet.
  5. The API is programmed to handle response codes 200 (okay!) and 404 (not found), but if it gets anything else back it’s supposed to throw a 400 (bad request). Except there’s a bug: when trying to throw a 400, it requires that an error message has been set by the request module, and if there hasn’t been… it instead throws a 500 with the message “Internal Server Fangle” and no clue what actually went wrong. So MetaFilter’s 403 should have been translated by the proxy into a 400, but because the 403 didn’t come with an error message from the request module, the proxy choked and instead returned the 500 that you eventually see. What a knock-on effect!
Illustration showing conversation between simulated WorldWideWeb and MetaFilter via an API that ultimately sends requests without a User-Agent, gets a 403 in response, and can't handle the 403 and so returns a confusing 500.
If you’re having difficulty visualising the process, this diagram might help you to continue your struggle with that visualisation.

The fix is simple: change the line:

request(url, (error, response, body) => { ... })

to:

request({ url: url, headers: { 'User-Agent': 'WorldWideWeb' } }, (error, response, body) => { ... })

This then sets a User-Agent header and makes servers that require one, such as MetaFilter, respond appropriately. I don’t know whether WorldWideWeb originally set a User-Agent header (CERN’s source file archive seems to be missing the relevant C sources so I can’t check) but I suspect that it did, so this change actually improves the fidelity of the emulation as a bonus. A better fix would also add support for and appropriate handling of other HTTP response codes, but that’s a story for another day, I guess.

I know the hackathon’s over, but I wonder if they’re taking pull requests…


WorldWideWeb, 30 years on

This month, a collection of some of my favourite geeks got invited to CERN in Geneva to participate in a week-long hackathon with the aim of reimplementing WorldWideWeb – the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed to produce.

DanQ.me as displayed by the reimagined WorldWideWeb browser circa 1990
With the exception of a few character entity quirks, this site remains perfectly usable in the simulated WorldWideWeb browser. Clearly I wasn’t the only person to try this vanity-check…

This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that didn’t require a NeXT computer to run it and so a leap forward in mainstream appeal. In some ways, you might expect reimplementing WorldWideWeb to be easier, because its functionality is more-similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM and so there are distinct processing differences that must be considered to get a truly authentic experience.

Geeks hacking on WorldWideWeb reborn
It’s just like any other hackathon, if you ignore the enormous particle collider underneath it.

Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).

WorldWideWeb screenshot by Sir Tim Berners-Lee
When Sir Tim took this screenshot, he could never have predicted the way the Web would change, technically, over the next 25-30 years. But I’m almost more-interested in how it’s stayed the same.

In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible (sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that adherence to the same solid standards gives us a chance of writing pages today that look just as good in 30 years to come. Compare that to a proprietary technology like Flash whose heyday 15 years ago is overshadowed by its imminent death (not to mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).

The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.

Update 24 February 2019: After I submitted news of the browser to MetaFilter, I (and others) spotted a bug. So I came up with a fix…


Hello, Friendly Insurance Salesman!

Hello, friendly insurance salesman I spoke to earlier today! I’ve been expecting you. Also: sorry.

JTA, Ruth, and Dan at JTA and Ruth's wedding.
Here are the people you just sold car insurance to.

I’ve been expecting you because you seemed so keen to finish your shift and search for me and, with my name, I’m pretty easy to find. I knew that you planned to search for me because, after I caused so much trouble for your computer systems, well, I probably deserved it.

I’m sorry that I have such an awkward name and that you had to make your computer system work around it. At least it handled it better than Equifax’s did, and you were far friendlier about it than the Passport Office were. It’s an awkward name, yes, but mostly only because programmers are short-sighted when it comes to names. And I say that as a programmer.

I’m sorry that my unusual relationship structure made your computer system do a double-take. My partner Ruth can’t have a husband as well, can she? Try telling her that! Don’t feel bad: you’re not even the first person this last fortnight to get confused by our uncommon arrangement, and even where my name doesn’t break computer systems, my relationship status does: even the census can’t cope. I’m sure people must assume we’re insanely radical but we’re honestly pretty boring: just like any other family, just with more love. Don’t believe me? We have spreadsheets. You can’t get more boring than that.

I’m sorry that the email address I gave you looked like a typo and you felt you had to check it thrice. It wasn’t, it’s just that I give a different email address to every company I deal with.

I’m sorry that what should have been a click-click-done exercise came down to a live chat session and then a phone call. I don’t mean to be more work for people.

John points to Arthur, our car
“Which car are we insuring, little fella’?” // “THE RED ONE!”

But thank you for being friendly. And useful. And generally awesome. I expected a painful process, perhaps because that’s what I’d had from my last insurer. You, on the other hand (and your Live Chat colleague who I spoke to beforehand) were fantastic. Somehow you were more-pleasant, more-competent, and represent better value than the insurer we’re coming from, so thank you. And that’s the real reason that I hope you’ll follow through on the suggestion that you search for me by name: because you deserve a pat on the back.

So thanks. But yeah: sorry.


Why do book spines have the titles printed the way they do?

Have you noticed how the titles printed on the spines of your books are all, for the most part, oriented the same way? That’s not a coincidence.

Illustration of a chained library
If you can’t see the spines of your books at all, perhaps you’re in a library and it’s the 17th century. Wait: how are you on the Internet?

ISO 6357 defines the standard positioning of titles on the spines of printed books (it’s also codified as British Standard BS6738). If you assume that your book is stood “upright”, the question is one of which way you tilt your head to read the title printed on the spine. If you tilt your head to the right, that’s a descending title (as you read left-to-right, your gaze moves down, towards the surface on which the book stands). If you tilt your head to the left, that’s an ascending title. If you don’t need to tilt your head in either direction, that’s a transverse title.

ISO 6357:1985 page illustrating different standard spine title alignments.
Not every page in ISO 6357:1985 is as exciting as this one.

The standard goes on to dictate that descending titles are to be favoured: this places the title in a readable orientation when the book lies flat on a surface with the cover face-up. Grab the nearest book to you right now and you’ll probably find that it has a descending title.

Books on a shelf.
This eclectic shelf includes a transverse title (the Holy Bible), a semi-transverse title (The Book of English Magic) and a rare ascending title (A First Dictionary) among a multitude of descending titles.

But if the book is lying on a surface, I can usually read the cover of the book. Only if a book is in a stack am I unable to do so, and stacks are usually relatively short and so it’s easy enough to lift one or more books from the top to see what lies beneath. What really matters when considering the orientation of a spine title is, in my mind, how it appears when it’s shelved.

It feels to me like this standard’s got things backwards. If a shelf of anglophone books is organised into any kind of order (e.g. alphabetically) then it’ll usually be from left to right. If I’m reading the titles from left to right, and the spines are printed descending, then – from the perspective of my eyes – I’m reading from bottom to top: i.e. backwards!

It’s possible that this is one of those things that I overthink.


CSS-driven console graphics

If you’re reading this post via my blog and using a desktop computer, try opening your browser’s debug console (don’t worry; I’ll wait). If you don’t know how, here are instructions for Firefox and instructions for Chrome. Other browsers may vary. You ought to see something like this in your debugger:

Debug console on DanQ.me showing Dan's head and a speech bubble.
I’m in your console, eating your commands!

What sorcery is this?

The debug console is designed to be used by web developers so that they can write Javascript code right in their browser as well as to investigate any problems with the code run by a web page. The web page itself can also output to the console, which is usually used for what I call “hello-based debugging”: printing out messages throughout a process so that the flow and progress can be monitored by the developer without having to do “proper” debugging. And it gets used by some web pages to deliver secret messages to any of the site users who open their debugger.

Facebook console messaging advising against the use of the console.
Facebook writes a “stop” message to the console, advising against using the console unless you know what you’re doing, in an attempt to stop people from making themselves victims of console-based social engineering attacks.

Principally, though, the console is designed for textual content and nothing else. That said, both Firefox and Chrome’s consoles permit the use of CSS to style blocks of debug output by using the %c escape sequence. For example, I could style some of a message with italic text:

>> console.log('I have some %citalic %ctext', 'font-style: italic;', '');
   I have some italic text

Using CSS directives like background, then, it’s easy to see how one could embed an image into the console, and that’s been done before. Instead, though, I wanted to use the lessons I’d learned developing PicInHTML 8¾ years ago to render a colour picture to the console using text and CSS alone. First, I created my template image – a hackergotchi of me and an accompanying speech bubble, shrunk to a tiny size and posterised to reduce the number of colours used and saved as a PNG.

Hackergotchi of Dan with a speech bubble, "squashed".
The image appears “squashed” to compensate for console monospace letters not being “square”.

Next, I wrote a quick Ruby program, consolepic.rb, to do the hard work. It analyses each pixel of the image and for each distinct colour assigns to a variable the CSS code used to set the background colour to that colour. It looks for “strings” of like pixels and combines them into one, and then outputs the Javascript necessary to write out all of the above. Finally, I made a few hand-tweaks to insert the text into the speech bubble.

The resulting output weighs in at 31.6kB – about a quarter of the size of the custom Javascript on the frontend of my site and so quite a bit larger than I’d have liked and significantly less-efficient than the image itself, even base64-encoded for embedding directly into the code, but that really wasn’t the point of the exercise, was it? (I’m pretty sure there’s significant room for improvement from a performance perspective…)

Scatmania.org in 2012
I’ll be first to admit it’s not as cool as the “pop-up Dan” in the corner of my 2012 design. You might enjoy my blog post about my 20 years of blogging or the one about how “pop-up Dan” worked.

What it achieved was an interesting experiment into what can be achieved with Javascript, CSS, the browser console, and a little imagination. An experiment that can live here on my site, for anybody who looks in the direction of their debugger, for the foreseeable future (or until I get bored of it). Anybody with any more-exotic/silly ideas about what this technique could be used for is welcome to let me know!

Update: 17 April 2019 – fun though this was, it wasn’t worth continuing to deliver an additional 25% Javascript payload to every visitor just for this, so I’ve stopped it for now. You can still read the source code (and even manually run it in the console) if you like. And I have other ideas for fun things to do with the console, so keep an eye out for that…


Ending on a High

For the final week of his 52 Reflect series and as a way to see off the year, Robin and I spent the last weekend of the year near Fort William to facilitate a quick ascent of Ben Nevis. My previous expedition to Britain’s highest point was an excuse for some ice climbing but I hadn’t actually come up the “path” route since an aborted expedition in 2009.

Dan and Robin atop Ben Nevis
Probably should have wiped the snow off the lens.

Somehow in the intervening years I’ve gotten way out of practice and even more out of shape because our expedition was hard. Partly that was our fault for choosing to climb on one of the shortest days of the year, requiring that we maintain a better-than-par pace throughout to allow us to get up and down before the sun set (which we actually managed with further time in-hand), but mostly it’s the fact that I’ve neglected my climbing: just about the only routine exercise I get these days is cycling, and with changes in my work/life balance I’m now only doing that for about 40 miles in a typical week.

Robin with the GCG6XD, the Ben Nevis summit geocache
My ongoing efforts to get Robin into geocaching continue to succeed: ice somewhat hampered us in our search for the cache nearest the summit but we got there in the end.

For the longest time my primary mountaineering-buddy was my dad, who was – prior to his death in a hillwalking accident – a bigger climber and hiker than I’ll ever be. Indeed, I’ve been “pushed on” by trying to keep up with my father enough times that fighting to keep up with Robin at the weekend was second nature. If I want to get back to the point where I’m fit enough for ice climbing again, I probably need to start by finding excuses to get up a hill once in a while rather more often than I currently do. Perhaps I can lay some of the blame for my being out of practice on the flat, gentle plains of Oxfordshire?

Dan ascending Ben Nevis
I’d have loved to have gotten a shot of me actually managing to get some use out of my crampons, but by that point visibility wasn’t great and we were rather cold and wet to be stopping in a wind to take photographs. So this rocky stretch will have to do.

In any case, it was a worthwhile and enjoyable treat to be able to be part of Robin’s final reflection as well as to end the year somewhat-literally “on a high” by seeing off 2018 in the Scottish Highlands. If you’ve not read his blog about his adventures of the last 52 weekends, you should: whether taking a Boris Bike from Brixton to Brighton (within the rental window) or hitching a ride on an aeroplane, he’s provided a year’s worth of fantastic stories accompanied by some great photography.

And now: time for 2019.


Festive Cranberry & Cinnamon Bagels

Noticing that our bagel supply was running low and with two kids who’d happily fight to the death for the last one if it came down to it, I decided this weekend to dust off an old recipe and restock using the ingredients in our cupboard. For a festive spin, I opted to make cranberry and cinnamon bagels, and served a few at my family’s regular Sunday brunch. Little did I know that they would turn out to be such a hit that not one from the resupply would survive to the end of the day, and I’ve been pressed into making them again in time for breakfast on Christmas Day (or, as Ruth suggested as she and Robin fought for the last one in a manner more-childish than the children ever would, I could “make them any time I feel like it; every week maybe?”).

Cooling rack full of rustic bagels.
Even the slightly-charred one turned out to be flipping delicious.

If you’d like to make your own, and you totally should, the recipe’s below. I prefer volumetric measurements to weight for bread-making: if you’re not used to doing so, be sure to give your dry ingredients a stir/shake to help them settle when measuring.

Festive Cranberry & Cinnamon Bagels

Yield: 8 bagels
Duration:

Bagels ready to go into the oven.
When my dough is unevenly shaped I call it “rustic”. These are rustic bagels, ready to go into the oven.

Ingredients

  • 360ml warm water
  • 5ml (1tsp) vanilla extract
  • 60ml clear honey
  • white of 1 egg
  • sunflower/vegetable oil for greasing
  • 10g instant yeast
  • 950ml strong white bread flour
  • extra flour for kneading
  • 40ml golden caster sugar
  • generous pinch salt
  • 240ml dried fruit, half cranberries (sweetened), half raisins
  • heaped teaspoon ground cinnamon
Bagel
Eyes on the prize: this is what you’re ultimately aiming for. You might even make a less-“rustic” one.

Directions

  1. Whisk the yeast into the water and set aside for a few minutes to activate.
  2. Combine the flour, one quarter of the sugar, and salt.
  3. Make a well, and gradually introduce the water/yeast, mixing thoroughly to integrate all the flour into a sticky wet dough.
  4. Add the vanilla extract and mix through.
  5. Knead thoroughly: I used a mixer with a dough hook, but you could do it by hand if you prefer. After 5-10 minutes, when the dough becomes stretchy, introduce the dried fruit and continue to knead until well integrated. The dough will be very wet.
  6. Mix the cinnamon into the remaining sugar and scatter over a clean surface. Using well-floured fingers, form the dough into a ball and press into the sugar/cinnamon mixture. Fold and knead against the mixture until it’s all picked-up by the dough: this approach forms attractive pockets and rivulets of cinnamon throughout the dough.
  7. Rub a large bowl with oil. Ball the dough and put it into the bowl, cover tightly, and leave at room temperature for up to two hours until doubled in size.
  8. When it’s ready, fill a large pan about 6cm deep with water, add the honey, and bring to a simmer. Pre-heat a hot oven (gas mark 7, 220°C).
  9. On a lightly-floured surface and with well-floured fingertips, extract the ball of dough and divide into eight (halve, halve, halve again). Shape each ball into a bagel by pushing-through the middle with your thumb and stretching out the hole as you rotate it.
  10. Submerge each bagel into the hot water for about a minute on each side, then transfer to a baking sheet lined with greaseproof paper.
  11. Thin the egg white with a few drops of water, stir, then brush each bagel with the egg mix.
  12. Bake for about 25 minutes until golden brown. Cool on a wire rack.
Buttered bagel.
Most bagel recipes I’ve seen claim that they freeze well. I can make no such claim, because ours barely cool before they’re eaten.

Mostly this recipe’s here for my own reference, but if you make some then let me know how they turn out for you. (Oh, and for those of you who prefer when my blog posts are technical, this page is marked up in h-recipe.)


Alpha-Gal and the Gaia Hypothesis

Ticking Point

An increasing number of people are reportedly suffering from an allergy to the meat and other products of nonhuman mammals, reports Mosaic Science this week, and we’re increasingly confident that the cause is a sensitivity to alpha-gal (Galactose-alpha-1,3-galactose), a carbohydrate produced in the bodies of virtually all mammals except for us and our cousin apes, monkeys, and simians (and one of the reasons you can’t transplant tissue from pigs to humans, for example).

Lone star tick
The lone star tick (You call that a star, tick? Looks like a blob to me!), one of several vectors for alpha-gal sensitivity.

The interesting thing is that the most-common cause of alpha-gal sensitivity appears to be the bite of one of a small number of species of tick. The most-likely hypothesis seems to be that being bitten by such a tick after it’s bitten e.g. deer or cattle may introduce that species’ alpha-gal directly to your bloodstream. This exposure triggers an immune response through all future exposure, even if it is more minor, e.g. consuming milk products or even skin contact with an animal.

That’s nuts, isn’t it? The Mosaic Science article describes the reaction of Tami McGraw, whose symptoms began in 2010:

[She] asked her doctor to order a little-known blood test that would show if her immune system was reacting to a component of mammal meat. The test result was so strongly positive, her doctor called her at home to tell her to step away from the stove.

That should have been the end of her problems. Instead it launched her on an odyssey of discovering just how much mammal material is present in everyday life. One time, she took capsules of liquid painkiller and woke up in the middle of the night, itching and covered in hives provoked by the drug’s gelatine covering.

When she bought an unfamiliar lip balm, the lanolin in it made her mouth peel and blister. She planned to spend an afternoon gardening, spreading fertiliser and planting flowers, but passed out on the grass and had to be revived with an EpiPen. She had reacted to manure and bone meal that were enrichments in bagged compost she had bought.

A delicious-looking BLT. Mmm, bacon.
Cats can eat bacon. But some cat owners can’t. More bacon for the cats? The plot thickens. Also: haven’t used this picture in a while, have I?

Of course, this isn’t the only nor even the most-unusual (or most-severe) animal-induced allergy-to-a-different-animal we’re aware of. The hilariously-named but terribly-dangerous Pork-Cat syndrome is caused, though we’re not sure how, by exposure to cats and results in a severe allergy to pork. But what makes alpha-gal sensitivity really interesting is that it’s increasing in frequency at quite a dramatic rate. The culprit? Climate change. Probably.

It’s impossible to talk to physicians encountering alpha-gal cases without hearing that something has changed to make the tick that transmits it more common – even though they don’t know what that something might be.

“Climate change is likely playing a role in the northward expansion,” Ostfeld adds, but acknowledges that we don’t know what else could also be contributing.

Meat Me Half-Way

To take a minor diversion: another article I saw this week was the BBC‘s one on the climate footprint of the food you eat.

BBC graph showing climate impact of common foods. Beef is terrible *unshocker*.
An average serving of beef contributes almost 8kg of greenhouse gases, compared to around 1kg for chicken. Thanks, Beeb (click through for full article).

A little dated, perhaps: I’m sure that nobody needs to be told nowadays that one of the biggest things a Westerner can do to reduce their personal carbon footprint (apart from breeding less or not at all, which I maintain is the biggest, or avoiding air travel, which Statto argues for) is to reduce or refrain from consumption of meat (especially pork and beef) and dairy products.

Indeed, environmental impact was the biggest factor in my vegetarianism (now weekday-vegetarianism) for the last eight years, and it’s an outlook that I’ve seen continue to grow in others over the same period.

Seeing these two stories side-by-side in my RSS reader put the Gaia hypothesis in my mind.

SMBC comic frame: "Yeah, I don't buy it. If Earth is self-regulating and alive, why hasn't it produced an immune response against humanity?"
If you want a pop-culture-grade introduction to the Gaia hypothesis in the context of climate change, this SMBC comic does the job, and does so almost with fewer words than this caption explaining that it does so.

If you’re not familiar with the Gaia hypothesis, the basic idea is this: by some mechanism, the Earth and all of the life on it act in synergy to maintain homeostasis. Organisms not only co-evolve with one another but also with the planet itself, affecting their environment in a way that in turn affects their future evolution in a perpetual symbiotic relationship of life and its habitat.

Its advocates point to negative feedback loops in nature such as plankton blooms affecting the weather in ways that inhibit plankton blooms and to simplistic theoretical models like the Daisyworld Simulation (cute video). A minority of its proponents go a step further and describe the Earth’s changes teleologically, implying a conscious Earth with an intention to protect its ecosystems (yes, these hypotheses were born out of the late 1960s, why do you ask?). Regardless, the essence is the same: life’s effect on its environment affects the environment’s hospitality to life, and vice-versa.

There’s an attractive symmetry to it, isn’t there, in light of the growth in alpha-gal allergies? Like:

  1. Yesterday – agriculture, particularly intensive farming of mammals, causes climate change.
  2. Today – climate change causes ticks to spread more-widely and bite more humans.
  3. Tomorrow – tick bites cause humans to consume less products farmed from mammals?
Daisyworld in SimEarth
Both my appreciation and my rejection of the Gaia Hypothesis can probably be traced to me playing way too much SimEarth as a teenager. Here’s my Daisyworld in a state of equilibrium, because I haven’t yet gotten bored and spawned dinosaurs to eat all of the daisies.

That’s not to say that I buy it, mind. The Gaia hypothesis has a number of problems, and – almost as bad – it encourages a complacent “it’ll all be okay, the Earth will fix itself” mindset to climate change (which, even if it’s true, doesn’t bode well for the humans residing on it).

But it was a fun parallel to land in my news reader this morning, so I thought I’d share it with you. And, by proxy, make you just a little bit warier of ticks than you might have been already. /shudders/


Second Factor Safety Net

I’m a huge fan of multifactor authentication. If you’re using it, you’re probably familiar with using an app on your phone (or receiving a text or email) in addition to a username and password when logging in to a service like your email, social network, or a bank. If you’re not using it then, well, you should be.

Two factor authentication using Google Authenticator (TOTP) to log in to a Google Account
Using an app in addition to your username and password, wherever you can, may be the single biggest step you can make to improving your personal digital security.

Ruth recently had a problem when she lost her phone and couldn’t connect to a service for which she usually used an authenticator app like the one pictured above, so I thought I’d share with you my personal strategy for managing multifactor authentication, in case it’s of any use to anybody else. After all: the issue of not-having-the-right-second-factor-to-hand has happened to me before, it’s certainly now happened to Ruth, and it’s probably something that’s happened to other people I know by now, too.

Thoroughly broken iPhone.
It could happen to anybody. What’s your authentication plan?

Here’s my strategy:

  1. Favour fewer different multifactor solutions. Instead of using e.g. text messaging for one, an app for another, a different app for a third, a hardware token for a fourth, and so on, try to find the fewest number of different solutions that work for your personal digital life. This makes backing up and maintenance easier.
    I use RFC6238/TOTP (better known as “Google Authenticator”) for almost all second factor purposes: the only exceptions are my online bank (who use a proprietary variant of RFC6238 that I’ve not finished reverse-engineering) and Steam (who use a proprietary implementation of RFC6238 with a larger character set, for some reason, in their Steam Guard app).
  2. Favour offline-ready multifactor solutions. It’s important to me to be able to log in to my email using a computer even if my mobile phone has no signal or the network is down. This, plus the fact that the bad guys have gotten the hang of intercepting second-factor text messages, means that SMS-based solutions aren’t the right solution in my opinion. Google Authenticator, Authy, FreeOTP etc. all run completely offline.
  3. Have a backup plan. Here’s the important bit. If you use your phone to authenticate, and you lose access to your phone for a period of time (broken, lost, stolen, out of battery, in use by a small child playing a game), you can’t authenticate. That’s why it’s important that you have a backup plan.
Many mobile devices.
That’s probably more backup devices than you need, but YMMV.

Some suggested backup strategies to consider (slightly biased towards TOTP):

  • Multiple devices: (Assuming you’re using TOTP or something like it) there’s nothing to stop you setting up multiple devices to access the same account. Depending on how the service you’re accessing provides the code you need to set it up, you might feel like you have to set them all up at the same time, but that’s not strictly true: there’s another way…
  • Consider setting up a backdoor: Some systems will allow you to print e.g. a set of “backup codes” and store them in a safe place for later use should you lose access to your second factor. Depending on the other strategies you employ, you should consider doing this: for most (normal) people, this could be the single safest way to retain access to your account in the event that you lose access to your second factor. Either way, you should understand the backdoors available: if your online bank’s policy is to email you replacement credentials on-demand then your online bank account’s security is only as good as your email account’s security: follow the chain to work out where the weak links are.
  • Retain a copy of the code: The code you’re given to configure your device remains valid forever: indeed, the way that it works is that the service provider retains a copy of the same code so they can generate numbers at the same time as you, and thus check that you’re generating the same numbers as them. If you keep a copy of the backup code (somewhere very safe!) you can set up any device you want, whenever you want. Personally, I keep copies of all TOTP configuration codes in my password safe (you’re using a password safe, right?).
  • Set up the infrastructure that works for you: To maximise my logging-on convenience, I have my password safe enter my TOTP numbers for me: I’m using KeeOTP for KeePass, but since 2016 LastPass users can do basically the same thing. I’ve also implemented my own TOTP client in Ruby to run on desktop computers I control (just be careful to protect the secrets file), because sometimes you just want a command-line solution. The code’s below, and I think you’ll agree that it’s simple enough that you can audit it for your own safety too.
#!/usr/bin/env ruby
# Reads ~/.google-authenticator-accounts – one account per line, in the form
# "Account Name BASE32SECRET" – and prints each account's current TOTP code
# alongside the code that will follow it.
require 'rubygems'
require 'rotp'

# Header row, including how many seconds remain before the codes roll over.
printf "%-30s %3s (%02ds) %4s\n", 'Account',
                                  'Now',
                                  (30 - (Time::now.utc.to_i % 30)),
                                  'Next'
puts '-' * 47
File.read(File.expand_path('~/.google-authenticator-accounts')).
     split("\n").reject{|l| l.strip == ''}.
     each do |account|
  # Everything up to the last space is the account label; the rest is the secret.
  if account =~ /^(.+) ([\w\d]+)$/
    totp = ROTP::TOTP.new($2)
    printf "%-30s %06s    %06s\n", $1,
                                   totp.at(Time::now.utc),
                                   totp.at(Time::now.utc + 30)
  end
end

I’ve occasionally been asked whether my approach actually yields me any of the benefits of two-factor authentication. After all, people say, aren’t I weakening its benefits by storing the TOTP generation key in the same place as my usernames and passwords rather than restricting it to my mobile device? This is true, and it is weaker to do this than to keep the two separately, but it’s not true to say that all of the benefits are negated: replay attacks by an attacker who intercepts a password are mitigated by this approach, for example, and these are a far more-common vector for identity theft than the theft and decryption of password safes.

Everybody has to make their own decisions on the balance of their convenience versus their security, but for me the sweet spot comes here: in preventing many of the most-common attacks against the kinds of accounts that I use and reinforcing my existing username/strong-unique-passwords approach without preventing me from getting stuff done. You’ll have to make your own decisions, but if you take one thing away from this, let it be that there’s nothing to stop you having multiple ways to produce TOTP/Google Authenticator credentials, and you should consider doing so.


Poly Parents Evening

Our eldest, 4, started school this year and this week saw her first parents’ evening. This provided an opportunity for us, her parents, to “come out” to her teacher about our slightly-unconventional relationship structure. And everything was fine, which is nice.

Ruth, Dan, JTA and the kids at the top of the slides at a soft play area.
We’re an unusual shape for a family. But three of us are an unusual shape for being in a kids’ soft play area, too, I suppose.

I’m sure the first few months of every child’s school life are a time that’s interesting and full of change, but it’s been particularly fascinating to see the ways in which our young academic’s language has adapted to fit in with and be understood by her peers.

I first became aware of these changes, I think, when I overheard her describing me to one of her school friends as her “dad”: previously she’d always referred to me as her “Uncle Dan”. I asked her about it afterwards and she explained that I was like a dad, and that her friend didn’t have an “Uncle Dan” so she used words that her friend would know. I’m not sure whether I was prouder about the fact that she’d independently come to think of me as being like a bonus father figure, or the fact that she demonstrated such astute audience management.

School work showing a family description
She’s since gotten better at writing on the lines (and getting “b” and “d” the right way around), but you can make out “I have two dads”.

I don’t object to being assigned this (on-again, off-again, since then) nickname. My moniker of Uncle Dan came about as a combination of an effort to limit ambiguity (“wait… which dad?”) and an attempt not to tread on the toes of actual-father JTA: the kids themselves are welcome to call me pretty-much whatever they’re comfortable with. Indeed, they’d be carrying on a family tradition if they chose-for-themselves what to call me: Ruth and her brothers Robin and Owen address their father not by a paternal noun but by his first name, Tom, and the kids have followed suit by adopting “Grand-Tom” as their identifier for him.

Knowing that we were unusual, though, we’d taken the time to do some groundwork before our eldest started school. For example, we shared a book about – and spent a while talking about – how families differ from one another: we figure that an understanding that families come in all kinds of shapes and sizes is a useful concept in general from a perspective of diversity and acceptance. In fact, you can hear how this teaching pays off in the language she uses to describe other aspects of the differences she sees in her friends and their families, too.

Still, it was a little bit of a surprise to find myself referred to as a “dad” after four years of “Uncle Dan”.

JTA with his youngest, on a slide.
I’ve no idea what the littler one – pictured here with his father – will call me when he’s older, but this week has been a “terrible 2s” week in which he’s mostly called me “stop it” and “go away”.

Nonetheless: in light of the fact that she’d clearly been talking about her family at school and might have caused her teacher some confusion, when all three of us “parents” turned up to parents’ evening we opted to introduce ourselves and our relationship. Which was all fine (as you’d hope: as I mentioned the other day, our unusual relationship structure is pretty boring, really), and the only awkwardness was in having to find one more chair at the table than the teacher had been expecting to need.

There’s sometimes a shortage of happy “we did a thing, and it went basically the same as it would for a family with monogamous parents” poly-family stories online, so I thought this one was worth sharing.

And better yet: apparently she’s doing admirably at school. So we all celebrated with an after-school trip to one of our favourite local soft play centres.

Kids at soft play.
Run run run run run run run STOP. Eat snack. Run run run run run run…

Bodleian Advent Calendar

Hot on the tail of Pong, I wanted to share another mini-project I’ve developed for the Bodleian: this year’s digital advent calendar:

Bodleian 2018 digital advent calendar
If you look closely, you’ll find I’ve shown you a sneak-peek at some of what’s behind tomorrow’s door. Shh. Don’t tell our social media officer.

As each door is opened, a different part of a (distinctly-Bodleian/Oxford) winter scene unfolds, complete with an array of fascinating characters connected to the history, tradition, mythology and literature of the area. It’s pretty cool, and you should give it a go.

If you want to make one of your own – for next year, presumably, unless you’ve an inclination to count-down in this fashion to something else that you’re celebrating 25 days hence – I’ve shared a version of the code that you can adapt for yourself.

Sample advent calendar
The open-source version doesn’t include the beautiful picture that the Bodleian’s does, so you’ll have to supply your own.

Features that make this implementation a good starting point if you want to make your own digital advent calendar include:

  • Secure: your server’s clock dictates which doors are eligible to be opened, and only content legitimately visible on a given date can be obtained (no path-traversal, URL-guessing, or traffic inspection holes).
  • Responsive: calendar adapts all the way down to tiny mobiles and all the way up to 4K fullscreen along with optimised images for key resolutions.
  • Friendly: accepts clicks and touches, uses cookies to remember the current state so you don’t have to re-open doors you opened yesterday (unless you forgot to open one yesterday), “just works”.
  • Debuggable: a password-protected debug mode makes it easy for you to test, even on a production server, without exposing the secret messages behind each door.
  • Expandable: lots of scope for the future – a progressive web app version that you can keep “on you” and which notifies you when a new door’s ready to be opened, for example, was one of the things I’d hoped to add in time for this year but didn’t quite get around to.

I’ve no idea if it’s any use to anybody, but it’s available on GitHub if you want it.
