Murder in the Library

Two years after our last murder mystery party, almost three years since the one before, and much, much longer since our last in-person one, we finally managed to have another get-the-guests-in-one-place murder mystery party, just like old times. And it was great!

A woman dressed in a tweed waistcoat, glasses perched atop her head, sips a glass of champagne.
Full credit goes to Ruth who did basically all the legwork this time around. Cheers!

It’s still been over a decade since we played a mystery that I wrote (though I’m hoping to rectify that within the year): this time we played D’Avekki Studios’ Murder in the Library.

D’Avekki’s murder mystery sets use an unusual mechanic that I’ve discussed before online with other murder mystery party authorship enthusiasts1 but never tried in practice: a way of determining at random who the murderer is when play begins. This approach has a huge benefit: you can assign characters to players using a subset of those available (rather than facing the usual challenges that come up when, for example, somebody needs to play a character of a different gender than their own) and, more-importantly, it protects you from the eventuality that a player drops out at short notice. This latter feature proved incredibly useful as we had a total of three of our guests pull out unexpectedly!

Three people sit alongside a dining table: a woman with dark hair, holding her play notes, a man wearing sunglasses and a Hawaiian shirt, grinning, and a short-haired woman in a conforming white off-the-shoulder top and bright red lipstick. There are beer bottles, champagne flutes, and Coke cans on the table.
Most of our guests were old hands at murder mystery games, but for Owen’s date Kirsty this was a completely novel experience.

The challenge of writing a murder mystery with such a mechanic is to ensure that the script and evidence adapt to the various possible murderers. When I first examined the set that was delivered to us, I was highly sceptical: the approach is broadly as follows2:

  1. At the start of the party, the players secretly draw lots to determine who is the murderer: the player who receives the slip marked with an X is the murderer.
  2. Each character “script” consists of (a) an initial introduction, (b) for each of three acts, a further introduction which sets up two follow-up questions, (c) the answers to those two follow-up questions, (d) a final statement of innocence, and (e) a final statement of guilt, for use by the murderer.
  3. In addition, each script has a handful of underlined sections, which are to be used only if you are the murderer. This means that the only perceivable difference between one person and another being the murderer is that whoever is the murderer will present a small amount of additional information. The writing is designed such that this additional piece of evidence will be enough to make the case against them compelling (e.g. because their story becomes internally-inconsistent).
A group of people sit around a dining table; one man, wearing spectacles on his floury face, is taking the photo in "selfie" mode. The other visible characters appear to be a chef, a vicar, a librarian, a thrillseeker, and an aspiring starlet. With the exception of the photographer, they're all looking off to the side, listening to an unseen character speak.
As I recently discovered, glasses make me look like an entirely different person.

The writing was good overall: I especially appreciated the use of a true crime podcast as a framing device (expertly delivered thanks to Rory‘s radio voice). It was also pleasing to see, in hindsight, how the story had been assembled such that any character could be the murderer, but only one would give away a crucial clue. The downside of the format is pretty obvious, though: knowing what the mechanic is, a detective only needs to look at each piece of evidence that appears and look for a connection with each statement given by every other player, ruling out any “red herring” pairings that connect to every other player (as is common with just about the entire genre, all of the suspects had viable motives: only means and opportunity may vary).

It worked very well, but I wonder if – now the formula’s understood by us – a second set in a similar style wouldn’t be as successful.

A party of unlikely characters stands around and points accusingly at the "murderer" among them: it's Kirsty.
Our classic end-of-murder-mystery-party photo post makes a comeback. Extra-special hat tip to Kirsty, who ended up by coincidence being the murderer at her first ever such event and did astoundingly well. From left to right: Rory (Major Clanger), Simon (Chef Flambé), JTA (Noah Sinner), Kirsty (Phyllis Ora), Ruth (Dusty Tomes), Liz (Ruby Daggers), Owen (Max Cruise), and me (Professor Pi).

That said, nobody correctly fingered the murderer this time around. Maybe we’re out of practice? Or maybe the quality of the hints in such a wide-open and dynamic murderer-selection mechanic is less-solid than we’re used to? It’s hard to say: I’d certainly give another D’Avekki a go to find out.

Footnotes

1 There are dozens of us. Dozens!

2 We made a minor adaptation to the formula to fit with our experience of what makes a compelling party, but the fundamentals here are as-published.


The Page With No Code

It all started when I saw no-ht.ml, Terence Eden‘s hilarious response to Salma Alam-Naylor‘s excellent HTML is all you need to make a website. The latter argues not only against the silly amount of JavaScript with which websites routinely burden their users, but even against depending on CSS. As a fan of CSS Naked Day and a firm believer in using JS only for progressive enhancement, I’m obviously in favour.

Screenshot showing Terence Eden's no-ht.ml website, which uses plain text ASCII/Unicode art to argue that you don't need HTML.
Obviously no-ht.ml is to be taken as tongue-in-cheek, but as you’re about to see: it caught my interest and got me thinking: how could I go even further?

Terence’s site works by delivering a document with a claimed MIME type of text/html, but which contains only the (invalid) “HTML” code <!doctype UNICODE><meta charset="UTF-8"><plaintext> (to work around browsers’ wish to treat the page as HTML). This is followed by a block of UTF-8 plain text making use of spacing and emoji to illustrate and decorate the content. It’s frankly very silly, and I love it.1

I think it’s possible to go one step further, though, and create a web page with no code whatsoever. That is, one that you can read as if it were a regular web page, but where using View Source or e.g. downloading the page with curl will show you… nothing.

I present: The Page With No Code! (It’ll probably only work if you’re using Firefox, for reasons that will become apparent later.)

Screenshot showing my webpage, "The Page With No Code". Using white text (and some emojis) on a blue gradient background, it describes the same thought process as I describe in this blog post, and invites the reader to "View Source" and see that the page genuinely does appear to have no code.
I’d encourage you to visit The Page With No Code, use View Source to confirm for yourself that it truly has no code, and see if you can work out for yourself how it manages this feat… before coming back here for an explanation. Again: probably Firefox-only.

Once you’ve had a look for yourself and had a chance to form an opinion, here’s an explanation of the black magic that makes this atrocity possible:

  1. The page is blank. It’s delivered with Content-Type: text/html. Your browser interprets a completely-blank page as faulty and corrects it to a functionally-blank minimal HTML page: <html><head></head><body></body></html>.
  2. <body> and <html> elements can be styled with CSS; this includes the ability to inject content ::before and ::after each element. If only we could load a stylesheet, content injection would be possible.
  3. We use the fourth way to inject CSS – a Link: HTTP header – to deliver a CSS payload (this, unfortunately, only works in Firefox). To further obfuscate what’s happening and remove the need for a round-trip, this is encoded as a data: URI.
Screenshot showing HTTP headers returned from a request to the No Code Webpage. A Link: header is highlighted, it contains a data: URL with a base64-encoded CSS stylesheet.
The stylesheet – and all the page content – is right there in the Link: header if you just care to decode it! Observe that while 5.84kB of data are transferred, the browser rightly states that the page is zero bytes in size.
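
If you fancy playing with the same trick yourself, here’s a minimal sketch of how a server might pull it off (this isn’t my actual generator – that’s on Github – and the page content here is invented for illustration): it sends a zero-byte text/html body plus a Link: header whose target is a base64-encoded data: URI stylesheet. Remember that only Firefox will honour a stylesheet delivered this way.

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

# The whole "page" lives in the stylesheet: we inject text into the
# browser-generated <body> using a ::before pseudo-element.
CSS = 'body::before { content: "Hello from a page with no code!"; font: 2em sans-serif; }'
CSS_DATA_URI = "data:text/css;base64," + base64.b64encode(CSS.encode()).decode()

class NoCodeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Firefox (alone) fetches and applies stylesheets referenced by a
        # Link: HTTP header, even when the target is a data: URI.
        self.send_header("Link", f'<{CSS_DATA_URI}>; rel="stylesheet"')
        self.send_header("Content-Length", "0")
        self.end_headers()  # ...and no body follows

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), NoCodeHandler).serve_forever()
```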

This is one of the most disgusting things I’ve ever coded, and that’s saying a lot. I’m so proud of myself. You can view the code I used to generate this awful thing on Github.

My server-side implementation of this broke in 2023 after I upgraded Nginx; my new version doesn’t support the super-long Link: header needed to make this hack work, so I’ve updated the page to use the Link: to reference the CSS file rather than embed it via a data URI. It’s not as cool, but it at least means you can still see the page. Thanks to Thomas Bradshaw for pointing out the problem.

Footnotes

1 My first reaction was “why not just deliver something with Content-Type: text/plain; charset=utf-8 and dispense with the invalid code?”, but perhaps that’s just me overthinking the non-existent problem.

I’d Like to Change my Mother’s Maiden Name

Following their security incident last month, many users of LastPass are in the process of cycling their security credentials for many of their accounts1. I don’t use LastPass2, but I’ve had occasion to cycle credentials before, so I appreciate the pain that people are going through.

It’s not just passwords, though: it may well be your “security question” answers you need to rotate too. Your passwords quickly become worthless if an attacker can guess the answers to your “security questions” at services that use them. If you’re using a password safe anyway, you should either:

  1. Answer security questions with long strings of random garbage3, or
  2. Ensure that you use different answers for every service you use, as you would with passwords.4

In the latter case, you’re probably storing your security answers in a password safe5. If the password safe they’re stored in is compromised, you need to change the answers to those security questions in order to secure the account.

This leads to the unusual situation where you may need to call up your bank and say: “Hi, I’d like to change my mother’s maiden name.” (Or, I suppose, father’s middle name, first pet’s name, place of birth, or whatever.) Banks in particular are prone to disallowing you from changing your security answers over the Internet, but all kinds of other businesses can also make this process hard… presumably because a well-meaning software engineer couldn’t conceive of any reason that a user might want to.

I sometimes use a pronounceable password generator to produce fake names for security question answers. And I’ll tell you what: I get some bemused reactions when I say things like “I’d like to change my mother’s maiden name from Tuyiborhooniplashon to Mewgofartablejuki.”

But at least it forestalls them asking me “So why did you change your surname to ‘Q’?”
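
If you want to mint names like those for yourself, here’s a tiny sketch of the sort of generator I mean (not the actual tool I use; the alphabet choices are arbitrary): it alternates consonants and vowels so the result can be read aloud to a call-centre operator without spelling it out letter-by-letter.

```python
import secrets

CONSONANTS = "bdfghjklmnprstvwz"
VOWELS = "aeiou"

def pronounceable_name(syllables: int = 8) -> str:
    """Build a fake 'maiden name' from random consonant-vowel syllables."""
    return "".join(
        secrets.choice(CONSONANTS) + secrets.choice(VOWELS)
        for _ in range(syllables)
    ).capitalize()

print(pronounceable_name())  # e.g. something Tuyiborhooniplashon-flavoured
```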

Footnotes

1 If you use LastPass, you should absolutely plan to do this. IMHO, LastPass’s reassurances about the difficulty in cracking the encryption on the leaked data are a gross exaggeration. I’m not saying you need to panic – so long as your master password is reasonably-long and globally-unique – but perhaps cycle all your credentials during 2023. Oh, and don’t rely on your second factor: it doesn’t help with this particular incident.

2 I used to use LastPass, until around 2016, and I still think it’s a good choice for many people, but nowadays I carry an encrypted KeePassXC password safe on a pendrive (with an automated backup onto an encrypted partition on our household NAS). This gives me some security and personalisation benefits, at the expense of only a little convenience.

3 If you’re confident that you could never lose your password (or rather: that you could never lose your password without also losing the security question answers because you would store them in the same place!), there’s no value in security questions, and the best thing you can do might be to render them unusable.

4 If you’re dealing with a service that uses the security questions in a misguided effort to treat them as a second factor, or that uses them for authentication when talking to them on the telephone, you’ll need to have usable answers to the questions for when they come up.

5 You can, of course, use a different password safe for your randomly-generated security question answers than you would for the password itself; perhaps a more-secure-but-less-convenient one; e.g. an encrypted pendrive kept in your fire safe?

Geohashing expedition 2023-01-02 51 -1

This checkin to geohash 2023-01-02 51 -1 reflects a geohashing expedition. See more of Dan's hash logs.

Location

Muswell Hill, Piddington, Oxfordshire

Participants

Expedition

I bundled the dog into the car and drove out to Piddington, a couple of kilometres North of the hashpoint. Cherwell Council advertise a circular walk that loops from the village (which looked like a good place to park) up to Muswell Hill, whose summit is near the hashpoint.

She and I walked through Piddington, past the church, and up onto the path. A soggy kilometre or so later we discovered that this was going to be more-challenging than I’d anticipated: we quickly got bogged down in a flooded field and needed to double back. With my socks already soaking wet and the dog in a similar condition, we found a different route that looped around the entire hill and through an alpaca farm (or were they llamas?), then we worked our way up the South face of the hill, over the summit, and down to the hashpoint. We got there at 11:00 UTC, took a quick look around, pulled the closest thing a dog can manage to a silly grin, and then hacked our way back (by road) to Piddington for the drive home and some dry clothes.

Tracklog

Entire expedition

Walking part only

Photo

Dan - a man with a beard, wearing a grey fleece with a white poppy attached - crouches alongside his French Bulldog in a green field under a blue sky with a few wispy clouds.
Multi-species silly grin.

Video

Also available on YouTube.

Email newsletters via RSS

I love feeds!

Maybe you’ve heard already, but I love RSS.

I love it so much that I retrofit sites without feeds into it for the convenience of my favourite reader FreshRSS: working around (for example) the lack of feeds in The Far Side (twice), in friends’ blogs, and in my URL shortener. Whether tracking my progress binging webcomic history, subscribing to YouTube channels, or filtering-out sports news, feeds are the centre of my digital life.

Illustration showing a web application with an RSS feed; the RSS feed is sending data to my RSS reader (represented by FreshRSS's icon).


There’s been a bit of a resurgence lately of sites whose only subscription option is email, or – worse yet – who provide certain “exclusive” content only to email subscribers.

I don’t want to go giving an actual email address to every damn service, because:

  • It’s not great for privacy, even when (as usual) I use a unique alias for each sender.
  • It’s usually harder to unsubscribe than I’d like, and rarely consistent: you need to find a recent message, click a link, sometimes that’s enough or sometimes you need to uncheck a box or click a button, or sometimes you’ll get another email with something to click in it…
  • I rarely want to be notified the very second a new issue is published; email is necessarily more “pushy” than I like a subscription to be.
  • I don’t want to use my email Inbox to keep track of which articles I’ve read/am still going to read: that’s what a feed reader is for! (It also provides tagging, bookmarking, filtering, standardised and bulk unsubscribing tools, etc.)

So what do I do? Well…

Illustration showing a web application using MailChimp to send an email newsletter to OpenTrashMail, to which FreshRSS is subscribed.

I already operate an OpenTrashMail instance for one-shot throwaway email addresses (which I highly recommend). And OpenTrashMail provides a rich RSS feed. Sooo…

How I subscribe to newsletters (in my feed reader)

If I want to subscribe to your newsletter, here’s what I do:

  1. Put an email address (I usually just bash the keyboard to make a random one, then put @-a-domain-I-control on the end, where that domain is handled by OpenTrashMail) in to subscribe.
  2. Put https://my-opentrashmail-server/rss/the-email-address-I-gave-you/rss.xml into my feed reader.
  3. That’s all. There is no step 3.
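
If you like, even step 1’s keyboard-bashing can be scripted. Here’s a rough sketch (the domain is a placeholder, and it assumes your OpenTrashMail web interface lives on the same host that receives the mail; the /rss/…/rss.xml pattern is the one my instance exposes):

```python
import secrets
import string

DOMAIN = "trashmail.example.com"  # assumption: MX and web UI both point here

def new_subscription() -> tuple[str, str]:
    """Mint a throwaway address plus the OpenTrashMail feed URL for it."""
    local_part = "".join(secrets.choice(string.ascii_lowercase) for _ in range(12))
    address = f"{local_part}@{DOMAIN}"
    feed_url = f"https://{DOMAIN}/rss/{address}/rss.xml"
    return address, feed_url

address, feed_url = new_subscription()
print("Give the newsletter:", address)
print("Give your feed reader:", feed_url)
```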

Now I get your newsletter alongside all my other subscriptions. If I want to unsubscribe I just tell my feed reader to stop polling the RSS feed (you don’t even get to find out that I’ve unsubscribed; you’re now just dropping emails into an unmonitored box, though of course I can resubscribe and pick up from where I left off if I ever want to).

Obviously this approach isn’t suitable for personalised content or sites for which your email address is used for authentication, because anybody who can guess the random email address can get the feed! But it’s ideal for those companies who’ll occasionally provide vouchers in exchange for being able to send you other stuff to your Inbox, because you can simply pipe their content to your feed reader, then add a filter to drop anything that doesn’t contain the magic keyword: regular vouchers, none of the spam. Or for blogs that provide bonus content to email subscribers, you can get the bonus content in the same way as the regular content, right there in a folder of your reader. It’s pretty awesome.

If you don’t already have and wouldn’t benefit from running OpenTrashMail (or another trashmail system with feed support) it’s probably not worth setting one up just for this purpose. But otherwise, I can certainly recommend it.

Why Did Media Players Look Like That?

You don’t really see it any more, but: if you downloaded some media player software a couple of decades ago, it’d probably appear in a weird-shaped window, and I’ve never understood why.

Composite screenshot showing Sonique, Windows Media Player and BSplayer music players, among others, in a variety of windows that are either unusually-shaped, look like conventional Hi-Fis, or both.
Mostly, these designs are… pretty ugly. And for what? It’s also worth noting that while this kind of design can be found in all kinds of applications, in media players it was almost ubiquitous.

You might think that they’re an overenthusiastic kind of skeuomorphic design: people trying to make these players look like their physical analogues. But hardware players were still pretty boxy-looking at this point, not least because of the limitations of their data storage1. By the time flash memory-based portable MP3 players became commonplace, their design was copying software players, not the other way around.

Composite screenshot showing Windows Media Player, the (old) iTunes companion widget, KMPlayer, and other media players. All of them have unusually-shaped windows, often with organic corners.

So my best guess is that these players were trying to stand out as highly-visible: things designed to occupy a disproportionate amount of desktop space. Maybe other people were listening to music differently than me… but for me, back when screen real estate was at such a premium2, a music player’s job was to be small, unobtrusive, and out-of-the-way.

WinAmp music player in minified mode: just a sliver of a music player, small, showing just back/forward/play/pause/stop controls, play time, and a mini-equaliser. The timer shows we're 3 seconds into a track.
I used to run Winamp in its very-smallest minified size, tucked up at the top of the screen, using the default skin or one that made it even less-obtrusive.

It’s a mystery to me why anybody would (or still does) make media player software or skins for them that eat so much screen space, frequently looking ugly while they do so, only to look like a hypothetical hardware device that wouldn’t actually become commonplace until years after this kind of player design premiered!

Maybe other people listened to music on their computer differently from me: putting it front and centre, not using their computer for other tasks at the same time. And maybe for these people the choice of player and skin was an important personalisation feature; a fashion statement or a way to show off their personal identity. But me? I didn’t get it then, and I don’t get it now. I’m glad that this particular trend seems to have died and windows are, for the most part, rounded rectangles once more… even for music player software!

Footnotes

1 A walkman, minidisc player, or hard drive-based digital music device is always going to look somewhat square because of what’s inside.

2 I “only” had 1600 × 1200 (UXGA) pixels on the very biggest monitor I owned before I went widescreen, and I spent a lot of time on monitors at lower resolutions e.g. 1024 × 768 (XGA); on such screens, wasting space on a music player when you’re mostly going to be listening “in the background” while you do something else seemed frivolous.


Transferring to a new phone network, 2022 edition

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

…removing a SIM tray is harder than it looks when you don’t wear earrings. I had to search everywhere to find one of those little SIM tools…

Stuart writes a fun article about his experience of changing mobile network. It’s worth a read, and there’s only one “Dan pro tip” I’d add:

If you have a case on your mobile phone, tuck one of those SIM extractor tools into the case, behind your phone. It’s exactly where you need it to be, if you need one yourself (you probably need to remove the case to access the SIM tray anyway), but beyond that: it means you’re always carrying one for when a friend needs one. They’re also useful for pressing those tiny “factory reset” buttons you see sometimes.

A SIM extractor has been sneakily part of my “everyday carry” for about a decade and it’s proven its value time and time again.

Geohashing expedition 2022-12-05 51 -1

This checkin to geohash 2022-12-05 51 -1 reflects a geohashing expedition. See more of Dan's hash logs.

Location

Bridleway behind Cokethorpe School, West Oxfordshire, UK.

Participants

Expedition

When I saw this hashpoint appear I thought to myself: that’s eminently achievable! I hoped I might be able to slip away from work for a lunchtime cycle to claim it.

But the gods of technology didn’t approve of my plan and turned my workday into a catastrophe of the kind that only a computer can, and the chance of taking a long lunch evaporated quickly. But fortune dealt me a second hand when the weather held off into the evening, and I instead opted for a post-dinner huckle in the dark out to this hashpoint.

I set out around 18:30, South through Stanton Harcourt then North up the adorably-named Ducklington Road. It took some time to sight the somewhat-concealed bridleway around the hill at Cokethorpe School. And then, another challenge: navigating by OpenStreetMap, I missed my turning and went straight through a farmyard, and had to carry my bike over a fence at the other end. Turns out the map is wrong, and I later found a sign indicating the true course of the bridleway; I’ll get that corrected.

I abandoned my bike for the final 50 metres, trekking through the thick grass of an unmown meadow to the hashpoint and arriving around 19:00. No panoramic photo today – it’s too dark – but you get a silly grin.

Pleased with this fast expedition, I diverted on my route home to the Harcourt Arms pub for a pint of their surprisingly-delicious seasonal guest ale, Fairytale of Brew York, which genuinely tastes like stollen. There, I wrote up this expedition report, but I’ll have to get home before I can extract my GPSr‘s tracklog.

Tracklog

Map showing a journey from Sutton, near Stanton Harcourt, along the B4449 then up the A415 past Cokethorpe School, then along a bridleway and into a field (where a chequered flag icon appears), then back to the centre of Stanton Harcourt (where a beer icon appears) before returning to the start point in Sutton.

Download tracklog

Photos

Reply to The ethics of syndicating comments using WebMentions

In his blog post “The ethics of syndicating comments using WebMentions”, Terence Eden said:

I want to see what people are writing in public about my posts. I also want to direct people to the conversations which are happening elsewhere on the web. But people – quite rightly – might not want their content permanently stored by my site.

So I think I have a few options.

  1. Do nothing. My site; my rules. If you don’t want me to grab your hot takes, don’t post them in public. (Feels a bit rude, TBQH.)
  2. Be reactive. If someone asks me to remove their content, do so. (But, of course, how will they know I’ve made a copy?)
  3. Stop syndicating comments. (I don’t wanna!)
  4. Replace the verbatim comments with a link saying “Fred mentioned this article on Twitter”. (A bit of a disruptive experience for readers.)
  5. Use oEmbed to capture the user’s comment and dynamically load it from the 3rd party site. That would update automatically if the user changes their name or deleted the comment. (A massive faff to set up.)

Terence describes a problem that I’ve wrestled with myself. If somebody comments directly on my blog using the form at the bottom of a post, that’s a pretty strong indicator of them giving their consent for their comment to be published at the bottom of that post (at my discretion). If somebody publicly replies somewhere my post is syndicated, that’s less-obvious, but still pretty clear. If somebody merely mentions my post publicly, writing their own post and linking to mine… that’s a real fuzzy area.

I take a minimal approach: capturing their full content only if it’s short, and otherwise trying to extract a snippet that contains the bit that mentioned my content, and I think that works great. But Terence points out an important follow-up: what if the commenter deletes that content?

My approach so far has always been a reactive one – the second in Terence’s list – and I think it’s a morally-acceptable stance for a personal blogger. But I’m not sure it scales. I find myself asking: what if a news outlet did this, taking my self-published feedback to their story and publishing it on their site, even if I later amended, retracted, or deleted it on my own site? If somebody’s making money out of my content, that feels different: I’ve always been clear that what I write on my blog is permissively-licensed, but that permissiveness is based on the prohibition of commercial use of my content.

Perhaps down the line this can be solved technologically: something machine-readable akin to the <link rel="license" ...> tag could state an author’s preference for how their content is syndicated by third parties they’ve mentioned, answering questions like:

  • Can you quote me, or just link to me? Who do these rules apply to? (Should we be attaching metadata to individual links?)
  • Should you inform me that you’ve done so, and if so: how (WebMention, etc.)?
  • If you (or your site) observe that my content has disappeared or changed for an extended time, should that be taken as revocation of consent to syndicate it?

Right now, the relevant technologies are not well-established enough to even begin this kind of work, but if a modern interconnected federated web of personal websites takes off, it’s the kind of question we might one day have to answer.

For now my gut feeling is that option #2 (reactive moderation of syndicated comments) is ethically-sufficient for personal websites. But I’ll be watching the feedback Terence (who probably gets many more readers than I) receives in case my gut doesn’t represent the majority!

Announcers and Automation

Podcast Version

This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.

Nowadays if you’re on a railway station and hear an announcement, it’s usually a computer stitching together samples1. But back in the day, there used to be a human with a Tannoy microphone sitting in the back office, telling you about the platform alterations and destinations.

I had a friend who did it as a summer job, once. For years afterwards, he had a party trick that I always quite enjoyed: you’d say the name of a terminus station on a direct line from Preston, e.g. Edinburgh Waverley, and he’d respond in his announcer-voice: “calling at Lancaster, Oxenholme the Lake District, Penrith, Carlisle, Lockerbie, Haymarket, and Edinburgh Waverley”, listing all of the stops on that route. It was a quirky, beautiful, and unusual talent. Amazingly, when he came to re-apply for his job the next summer he didn’t get it, which I always thought was a shame because he clearly deserved it: he could do the job blindfold!

There was a strange transitional period during which we had machines to do these announcements, but they weren’t that bright. Years later I found myself on Haymarket station waiting for the next train after mine had been cancelled, when a robot voice came on to announce a platform alteration: the train to Glasgow would now be departing from platform 2, rather than platform 1. A crowd of people stood up and shuffled their way over the footbridge to the opposite side of the tracks. A minute or so later, a human announcer apologised for the inconvenience but explained that the train would be leaving from platform 1, and to disregard the previous announcement. Between then and the train’s arrival the computer tried twice more to send everybody to the wrong platform, leading to a back-and-forth argument between the machine and the human somewhat reminiscent of the white zone/red zone scene from Airplane! It was funny perhaps only because I wasn’t among the people whose train was in superposition.

Clearly even by then we’d reached the point where the machine was well-established and it was easier to openly argue with it than to dig out the manual and work out how to turn it off. Nowadays it’s probably even more so, but hopefully they’re less error-prone.

The "Mercado de Abasto" (central wholesale fruit and vegetable market) of Rosario, Argentina, 1931. Horses with carts work alongide automobiles and an omnibus.

When people talk about technological unemployment, they focus on the big changes, like how a tipping point with self-driving vehicles might one day revolutionise the haulage industry… along with the social upheaval that comes along with forcing a career change on millions of drivers.

But in the real world, automation and technological change comes in salami slices. Horses and carts were seen alongside the automobile for decades. And you still find stations with human announcers. Even the most radically-disruptive developments don’t revolutionise the world overnight. Change is inevitable, but with preparation, we can be ready for it.

Footnotes

1 Like ScotRail’s set, voiced by Alison McKay, which computers can even remix for you over a low-fi hiphop beat if you like.

Note #20856

Things The Other Child Did Wrong That Led To The Fight that I’ve heard so far today:

  • Clapped too loudly
  • Sang too loudly
  • Sang too quietly
  • Sang the wrong words
  • Put their feet too close to the dog
  • Ate the last grape
  • Ate the wrong grape (!?)
  • Finished brushing their teeth first
  • Clapped too loudly, again
  • Called somebody “buttocks”
  • Said somebody had buttocks
  • Expressed interest in going a different route to school
  • Put shoes on in wrong order

The Far Side in FreshRSS

A few years ago, I wanted to subscribe to The Far Side‘s “Daily Dose” via my RSS reader. The Far Side doesn’t have an RSS feed, so I implemented a proxy/middleware to bridge the two.

Browser debugger running document.evaluate('//li[@class="blog__post-preview"]', document).iterateNext() on Beverley's weblog and getting the first blog entry.
If you’re looking for a more-general instruction on using XPath scraping in FreshRSS, this isn’t it.
The release of version 1.20.0 of my favourite RSS reader FreshRSS provided a new mechanism for subscribing to content from sites that didn’t provide feeds: XPath scraping. I demonstrated the use of this to subscribe to my friend Beverley‘s blog, but this week I figured it was time to have a go at retiring my middleware and subscribing directly to The Far Side from FreshRSS.

It turns out that FreshRSS’s XPath scraping is almost enough to achieve exactly what I want. The big problem is that the image server on The Far Side website tries to prevent hotlinking by checking the Referer: header on requests, so we need a proxy to spoof that. I threw together a quick PHP program to act as a proxy (if you don’t have one, you’ll have to click through to read each comic; there’s a sketch of the idea after the configuration list below), then configured my FreshRSS feed as follows:

FreshRSS "HTML + XPath" configuration page, configured as described below.

  • Feed URL: https://www.thefarside.com/
    The “Daily Dose” gets published to The Far Side‘s homepage each day.
  • XPath for finding new items: //div[@class="card tfs-comic js-comic"]
    Finds each comic on the page. This is probably a little over-specific and brittle; I should probably switch to using the contains function at some point. I subsequently have to use parent:: and ancestor:: selectors which is usually a sign that your screen-scraping is suboptimal, but in this case it’s necessary because it’s only at this deep level that we start seeing really specific classes.
  • Item title: concat("Far Side #", parent::div/@data-id)
    The comics don’t have titles (“The one with the cow”?), but these seem to have unique IDs in the data-id attribute of the parent <div>, so I’m using those as a reference.
  • Item content: descendant::div[@class="card-body"]
    Within each item, the <div class="card-body"> contains the comic and its text. The comic itself can’t be loaded this way for two reasons: (1) the <img src="..."> just points to a placeholder (the site uses JavaScript-powered lazy-loading, ugh – the actual source is in the data-src attribute), and (2) as mentioned above, there’s anti-hotlink protection we need to work around.
  • Item link: descendant::input[@data-copy-item]/@value
    Each comic does have a unique link which you can access by clicking the “share” button under it. This makes a hidden text <input> appear, which we can identify by the presence of the data-copy-item attribute. The contents of this textbox is the sharing URL for the comic.
  • Item thumbnail: concat("https://example.com/referer-faker.php?pw=YOUR-SECRET-PASSWORD-GOES-HERE&referer=https://www.thefarside.com/&url=", descendant::div[@class="tfs-comic__image"]/img/@data-src)
    Here’s where I hook into my special proxy server, which spoofs the Referer: header to work around the anti-hotlinking code. If you wanted you might be able to come up with an alternative solution using a custom JavaScript loaded into your FreshRSS instance (there’s a plugin for that!), perhaps to load an iframe of the sharing URL? Or you can host a copy of my proxy server yourself (you can’t use mine, it’s got a password and that password isn’t YOUR-SECRET-PASSWORD-GOES-HERE!)
  • Item date: ancestor::div[@class="tfs-page__full tfs-page__full--md"]/descendant::h3
    There’s nothing associating each comic with the date it appeared in the Daily Dose, so we have to ascend up to the top level of the page to find the date from the heading.
  • Item unique ID: parent::div/@data-id
    Giving FreshRSS a unique ID can help it stop showing duplicates. We use the unique ID we discovered earlier; this way, if the Daily Dose does a re-run of something it already did since I subscribed, I won’t be shown it again. Omit this if you want to see reruns.
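
And here’s the promised sketch of the Referer-spoofing proxy. My real one is a quick PHP program, but the idea fits in a few lines of anything; this rough Python equivalent mirrors the pw and referer parameters you can see in the thumbnail URL above (everything else is illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from urllib.request import Request, urlopen

PASSWORD = "YOUR-SECRET-PASSWORD-GOES-HERE"  # stops strangers leeching via your proxy

class RefererFaker(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        if params.get("pw", [""])[0] != PASSWORD:
            self.send_error(403)
            return
        url = params["url"][0]
        referer = params.get("referer", [""])[0]
        # Fetch the image while claiming to have come from the comic's own site:
        upstream = urlopen(Request(url, headers={"Referer": referer}))
        self.send_response(200)
        self.send_header("Content-Type", upstream.headers.get("Content-Type", "image/jpeg"))
        self.end_headers()
        self.wfile.write(upstream.read())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RefererFaker).serve_forever()
```
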
Far Side comic #12326, from 23 November 2022, shown in FreshRSS. The comic shows two bulls dressed in trenchcoats and hats browsing a china shop; one staff member says to the other "I got a bad feeling about this, Harriet."
Hurrah; once again I can laugh at repeats of Gary Larson’s best work alongside my other morning feeds.

There’s a moral to this story: when you make your website deliberately hard to consume, fewer people will access it in the way you want! The Far Side‘s website is actively hostile to users (JavaScript lazy-loading, anti-right click scripts, hotlink protection, incorrect MIME types, no feeds etc.), and an inevitable consequence of that is that people like me will find and share workarounds to that hostility.

If you’re ad-supported or collect webstats and want to keep traffic “on your site” on this side of 2004, you should make it as easy as possible for people to subscribe to content. Consider The Oatmeal or Oglaf, for example, which offer RSS feeds that include only a partial thumbnail of each comic and a link through to the full thing. I don’t feel the need to screen-scrape those sites because they’ve given me a subscription option that works, and I routinely click-through to both of them to enjoy their latest content!

Conversely, the Far Side‘s aggressive anti-subscription technology ultimately means that there are fewer actual visitors to their website… because folks like me work to circumvent them.

And now you know how I did so.

Update: want the new content that’s being published to The Far Side in FreshRSS, too? I’ve got a recipe for that!


When Twitter Loses, WordPress Wins

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

You know who’s having a killer month? Automattic. Everyone who’s leaving Twitter seems to fall in at least one of these three camps:

  1. They have gone back to the blogosphere (using WordPress, or WordPress.com).
  2. They have gone to Tumblr
  3. They have gone to the fediverse (of which a fairly large percentage are WordPress installs)

In all of these cases, Automattic wins.

Some smart observations here by Alex. A fourth point worth noting is that Matt has openly suggested that former Twitter engineers might like to come join us in Automattic and help make the web a better place. We’ve changed our careers pages a little lately but we’re still the same awesome company!

Alex went on to say:

I’ll be downright shocked if Matt isn’t working very hard to get Tumblr on the fediverse ASAP. He has so much to gain in supporting this movement, and very little to lose.

That’s definitely on his mind too, which I can safely say without leaking anything because he’s hinted at it himself. Exciting times.

Spring ’83 Came And Went

Just in time for Robin Sloan to give up on Spring ’83, earlier this month I finally got around to launching STS-6 (named for the first mission of the Space Shuttle Challenger in Spring 1983), my experimental Spring ’83 server. It’s been a busy year; I had other things to do. But you might have guessed that something like this had been up my sleeve when I open-sourced a key generator for the protocol the other day.

If you’ve not played with Spring ’83, this post isn’t going to make much sense to you. Sorry.

Introducing STS-6

Screenshots showing STS-6, listing the most-recent blog posts on DanQ.me, in two different display styles.
My output looks distinctly different in The Kingswood Palimpsest than in The Oakland Follower-Sentinel (two key reference Spring ’83 clients), and that’s fine and expected.

My server is, as far as I can tell, very different from any others in a few key ways:

  • It does not allow third-party publishing at all. Some might argue that this undermines the aim of the exercise, but I disagree. My IndieWeb inclinations lead me to favour “self-hosted” content, shared from its owners’ domain. Also: the specification clearly states that a server must implement a denylist… I guess my denylist simply includes all keys that are not specifically permitted.
  • It’s geared towards dynamic content. My primary board self-publishes whenever I produce a new blog post, listing the most recent blog posts published. I have another, half-implemented, which shows a summary of the most-recent post, and another which would simply use a WordPress page as its basis – yes, that would be content management, but published over Spring ’83.
  • It provides helpers to streamline content production. It supports internal references to other boards you control using the format {{board:123}}, which are automatically converted to addresses referencing the public key of the “current” keypair for that board. This separates the concept of a board and its content template from that board’s keypairs, making it easier to link to a board. To put it another way, STS-6 links are self-healing on the server-side (for local boards).
  • It helps automate content-fitting. Spring ’83 strictly requires a maximum board size of 2,217 bytes. STS-6 can be configured to fit a flexible amount of dynamic content within a template area while respecting that limit. For my posts list board, the number of posts shown is moderated by the size of the resulting board: STS-6 adds more and more links to the board until it’s too big, and then removes one (see the sketch after this list)!
  • It provides “hands-off” key management features. You can pregenerate a list of keys with different validity periods and the server will automatically cycle through them as necessary, implementing and retroactively-modifying <link rel="next"> connections to keep them current.
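
As a rough illustration of that fitting approach, here’s a sketch of the idea (this isn’t STS-6’s actual code; the MAX_BOARD_BYTES limit comes straight from the spec, everything else is invented):

```python
MAX_BOARD_BYTES = 2217  # hard limit from the Spring '83 specification

def fit_board(template: str, links: list[str]) -> str:
    """Add links to a board template until it bursts, then back off one.

    Assumes `template` contains a {content} placeholder for the dynamic
    region, and `links` are ready-rendered HTML snippets, newest first.
    """
    board = template.format(content="")
    for count in range(1, len(links) + 1):
        candidate = template.format(content="\n".join(links[:count]))
        if len(candidate.encode("utf-8")) > MAX_BOARD_BYTES:
            break  # one link too many: keep the previous candidate
        board = candidate
    return board
```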

I’m sure that there are those who would see this as automating something that was beautiful because it was handcrafted; I don’t know whether or not I agree, but had Spring ’83 taken off in a bigger way, it would always only have been a matter of time before somebody tried my approach.

From a design perspective, I enjoyed optimising an SVG image of my header so it could meaningfully fit into the board. It’s pretty, and it’s tolerably lightweight.

If you want to see my server in action, patch this into your favourite Spring ’83 client: https://s83.danq.dev/10c3ff2e8336307b0ac7673b34737b242b80e8aa63ce4ccba182469ea83e0623

A dead end?

Without Robin’s active participation, I feel that Spring ’83 is probably coming to a dead end. It’s been a lot of fun to play with and I’d love to see what ideas the experience of it goes on to inspire next, but in its current form it’s one of those things that’s an interesting toy, but not something that’ll make serious waves.

In his last lab essay Robin already identified many of the key issues with the system (too complicated, no interpersonal-mentions, the challenge of keys-as-identifiers, etc.) and while they’re all solvable without breaking the underlying mechanisms (mentions might be handled by Webmention, perhaps, etc.), I understand the urge to take what was learned from this experiment and use it to help inform the decisions of the next one. Just as John Postel’s Quote of the Day protocol doesn’t see much use any more (although maybe my finger server could support QotD?) but went on to inspire the direction of many subsequent “call-and-response” protocols, including HTTP, it’s okay if Spring ’83 disappears into obscurity, so long as we can learn what it did well and build upon that.

Meanwhile: if you’re looking for a hot new “like the web but lighter” protocol, you should probably check out Gemini. (Incidentally, you can find me at gemini://danq.me, but that’s something I’ll write about another day…)


Oxford Geek Nights #52

On Wednesday this week, three years and two months after Oxford Geek Nights #51, came Oxford Geek Night #52. Originally scheduled for 15 April 2020 and then… postponed slightly because of the pandemic, its reappearance was an epic moment that I’m glad to have been a part of.

Matt Westcott stands to the side of a stage, drinking beer, while centrestage a cross-shaped "pharmacy sign" projects an animation of an ambulance rocketing into a starfield.
A particular highlight of the night was witnessing “Gasman” Matt Westcott show off his epic demoscene contribution Pharmageddon, which is presented via a “pharmacy sign”. Here’s a video, if you’re interested.

Ben Foxall also put in a sterling performance; hearing him talk – as usual – made me say “wow, I didn’t know you could do that with a web browser”. And there was more to learn, too: Jake Howard showed us how robots see, Steve Buckley inspired us to think about how technology can make our homes more energy-smart (this is really cool and sent me down a rabbithole of reading!), and Joe Wass showed adorable pictures of his kid exploring the user interface of his lockdown electronics project.

Digital scoreboard showing Dan Q in the lead with 5,561, Nick in second place with 5,442, and RaidIndigo in third with 5,398.
Oh, and there was a quiz competition too, and guess who came out on top after an incredibly tight race.

But mostly I just loved the chance to hang out with geeks again; chat to folks, make connections, and enjoy that special Oxford Geek Nights atmosphere. Also great to meet somebody from Perspectum, who look like they’d be great to work for and – after hearing about them – I had in mind somebody to suggest for a job with them… but it looks like the company isn’t looking for anybody with their particular skills on this side of the pond. Still, one to watch.

Dan, outdoors on a grassy path, wearing a grey hoodie. On his head is a "trucker cap" emblazoned with the word "GEEK" and, in smaller writing "#OGN52".
My prize for winning the competition was an extremely-limited-edition cap which I love so much I’ve barely taken it off since.

Huge thanks are due to Torchbox, Perspectum and everybody in attendance for making this magical night possible!

Oh, and for anybody who’s interested, I’ve proposed to be a speaker at the next Oxford Geek Nights, which sounds like it’ll be towards Spring 2023. My title is “Yesterday’s Internet, Today!” which – spoilers! – might have something to do with the kind of technology I’ve been playing with recently, among other things. Hope to see you there!
