Note #27196

Today, for the first time ever, I simultaneously published a piece of content across five different media: a Weblog post, a video essay, a podcast episode, a Gemlog post, and a Spartanlog post.

Must be about something important, right?

Nope, it’s a meandering journey to coming up with a design for a £5 coin that will never exist. Delightfully pointless. Being the Internet I want to see in the world.

🔗 https://danq.me/the-beer-token

The rise of Whatever

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A freaking excellent longread by Eevee (Evelyn Woods), lamenting the direction of popular technological progress and general enshittification of creator culture. It’s ultimately uplifting, I feel, but it’s full of bitterness until it gets there. I’ve pulled out a couple of highlights to try to get you interested, but you should just go and read the entire thing:

And so the entire Web sort of congealed around a tiny handful of gigantic platforms that everyone on the fucking planet is on at once. Sometimes there is some sort of partitioning, like Reddit. Sometimes there is not, like Twitter.

That’s… fine, I guess. Things centralize. It happens. You don’t get tubgirl spam raids so much any more, at least.

But the centralization poses a problem. See, the Web is free to look at (by default), but costs money to host. There are free hosts, yes, but those are for static things getting like a thousand visitors a day, not interactive platforms serving a hundred million. That starts to cost a bit. Picture logs being shoveled into a steam engine’s firebox, except it’s bundles of cash being shoveled into… the… uh… website hole.

I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”. I don’t want to know how much code wasn’t actually written by anyone. I don’t want to hear how many of my colleagues think Whatever is equivalent to their own output.

I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”

This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?

Nobody seems to appreciate that if you can make a computer do something entirely on its own, then that becomes the baseline.

Do things. Make things. And then put them on your website so I can see them.

Clearly this all ties in to stuff that I’ve been thinking, lately. Expect more posts and reposts in this vein, I guess?

Internet Services^H Provider

Do you remember when your domestic ISP – Internet Service Provider – used to be an Internet Services Provider? They were only sometimes actually called that, but what I mean is: when ISPs provided more than one Internet service? Not just connectivity, but… more.

Web page listing 'Standard Services' for dial-up and leased line connections, including: user homepages, FTP, email, usenet, IRC, email-to-fax, and fax-to-email services.
One of the first ISPs I subscribed to had a “standard services” list longer than most modern ISPs’ complete services lists!

ISPs twenty years ago

It used to just be expected that your ISP would provide you with not only an Internet connection, but also some or all of:

  • A handful of email inboxes, plus SMTP relaying
  • Shared or private FTP storage1
  • Hosting for small Websites/homepages
  • Usenet access
  • Email-to-fax and/or fax-to-email services
  • Caching forward proxies (this was so-commonplace that it isn’t even listed in the “standard services” screenshot above)
  • One or more local nodes to IRC networks
  • Sometimes, licenses for useful Internet software
  • For leased-line (technically “broadband”, by the original definition) connections: a static IP address or IP pool
Stylish (for circa 2000) webpage for HoTMetaL Pro 6.0, advertising its 'unrivaled [sic] editing, site management and publishing tools'.
I don’t remember which of my early ISPs gave me a free license for HoTMetaL Pro, but I was very appreciative of it at the time.

ISPs today

The ISP I hinted at above doesn’t exist any more, after being bought out and bought out and bought out by a series of owners. But I checked the Website of the current owner to see what their “standard services” are, and discovered that they are:

  • A pretty-shit router2
  • Optional 4G backup connectivity (for an extra fee)
  • A voucher for 3 months access to a streaming service3

The connection is faster, which is something, but we’re still talking about the “baseline” for home Internet access then-versus-now. Which feels a bit galling, considering that (a) you’re clearly, objectively, getting fewer services, and (b) you’re paying more for them – a cheap basic home Internet subscription today, after accounting for inflation, seems to cost about 25% more than it did in 2000.4

Are we getting a bum deal?

An external 33.6kbps serial port dial-up modem.
Not every BBS or ISP would ever come to support the blazing speeds of a 33.6kbps modem… but when you heard the distinctive scream of its negotiation at close to the Shannon Limit of the piece of copper dangling outside your house… it felt like you were living in the future.

Would you even want those services?

Some of them were great conveniences at the time, but perhaps not-so-much now: a caching server, FTP site, or IRC node in the building right at the end of my dial-up connection? That’s a speed boost that was welcome over a slow connection to an unencrypted service, but is redundant and ineffectual today. And if you’re still using a fax-to-email service for any purpose, then I think you have bigger problems than your ISP’s feature list!

Some of them were things I wouldn’t have recommended that you depend on, even then: tying your email and Web hosting to your connectivity provider traded one set of problems for another. A particular joy of an email address, as opposed to a postal address (or, back in the day, a phone number), is that it isn’t tied to where you live. You can move to a different town or even to a different country and still have the same email address, and that’s a great thing! But it’s not something you can guarantee if your email address is tied to the company you dial-up to from the family computer at home. A similar issue applies to Web hosting, although for a true traditional “personal home page” – a little information about yourself, and your bookmarks – it would be fine.

But some of them were things that were actually useful and I miss: honestly, it’s a pain to have to use a third-party service for newsgroup access, which used to be so-commonplace that you’d turn your nose up at an ISP that didn’t offer it as standard. A static IP being non-standard on fixed connections is a sad reminder that the ‘net continues to become less-participatory, more-centralised, and just generally more watered-down and shit: instead of your connection making you “part of” the Internet, nowadays it lets you “connect to” the Internet, which is a very different experience.5

But the Web hosting, for example, wasn’t useless. In fact, it served an important purpose in lowering the barrier to entry for people to publish their first homepage! The magical experience of being able to just FTP some files into a directory and have them be on the Web, as just a standard part of the “package” you bought-into, was a gateway to a participatory Web that’s nowadays sadly lacking.

'Setting Up your Web Site, Step by Step Instructions' page, describing use of an FTP client to upload web pages.
A page like this used to be absolutely standard on the Website6 of any ISP worth its salt.

Yeah, sure, you can set up a static site (unencumbered by any opinionated stack) for free on Github Pages, Neocities, or wherever, but the barrier to entry has been raised by just enough that, doubtless, there are literally millions of people who would have taken that first step… but didn’t.

And that makes me sad.

Footnotes

1 ISP-provided shared FTP servers would also frequently provide locally-available copies of Internet software essentials for a variety of platforms. This wasn’t just a time-saver – downloading Netscape Navigator from your ISP rather than from half-way across the world was much faster! – it was also a way to discover new software, curated by people like you: a smidgen of the feel of a well-managed BBS, from the comfort of your local ISP!

2 ISP-provided routers are, in my experience, pretty crap 50% of the time… although they’ve been improving over the last decade as consumers have started demanding that their WiFi works well, rather than just works.

3 These streaming service vouchers are probably just a loss-leader for the streaming service, who know that you’ll likely renew at full price afterwards.

4 Okay, in 2000 you’d also have had to pay per-minute for the dial-up call… but that money went to BT (or perhaps Mercury or KCOM), not to your ISP. But my point still stands: in a world where technology has in general gotten cheaper and backhaul capacity has become underutilised, why has the basic domestic Internet connection gotten less feature-rich and more-expensive? And often with worse customer service, to boot.

5 The problem of your connection not making you “part of” the Internet is multiplied if you suffer behind carrier-grade NAT, of course. But it feels like if we actually cared enough to commit to rolling out IPv6 everywhere we could obviate the need for that particular turd entirely. And yet… I’ll bet that the ISPs who currently use it will continue to do so, even as they offer IPv6 addresses as-standard, because they buy into their own idea that it’s what their customers want.

6 I think we can all be glad that we no longer write “Web Site” as two separate words, but you’ll note that I still usually correctly capitalise Web (it’s a proper noun: it’s the Web, innit!).


Part of the Internet, or connecting to the Internet?

Some time in the last 25 years, ISPs stopped saying they made you “part of” the Internet, just that they’d help you “connect to” the Internet.

Most people don’t need a static IP, sure. But when ISPs stopped offering FTP and WWW hosting as a standard feature (shit though it often was), they became part of the tragic process by which the Internet became centralised, and commoditised, and corporate, and just generally watered-down.

The amount of effort to “put something online” didn’t increase by a lot, but it increased by enough that millions probably missed-out on the opportunity to create their first homepage.

You MUST listen to RFC 2119

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

With thanks to Ruth for sharing this with me:

RFC 2119 establishes language around requirement levels. Terms like “MUST”, “MUST NOT”, “SHOULD”, and “SHOULD NOT” are helpful when coordinating with engineers. I reference it a lot for work, as I create a lot of accessible component specifications.

Because of this familiarity—and because I’m an ass—I fired back in Discord:

I want to hire a voice actor to read 2119 in the most over the top, passive-aggressive way possible
wait, this is an achievable goal oh no

It turns out you can just pay people to do things.

I found a voice actor and hired them with the task of “Reading this very dry technical document in the most over-the-top sarcastic, passive-aggressive, condescending way possible. Like, if you think it’s too much, take that feeling, ignore it, and crank things up one more notch.”

RFC 2119 is one of the few RFCs I can identify by number alone, too. That and RFCs 1945 and 1866, for some reason, and RFC 2822 (and I guess, by proxy, 822) because I’ve had to implement its shitty date format more times than I’d like to count.

But anyway: if you’ve ever wanted to hear a (sarcastic, passive aggressive) dramatic reading of RFC 2119, Eric – and the actor he found – have got you covered!

The Difference Between Downloading and Streaming

What’s the difference between “streaming” and “downloading” video, audio, or some other kind of linear media?1

Screenshot from Vimeo's settings, showing the 'What can people do with your videos?' section. The 'Download them' checkbox is highlighted and a question mark has been scrawled alongside it.
Many platforms make a firm distinction between streaming and downloading, implying that they’re very different. But they’re not.

They’re basically the same thing

Despite what various platforms would have you believe, there’s no significant technical difference between streaming and downloading.

Suppose you’re choosing whether to download or stream a video2. In both cases3 (there’s a concrete example after the list):

  • The server gets frames of video from a source (file, livestream, etc.)
  • The server sends those frames to your device
  • Your device stores them while it does something with them
Want to keep a copy of this animation? You don’t have to trick your computer into retaining it as it streams because I’ve open-sourced it, and the code used to produce it.
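
If you’d like to see that sameness for yourself, here’s a minimal sketch using curl and a player like mpv (the URL is made up): the server can’t tell these two requests apart; the only difference is what your device does with the bytes it receives.

# "Downloading": keep every chunk the server sends, as a file
curl https://example.com/video.mp4 -o video.mp4

# "Streaming": pipe the same chunks straight into a player, which
# shows each one and then throws it away
curl https://example.com/video.mp4 | mpv -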

So what’s the difference?

The fundamental difference between streaming and downloading is what your device does with those frames of video:

Does it show them to you once and then throw them away? Or does it re-assemble them all back into a video file and save it into storage?

Screenshot from YouTube music video of Rick Astley's "Never Gonna Give You Up", 34 seconds in, showing Rick singing outdoors at night. the red YouTube progress bar goes a little over half way through the darker grey buffering indicator along the timeline bar.
When you’re streaming on YouTube, the video player running on your computer retains a buffer of frames ahead of and behind your current position, so you can skip around easily: the darker grey part of the timeline shows which parts of the video are stored on – that is, downloaded to – your computer.

Buffering is when your streaming player gets some number of frames “ahead” of where you’re watching, to give you some protection against connection issues. If your WiFi wobbles for a moment, the buffer protects you from the video stopping completely for a few seconds.

But for buffering to work, your computer has to retain bits of the video. So in a very real sense, all streaming is downloading! The buffer is the part of the stream that’s downloaded onto your computer right now. The question is: what happens to it next?

All streaming is downloading

So that’s the bottom line: if your computer deletes the frames of video it was storing in the buffer, we call that streaming. If it retains them in a file, we call that downloading.

That definition introduces a philosophical problem. Remember that Vimeo checkbox that lets a creator decide whether people can (i.e. are allowed to) download their videos? Isn’t that somewhat meaningless if all streaming is downloading?

Because the difference between streaming and downloading is just whether the device belonging to the person watching the video deletes the media when they’re done. And in virtually all cases, that’s done on the honour system.

Comic showing a conversation between Firefox and Netflix, as represented by their respective logos. Firefox says 'Hey, send me a copy of Despicable Me 4?'. Netflix replies: 'Promise you'll delete it when you're done watching it?'. Firefox responds: 'Umm... sure!'
This kind of conversation happens, over the HTTP protocol, all the time. Probably most of the time the browser is telling the truth, but there’s no way to know for certain.

When your favourite streaming platform says that it’s only possible to stream, and not download, their media… or when they restrict “downloading” as an option to higher-cost paid plans… they’re relying on the assumption that the user’s device can be trusted to delete the media when the user’s done watching it.

But a user who owns their own device, their own network, their own screen or speakers has many, many opportunities to not fulfil the promise of deleting media after they’ve consumed it: to retain a “downloaded” copy for their own enjoyment, including:

  • Intercepting the media as it passes through their network on the way to its destination device
  • Using client software that’s been configured to stream-and-save, rather than stream-and-delete, the content
  • Modifying “secure” software (e.g. an official app) so that it retains a saved copy rather than deleting it
  • Capturing the stream buffer as it’s cached in device memory or on the device’s hard disk
  • Outputting the resulting media to a different device, e.g. using an HDMI capture device, and saving it there
  • Exploiting the “analogue4 hole”5: using a camera, microphone, etc. to make a copy of what comes out of the screen/speakers6

Okay, so I oversimplified (before you say “well, actually…”)

It’s not entirely true to say that streaming and downloading are identical, even with the caveat of “…from the server’s perspective”. There are three big exceptions worth thinking about:

Exception #1: downloads can come in any order

When you stream some linear media, you expect the server to send the media in strict chronological order. Being able to start watching before the whole file has downloaded is a big part of what makes streaming appealing, to the end-user. This means that media intended for streaming tends to be stored in a way that facilitates that kind of delivery. For example:

  • Media designed for streaming will often be stored in linear chronological order in the file, which impacts what kinds of compression are available.
  • Media designed for streaming will generally use formats that put file metadata at the start of the file, so that it gets delivered first (see the ffmpeg example at the end of this section).
  • Video designed for streaming will often have frequent keyframes so that a client that starts “in the middle” can decode the buffer without downloading too much data.

No such limitation exists for files intended for downloading. If you’re not planning on watching a video until it’s completely downloaded, the order in which the chunks arrive is arbitrary!

But these limitations make the set of “files suitable for streaming” a subset of the set of “files suitable for downloading”. They only make it challenging or impossible to stream some media intended for downloading… they do nothing to prevent downloading of media intended for streaming.
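
To make that metadata point concrete: ffmpeg, for example, can rewrite an MP4 so that its metadata (the “moov atom”) sits at the start of the file, making it better-suited to streaming, without re-encoding anything. A sketch, with placeholder filenames:

# Copy the streams as-is, but relocate the metadata to the front
# so playback can begin before the whole file has arrived:
ffmpeg -i downloaded.mp4 -c copy -movflags +faststart streamable.mp4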

Exception #2: streamed media is more-likely to be transcoded

A server that’s streaming media to a client engages in a sort-of dance with it: the client keeps the server updated on which “part” of the media it cares about, so the server can jump ahead, throttle back, pause sending, etc., and the client’s buffer can be kept filled to the optimal level.

This dance also allows for a dynamic change in quality levels. You’ve probably seen this happen: you’re watching a video on YouTube and suddenly the quality “jumps” to something more (or less) like a pile of LEGO bricks7. That’s the result of your device realising that the rate at which it’s receiving data isn’t well-matched to the connection speed, and asking the server to send a different quality level8.

The server can – and some do! – pre-generate and store all of the different formats, but some servers will convert files (and particularly livestreams) on-the-fly, introducing a few seconds’ delay in order to deliver the format that’s best-suited to the recipient9. That’s not necessary for downloads, where the user will often want the highest-quality version of the media (and if they don’t, they’ll select the quality they want at the outset, before the download begins).
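
If you’re curious what that negotiation looks like in practice, here’s a sketch of one common approach – an HLS “master playlist” (the bitrates, resolutions, and paths below are invented): the server advertises several variants of the same media, and the player requests whichever best suits its current connection.

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8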

Exception #3: streamed media is more-likely to be encumbered with DRM

And then, of course, there’s DRM.

As streaming digital media has become the default way for many people to consume video and audio content, rights holders have engaged in a fundamentally-doomed10 arms race of implementing copy-protection strategies to attempt to prevent end-users from retaining usable downloaded copies of streamed media.

Take HDCP, for example, which e.g. Netflix use for their 4K streams. To watch these streams, your device has to be running some decryption code that only works if it can trace a path to the screen it’ll be outputting to in which every link also supports HDCP, and both your device and that screen promise that they’re definitely only going to show the video and not make it possible to save it. And then that promise is enforced by Digital Content Protection LLC only granting a decryption key – and a license to use it – to manufacturers.11

Fingers hold a small box, about half the size of a deck of cards, labelled "ezcoo 4K HDMI 2.0 Splitter 1x2". On the side facing the camera can be seen a "HDMI In" port and an "EDID" switch that can be set to 4K7.1, 4K5.1, or COPY1.
The real hackers do stuff with software, but people who just want their screens to work properly in spite of HDCP can just buy boxes like this (which I bought for a couple of quid on eBay). Obviously you could use something like this and a capture card to allow you to download content that was “protected” to ensure that you could only stream it, I suppose, too.

Anyway, the bottom line is that all streaming is, by definition, downloading, and the only significant difference between what people call “streaming” and “downloading” is that when “streaming” there’s an expectation that the recipient will delete, and not retain, a copy of the video. And that’s it.

Footnotes

1 This isn’t the question I expected to be answering. I made the animation in this post for use in a different article, but that one hasn’t come together yet, so I thought I’d write about the technical difference between streaming and downloading as an excuse to use it already, while it still feels fresh.

2 I’m using the example of a video, but this same principle applies to any linear media that you might stream: that could be a video on Netflix, a livestream on Twitch, a meeting in Zoom, a song in Spotify, or a radio show in iPlayer, for example: these are all examples of media streaming… and – as I argue – they’re therefore also all examples of media downloading because streaming and downloading are fundamentally the same thing.

3 There are a few simplifications in the first half of this post: I’ll tackle them later on. For the time being, when I say sweeping words like “every”, just imagine there’s a little footnote that says, “well, actually…”, which will save you from feeling like you have to say so in the comments.

4 Per my style guide, I’m using the British English spelling of “analogue”, rather than the American English “analog” which you’ll often find elsewhere on the Web when talking about the analog hole.

5 The rich history of exploiting the analogue hole spans everything from bootlegging a 1970s Led Zeppelin concert by smuggling recording equipment in inside a wheelchair (definitely, y’know, to help topple the USSR and not just to listen to at home while you get high) to “camming” by bribing your friendly local projectionist to let you set up a video camera at the back of the cinema for their test screening of the new blockbuster. Until some corporation tricks us into installing memory-erasing DRM chips into our brains (hey, there’s a dystopic sci-fi story idea in there somewhere!) the analogue hole will always be exploitable.

6 One might argue that recreating a piece of art from memory, after the fact, is a very-specific and unusual exploitation of the analogue hole: the one that allows us to remember (or “download”) information to our brains rather than letting it “stream” right through. There’s evidence to suggest that people pirated Shakespeare’s plays this way!

7 Of course, if you’re watching The LEGO Movie, what you’re seeing might already look like a pile of LEGO bricks.

8 There are other ways in which the client and server may negotiate, too: for example, what encoding formats are supported by your device.

9 My NAS does live transcoding when Jellyfin streams to devices on my network, and it’s magical!

10 There’s always the analogue hole, remember! Although in practice this isn’t even remotely necessary and most video media gets ripped some-other-way by clever pirate types even where it uses highly-sophisticated DRM strategies, and then ultimately it’s only legitimate users who end up suffering as a result of DRM’s burden. It’s almost as if it’s just, y’know, simply a bad idea in the first place, or something. Who knew?

11 Like all these technologies, HDCP was cracked almost immediately and every subsequent version that’s seen widespread rollout has similarly been broken by clever hacker types. Legitimate, paying users find themselves disadvantaged when their laptop won’t let them use their external monitor to watch a movie, while the bad guys make pirated copies that work fine on anything. I don’t think anybody wins, here.


Yr Wyddfa’s First Email

On Wednesday, Vodafone announced that they’d made the first ever satellite video call from a stock mobile phone in an area with no terrestrial signal. They used a mountain in Wales for their experiment.

It reminded me of an experiment of my own, way back in around 1999, which I probably should have made a bigger deal of. I believe that I was the first person to ever send an email from the top of Yr Wyddfa/Snowdon.

Nowadays, that’s an easy thing to do. You pull your phone out and send it. But back then, I needed to use a Psion 5mx palmtop, communicating over an infrared link using a custom driver (if you ever wondered why I know my AT-commands by heart… well, this isn’t exactly why, but it’s a better story than the truth) to a Nokia 7110 (fortunately it was cloudy enough to not interfere with the 9,600 baud IrDA connection while I positioned the devices atop the trig point), which engaged a GSM 2G connection, over which I was able to send an email to myself, cc:’d to a few friends.

It’s not an exciting story. It’s not even much of a claim to fame. But there you have it: I was (probably) the first person to send an email from the summit of Yr Wyddfa. (If you beat me to it, let me know!)

Recreational programming

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

the world needs more recreational programming.
like, was this the most optimal or elegant way to code this?

no, but it was the most fun to write.

Yes. This.

As Baz Luhrmann didn’t say, despite the implications of this video: code one thing every day that amuses you.

There is no greater way to protest the monetisation of the Web, the descent into surveillance capitalism, and the monoculture of centralised social media silos… than to create things just for the hell of it. Maybe that’s Kirby eating a blog post. Maybe that’s whatever slippy stuff Lu put out this week. Maybe it’s a podcast exclusively about random things that interest one person.

The pre-corporate Web was fun and weird. Nowadays, keeping the Internet fun and weird is relegated to a counterculture.

But making things (whether code, or writing, or videos, or whatever) “just because” is a critical part of that counterculture. It’s beautiful art flying in the face of rampant commercialism. The Web provides a platform where you can share what you create: it doesn’t have to be good, or original, or world-changing… there’s value in just creating and giving things away.

(And, of course, the weirder the better.)

Bad Names for Servers

Six or seven years ago our eldest child, then a preschooler, drew me a picture of the Internet1. I framed it and I keep it on the landing outside my bedroom – y’know, in case I get lost on the Internet and need a map:

Framed child-drawn picture showing multiple circles, connected by lines, each with a number 0 or 1 in it.
Lots of circles, all connected to one another, passing zeroes and ones around. Around this time she’d observed that I wrote my number zeroes “programmer-style” (crossed) and clearly tried to emulate this, too.

I found myself reminded of this piece of childhood art when she once again helped me with a network map, this weekend.

As I kick off my Automattic sabbatical I’m aiming to spend some of this and next month building a new server architecture for Three Rings. To share my plans, this weekend, I’d been drawing network diagrams showing my fellow volunteers what I was planning to implement. Later, our eldest swooped in and decided to “enhance” the picture with faces and names for each server:

Network diagram but with entities having faces and named Chungus, Atul, Summer, Gwen, Alice, Astrid, and Demmy.
I don’t think she intended it, but she’s made the primary application servers look the grumpiest. This might well fit with my experience of those servers, too.

I noted that she named the read-replica database server Demmy2, after our dog.

French Bulldog with her tongue sticking out.
You might have come across our dog before, if you followed me through Bleptember.

It’s a cute name for a server, but I don’t think I’m going to follow it. The last thing I want is for her to overhear me complaining about some possible future server problem and misinterpret what I’m saying. “Demmy is a bit slow; can you give her a kick,” could easily cause distress, not to mention “Demmy’s dying; can we spin up a replacement?”

I’ll stick to more-conventional server names for this new cluster, I think.

Footnotes

1 She spelled it “the Itnet”, but still: max props to her for thinking “what would he like a picture of… oh; he likes the Internet! I’ll draw him that!”

2 She also drew ears and a snout on the Demmy-server, in case the identity wasn’t clear to me!


Digital Dustbusting

tl;dr: I’m tidying up and consolidating my personal hosting; I’ve made a little progress, but I’ve got a way to go – fortunately I’ve got a sabbatical coming up at work!

At the weekend, I kicked-off what will doubtless be a multi-week process of gradually tidying and consolidating some of the disparate digital things I run, around the Internet.

I’ve a long-standing habit of having an idea (e.g. gamebook-making tool Twinebook, lockpicking puzzle game Break Into Us, my Cheating Hangman game, and even FreeDeedPoll.org.uk!), deploying it to one of several servers I run, and then finding it a huge headache when I inevitably need to upgrade or move said server because there’s such an insane diversity of different things that need testing!

Screenshot from Cheating Hangman: I guessed an 'E', but when I guessed an 'O' I was told that there was one (the computer was thinking of 'CLOSE'), but now there isn't because it's switched to a different word that ends with 'E'.
My “cheating hangman” game spun out from my analysis of the hardest words for an optimal player to guess, which was in turn inspired by the late Nick Berry’s examination of optimal strategy.

I can simplify, I figured. So I did.

And in doing so, I rediscovered several old projects I’d neglected or forgotten about. I wonder if anybody’s still using any of them?

Hosting I’ve tidied so far…

  • Cheating Hangman is now hosted by GitHub Pages.
  • DNDle, my Wordle-clone where you have to guess the Dungeons & Dragons 5e monster’s stat block, is now hosted by GitHub Pages. Also, I fixed an issue reported a month ago that meant that I was reporting Giant Scorpions as having a WIS of 19 instead of 9.
  • Abnib, which mostly reminds people of upcoming birthdays and serves as a dumping ground for any Abnib-related shit I produce, is now hosted by GitHub Pages.
  • RockMonkey.org.uk, which doesn’t really do much any more, is now hosted by GitHub Pages.
  • EGXchange, my implementation of a digital wallet for environmentally-friendly cryptocurrency EmmaGoldCoin, which I’ve written about before, is now hosted by GitHub Pages.
  • Sour Grapes, the single-page promo for a (remote) murder mystery party I hosted during a COVID lockdown, is now hosted by GitHub Pages.
  • A convenience-page for giving lost people directions to my house is now hosted by GitHub Pages.
  • Dan Q’s Things is now automatically built on a schedule and hosted by GitHub Pages.
  • Robin’s Improbable Blog, which spun out from 52 Reflect, wasn’t getting enough traffic to justify “proper” hosting so now it sits in a Docker container on my NAS.
  • My μlogger server, which records my location based on pings from my phone, has also moved to my NAS. This has broken Find Dan Q, but I’m not sure if I’ll continue with that in its current form anyway.
  • All of my various domain/subdomain redirects have been consolidated on, or are in the process of moving to, a tiny Linode/Akamai instance. It’s a super simple plain Nginx server that does virtually nothing except redirect people – this is where I’ll park the domains I register but haven’t found a use for yet, in future.
Screenshot showing EGXchange, saying "everybody has an EGX wallet, log in to yours now".
I was pretty proud of EGXchange.org, but I’ll be first to admit that it’s among the stupider of my throwaway domains.

It turns out GitHub Pages is a fine place to host simple, static websites that were open-source already. I’ve been working on improving my understanding of GitHub Actions anyway as part of what I’ve been doing while wearing my work, volunteering, and personal hats, so switching some static build processes like DNDle’s to GitHub Actions was a useful exercise.

Stuff I’m still to tidy…

There’s still a few things I need to tidy up to bring my personal hosting situation under control:

DanQ.me

Screenshot showing this blog post.
You’re looking at it. But later this year, you might be looking at it… elsewhere?

This is the big one, because it’s not just a WordPress blog: it’s also a Gemini, Spartan, and Gopher server (thanks CapsulePress!), a Finger server, a general-purpose host to a stack of complex stuff only some of which is powered by Bloq (my WordPress/PHP integrations): e.g. code to generate the maps that appear on my geopositioned posts, code to integrate with the Fediverse, a whole stack of configuration to make my caching work the way I want, etc.

FreeDeedPoll.org.uk

Right now this is a Ruby/Sinatra application, but I’ve got a (long-running) development branch that will make it run completely in the browser, which will further improve privacy, allow it to run entirely-offline (with a service worker), and provide a basis for new features I’d like to provide down the line. I’m hoping to get to finishing this during my Automattic sabbatical this winter.

Screenshot showing freedeedpoll.org.uk
The website’s basically unchanged for most of a decade and a half, and… umm… it looks it!

A secondary benefit of it becoming browser-based, of course, is that it can be hosted as a static site, which will allow me to move it to GitHub Pages too.

Geohashing.site

When I took over running the world’s geohashing hub from xkcd‘s Randall Munroe (and davean), I flung the site together on whatever hosting I had sitting around at the time, but that’s given me some headaches. The outbound email transfer agent is a pain, for example, and it’s a hard host on which to apply upgrades. So I want to get that moved somewhere better this winter too. It’s actually the last site left running on its current host, so it’ll save me a little money to get it moved, too!

Screenshot from Geohashing.site's homepage.
Geohashing’s one of the strangest communities I’m honoured to be a part of. So it’d be nice to treat their primary website to a little more respect and attention.

My FreshRSS instance

Right now I run this on my NAS, but that turns out to be a pain sometimes because it means that if my home Internet goes down (e.g. thanks to a power cut, which we have from time to time), I lose access to the first and last place I go on the Internet! So I’d quite like to move that to somewhere on the open Internet. Haven’t worked out where yet.

Next steps

It’s felt good so far to consolidate and tidy-up my personal web hosting (and to rediscover some old projects I’d forgotten about). There’s work still to do, but I’m expecting to spend a few months not-doing-my-day-job very soon, so I’m hoping to find the opportunity to finish it then!


What if Emails were Multilingual?

Multilingual emails

Back when I was a student in Aberystwyth, I used to receive a lot of bilingual emails from the University and its departments1. I was reminded of this when I received an email this week from CACert, delivered in both English and German.

Top part of an email from CACert, which begins with the text "German translation below / Deutsche Uebersetzung weiter unten".
Simply putting one language after the other isn’t terribly exciting. Although to be fair, the content of this email wasn’t terribly exciting either.

Wouldn’t it be great if there were some kind of standard for multilingual emails? Your email client or device would maintain an “order of preference” of the languages that you speak, and you’d automatically be shown the content in those languages, starting with the one you’re most-fluent in and working down.

The Web’s already got this functionality2, and people have been sending multilingual emails for much longer than they’ve been developing multilingual websites3!
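
For comparison, the Web’s version of that “order of preference” is the Accept-Language request header. A browser belonging to a Welsh speaker who also reads English, say, might send something like the following with every request, with the q-values expressing decreasing preference; a server that honours it responds with the best-matching translation it has:

Accept-Language: cy, en-GB;q=0.8, en;q=0.7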

Enter RFC8255!

It turns out that this is a (theoretically) solved problem. RFC8255 defines a mechanism for breaking an email into multiple different languages in a way that a machine can understand and that ought to be backwards-compatible (so people whose email software doesn’t support it yet can still “get by”). Here’s how it works:

  1. You add a Content-Type: multipart/multilingual header with a defined boundary marker, just like you would for any other email with multiple “parts” (e.g. with a HTML and a plain text version, or with text content and an attachment).
  2. The first section is just a text/plain (or similar) part, containing e.g. some text to explain that this is a multilingual email, and if you’re seeing this then your email client probably doesn’t support them, but you should just be able to scroll down (or else look at the attachments) to find content in the language you read.
  3. Subsequent sections have:
    • Content-Disposition: inline, so that for most people using non-compliant email software they can just scroll down until they find a language they can read,
    • Content-Type: message/rfc822, so that an entire message can be embedded (which allows other headers, like the Subject:, to be translated too),
    • a Content-Language: header, specifying the ISO code of the language represented in that section, and
    • optionally, a Content-Translation-Type: header, specifying either original (this is the original text), human (this was translated by a human), or automated (this was the result of machine translation) – this could be used to let a user say e.g. that they’d prefer a human translation to an automated one, given the choice between two second languages.

Let’s see a sample email:

Content-Type: multipart/multilingual;
 boundary=10867f6c7dbe49b2cfc5bf880d888ce1c1f898730130e7968995bea413a65664
To: <b24571@danq.me>
From: <rfc8255test-noreply@danq.link>
Subject: Does your email client support RFC8255?
Mime-Version: 1.0
Date: Fri, 27 Sep 2024 10:06:56 +0000

--10867f6c7dbe49b2cfc5bf880d888ce1c1f898730130e7968995bea413a65664
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=utf-8

This is a multipart message in multiple languages. Each part says the
same thing but in a different language. If your email client supports
RFC8255, you will see this message in your preferred language out of
those available. Otherwise, you will probably see each language after
one another or else each language in a separate attachment.

--10867f6c7dbe49b2cfc5bf880d888ce1c1f898730130e7968995bea413a65664
Content-Disposition: inline
Content-Type: message/rfc822
Content-Language: en
Content-Translation-Type: original

Subject: Does your email client support RFC8255?
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

RFC8255 is a standard for sending email in multiple languages. This
is the original email in English. It is embedded alongside the same
content in a number of other languages.

--10867f6c7dbe49b2cfc5bf880d888ce1c1f898730130e7968995bea413a65664
Content-Disposition: inline
Content-Type: message/rfc822
Content-Language: fr
Content-Translation-Type: automated

Subject: Votre client de messagerie prend-il en charge la norme RFC8255?
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

RFC8255 est une norme permettant d'envoyer des courriers
électroniques dans plusieurs langues. Le présent est le courriel
traduit en français. Il est intégré à côté du même contenu
dans un certain nombre d'autres langues.

--10867f6c7dbe49b2cfc5bf880d888ce1c1f898730130e7968995bea413a65664--
Why not copy-paste this into a raw email and see how your favourite email client handles it! That’ll be fun, right?

Can I use it?

That proposed standard turns seven years old next month. Sooo… can we start using it?4

Turns out… not so much. I discovered that NeoMutt supports it:

NeoMutt’s implementation is basic, but it works: you can specify a preference order for languages and it respects it, and if you don’t then it shows all of the languages as a series of attachments. It can apparently even be used to author compliant multilingual emails, although I didn’t get around to trying that.

Support in other clients is… variable.

A reasonable number of them don’t understand the multilingual directives but still show the email in a way that doesn’t suck:

Screenshot from Thunderbird, showing each language one after the other.
Mozilla Thunderbird does a respectable job of showing each language’s subject and content, one after another.

Some shoot for the stars but blow up on the launch pad:

Screenshot from GMail, showing each language one after the other, but with a stack of extra headers and an offer to translate it to English for me (even though the English is already there).
GMail displays all the content, but it pretends that the alternate versions are forwarded messages and adds a stack of meaningless blank headers to each. And then offers to translate the result for you, even though the content is already right there in English.

Others still seem to be actively trying to make life harder for you:

ProtonMail’s Web interface shows only the fallback content, putting the remainder into .eml attachments… which it then won’t display, forcing you to download them and find some other email client to look at them in!5

And still others just shit the bed at the idea that you might read an email like this one:

Screenshot from Outlook 365, showing the message "This message might have been moved or deleted".
Outlook 365 does appallingly badly, showing the subject in the title bar, then the words “(No subject)”, then the message “This message might have been moved or deleted”. Just great.

That’s just the clients I’ve tested, but I can’t imagine that others are much different. If you give it a go yourself with something I’ve not tried, then let me know!

I guess this means that standardised multilingual emails might be forever consigned to the “nice to have but it never took off so we went in a different direction” corner of the Internet, along with the <keygen> HTML element and the concept of privacy.

Footnotes

1 I didn’t receive quite as much bilingual email as you might expect, given that the University committed to delivering most of its correspondence in both English and Welsh. But I received a lot more than I do nowadays, for example.

2 Although you might not guess it, given how many websites completely ignore your Accept-Language header, even where it’s provided, and simply try to “guess” what language you want using IP geolocation or something, and then require that you find whatever shitty bit of UI they’ve hidden their language selector behind if you want to change it, storing the result in a cookie so it inevitably gets lost and has to be set again the next time you visit.

3 I suppose that if you were sending HTML emails then you might use the lang="..." attribute to mark up different parts of the message as being in different languages. But that doesn’t solve all of the problems, and introduces a couple of fresh ones.

4 If it were a cool new CSS feature, you can guarantee that it’d be supported by every major browser (except probably Safari) by now. But email doesn’t get so much love as the Web, sadly.

5 Worse yet, if you’re using ProtonMail with a third-party client, ProtonMail screws up RFC8255 emails so badly that they don’t even work properly in e.g. NeoMutt any more! ProtonMail swaps the multipart/multilingual content type for multipart/mixed and strips the Content-Language: headers, making the entire email objectively less-useful.


So… I’m A Podcast


Podcast Version

This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.

Observant readers might have noticed that some of my recent blog posts – like the one about special roads, my idea for pressure-cooking tea, and the one looking at the history of window tax in two countries1 – are also available as podcasts.

Podcast cover showing Dan touching his temple and speaking into a microphone, captioned 'a podcast nobody asked for, about things only Dan Q cares about'.

Why?

Like my occasional video content, this isn’t designed to replace any of my blogging: it’s just a different medium for those that might prefer it.

For some stories, I guess that audio might be a better way to find out what I’ve been thinking about. Just like how the vlog version of my post about my favourite video game Easter Egg might be preferable because video as a medium is better suited to demonstrating a computer game, perhaps audio’s the right medium for some of the things I write about, too?

But as much as not, it’s just a continuation of my efforts to explore different media over which a WordPress blog can be delivered2. Also, y’know, my ongoing effort to do what I’m bad at in the hope that I might get better at a wider diversity of skills.

How?

Let’s start by understanding what a “podcast” actually is. It is, in essence, just an RSS feed (something you might have heard me talk about before…) with audio enclosures – basically, “attachments” – on each item. The idea was spearheaded by Dave Winer back in 2001 as a way of subscribing to rich media like audio or videos in such a way that slow Internet connections could pre-download content so you didn’t have to wait for it to buffer.3
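
In other words, the only thing that separates a podcast feed from any other RSS feed is an <enclosure> element on each item, giving the audio file’s URL, size in bytes, and MIME type. A minimal sketch of such an item (the URL and length are invented) might look like this:

<item>
  <title>An example episode</title>
  <link>https://example.com/posts/an-example-episode</link>
  <enclosure url="https://example.com/audio/an-example-episode.mp3"
             length="12345678" type="audio/mpeg" />
</item>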

Mapping of wp-admin metadata fields to parts of a podcast feed.
Podcasts are pretty simple, even after you’ve bent over backwards to add all of the metadata that Apple Podcasts (formerly iTunes) expects to see. I looked at a couple of WordPress plugins that claimed to be able to do the work for me, but eventually decided it was simple enough to just add some custom metadata fields that could then be included in my feeds and tweak my theme code a little.

Here’s what I had to do to add podcasting capability to my theme:

The tag

I use a post tag, dancast, to represent posts with accompanying podcast content4. This way, I can add all the podcast-specific metadata only if the user requests the feed of that tag, and leave my regular feeds untampered-with. This means that you don’t get the podcast enclosures in the regular subscription; that might not be what everybody would want, but it suits me to serve podcasts only to people who explicitly ask for them.

It also means that I’m able to use a template, tag-dancast.php, in my theme to generate a customised page for listing podcast episodes.

The feed

Okay, onto the code (which I’ve open-sourced over here). I’ve used a series of standard WordPress hooks to add the functionality I need. The important bits are:

  1. rss2_item – to add the <enclosure>, <itunes:duration>, <itunes:image>, and <itunes:explicit> elements to the feed, when requesting a feed with my nominated tag. Only <enclosure> is strictly required, but appeasing Apple Podcasts is worthwhile too. These are lifted directly from the post metadata. (There’s a sketch of this hook after the list.)
  2. the_excerpt_rss – I have another piece of post metadata in which I can add a description of the podcast (in practice, a list of chapter times); this hook swaps out the existing excerpt for my custom one in podcast feeds.
  3. rss_enclosure – some podcast syndication platforms and players can’t cope with RSS feeds in which an item has multiple enclosures, so as a safety precaution I strip out any enclosures that WordPress has already added (e.g. the featured image).
  4. the_content_feed – my RSS feed usually contains the full text of every post, because I don’t like feeds that try to force you to go to the original web page5 and I don’t want to impose that on others. But for the podcast feed, the text content of the post is somewhat redundant so I drop it.
  5. rss2_ns – of critical importance of course is adding the relevant namespaces to your XML declaration. I use the itunes namespace, which provides the widest compatibility for specifying metadata, but I also use the newer podcast namespace, which has growing compatibility and provides some modern features, most of which I don’t use except specifying a license. There’s no harm in supporting both.
  6. rss2_head – here’s where I put in the metadata for the podcast as a whole: license, category, type, and so on. Some of these fields are effectively essential for best support.
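
To give a flavour of what these hooks look like, here’s a minimal sketch of the first of them. It’s not my production code: the metadata field names are hypothetical, and the real implementation (linked above) does rather more:

<?php
// Sketch only: 'podcast_url' and 'podcast_bytes' are hypothetical
// post metadata field names, not necessarily the ones I really use.
add_action( 'rss2_item', function () {
  // Only emit podcast elements in the feed for the podcast tag:
  if ( ! is_tag( 'dancast' ) ) {
    return;
  }
  $url = get_post_meta( get_the_ID(), 'podcast_url', true );
  if ( ! $url ) {
    return;
  }
  // <enclosure> is the only element a podcast strictly requires:
  printf(
    '<enclosure url="%s" length="%d" type="audio/mpeg" />',
    esc_url( $url ),
    (int) get_post_meta( get_the_ID(), 'podcast_bytes', true )
  );
} );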

You’re welcome, of course, to lift any or all of the code for your own purposes. WordPress makes a perfectly reasonable platform for podcasting-alongside-blogging, in my experience.

What?

Finally, there’s the question of what to podcast about.

My intention is to use podcasting as an alternative medium to my traditional blog posts. But not every blog post is suitable for conversion into a podcast! Ones that rely on images (like my post about dithering) aren’t a great choice. Ones that have lots of code that you might like to copy-and-paste are especially unsuitable.

Dan, a microphone in front of him, smiles at the camera.
You’re listening to Radio Dan. 100% Dan, 100% of the time. (Also I suppose you might be able to hear my dog snoring in the background…)

Also: sometimes I just can’t be bothered. It’s already some level of effort to write a blog post; it’s like an extra 25% effort on top of that to record, edit, and upload a podcast version of it.

That’s not nothing, so I’ve tended to reserve podcasts for blog posts that I think have a sort-of eccentric “general interest” vibe to them. When I learn something new and feel the need to write a thousand words about it… that’s the kind of content that makes it into a podcast episode.

Which is why I’ve been calling the endeavour “a podcast nobody asked for, about things only Dan Q cares about”. I’m capable of getting nerdsniped easily and can quickly find my way down a rabbit hole of learning. My podcast is, I guess, just a way of sharing my passion for trivial deep dives with the rest of the world.

My episodes are probably shorter than most podcasts: my longest so far is around fifteen minutes, but my shortest is only two and a half minutes and most are about seven. They’re meant to be a bite-size alternative to reading a post for people who prefer to put things in their ears than into their eyes.

Anyway: if you’re not listening already, you can subscribe from here or in your favourite podcasting app. Or you can just follow my blog as normal and look for a streamable copy of podcasts at the top of selected posts (like this one!).

Footnotes

1 I’ve also retroactively recorded a few older ones. Have a look/listen!

2 As well as Web-based non-textual content like audio (podcasts) and video (vlogs), my blog is wholly or partially available over a variety of more-exotic protocols: did you find me yet on Gemini (gemini://danq.me/), Spartan (spartan://danq.me/), Gopher (gopher://danq.me/), and even Finger (finger://danq.me/, or run e.g. finger blog@danq.me from your command line)? Most of these are powered by my very own tool CapsulePress, and I’m itching to try a few more… how about a WordPress blog that’s accessible over FTP, NNTP, or DNS? I’m not even kidding when I say I’ve got ideas for these…

3 Nowadays, we have better compression and specialised media decoder co-processors, which reduce the size of media files and the cost of playing them back. But more-importantly, today’s high-speed always-on Internet connections mean that you probably rarely need to make a conscious choice between streaming or downloading.

4 I actually intended to change the tag to podcast when I went-live, but then I forgot, and now I can’t be bothered to change it. It’s only for my convenience, after all!

5 I’m very grateful that my favourite feed reader makes it possible to, for example, use a CSS selector to specify the page content it should pre-download for you! It means I get to spend more time in my feed reader.


[Bloganuary] Communicate Early, Communicate Often

This post is part of my attempt at Bloganuary 2024. Today’s prompt is:

In what ways do you communicate online?

What a curious question! For me, it’s perhaps best divided into public and private communication, for which I use very different media:

Public

I’ve written before about how this site – my blog – is the centre of my digital “ecosystem”. And while the technical details may have changed since that post was published, the fundamentals have not: everything about my public communication revolves around this, right here.

Diagram showing the DanQ.me ecosystem and surrounding tools, showing how everything centres on DanQ.me (but is syndicated elsewhere).
There’ve been some changes since I last drew a chart of my “ecosystem” back in 2019. Some of these are reflected in my hastily-amended diagram, above.

For example:

A golden cornfield with setting sun, superimposed with "Reap what you wow. Plant your content into the field of your own website."
This is what I’m talking about.

Private

For private communication online, I perhaps mostly use the following (in approximate order of volume):

  • Slack: we use Slack at Automattic; we use Slack at Three Rings; we’ve even got a “household” instance running for The Green!3
  • WhatsApp: the UI’s annoying (but improving), but it’s the go-to communications platform of my friends and family, so it’s a big part of my online communications strategy.4
  • Email: Good old-fashioned email5. I prefer to encrypt, or at least sign, my email: sure, PGP/GPG‘s not perfect6, but it’s better than, y’know, not securing your email at all.
  • Discord: I’m in a couple of Discord servers, but the only one I pay any reasonable amount of attention to is the Geohashing one.
  • Various videoconferencing tools including Google Meet, Zoom, and Around. Sometimes you’ve just gotta get (slightly more) face-to-face.
  • Signal: I feel like everybody’s on WhatsApp now, and the Signal app got annoying when it stopped being able not only to send but even to receive SMS messages (which aren’t technically Internet messages, usually), but I still send/receive a few Signal messages in a typical month.

That’s a very different set of tech stacks than I use in my “public” communication!

Footnotes

1 My thinking is, at least in part: I’ve seen platforms come and go, and my blog’s outlived them. I’ve seen platforms change their policies or technology in ways that undermine the content I put on them, but the stuff on my blog remains under my control and I can “fix” it if I wish. Owning your data is awesome, although I perhaps do it to a more-extreme extent than many.

2 I used to joke that I syndicate content to e.g. Facebook to support readers who haven’t yet learned to use a feed reader. I used to, and I still do, too.

3 A great thing about having a “personal” Slack installation is that you can hook up your own integrations and bots to e.g. remind you to bring the milk in.

4 I’ve been experimenting with Texts to centralise several of my other platforms; I’m not convinced by it yet, but I love the thinking! Long ago, I used to love using Pidgin for simultaneous access to IRC, ICQ, MSN Messenger, Google Talk, Yahoo! Messenger and all that jazz, so I fully approve of the concept.

5 Okay, not actually old-fashioned because I’m not suggesting you use UUCP to send mail to protonmail!danq!dan or DECnet to deliver to danq.me::dan or something!

6 Most of the metadata including sender, recipient, and in most cases even subject is not encrypted.


Gemini and Spartan without a browser

A particular joy of the Gemini and Spartan protocols – and the Markdown-like syntax of Gemtext – is their simplicity.

Screenshot showing this blog post as viewed over the Gemini protocol in the Lagrange browser
The best way to explore Geminispace is with a browser like Lagrange, of course.

Even without a browser, you can usually use everyday command-line tools that you might have installed already to access relatively human-readable content.

Here are a few different command-line options that should show you a copy of this blog post (made available via CapsulePress, of course):

Gemini

Gemini communicates over a TLS-encrypted channel (like HTTPS), so we need to use a tool that speaks the language. Luckily: unless you’re on Windows you’ve probably got one installed already1.

Using OpenSSL

This command takes the full gemini:// URL you’re looking for and the domain name it’s at. 1965 refers to the port number on which Gemini typically runs:

printf "gemini://danq.me/posts/gemini-without-a-browser\r\n" | \
  openssl s_client -ign_eof -connect danq.me:1965

Using GnuTLS

GnuTLS closes the connection when STDIN closes, so we use cat to keep it open. Note inclusion of --no-ca-verification to allow self-signed certificates (optionally add --tofu for trust-on-first-use support, per the spec).

{ printf "gemini://danq.me/posts/gemini-without-a-browser\r\n"; cat -; } | \
  gnutls-cli --no-ca-verification danq.me:1965

Using Ncat

Netcat reimplementation Ncat makes Gemini requests easy:

printf "gemini://danq.me/posts/gemini-without-a-browser\r\n" | \
  ncat --ssl danq.me 1965

Spartan

Spartan is a little like “Gemini without TLS”, but it sports an even-more-lightweight request format which makes it especially easy to fudge requests2.

Using Telnet

Note the use of cat to keep the connection open long enough to get a response, as we did for Gemini over GnuTLS.

{ printf "danq.me /posts/gemini-without-a-browser 0\r\n"; cat -; } | \
  telnet danq.me 300

Using cURL

cURL supports the telnet protocol too, which means that it can be easily coerced into talking Spartan:

printf "danq.me /posts/gemini-without-a-browser 0\r\n" | \
  curl telnet://danq.me:300

Using Ncat/Netcat

Because TLS support isn’t needed, this also works perfectly well with Netcat – just substitute nc/netcat or whatever your platform calls it in place of ncat:

printf "danq.me /posts/gemini-without-a-browser 0\r\n" | \
  ncat danq.me 300

I hope these examples are useful to somebody debugging their capsule, someday.

Footnotes

1 You can still install one on Windows, of course; it’s just less-likely that your operating system came with such a command-line tool built-in.

2 Note that the domain and path are separated in a Spartan request and followed by the size of the request payload body: zero in all of my examples.


Yesterday’s Internet Today! (Woo DM 2023)

The week before last I had the opportunity to deliver a “flash talk” of up to 4 minutes duration at a work meetup in Vienna, Austria. I opted to present a summary of what I’ve learned while adding support for Finger and Gopher protocols to the WordPress installation that powers DanQ.me (I also hinted at the fact that I already added Gemini and Spring ’83 support, and I’m looking at other protocols). If you’d like to see how it went, you can watch my flash talk here or on YouTube.

If you love the idea of working from wherever-you-are but occasionally meeting your colleagues in person for fabulous in-person events with (now optional) flash talks like this, you might like to look at Automattic’s recruitment pages.

The presentation is a shortened, Automattic-centric version of a talk I’ll be delivering tomorrow at Oxford Geek Nights #53; so if you’d like to see it in-person and talk protocols with me over a beer, you should come along! There’ll probably be blog posts to follow with a more-detailed look at the how-and-why of using WordPress as a CMS not only for the Web but for a variety of zany, clever, retro, and retro-inspired protocols down the line, so perhaps consider the video above a “teaser”, I guess?