Molly guard in reverse

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Old-school computing has a term “molly guard”: it’s the little plastic safety cover you have to move out of the way before you press some button of significance.

Anecdotally, this is named after Molly, an engineer’s daughter who was invited to a datacenter and promptly pressed a big red button, as one would.

Then she did it again later the same day.

This article from UX expert Marcin Wichary is a vehicle to talk about the thoughtful design that goes into “reverse molly guards”: pieces of user interface that will proceed by themselves if you do nothing, but can be stopped by user interaction. He gives the example of macOS’s “Are you sure you want to restart your computer?” dialog, which counts down and automatically goes ahead with the restart after 60 seconds unless told not to.

From my perspective, though: this was the first time I’d ever come across the term “molly guard”, and I love it (especially with its accompanying anecdote). I’ve seen the things themselves all over the place, though. In fact, I’d love to share with you a particularly-aggressive molly guard I implemented in Three Rings a couple of years ago:

A problem we occasionally faced in Three Rings was administrators – especially new administrators, gaining lots of powers for the first time – managing to delete entire rotas, without realising that this would delete all of the shifts (and the signups) within those rotas too. This is a hard operation to undo, so we added a basic molly guard: an “are you sure?” interstitial page that explained exactly how much damage would be done.

But it didn’t work well enough! We watched users see the blocker and rush straight to the big, red, delete button on the other side of all the warnings. I guess the dark patterns that are now everywhere in software have trained users to click through every wall that gets in their way as fast as possible and with the minimum of interaction. But now that “training” was working against the safety of charity data!

So we came up with something stronger:

Screenshot of a pre-deletion warning page that says on the first line how many shifts will be deleted and then asks on the last line for the user to repeat that number back.

Now, the interstitial page not only says what the scale of the damage is… it asks the user to repeat it back. Looking at that screenshot, you’ll see that the first line says that 2,056 shifts will be deleted… and then the last line contains a text box to type that number back in again (this page only appears if it looks like a lot of “real” data will be deleted; otherwise we use the old page so as not to scare off people who are throwing together temporary test rotas).

If you read the page, it’s easy to answer the question. But if you just rush to the red button… you’re stuck. You’ll be given a user interface nudge telling you to fill the box, but until you read the first line of the page, you won’t be able to answer it.
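
If you’re curious how such a guard might hang together, here’s a minimal sketch of the idea in TypeScript for the browser – hypothetical code with invented element IDs, not Three Rings’ actual implementation:

// A minimal, hypothetical sketch: the server renders the real count into the
// page, and the scary delete button stays disabled until the user types that
// number back in. Element IDs here are invented for illustration.
const expected = document.getElementById("shift-count")!.textContent!.trim();
const input = document.getElementById("confirm-count") as HTMLInputElement;
const button = document.getElementById("delete-rota") as HTMLButtonElement;

button.disabled = true;
input.addEventListener("input", () => {
  // Accept "2,056" or "2056" alike: strip thousands separators on both sides.
  const typed = input.value.trim().replace(/,/g, "");
  button.disabled = typed !== expected.replace(/,/g, "");
});

(Naturally, you’d repeat the check on the server side when the form is submitted: a client-side guard is a usability feature, not a security control.)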

This molly guard works: since it was implemented, we’ve never had an instance of an accidentally-deleted rota that required us to pull data from the backups on behalf of a charity.

But it’s possible we’ve swung too far the other way and caused some collateral damage to usability: we’ve twice had technical support queries from users who couldn’t work out what they had to type into the box!

This is an acceptable outcome, we decided: it gives us the chance to check that they really meant to do what they were asking (of the two queries: one user did, the other meant to do something else) and point them in the direction of the number they need. It works!

Anyway, the key thing I wanted to share was that great article by Marcin Wichary with some great photos of various hardware and software molly guards (and reverse molly guards) for your amusement.


The Dungeon of Dark Patterns

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The Dungeon of Dark Patterns: a comic in four panels.

Panel 1. The adventurer and his fairy stand before the door of a nightmarish dungeon. It’s dark and foggy, and beyond the door nothing is visible but a deep red light. Dungeon: “Welcome, adventurers, to the Dungeon of Dark Patterns!”

Panel 2. In one room of the dungeon: a giant, beautiful, inviting door with a red carpet and, off to the side in shadow, a too-small door. The big door is labelled “Go to the trap”; the small one, “Go to the treasure”. The adventurer crouches and signals to the fairy to follow him through the small door. Dungeon: “Ha ha, you’re good!”

Panel 3. The adventurer strains to climb an old rope in the middle of a room, alongside a beautiful, luxurious stairway with a red carpet. A sign pointing towards the stairs reads “GO TO THE TREASURE but pass by the trap”; another, small and in shadow, points towards the rope: “(other options)”. Dungeon: “Impressive!”

Panel 4. A top-down view of the adventurer shrugging at the fairy: they’ve reached a dead end. A short path to the right is labelled “Now” on the ground; a longer path, “Later”. Both lead to a giant pool of green acid in which bones and skulls float. Dungeon: “So, when do you want to jump into the trap?”

Well this is just excellent.

I’d not come across David Revoy before today, but he’s apparently been doing art and comics since 2014. The Mini Fantasy Theatre series only started a couple of years ago, but it’s totally getting added to my RSS reader. Almost everything’s bilingual English/French too, if that’s something that interests you.

Navigating around the dark patterns of modern UX certainly feels like a dungeon delve, sometimes. Now we just need the episode in which the adventurer has difficulty unsubscribing from requests from their patron…


WSL9x

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

WSL9x runs a modern Linux kernel (6.19 at time of writing) cooperatively inside the Windows 9x kernel, enabling users to take advantage of the full suite of capabilities of both operating systems at the same time, including paging, memory protection, and pre-emptive scheduling. Run all your favourite applications side by side – no rebooting required!

Well this blew my mind.

Windows Subsystem for Linux (WSL) is one of the single best things Microsoft have added to Windows in the last decade1. But, of course, it’s for Windows 10 and 11 only. I would never have conceived that somebody could make the same trick work for, like, Windows 95!

But Hails has done so. And no, this isn’t some kind of emulation; it’s proper cooperative multitasking between the two kernels, just like regular WSL does. Somehow, on a version of Windows that came out nine years before Windows even supported the NX bit. Mindboggling.

Footnotes

1 This ought to be a little embarrassing for them: I mean – if the most-valuable improvement you make to your operating system is to make it… more like a different operating system… – that’s not a great sign, is it?


Finding the right Bottom Hole paper

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

On the 6th of January 1995, viewers of BBC Two were treated to a new series of ~~Waiting for Godot~~ Bottom. Stuck at the top of a Ferris wheel, ~~Vyvyan and the People’s Poet~~ Eddie and Ritchie wait to see what the cruel hand of fate has dealt them in this week’s episode “Hole”.

At one point, ~~Captain Edrison Peavey~~ Edward Elizabeth Hitler pulls out a newspaper to read.

Some pissed old fart reading a newspaper called "The Hammersmith Bugle" with the headline "No news shocker..."

It may surprise you to know that the “Hammersmith Bugle” is not a real paper and they never ran a headline “No News Shocker”. At which point, it is time to rip off Dirty Feed’s shtick and find out what that paper really is.

This is exactly the kind of rabbitholey deep-dive I know and love (and have experienced ever so frequently myself). Join Terence on a long (and not-entirely-satisfying!) ride to try to find the actual newspaper that was adapted by the Bottom production team for this particular episode.

Anyway, it’s an amusing journey that I enjoyed going along with, this morning, and maybe you will too.


The machines are fine. I’m worried about us.

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent’s fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob’s weekly updates to his supervisor were indistinguishable from Alice’s. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

Here’s where it gets interesting. If you are an administrator, a funding body, a hiring committee, or a metrics-obsessed department head, Alice and Bob had the same year. One paper each. One set of minor revisions each. One solid contribution to the literature each. By every quantitative measure that the modern academy uses to assess the worth of a scientist, they are interchangeable. We have built an entire evaluation system around counting things that can be counted, and it turns out that what actually matters is the one thing that can’t be.

The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we’ve collectively decided that maybe this time it’s different. That maybe nodding at Claude’s output is a substitute for doing the calculation yourself. It isn’t. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient.

Centuries of pedagogy, defeated by a chat window.

This piece by Minas Karamanis is excellent throughout, and if you’ve got the time to read it then you should. He’s a physics postdoc, and this post comes from his experience in his own field, but I feel that the concerns he raises are more-widely valid, too.

In my field – of software engineering – I have similar concerns.

Let’s accept for a moment that an LLM significantly improves the useful output of a senior software engineer (which is very-definitely disputed, especially for the “10x” level of claims we often hear, but let’s just take it as-read for now). I’ve experimented with LLM-supported development for years, in various capacities, and it certainly sometimes feels like they do (although it sometimes also feels like they have the opposite effect!). But if it’s true, then yes: an experienced senior software engineer could conceivably increase their work performance by shepherding a flock of agents through a variety of development tasks, “supervising” them and checking their work, getting them back on-course when they make mistakes, approving or rejecting their output, and stepping in to manually fix things where the machines fail.

In this role, the engineer acts more like an engineering team lead, bringing their broad domain experience to maximise the output of those they manage. Except who they manage is… AI.

Again, let’s just accept all of the above for the sake of argument. If that’s all true… how do we make new senior developers?

Junior developers can use LLMs too. And those LLMs will make mistakes that the junior developer won’t catch, because the kinds of mistakes LLMs make are often hard to spot and require significant experience to identify. But if they’re encouraged to use LLMs rather than making mistakes by hand and learning from them – to keep up, for example, or to meet corporate policies – then these juniors will never gain the essential experience they’ll one day need. They’ll be denied the opportunity to grow and learn.

It’s yet to be proven that more-sophisticated models will “solve” this problem, but my understanding is that issues like hallucination are fundamentally unsolvable: you might get fewer hallucinations in a better model, but that just means that those hallucinations that slip through will be better-concealed and even harder to identify in code review or happy-path testing.

Maybe – maybe – the trajectory of GPTs is infinite, and they’ll keep getting “smarter” to the point at which this doesn’t matter: programming genuinely will become a natural language exercise, and nobody will need to write or understand code at all. In this possible reality, the LLMs will eventually develop entire new programming languages to best support their work, and humans will simply express ideas and provide feedback on the outputs. But I’m very sceptical of that prediction: it’s my belief that the mechanisms by which LLMs work have a fundamental ceiling – a capped level of sophistication that can be approached but never exceeded. And sure, maybe some other, different approach to AI might not have this limitation, but if so then we haven’t invented it yet.

Which suggests that we will always need experienced engineers to shepherd our AIs. Which brings us back to the fundamental question: if everybody uses AI to code, how do we make new senior developers?

I have other concerns about AI too, of course, some of which I’ve written about. But this one’s top-of-mind today, thanks to Minas’ excellent article. Go read it to learn more about how physics research faces a similar threat… and, perhaps, consider how your own field might need to face this particular challenge.

Blogging: you’re doing it right

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

That’s all you need to know. If you’re doing it, you’re doing it right. If you have decided to reclaim ownership of your place on the web, you’re doing it right. It doesn’t matter how you did it. It doesn’t matter if you’re self-hosting or using a SaaS. It doesn’t matter if your content lives on a database or in a TXT file. It doesn’t matter if you did everything yourself or you paid someone to do it for you. It doesn’t matter if you post once a day or once a year. What matters is that you’re doing it. Your effort is commendable. You deserve to be thanked. So, thank you.

Wonderful words from Manu, there, that I think every blogger needs to be told once in a while. You have permission to write stuff. There isn’t a wrong way.

Also worth reading is his “2-step process for AI-free blogging”.

I wish I could be as pithy as Manu. But I’mma keep blogging anyway. After all: I’m doing it right!

People are not friction

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The Gell-Mann Amnesia Effect of AI is a pretty well documented phenomenon:

The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

Summarizing, AI sounds like an incredible genius synthesizing the world’s knowledge right up until you ask it about the thing you know about, then it’s an idiot. Even knowing about this phenomenon and having experienced it countless times, LLMs have an intoxicating quality to them.

I remember one time, maybe in the mid-1990s, when I saw a shopping channel (remember those? oh god, they’re still a thing, aren’t they?) where the host was trying to sell a personal computer. And… clearly, they knew absolutely nothing about it. They kept hitting on the same two or three talking points they’d been given (“mention the quad-speed CD-ROM drive!”) and fumbling their way through, and it gave me a revelation:

I knew enough about computers that I could see that the presenter was bullshitting their way through the segment. But there are plenty of things that I don’t know much about, which are also sold on this same show. Duvets, jewellery, glassware… I’m nowhere near as much an expert on these as I was on PC featuresets. Is there something inherently incomprehensible about computers? No. So it’s reasonable to assume that these salespeople probably know equally-little about everything they sell; it’s just that I don’t have the knowledge base to be able to see that.

That’s what GenAI often feels like, to me. Having collated all of the publicly-available knowledge it could find into its model doesn’t make it smarter than the smartest humans: it brings it to something probably slightly-above-the-average in any given subject. If I ask an LLM about something that I don’t understand well, it often produces highly-believable answers; but if I ask it about something that I’m an expert in, it can come off as a fool.

I’m very interested in how we teach information literacy in this new world of rapidly-generated highly-believable nonsense.

Anyway: Dave’s post doesn’t go in that direction – instead, he’s got some clever thoughts about how the “convenience” of a “good enough” AI-driven solution to any given problem risks us seeing humans as the friction point, which ultimately works against those very humans who are looking to benefit from the technology:

We need experts to share what they know and improve the quality of our work, generated or otherwise. We even need idiots to make sure we can break ideas down into their simplest form that everyone, agents or human, understand. People can have bad attitudes, be shitty, and have wrong opinions… but people are not friction. An LLM may be able to autocorrect its way into a plausible human response, but it’s not people. It doesn’t care if it’s right or wrong.

It’s an easy and worthwhile read.

Hint Line ’93

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Screenshot showing a desk with a gamepad, mouse, and with an open notepad with the handwritten instructions 1. Answer Game, 2. Identify Game, 3. Consult Compendium, 4. Help Caller!!

With thanks to Marcin Wichary for introducing me to it, let me share with you… Hint Line ’93, a project of the awesome-sounding museum ACMI (the Australian Centre for the Moving Image). Originally pitched by Yarn Spinner, the concept of the exhibit was:

…a visual novel on screen, where you’re working a fictional hint line, with critical information in The Compendium, a dog-eared binder full of official docs mixed with handwritten notes from previous counselors who figured out what actually works.

So yeah. It’s a bit like… Keep Talking and Nobody Explodes, except instead of bomb defusal, you’re working on a computer game hint line in their heyday of circa 1993. Customers call you, and you have to help them with their video game problems, ideally in accordance with company policy to try to guide the customer to their own answer rather than telling them the solution outright. Oh, and also sometimes people call up about products that aren’t covered and you need to identify this promptly and get on to the next caller.

Obviously you’ve already got an encyclopaedic knowledge of all the games? No, you don’t, because before they could even start on making Hint Line ’93, the creators first needed to invent a fictional video games company, a catalogue of fictional games (including faked screenshots, history, lore, and BBS posts), and more. But it wouldn’t matter anyway, because you get a thick manual – the compendium – of hints and tips to refer to (also code wheels, post-its, and lots more).

A retro computer and compendium, showing a digital representation of the same on its screen.

The exhibit is designed to be experienced in-person, but – given that I live on the other side of the planet – I was delighted to see that the museum put a (less-tactile) version online for visitors around the world to play.

Also: speaking as somebody with an awesome name, there are so many people with awesome names involved with this project. Mars Buttfield-Addison and Paris Buttfield-Addison are perhaps my favourites. Excellent names.

Even if you don’t feel up to playing the game, have a flick through the (digital version of the) compendium. That’s where the real art lies!


Questionnaire – Plain Text

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Inspired by The Frugal Gamer, who was in turn inspired by Ellane, I today used my silly plain text only blog to answer a questionnaire that’s going around:

Questionnaire - Plain Text
==========================
The Frugal Gamer recently shared[1] her answers to the questions posed by plain-text advocate Ellane
in her post "Answer These Eight Questions About Your Plain Text Files"[2], and this blog (being even
more "plain text" than either of those!) seems like an obvious place to answer those questions on my
own behalf, too. Let's give them a go!
1. When did you start using plain text?
---------------------------------------
Way back in the mid-1980s, on an Amstrad CPC microcomputer, I guess, when I started editing files of
BASIC code (and, occasionally, text-based data with CRLF delimiters). I'd later go on to extensively
make use of plain text in various flavours of DOS on IBM-compatible PCs: for programming, of course,
but also for general notetaking and personal documents.
2. Why did you start using plain text?
--------------------------------------
At those earliest points, it was an exercise in necessity! With only 64KB of RAM and a 4MHz CPU,
getting my first microcomputer to do anything more graphically-sophisticated than ASCII plain text
(or a nearby derivative of it) would have been a stretch! It was around this same time that I tested
a basic word processing package called TASWord, but it was VERY bare-bones: just five font faces,
able to hold up to three "pages" in memory at once, and some kind of mail merge tool... even though
I had a (dot matrix!) printer capable of rendering those fonts, it didn't really justify the effort
needed to load the software from the tape deck in the first place when a simpler, lighter editor
would, for any real purpose, suffice!
3. What do you use plain text for?
----------------------------------
This blog, for a start!
Aside from when I'm programming or taking basic notes, mostly I end up writing Markdown, these days.
Obsidian's a wonderful notetaking app, but in practice all it REALLY is is a tool for collating text
files and doing on-the-fly rendering of their Markdown. I don't really use any of its many cool
plugins for anything more-sophisticated than that.
And I'm also routinely found writing Markdown (or plain text!) for programming-adjacent jobs: commit
logs, pull requests, test instructions, and the like.
4. What keeps you using plain text?
-----------------------------------
My favourite thing about plain text is its longevity. I have notes (old emails, poems, logs from IRC
and IM clients, personal notes, even letters) that I wrote in plain text formats 30+ years ago. Even
though technology has moved on, I have absolutely no problem reading them today just as I would have
when they were first written.
5. Do you use any markup or formatting languages? If so, which ones and why?
----------------------------------------------------------------------------
My most-used markup languages are Markdown and HTML (although neither on THIS blog, obviously). Both
provide functionality that's absent from plain text while still retaining at least a part of the top
feature of plain text: its universality and longevity. Markdown's perfectly human-readable even when
you don't have an interpreter to hand already. HTML _can_ be very human-readable, too, if the author
has taken the care to make it so... and even if it isn't, it can be transformed to plain text pretty
trivially even if there isn't a Web browser to hand.
6. What are your favourite plain text tools or applications?
------------------------------------------------------------
My go-to text editor is Sublime Text (I'm using it right now). After over a decade of Emacs being my
preferred text editor, Sublime Text was what dragged me kicking and screaming into 21st century text
editing! I love that it's clean, and simple, and really fast (I tried Atom or VSCode or one of those
other "heavyweight" editors, implemented in Electron, and found it to be unbearably slow; perhaps
faster processors have made them more-bearable, but doesn't that feel a little bit like treating the
symptom rather than solving the problem?).
Oh, and Obsidian, as previously noted. Sometimes I'll use Notepad++ on a Windows box, or Nano, Pico,
or Emacs from a command-line.
And just sometimes - more often than you might expect - I'll daisychain an `echo` or a `printf` and
a `>>` and just concatenate things into a file. Sometimes that's all you need!
7. Is there one tool you can’t do without?
------------------------------------------
Nope! I've spent long enough doing plain text things with enough different tools that - perhaps with
a little mumbling and grumbling - I can adapt to whatever tools are available. Though you'll find me
grumpy if you make me work on a system without `grep` available!
8. Is there anything you can’t do with plain text?
--------------------------------------------------
I mean... ultimately, there has to be right? Sure, you can write general-purpose software using your
plain text editor, but you'll still need a compiler or interpreter to run it, and how is ITS program
code rendered? No matter what your stack is, eventually you'll find that you're running into machine
code, and - even though it can be 1:1 mapped to assembly... that's a translation, not what it IS. So
fundamentally, there's a limit to the power of plain text.
But once you're balanced atop a well-made toolchain, there's a hell of a lot you can do! Data can be
rendered as CSV, YAML, JSON or whatever. Markup can add value while retaining the human-readable joy
of a simple, plain text file. It saddens me when I see somebody type out their shopping list in e.g.
Microsoft Word or some other monster, when Notepad would have plenty sufficed (and be faster, with a
smaller file size, and increased interoperability!).
I've long loved the "Unix Philosophy" that plain text should be the default data format, rather than
any binary format, between applications. That, in itself, is a reminder of plain text's versatility!
It's the universal language of humans and machines. And it's here to stay.
Links
-----
[1] https://www.thefrugalgamer.net/blog/2026/01/22/questionnaire-plain-text/
[2] https://ellanew.com/2025/01/19/ptpl-191-answer-8-questions-why-plain-text

D’ya know what? Back when I used to write lots of stuff on Usenet and BBSes, I got really good at manually wrapping at, say, 80 characters. Even doing full justification by tweaking word choices or by just manually injecting spaces in the places that produce the fewest “rivers”.

I’ve sort-of lost the knack for it. But I think I did a pretty good job with this post!

You don’t have to disconnect

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Derek Sivers’ post Offline 23 hours a day gained a lot of attention the other week. I was particularly impressed by the rebuttal by Rishi Dass:

Anyway, the reasoning behind this idea of disconnecting seems to be that they equate being productive with having no internet or phone service. This implies that the tool (internet or the phone) is the problem. But is that entirely true?

They further argue that disconnection helps them create a vacuum through media silence, allowing their thoughts to expand and fill the space. While it’s understandable that you can concentrate better when your attention is focused on one thing, there’s no reason you can’t stay online and do the work. If you’re able to work comfortably in a library, you can do this.

Obviously, Derek’s approach is valid. It sounds like he’s found what works for him in terms of managing his time, life, mental energy, and the like, and that’s great! I’d be lying if I said that I didn’t envy him at least a little: don’t we all enjoy “unplugging” sometimes?

I think Derek’s post is so appealing because it touches on our nostalgia for a simpler, less-always-online time.

For a while I thought that this would be a sensation unique to folks who, like me, had their first experiences of the Internet in a very intermittent and deliberate way. In the 1990s, I used to go on the Internet: a premeditated act that required being somewhere with a landline and the appropriate hardware, requiring that nobody was using or intending to use the phone, booting up a computer, dialling-up to the local Internet Service Provider, and then going about what I wanted to do. At that time, it was uncommon to use the ‘net for trivial things like checking the weather or what’s on at the cinema, because picking up the local newspaper would probably be a faster way to achieve that! Similarly, it wasn’t so-useful as a procrastination activity, because picking up a book or going for a walk was more accessible and reliable.

But this isn’t a generational thing, or at least not entirely. Gen Zs are seeing the joy in retro tech from before they were born, which is something I’ve witnessed myself: I’m part of a couple of online communities that do quite a bit of retro-Web and other retro-tech stuff, and I’ve been amazed at how young the demographics can skew in some of these groups! Like: there are people who were born after Facebook was founded who yearn to recreate the kind of dial-up experience that I had, before their parents met.

(Obviously, I think this is great; I think there are great lessons to be learned from the more open, decentralised, distributed, transparent, and exploratory Internet of times gone by. It just… initially surprised me to find so many younger folks showing such an interest in it, too.)

I still think this is nostalgia, though. Here’s why: none of us are born with unfettered and unfiltered access to the Internet. Unless they have the most hands-off parents possible, even a child born today won’t be “always online” for the first decade or more of their life. And being a child, for most folks, is a time of safety and wonderment: where there are other people to attend to our needs and filter our information intake and answer our questions in a protected environment. Growing up, we all have to learn to do those things for ourselves. And in the information-saturated attention economy of the modern world, that shit is exhausting.

You don’t need to be reminiscing about dial-up to fantasise about a slower time, when pub quizzes couldn’t be cheated by a shithead in the corner unless you caught them in the act and when your pocket computer wouldn’t beep for attention every 30 seconds because a half-remembered friend posted a holiday snap. Not having the extra cognitive load all the time is liberating!

No wonder “going offline” seems like a luxury to people, and why Derek’s extreme approach is so intriguing! But it’s just the same as that curated holiday snap that your friend-of-a-friend just posted to Instagram: it’s a snapshot into the best bits of somebody else’s life. It’s not reality. It’s your imagination, your fantasy, projected onto somebody else’s solution. “This works for them,” you say to yourself, “It must work for me, too!”

Maybe it would! And I hope that a few people feel empowered by Derek’s post to fulfil their dream and go live in the woods. Good for them!

But Rishi’s rebuttal brings us a sense of balance. For most people, it’s not necessary to go live in the woods to “go offline”. If you really want to, just… go offline. The power’s in your hands.

  • if you don’t want to be distracted by social media and games, close those accounts and take those apps off your device
  • if you don’t want to be interrupted by notifications, switch them off and check your inputs on your own schedule
  • if you don’t want to be online at all, set airplane mode or disconnect from the WiFi, and narrow your focus onto that book, board game, film, conversation, or daydream
  • if you don’t trust yourself not to be tempted to backslide… well, that’s a bigger problem of self-control that you need to work on, but in the meantime, try an experiment: leave your device behind and take a walk!

I get wanting to disconnect. I have my own controls in place, too, and they’re great for my mental health. But my approach, Derek’s approach, anybody’s approach… don’t have to be your approach.

Start the journey by working out what parts of the always-online world aren’t serving you. What things are more of a psychological drain than a boost? What’s bad for your mental wellbeing on the whole (not just in the moment)? What habits would you like to kick? What excuses are you using to keep them?

Then, work out what you can do about them. Seek assistance if you need it; you might not have all the solutions. But beware the seductive approach of taking what works for somebody else and trying to fit yourself to their mould.

Sure: maybe you need to go live in the woods with Derek. But make that choice because it solves your problems, not because it solves his!

How far back in time can you understand English?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A man takes a train from London to the coast. He’s visiting a town called Wulfleet. It’s small and old, the kind of place with a pub that’s been pouring pints since the Battle of Bosworth Field. He’s going to write about it for his blog. He’s excited.

He arrives, he checks in. He walks to the cute B&B he’d picked out online. And he writes it all up like any good travel blogger would: in that breezy LiveJournal style from 25 years ago, perhaps, in his case, trying a little too hard.

But as his post goes on, his language gets older. A hundred years older with each jump. The spelling changes. The grammar changes. Words you know are replaced by unfamiliar words, and his attitude gets older too, as the blogger’s voice is replaced by that of a Georgian diarist, an Elizabethan pamphleteer, a medieval chronicler.

By the middle of his post, he’s writing in what might as well be a foreign language.

But it’s not a foreign language. It’s all English.

None of the story is real: not the blogger, not the town. But the language is real, or at least realistic. I constructed the passages myself, working from what we know about how English was written in each period.

It’s possible you caught this excellent blog post last month, but if you didn’t, you’re in for a treat. A long rambling “travel blog” story, except that every three or four paragraphs the author’s transported back a hundred years in time, which adapts not only the story but – more-importantly – the language the author uses.

For the last couple of hundred years the English language has been moderately stable and well-defined, although the stylistic mannerisms of authors have changed. But as you read beyond that, the language feels like it’s slowly mutating into something that, by the time you get to a thousand years ago, is nearly indecipherable!

Don’t just read the demonstration until you find you can’t understand it any more, though! After all of that, keep scrolling, and you’ll find that Colin Gorrie’s put together an explanation of what you just read. It describes the linguistic shifts you’ve just experienced first-hand, the reasons for them, and how they were reconstructed in order to make this amusing distraction.

The footnotes are great too.

Also: it’s well worth watching/listening to the video that inspired it, in which you can hear (twice, the second time with explanatory slides) a linguist read a monologue that starts in the English of around 1,500 years ago and progresses, jumping every few sentences, through to modern English. Absolutely fascinating.

With thanks to Ash for sharing.

I Am Experimenting with Blocking HTTP1.1

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Most of the traffic I get on this site is bots – it isn’t even close. And, for whatever reason, almost all of the bots are using HTTP1.1 while virtually all human traffic is using later protocols.

I have decided to block v1.1 traffic on an experimental basis. This is a heavy-handed measure and I will probably modify my approach as I see the results.

# Return an error for clients using http1.1 or below - these are assumed to be bots
@http-too-old {
    not protocol http/2+
    not path /rss.xml /atom.xml # allow feeds
}
respond @http-too-old 400 {
    body "Due to stupid bots I have disabled http1.1. Use more modern software to access this site"
    close
}

This is quick, dirty, and will certainly need tweaking but I think it is a good enough start to see what effects it will have on my traffic.

A really interesting experiment by Andrew Stephens! And I love that he shared the relevant parts of his Caddyfile: nice to see how elegantly this can be achieved.

I decided to probe his server with cURL:

~ curl --http0.9 -sI https://sheep.horse/ | head -n1
HTTP/2 200
~ curl --http1.0 -sI https://sheep.horse/ | head -n1
HTTP/1.0 400 Bad Request
~ curl --http1.1 -sI https://sheep.horse/ | head -n1
HTTP/1.1 400 Bad Request
~ curl --http2 -sI https://sheep.horse/ | head -n1
HTTP/2 200

Curiously, while his configuration blocks both HTTP/1.1 and HTTP/1.0, it doesn’t seem to block HTTP/0.9! Whaaa?

It took me a while to work out why this was. It turns out that cURL won’t do HTTP/0.9 over https:// connections. Interesting! Though it presumably wouldn’t have worked anyway – HTTP/1.1 requires (and HTTP/1.0 permits) the Host: header, but HTTP/0.9 doesn’t IIRC, and sheep.horse definitely does require the Host: header (I tested!).
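
If you wanted to test that by hand, you could skip cURL and speak to the socket yourself. Here’s a hypothetical sketch (mine, not from Andrew’s post) in TypeScript, using Node’s built-in node:tls module – an HTTP/0.9 request is just a method and a path, with no protocol version and no headers at all:

// Open a TLS connection and send a bare HTTP/0.9-style request: the thing
// that curl refuses to do over https://.
import { connect } from "node:tls";

const socket = connect({ host: "sheep.horse", port: 443, servername: "sheep.horse" }, () => {
  socket.write("GET /\r\n"); // no HTTP version, no Host: header
});
socket.setEncoding("utf8");
socket.on("data", (chunk) => process.stdout.write(chunk));
socket.on("end", () => socket.destroy());

What comes back is up to the server; as noted above, a server that insists on the Host: header has every right to baulk at a request like this.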

I also tested that my RSS reader FreshRSS was still able to fetch his content. I have it configured to pull not only the RSS feed, which is specifically allowed to bypass his restriction, but – because his feed contains only summary content – I also have it fetch the linked page too in order to get the full content. It looks like FreshRSS is using HTTP/2 or higher, because the content fetcher still behaves properly.

Andrew’s approach definitely excludes Lynx, which is a bit annoying and would make this idea a non-starter for any of my own websites. But it’s still an interesting experiment.

Caution: No Swimming

A partially-submerged traffic cone sits in a large puddle in a rural field.

Hurrah! I just made my first successful submission to Curious Cones, a weblog collecting photographs of traffic cones spotted in unusual places.

I spotted this cone while the younger child and I took a walk to the next-nearest village to our temporary accommodation, in order to find a geocache, tag some benches for OpenBenches, and have a cafe brunch.

Anyway: if you’re not following Curious Cones, it’s exactly as delightful as you might expect.

Why Apple’s move to video could endanger podcasting’s greatest power

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses. For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.

The problem, though, is that Apple is only allowing these new video streams to be served by a small number of pre-approved commercial providers that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on anildash.com and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast, I don’t have to ask anyone’s permission, tell anyone about it, or agree to anyone’s terms of service.

When I started my pointless podcast, I didn’t need anybody else’s infrastructure or permission. Podcasts are, in the vein of the Web itself (and thanks at least in part to my former coworker Dave Winer), distributed and democratised.

All you need to host a podcast is an RSS file and some audio files. You can put them onto your shared VM. You can put them onto your homelab. You can put them onto a GitHub Pages site. You can put them onto a Neocities site. Or you can shell out for a commercial host and distribute your content across a global network of CDNs, for maximum performance! All of these are podcasts, and they’re all equal from a technical perspective.1
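
To show quite how low that bar is, here’s roughly what such an RSS file looks like – a minimal sketch with made-up URLs, not my actual feed:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Pointless Podcast</title>
    <link>https://example.com/podcast/</link>
    <description>A podcast served as static files.</description>
    <item>
      <title>Episode 1</title>
      <guid>https://example.com/podcast/episode-1.mp3</guid>
      <!-- length is the audio file's size in bytes (placeholder here) -->
      <enclosure url="https://example.com/podcast/episode-1.mp3"
                 type="audio/mpeg" length="12345678"/>
    </item>
  </channel>
</rss>

Drop that and an MP3 onto any static host and any podcast app in the world can subscribe to it.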

Video podcasts could be the same. Even if – as Apple suggest – HLS is to be mandatory for their player2, that doesn’t necessitate a big corporate third-party provider. Having an “allowlist” of people who can host your podcast’s video is gatekeeping.

Also, it’s… not really podcasting any more. It’s been pointed out before that “platform-exclusive” podcasts (I’m looking at you, Spotify) are not truly podcasts: if it’s not an RSS feed plus some audio files, it’s not a podcast, it’s lightly sparkling audio.

Can the same analogy be used for a podcast player? Is a player that only supports content (in this case, video content) if it’s hosted by a particular partner… not-a-podcast-player? Either way, it’s pretty embarrassing for Apple of all people to turn their back on what makes a podcast a podcast.

Footnotes

1 Technically, my podcast is just a collection of posts that share a ClassicPress “tag”; ClassicPress gives me “RSS feed of a particular tag” for free, so all I needed to do was (a) add metadata pointing to my MP3 files and (b) use a hook to inject the relevant <enclosure> element into the feed. There are almost certainly plugins that could have done it for me, but it’s a simple task so I just wrote a few lines of PHP code and called it a day!

2 This is… fine, I guess. I’d rather that an <enclosure> be more-agnostic. If I only want to provide a single fat MP4 file, then it’s up to my listeners to say “this is shit, why can’t I stream this on my cellular data, I’m gonna listen/watch somebody else instead”. But even if HLS is required, that’s not difficult: I talked about how to do it last year while demonstrating variable-aspect-ratio videos (in vanilla HTML)!
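
And for the curious: the HLS side is itself just more plain-ish text. A master playlist is a little .m3u8 file pointing at per-quality variant playlists, which in turn point at the actual media segments. A minimal sketch, with invented filenames:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360
360p/index.m3u8

Any static host that can serve the playlists and their segments can serve HLS; nothing about the format itself demands a hand-picked commercial provider.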

snakes.run: rendering 100M pixels a second over ssh

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I made a massively multiplayer snake game that’s accessible over ssh. Play by running ssh snakes.run in your terminal…

I’d been speculating for the last month or so about what Nolen Royalty had been working on recently that required such high performance from the SSH protocol, but now we know: it’s massively multiplayer snake.

Screenshot of a game of snake: a medium-sized blue snake explores amongst several smaller purple snakes
My personal best length is in the region of 180, but I wouldn’t dare risk taking a shortcut when I’m at those lofty heights.

There’s a philosophical thing here that makes it feel different from probably any other Snake game you’ve ever played: it’s a thin client. All of the program runs on the server.

Even slither.io is a split-client. The server is responsible for game state, but rendering the graphics based upon that state runs in JavaScript code in the browser. Not so with snakes.run. The entire output is delivered as terminal rendering instructions.
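
To make that concrete: everything you see is drawn with ANSI escape sequences, the same control codes any terminal program can emit. A toy illustration (nothing to do with snakes.run’s actual code), in TypeScript for Node:

// Draw one blue "snake segment" using raw ANSI escape sequences: the kind of
// terminal rendering instructions a thin client receives over ssh.
const ESC = "\x1b[";
function drawCell(row: number, col: number, ch: string): string {
  // Move the cursor to (row, col), switch to a blue foreground, print, reset.
  return `${ESC}${row};${col}H${ESC}34m${ch}${ESC}0m`;
}

process.stdout.write(`${ESC}2J`);           // clear the screen
process.stdout.write(drawCell(5, 10, "█")); // a segment at row 5, column 10

The hard part – and presumably where the “100M pixels a second” engineering went – is doing this for hundreds of simultaneous players without flooding each connection.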

It’s a reminder of how computing used to be. Dumb terminals sharing a mainframe that did all of the processing, with only basic I/O delegated to the terminals, were the dominant way to use computers from the 1960s through the 1970s, until the growth of microcomputer technologies made it cost-efficient to make “thick” clients: powerful computers that would sit right on your desk.

It took decades for long-distance networks to reach the scale and performance that we see today: fast enough that worldwide thin-client architectures are once again possible. Nowadays, the closest thing most people use to a thin client… is their Web browser, passing messages back and forth. If you’re playing a real-time multiplayer game, though, you expect the processing to be shared between your computer (for low-latency graphical output) and the server (for state management and prevention of cheating).

snakes.run turns that expectation on its head. And that’s pretty cool.

It’s also a lot of fun. You should give it a go.
