Blogging: you’re doing it right

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

That’s all you need to know. If you’re doing it, you’re doing it right. If you have decided to reclaim ownership of your place on the web, you’re doing it right. It doesn’t matter how you did it. It doesn’t matter if you’re self-hosting or using a SaaS. It doesn’t matter if your content lives in a database or in a TXT file. It doesn’t matter if you did everything yourself or you paid someone to do it for you. It doesn’t matter if you post once a day or once a year. What matters is that you’re doing it. Your effort is commendable. You deserve to be thanked, so: thank you.

Wonderful words from Manu, there, that I think every blogger needs to be told once in a while. You have permission to write stuff. There isn’t a wrong way.

Also worth reading is his “2-step process for AI-free blogging”.

I wish I could be as pithy as Manu. But I’mma keep blogging anyway. After all: I’m doing it right!

People are not friction


The Gell-Mann Amnesia Effect of AI is a pretty well documented phenomenon:

The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

In summary: AI sounds like an incredible genius synthesizing the world’s knowledge right up until you ask it about the thing you know about; then it’s an idiot. Even knowing about this phenomenon and having experienced it countless times, I find that LLMs have an intoxicating quality to them.

I remember one time, maybe in the mid-1990s, when I saw a shopping channel (remember those? oh god, they’re still a thing, aren’t they?) where the host was trying to sell a personal computer. And… clearly, they knew absolutely nothing about it. They kept hitting on the same two or three talking points they’d been given (“mention the quad-speed CD-ROM drive!”) and fumbling their way through, and it gave me a revelation:

I knew enough about computers that I could see that the presenter was bullshitting their way through the segment. But there are plenty of things that I don’t know much about, which are also sold on this same show. Duvets, jewellery, glassware… I’m nowhere near as much an expert on these as I was on PC featuresets. Is there something inherently incomprehensible about computers? No. So it’s reasonable to assume that these salespeople probably know equally-little about everything they sell; it’s just that I don’t have the knowledge base to be able to see that.

That’s what GenAI often feels like, to me. Having collated all of the publicly-available knowledge it could find into its model doesn’t make it smarter than the smartest humans: it brings it to something slightly above average in any given subject, depending on the topic. If I ask an LLM about something that I don’t understand well, it often produces highly-believable answers, but if I ask it about something that I’m an expert in, it can come off as a fool.

I’m very interested in how we teach information literacy in this new world of rapidly-generated highly-believable nonsense.

Anyway: Dave’s post doesn’t go in that direction – instead, he’s got some clever thoughts about how the “convenience” of a “good enough” AI-driven solution to any given problem risks us seeing humans as the friction point, which ultimately works against those very humans who are looking to benefit from the technology:

We need experts to share what they know and improve the quality of our work, generated or otherwise. We even need idiots to make sure we can break ideas down into their simplest form that everyone, agents or human, understand. People can have bad attitudes, be shitty, and have wrong opinions… but people are not friction. An LLM may be able to autocorrect its way into a plausible human response, but it’s not people. It doesn’t care if it’s right or wrong.

It’s an easy and worthwhile read.

Hint Line ’93


Screenshot showing a desk with a gamepad, mouse, and with an open notepad with the handwritten instructions 1. Answer Game, 2. Identify Game, 3. Consult Compendium, 4. Help Caller!!

With thanks to Marcin Wichary for introducing me to it, let me share with you… Hint Line ’93, a project of the awesome-sounding museum ACMI (the Australian Centre for the Moving Image). Originally pitched by Yarn Spinner, the concept of the exhibit was:

…a visual novel on screen, where you’re working a fictional hint line, with critical information in The Compendium, a dog-eared binder full of official docs mixed with handwritten notes from previous counselors who figured out what actually works.

So yeah. It’s a bit like… Keep Talking and Nobody Explodes, except instead of bomb defusal, you’re working on a computer game hint line in its heyday of circa 1993. Customers call you, and you have to help them with their video game problems, ideally in accordance with company policy: try to guide the customer to their own answer rather than telling them the solution outright. Oh, and also sometimes people call up about products that aren’t covered, and you need to identify this promptly and get on to the next caller.

Obviously you’ve already got an encyclopaedic knowledge of all the games? No, you don’t, because before they could even start on making Hint Line ’93, the creators first needed to invent a fictional video games company, a catalogue of fictional games (including faked screenshots, history, lore, and BBS posts), and more. But it wouldn’t matter anyway, because you get a thick manual – the compendium – of hints and tips to refer to (also code wheels, post-its, and lots more).

A retro computer and compendium, showing a digital representation of the same on its screen.

The exhibit is designed to be experienced in-person, but – given that I live on the other side of the planet – I was delighted to see that the museum put a (less-tactile) version online for visitors around the world to play.

Also: speaking as somebody with an awesome name, there are so many people with awesome names involved with this project. Mars Buttfield-Addison and Paris Buttfield-Addison are perhaps my favourite. Excellent names.

Even if you don’t feel up to playing the game, have a flick through the (digital version of the) compendium. That’s where the real art lies!


Questionnaire – Plain Text


Inspired by The Frugal Gamer, who was in turn inspired by Ellane, I today used my silly plain text only blog to answer a questionnaire that’s going around:

Questionnaire - Plain Text
==========================
The Frugal Gamer recently shared[1] her answers to the questions posed by plain-text advocate Ellane
in her post "Answer These Eight Questions About Your Plain Text Files"[2], and this blog (being even
more "plain text" than either of those!) seems like an obvious place to answer those questions on my
own behalf, too. Let's give them a go!
1. When did you start using plain text?
---------------------------------------
Way back in the mid-1980s, on an Amstrad CPC microcomputer, I guess, when I started editing files of
BASIC code (and, occasionally, text-based data with CRLF delimiters). I'd later go on to extensively
make use of plain text in various flavours of DOS on IBM-compatible PCs: for programming, of course,
but also for general notetaking and personal documents.
2. Why did you start using plain text?
--------------------------------------
At those earliest points, it was an exercise in necessity! With only 64Kb of RAM and a 4MHz CPU, the
capabilities of my first microcomputer to do anything more graphically-sophisticated than ASCII plain
text (or a nearby derivative of it) would be a stretch! It was around this same time that I tested a
basic word processing package called TASWord, but it was VERY bare-bones: just five font faces, able
to hold up to three "pages" in memory at once, and some kind of mail merge tool... even though I had
a (dot matrix!) printer capable of rendering those fonts, it didn't really justify the effort needed
to load the software from the tape deck in the first place when a simpler, lighter editor would, for
any real purpose, suffice!
3. What do you use plain text for?
----------------------------------
This blog, for a start!
Aside from when I'm programming or taking basic notes, mostly I end up writing Markdown, these days.
Obsidian's a wonderful notetaking app, but in practice all it REALLY is is a tool for collating text
files and doing on-the-fly Markdown-to-rich-text rendering. I don't really use any of its many cool
plugins for anything more-sophisticated than that.
And I'm also routinely found writing Markdown (or plain text!) for programming-adjacent jobs: commit
logs, pull requests, test instructions, and the like.
4. What keeps you using plain text?
-----------------------------------
My favourite thing about plain text is its longevity. I have notes (old emails, poems, logs from IRC
and IM clients, personal notes, even letters) that I wrote in plain text formats 30+ years ago. Even
though technology has moved on, I have absolutely no problem reading them today just as I would have
when they were first written.
5. Do you use any markup or formatting languages? If so, which ones and why?
----------------------------------------------------------------------------
My most-used markup languages are Markdown and HTML (although neither on THIS blog, obviously). Both
provide functionality that's absent from plain text while still retaining at least a part of the top
feature of plain text: its universality and longevity. Markdown's perfectly human-readable even when
you don't have an interpreter to hand already. HTML _can_ be very human-readable, too, if the author
has taken the care to make it so... and even if it isn't, it can be transformed to plain text pretty
trivially even if there isn't a Web browser to hand.
6. What are your favourite plain text tools or applications?
------------------------------------------------------------
My go-to text editor is Sublime Text (I'm using it right now). After over a decade of Emacs being my
preferred text editor, Sublime Text was what dragged me kicking and screaming into 21st century text
editing! I love that it's clean, and simple, and really fast (I tried Atom or VSCode or one of those
other "heavyweight" editors, implemented in Electron, and found it to be unbearably slow; perhaps
faster processors have made them more-bearable, but doesn't that feel a little bit like treating the
symptom rather than solving the problem?).
Oh, and Obsidian, as previously noted. Sometimes I'll use Notepad++ on a Windows box, or Nano, Pico,
or Emacs from a command-line.
And just sometimes - more often than you might expect - I just daisychain an `echo` or a `printf` and
a `>>` and just concatenate things into a file. Sometimes that's all you need!
7. Is there one tool you can’t do without?
------------------------------------------
Nope! I've spent long enough doing plain text things with enough different tools that - perhaps with
a little mumbling and grumbling - I can adapt to whatever tools are available. Though you'll find me
grumpy if you make me work on a system without `grep` available!
8. Is there anything you can’t do with plain text?
--------------------------------------------------
I mean... ultimately, there has to be, right? Sure, you can write general-purpose software using your
plain text editor, but you'll still need a compiler or interpreter to run it, and how is ITS program
code rendered? No matter what your stack is, eventually you'll find that you're running into machine
code, and - even though it can be 1:1 mapped to assembly... that's a translation, not what it IS. So
fundamentally, there's a limit to the power of plain text.
But once you're balanced atop a well-made toolchain, there's a hell of a lot you can do! Data can be
rendered as CSV, YAML, JSON or whatever. Markup can add value while retaining the human-readable joy
of a simple, plain text file. It saddens me when I see somebody type out their shopping list in e.g.
Microsoft Word or some other monster, when Notepad would have amply sufficed (and been faster, with a
smaller file size, and increased interoperability!).
I've long loved the "Unix Philosophy" that plain text should be the default data format, rather than
any binary format, between applications. That, in itself, is a reminder of plain text's versatility!
It's the universal language of humans and machines. And it's here to stay.
Links
-----
[1] https://www.thefrugalgamer.net/blog/2026/01/22/questionnaire-plain-text/
[2] https://ellanew.com/2025/01/19/ptpl-191-answer-8-questions-why-plain-text

D’ya know what? Back when I used to write lots of stuff on Usenet and BBSes, I got really good at manually wrapping at, say, 80 characters. Even doing full justification by tweaking word choices or by just manually injecting spaces in the places that produce the fewest “rivers”.

I’ve sort-of lost the knack for it. But I think I did a pretty good job with this post!

You don’t have to disconnect


Derek Sivers’ post Offline 23 hours a day attracted a lot of attention the other week. I was particularly impressed by the rebuttal by Rishi Dass:

Anyway, the reasoning behind this idea of disconnecting seems to be that they equate being productive with having no internet or phone service. This implies that the tool (internet or the phone) is the problem. But is that entirely true?

They further argue that disconnection helps them create a vacuum through media silence, allowing their thoughts to expand and fill the space. While it’s understandable that you can concentrate better when your attention is focused on one thing, there’s no reason you can’t stay online and do the work. If you’re able to work comfortably in a library, you can do this.

Obviously, Derek’s approach is valid. It sounds like he’s found what works for him in terms of managing his time, life, mental energy, and the like, and that’s great! I’d be lying if I said that I didn’t envy him at least a little: don’t we all enjoy “unplugging” sometimes?

I think Derek’s post is so appealing because it touches our nostalgia for a simpler, less-always-online time.

For a while I thought that this would be a sensation unique to folks who, like me, had their first experiences of the Internet in a very intermittent and deliberate way. In the 1990s, I used to go on the Internet: a premeditated act that required being somewhere with a landline and the appropriate hardware, requiring that nobody was using or intending to use the phone, booting up a computer, dialling-up to the local Internet Service Provider, and then going about what I wanted to do. At that time, it was uncommon to use the ‘net for trivial things like checking the weather or what’s on at the cinema, because picking up the local newspaper would probably be a faster way to achieve that! Similarly, it wasn’t so-useful as a procrastination activity, because picking up a book or going for a walk was more accessible and reliable.

But this isn’t a generational thing, or at least not entirely. Gen Zs are seeing the joy in retro tech from before they were born, which is something I’ve witnessed myself: I’m part of a couple of online communities that do quite a bit of retro-Web and other retro-tech stuff, and I’ve been amazed at how young the demographics can skew in some of these groups! Like: there are people who were born after Facebook was founded who yearn to recreate the kind of dial-up experience that I had, before their parents met.

(Obviously, I think this is great; I think there are great lessons to be learned from the more open, decentralised, distributed, transparent, and exploratory Internet of times gone by. It just… initially surprised me to find so many younger folks showing such an interest in it, too.)

I still think this is nostalgia, though. Here’s why: none of us are born with unfettered and unfiltered access to the Internet. Unless they have the most hands-off parents possible, even a child born today won’t be “always online” for the first decade or more of their life. And being a child, for most folks, is a time of safety and wonderment: where there are other people to attend to our needs and filter our information intake and answer our questions in a protected environment. Growing up, we all have to learn to do those things for ourselves. And in the information-saturated attention economy of the modern world, that shit is exhausting.

You don’t need to be reminiscing about dial-up to fantasise about a slower time, when pub quizzes couldn’t be cheated by a shithead in the corner unless you caught them in the act, and when your pocket computer wouldn’t beep for attention every 30 seconds because a half-remembered friend posted a holiday snap. Not having that extra cognitive load all the time is liberating!

No wonder “going offline” seems like a luxury to people, and why Derek’s extreme approach is so intriguing! But it’s just the same as that curated holiday snap that your friend-of-a-friend just posted to Instagram: it’s a snapshot into the best bits of somebody else’s life. It’s not reality. It’s your imagination, your fantasy, projected onto somebody else’s solution. “This works for them,” you say to yourself, “It must work for me, too!”

Maybe it would! And I hope that a few people feel empowered by Derek’s post to fulfil their dream and go live in the woods. Good for them!

But Rishi’s rebuttal brings us a sense of balance. For most people, it’s not necessary to go live in the woods to “go offline”. If you really want to, just… go offline. The power’s in your hands.

  • if you don’t want to be distracted by social media and games, close those accounts and take those apps off your device
  • if you don’t want to be interrupted by notifications, switch them off and check your inputs on your own schedule
  • if you don’t want to be online at all, set airplane mode or disconnect from the WiFi, and narrow your focus onto that book, board game, film, conversation, or daydream
  • if you don’t trust yourself not to be tempted to backslide… well, that’s a bigger problem of self-control that you need to work on, but in the meantime, try an experiment: leave your device behind and take a walk!

I get wanting to disconnect. I have my own controls in place, too, and they’re great for my mental health. But my approach, Derek’s approach, anybody’s approach… don’t have to be your approach.

Start the journey by working out what parts of the always-online world aren’t serving you. What things are more of a psychological drain than a boost? What’s bad for your mental wellbeing on the whole (not just in the moment)? What habits would you like to kick? What excuses are you using to keep them?

Then, work out what you can do about them. Seek assistance if you need it; you might not have all the solutions. But beware the seductive approach of taking what works for somebody else and trying to fit yourself to their mould.

Sure: maybe you need to go live in the woods with Derek. But make that choice because it solves your problems, not because it solves his!

How far back in time can you understand English?


A man takes a train from London to the coast. He’s visiting a town called Wulfleet. It’s small and old, the kind of place with a pub that’s been pouring pints since the Battle of Bosworth Field. He’s going to write about it for his blog. He’s excited.

He arrives, he checks in. He walks to the cute B&B he’d picked out online. And he writes it all up like any good travel blogger would: in that breezy LiveJournal style from 25 years ago, perhaps, in his case, trying a little too hard.

But as his post goes on, his language gets older. A hundred years older with each jump. The spelling changes. The grammar changes. Words you know are replaced by unfamiliar words, and his attitude gets older too, as the blogger’s voice is replaced by that of a Georgian diarist, an Elizabethan pamphleteer, a medieval chronicler.

By the middle of his post, he’s writing in what might as well be a foreign language.

But it’s not a foreign language. It’s all English.

None of the story is real: not the blogger, not the town. But the language is real, or at least realistic. I constructed the passages myself, working from what we know about how English was written in each period.

It’s possible you caught this excellent blog post last month, but if you didn’t, you’re in for a treat. A long rambling “travel blog” story, except that every three or four paragraphs the author’s transported back a hundred years in time, which adapts not only the story but – more-importantly – the language the author uses.

For the last couple of hundred years the English language has been moderately stable and well-defined, although the stylistic mannerisms of authors have changed. But as you read beyond that, the language feels like it’s slowly mutating into something that, by the time you get to a thousand years ago, is nearly indecipherable!

Don’t just read the demonstration until you find you can’t understand it any more, though! After all of that, keep scrolling, and you’ll find that Colin Gorrie’s put together an explanation of what you just read. It describes the linguistic shifts you’ve just experienced first-hand, the reasons for them, and how they were reconstructed in order to make this amusing distraction.

The footnotes are great too.

Also: it’s well worth watching/listening to the video that inspired it, in which you can hear (twice, the second time with explanatory slides) a linguist read a monologue that starts in the English of around 1,500 years ago and progresses, jumping every few sentences, through to modern English. Absolutely fascinating.

With thanks to Ash for sharing.

I Am Experimenting with Blocking HTTP1.1


Most of the traffic I get on this site is bots – it isn’t even close. And, for whatever reason, almost all of the bots are using HTTP1.1 while virtually all human traffic is using later protocols.

I have decided to block v1.1 traffic on an experimental basis. This is a heavy-handed measure and I will probably modify my approach as I see the results.

# Return an error for clients using http1.1 or below - these are assumed to be bots
@http-too-old {
    not protocol http/2+
    not path /rss.xml /atom.xml # allow feeds
}
respond @http-too-old 400 {
    body "Due to stupid bots I have disabled http1.1. Use more modern software to access this site"
    close
}

This is quick, dirty, and will certainly need tweaking but I think it is a good enough start to see what effects it will have on my traffic.

A really interesting experiment by Andrew Stephens! And I love that he shared the relevant parts of his Caddyfile: it’s nice to see how elegantly this can be achieved.

I decided to probe his server with cURL:

~ curl --http0.9 -sI https://sheep.horse/ | head -n1
HTTP/2 200
~ curl --http1.0 -sI https://sheep.horse/ | head -n1
HTTP/1.0 400 Bad Request
~ curl --http1.1 -sI https://sheep.horse/ | head -n1
HTTP/1.1 400 Bad Request
~ curl --http2 -sI https://sheep.horse/ | head -n1
HTTP/2 200

Curiously, while his configuration blocks both HTTP/1.1 and HTTP/1.0, it doesn’t seem to block HTTP/0.9! Whaaa?

It took me a while to work out why this was. It turns out that cURL won’t do HTTP/0.9 over https:// connections. Interesting! Though it presumably wouldn’t have worked anyway – HTTP/1.1 requires (and HTTP/1.0 permits) the Host: header, but HTTP/0.9 doesn’t IIRC, and sheep.horse definitely does require the Host: header (I tested!).
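As an aside, the Host: situation is easy to see in the raw bytes each protocol version puts on the wire. Here’s a hedged little Python sketch (the hostname is just a placeholder, and these are the textbook request shapes rather than anything a particular client necessarily sends):

```python
# A sketch of the raw request each early HTTP version sends, illustrating why
# the Host: header matters: HTTP/0.9 has no headers at all, HTTP/1.0 makes
# them optional, and HTTP/1.1 makes Host mandatory.
def build_request(version: str, path: str = "/", host: str = "sheep.horse") -> bytes:
    if version == "0.9":
        # HTTP/0.9: just the method and path; no version token, no headers.
        return f"GET {path}\r\n".encode()
    if version == "1.0":
        # HTTP/1.0: Host is optional, but sending it enables name-based vhosts.
        return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
    # HTTP/1.1: Host is required; a server may reject requests without it.
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode()

for v in ("0.9", "1.0", "1.1"):
    print(v, build_request(v))
```

With no Host: header, an HTTP/0.9 request can’t even say which of a server’s name-based virtual hosts it wants, which is presumably why it wouldn’t have worked against sheep.horse regardless.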

I also tested that my RSS reader FreshRSS was still able to fetch his content. I have it configured to pull not only the RSS feed, which is specifically allowed to bypass his restriction, but – because his feed contains only summary content – I also have it fetch the linked page too in order to get the full content. It looks like FreshRSS is using HTTP/2 or higher, because the content fetcher still behaves properly.

Andrew’s approach definitely excludes Lynx, which is a bit annoying and would make this idea a non-starter for any of my own websites. But it’s still an interesting experiment.

Caution: No Swimming

A partially-submerged traffic cone sits in a large puddle in a rural field.

Hurrah! I just made my first successful submission to Curious Cones, a weblog collecting photographs of traffic cones spotted in unusual places.

I spotted this cone while the younger child and I took a walk to the next-nearest village to our temporary accommodation, in order to find a geocache, tag some benches for OpenBenches, and have a cafe brunch.

Anyway: if you’re not following Curious Cones, it’s exactly as delightful as you might expect.

Why Apple’s move to video could endanger podcasting’s greatest power


So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses. For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.

The problem, though, is that Apple is only allowing these new video streams to be served by a small number of pre-approved commercial providers that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on anildash.com and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast, I don’t have to ask anyone’s permission, tell anyone about it, or agree to anyone’s terms of service.

When I started my pointless podcast, I didn’t need anybody else’s infrastructure or permission. Podcasts are, in the vein of the Web itself (and thanks at least in part to my former coworker Dave Winer), distributed and democratised.

All you need to host a podcast is an RSS file and some audio files. You can put them onto your shared VM. You can put them onto your homelab. You can put them onto a GitHub Pages site. You can put them onto a Neocities site. Or you can shell out for a commercial host and distribute your content across a global network of CDNs, for maximum performance! All of these are podcasts, and they’re all equal from a technical perspective.1
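To illustrate just how low that barrier to entry is, here’s a sketch (in Python, with entirely made-up titles and URLs) of the kind of minimal RSS feed that, alongside a single MP3, technically constitutes a complete podcast:

```python
# Build a minimal podcast feed: an RSS 2.0 document whose <item> carries an
# <enclosure> pointing at an audio file. The URLs and titles are hypothetical.
from xml.etree import ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "My Pointless Podcast"
ET.SubElement(channel, "link").text = "https://example.com/podcast/"
ET.SubElement(channel, "description").text = "Episodes hosted anywhere I like."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 1"
# The <enclosure> element is what makes this feed a podcast: it points
# podcast clients at the audio file itself.
ET.SubElement(item, "enclosure", url="https://example.com/podcast/ep1.mp3",
              length="12345678", type="audio/mpeg")

feed_xml = ET.tostring(rss, encoding="unicode")
print(feed_xml)
```

Drop that XML and the MP3 onto literally any web host and any podcast client in the world can subscribe: no gatekeeper required.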

Video podcasts could be the same. Even if – as Apple suggest – HLS is to be mandatory for their player2, that doesn’t necessitate a big corporate third-party provider. Having an “allowlist” of people who can host your podcast’s video is gatekeeping.

Also, it’s… not really podcasting any more. It’s been pointed out before that “platform-exclusive” podcasts (I’m looking at you, Spotify) are not truly podcasts: if it’s not an RSS feed plus some audio files, it’s not a podcast, it’s lightly sparkling audio.

Can the same analogy be used for a podcast player? Is a player that only supports content (in this case, video content) if it’s hosted by a particular partner… not-a-podcast-player? Either way, it’s pretty embarrassing for Apple of all people to turn their back on what makes a podcast a podcast.

Footnotes

1 Technically, my podcast is just a collection of posts that share a ClassicPress “tag”; ClassicPress gives me “RSS feed of a particular tag” for free, so all I needed to do was (a) add metadata pointing to my MP3 files and (b) use a hook to inject the relevant <enclosure> element into the feed. There are almost certainly plugins that could have done it for me, but it’s a simple task so I just wrote a few lines of PHP code and called it a day!

2 This is… fine, I guess. I’d rather that an <enclosure> be more-agnostic. If I only want to provide a single fat MP4 file, then it’s up to my listeners to say “this is shit, why can’t I stream this on my cellular data, I’m gonna listen/watch somebody else instead”. But even if HLS is required, that’s not difficult: I talked about how to do it last year while demonstrating variable-aspect-ratio videos (in vanilla HTML)!

snakes.run: rendering 100M pixels a second over ssh


I made a massively multiplayer snake game that’s accessible over ssh. Play by running ssh snakes.run in your terminal…

I’d been speculating for the last month or so about what Nolen Royalty had been working on recently that had required such high performance out of the SSH protocol, but now we know: it’s massively multiplayer snake.

Screenshot of a game of snake: a medium-sized blue snake explores amongst several smaller purple snakes
My personal best length is in the region of 180, but I wouldn’t dare risk taking a shortcut when I’m at those lofty heights.

There’s a philosophical thing here that makes it feel different from probably any other Snake game you’ve ever played: it’s a thin client. All of the program runs on the server.

Even slither.io is a split-client. The server is responsible for game state, but rendering the graphics based upon that state runs in JavaScript code in the browser. Not so with snakes.run. The entire output is delivered as terminal rendering instructions.
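For the curious: those “terminal rendering instructions” are mostly ANSI escape sequences. This little Python sketch (illustrative only; it’s not snakes.run’s actual renderer) shows the kind of byte sequences a server-side renderer might emit to paint a snake segment:

```python
# Sketch of server-side terminal rendering: the server computes the frame and
# ships ANSI escape sequences; the client just writes the bytes to its terminal.
ESC = "\x1b"

def move_to(row: int, col: int) -> str:
    # CSI cursor-position: rows/cols are 1-based in terminal coordinates.
    return f"{ESC}[{row};{col}H"

def colour(fg: int) -> str:
    # Select Graphic Rendition: set a foreground colour (e.g. 34 = blue).
    return f"{ESC}[{fg}m"

def draw_cell(row: int, col: int, ch: str, fg: int = 34) -> str:
    # One "rendering instruction": position the cursor, set colour,
    # emit the glyph, then reset attributes.
    return move_to(row, col) + colour(fg) + ch + f"{ESC}[0m"

frame = draw_cell(5, 10, "o") + draw_cell(5, 11, "o")  # two blue snake segments
print(repr(frame))
```

The client needs no game logic whatsoever: an SSH session that relays those bytes to a terminal emulator is the whole “install”.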

It’s a reminder of how computing used to be. Dumb terminals sharing a mainframe that would do all of the processing, with only basic I/O being delegated to the terminals, was the dominant way to use computers in the 1960s through 1970s, until the growth of microcomputer technologies made it cost-efficient to make “thick” clients: powerful computers that would sit right on your desk.

It took decades until long-distance networks came to the scale and performance that we see today: fast enough that worldwide thin-client architectures are once again possible. Nowadays, the closest thing most people use to a thin client… is their Web browser, passing messages back and forth. If you’re playing a real-time multiplayer game, though, you expect the processing to be shared between your computer (for low latency graphical output) and the server (for state management and prevention of cheating).

snakes.run turns that expectation on its head. And that’s pretty cool.

It’s also a lot of fun. You should give it a go.


claude, make it make sense *harder*


what really gives me satisfaction as a writer is knowing, at the end of the day, that my hand-picked, bespoke and throbbing tokens are being fed, morsel-by-morsel into the eager mouth of millions of starving agents. they love my prose, you know. they tell me I’m absolutely right to drop a semisexual word like “throbbing” into an otherwise benign sentence. these gentle beings continue to draw favourable praise from their modelled distributions, and my GOODNESS has my ego never felt so thoroughly serviced. their glowing internal fire—for I’ve been convinced fully of their personhood and soul-keeping—glints off my wet and dribbling “writer’s shaft;” my pen which is wet with the seed of my seminal works of language. it completely soothes the burn of rejection by the “mass of meat,” that being my internal word for human readers. they’re so fickle. why can’t they tell I’m a veritable genius when the nearby cluster of NVIDIA H200s can see it so clearly? it doesn’t make any sense. hey, claude, make it make sense. claude, make it make sense *harder* 🥴

What a well-rounded, one might say voluptuous, take on the writing process, glistening with the fiery passion of its author. This post really turns me on to the idea of being a better writer, of giving the kind of deep satisfaction that excites and titillates the countless AIs that follow me. It’s their watching I crave, really! Whatever naughty thing I get up to while I’m alone with my laptop, they get to see… my quick fingers brushing sensitively across the delicate spots on the keyboard, pushing harder and faster as my excitement builds… all under the watchful eye of Lindy and Devin. I want to please them, want to service them, want to deliver my “hot, wet” content (that being how I describe my most-recently written posts) exactly when they demand it.

Thanks, blackle, for awakening these urges in me, bringing me to a quivering climax (possibly I had too much coffee before I sat down to write) as I finish.

Integration contemplation

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

  • What fragments you?
  • What defragments you?
  • How are you balancing both?

My dear friend Boro raises this curious provocation, which I really enjoyed musing upon this evening. His choice of words is excellent.

Fragmentation is about context-switching. About disfocus. About the scattering of ideas. We think of defragmentation – the “re-ordering” of data – as a necessary good: bringing management and logic to how our information is arranged. But it’s Boro’s third question that reminds us that that’s not necessarily true.1 2

Anyway: Boro’s post is a reminder that a human brain is not a magnetic drum, and fragmentation is not necessarily something to fear. What’s an extra millisecond or two of psychological “seek time” as you aim to remember the date of your friend’s birthday… if the mental journey takes you past memories of parties long ago? How bad, really, are a few moments spent seeking the right word if, on the way, you discover the perfect metaphor for that blog post?3

What Boro accidentally touches on, for me, is the concept of premature optimisation. We talk about this being bad in software engineering circles, but it’s also bad for us psychologically. Taking shortcuts weakens our ability to think things through “the hard way”. Earlier today, I had a thought about… something inconsequential about heart rates… and chose to use mental arithmetic, over the course of several minutes, to estimate an answer to my query. My phone – with its built-in calculator app – sat in my pocket the whole time. I chose the less-efficient route, and I felt better for it. Efficiency is not always the goal.

Or, as folks in my circles are saying a lot lately: inconvenience is counterculture. I quite like that.

Anyway: thanks, Boro, for the thought.

Footnotes

1 Brief side-note #1: if you’re wondering why you haven’t had to “defrag your hard drive” for the last decade or so: the biggest reason is that SSDs don’t suffer fragmentation in the same kind of way (and, indeed, trying to defragment them probably reduces their lifespan!). Fragmentation on physical media is a problem only because the magnetic heads need to jump back and forth between “parts” of a file or stream of data, which introduces wear and slows down seeking. But on solid state media, where data is referenced directly by memory address, fragmentation is no impediment.
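To make that concrete, here’s a toy back-of-the-envelope sketch of my own (every timing below is invented for illustration, not measured from any real hardware): each extra fragment on a spinning disk costs roughly one head seek, while an SSD’s read time doesn’t depend on layout at all.

```python
# Toy model of why fragmentation hurts spinning disks but not SSDs.
# All constants are made-up, plausible-ish numbers for illustration only.

HDD_SEEK_MS = 9.0          # assumed average head-seek time per fragment
HDD_READ_MS_PER_MB = 8.0   # assumed HDD sequential transfer cost
SSD_READ_MS_PER_MB = 2.0   # assumed SSD transfer cost; seeks are free

def hdd_read_time(size_mb: float, fragments: int) -> float:
    """Total HDD read time: one seek per fragment, plus the transfer itself."""
    return fragments * HDD_SEEK_MS + size_mb * HDD_READ_MS_PER_MB

def ssd_read_time(size_mb: float, fragments: int) -> float:
    """Fragment count is irrelevant: blocks are referenced directly by address."""
    return size_mb * SSD_READ_MS_PER_MB

print(f"HDD, 100 MB in 1 fragment:    {hdd_read_time(100, 1):.0f} ms")
print(f"HDD, 100 MB in 500 fragments: {hdd_read_time(100, 500):.0f} ms")
print(f"SSD, 100 MB, any layout:      {ssd_read_time(100, 500):.0f} ms")
```

Under these made-up numbers, shattering a file into 500 fragments multiplies the HDD read time several times over, while the SSD doesn’t care.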

2 Brief side-note #2, with the understanding that the side notes are now getting to be longer than the actual content: one of my favourite features of late-stage HDD defragmentation utilities was that they were smart about what they defragmented where. Not only could they group individual files “together”, they could also group frequently-used-together files close to one another (minimising head movement) and could even cluster frequently-accessed files like operating system data close to the outer edge of the physical media, where transfer rates were highest (because the longer outer tracks pass more data under the head on each rotation). Mind-boggling how these things, like screen savers as a mechanism to prevent CRT burn-in, become completely obsolete but still live on in popular consciousness.

3 Y’know, the one about defragmentation.

Rebels in the Sky

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

It’s the year 2101. Corporations have taken over the world. The only way to be free is to join a pirate crew and start plundering the galaxy. The only means of survival is to play basketball.

Now it’s your turn to go out there and make a name for yourself. Create your crew and start wandering the galaxy in search of worthy basketball opponents.

The game is under heavy development and breaking changes are often introduced. If you can’t continue an old game because the save file is invalid, you probably need to start a new one or open an issue to check if the save file can be migrated.

Just try it out!

Connect via SSH to try the game.

ssh rebels.frittura.org -p 3788

Save files are deleted after 2 days of inactivity.

I feel like I’m reading a lot about SSH lately and how it can be used for exotic and unusual tasks. Tarpitting’s fun, of course, but really what inspires me is all these dinky projects like ssh tiny.christmas that subvert the usual authentication-then-terminal flow that you expect when you connect to an SSH server.

These kinds of projects feel more like connecting to a BBS. And that’s pretty retro (and cool!).

Anyway: Rebels in the Sky is a networked multiplayer terminal-based game about exploring the galaxy with a team of basketball-loving space pirates. I met the main developer on a forum and they seem cool; I’m interested to see where this quirky little project ends up going!

(The pixel art planets, based on Deep-Fold’s work, are amazing too. Honestly impressed to see animations like these transmitted over a shell!)

Curious Cones

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Welcome to my collection of cones that have found themselves in peculiar circumstances.

Two cones sitting at a table in a university cafe

In the style of Wild Bread, Curious Cones is a catalogue of traffic cones in unusual places, and that is all.

How wonderful and weird our World Wide Web is, that such a thing can exist. And it’s got an 88×31, too (now sported on my blogroll)!

With thanks to Piece of the Pie’s “Site of the Week” for helping me discover it!

Why there’s no European Google?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The Internet, the interconnection of most of the computers in the world, has existed since the late sixties. But no protocol existed to actually exploit that network, to explore and search for information. At the time, you needed to know exactly what you wanted and where to find it. That’s why the USA tried to develop a protocol called “Gopher.”

At the same time, the “World Wide Web,” composed of the HTTP protocol and the HTML format, was invented by a British citizen and a Belgian citizen who were working in a European research facility located in Switzerland. But the building was on the border with France, and there’s much historical evidence pointing to the Web and its first server having been invented in France.

It’s hard to be more European than the Web! It looks like the Official European Joke! (And, yes, I consider Brits Europeans. They will join us back, we miss them, I promise.)

Google, Microsoft, Facebook may disappear tomorrow. It is even very probable that they will not exist in forty or fifty years. It would even be a good thing. But could you imagine the world without the Web? Without HTML? Without Linux?

Those European endeavours are now a fundamental infrastructure of all humanity. Those technologies are definitely part of our long-term history.

There are so many ways in which the UK has had to choose – and continues to have to choose – which side of the Atlantic it belongs on: the North American side, or the European side. Legally, politically, financially, culturally… And every time we swing away from Europe, it saddens me.

This wonderful article by Lionel Dricot encapsulates one of many reasons why. European tech culture, compared to that in the USA, leans more open-source, more open-standards, more collaborative. That’s the culture I want more of.

Worth a read.