How an RM Nimbus Taught Me a Hacker Mentality

What can I make it do?

It’s been said that when faced with a new piece of technology, a normal person asks “what does it do?”, but a hacker asks “what can I make it do?”.

This kind of curiosity is integral to a hacker mindset.

A 'glider', a highly-recognisable travelling pattern from Conway's Game of Life, sometimes used in hacker symbolism.
An RM Nimbus was not the first computer on which I played Game of Life1. But this glider is here symbolically, anyway.

I can trace my hacker roots back further than my first experience of using an RM Nimbus M-Series in circa 19922. But there was something particular about my experience of this popular piece of British edutech kit which provided me with a seminal experience that shaped my “hacker identity”. And it’s that experience about which I’d like to tell you:

Shortly after I started secondary school, they managed to upgrade their computer lab from a handful of Nimbus PC-186s to a fancy new network of M-Series PC-386s. The school were clearly very proud of this cutting-edge new acquisition, and we watched the teachers lay out the manuals and worksheets which smelled fresh and new and didn’t yet have their corners frayed nor their covers daubed in graffiti.

An RM Nimbus PC-186 at its launch menu; a DOS-based function key list menu to run a variety of different programs, alongside RM manuals.
I only got to use the school’s older computers – this kind! – once or twice before the new ones were delivered.

Program Manager

The new ones ran Windows 3 (how fancy!). Well… kind-of. They’d been patched with a carefully-modified copy of Program Manager that imposed a variety of limitations. For example, they had removed the File > Run… menu item, along with an icon for File Manager, in order to restrict access to only the applications approved by the network administrator.

A special program was made available to copy files between floppy disks and the user’s network home directory. This allowed a student to take their work home with them if they wanted. The copying application – whose interface was vastly inferior to File Manager’s – was limited to only copying files with extensions in its allowlist. This meant that (given that no tool was available that could rename files) the network was protected from anybody introducing any illicit file types.

Bring a .doc on a floppy? You can copy it to your home directory. Bring a .exe? You can’t even see it.

To young-teen-Dan, this felt like a challenge. What I had in front of me was a general-purpose computer with a limited selection of software but a floppy drive through which media could be introduced. What could I make it do?

This isn’t my school’s computer lab circa mid-1990s (it’s this school) but it has absolutely the same energy. Except that I think Solitaire was one of the applications that had been carefully removed from Program Manager.

Spoiler: eventually I ended up being able to execute pretty much anything I wanted, but we’ll get to that. The journey is the important part of the story. I didn’t start by asking “can I trick this locked-down computer lab into letting my friends and me play Doom deathmatches on it?” I started by asking “what can I make it do?”; everything else built up over time.

I started by playing with macros. Windows used to come with a tool called Recorder,3 which you could use to “record” your mouse clicks and keypresses and play them back.

Recorder + Paintbrush made for an interesting way to use these basic and limited tools to produce animations. Like this one, except at school I’d have put more effort in4.

Microsoft Word

Then I noticed that Microsoft Word also had a macro recorder, but this one was scriptable using a programming language called WordBasic (a predecessor to Visual Basic for Applications). So I pulled up the help and started exploring what it could do.

And as soon as I discovered the Shell function, I realised that the limitations that were being enforced on the network could be completely sidestepped.

Screenshot showing Microsoft Word's 'Macro Editor' on Windows 3.1. The subroutine being defined contains the code 'Shell("WINFILE.EXE")'; the 'Shell' command is described in the WordBasic Help file, which is also visible.
A Windows 3 computer that runs Word… can run any other executable it has access to. Thanks, macro editor.
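For those who never had the pleasure: a WordBasic macro of the kind I’m describing only needed to be a few lines long. Here’s a sketch of the idea, reconstructed from the screenshot above rather than from my actual original:

Sub MAIN
    REM Launch File Manager, which had been "removed" from Program Manager:
    Shell "WINFILE.EXE"
End Sub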

Now that I could run any program I liked, I started poking the edges of what was possible.

  • Could I get a MS-DOS prompt/command shell? Yes, absolutely5.
  • Could I write to the hard disk drive? Yes, but any changes got wiped when the computer performed its network boot.
  • Could I store arbitrary files in my personal network storage? Yes, anything I could bring in on floppy disks6 could be persisted on the network server.

I didn’t have a proper LAN at home7, so I really enjoyed the opportunity to explore, unfettered, what I could get up to with Windows’ network stack.

Screenshot from Windows 3.11; a Microsoft Paint window is partially-concealed behind a WinChat conversation with 'RMNET013'. The other participant is warning the user to look busy and stop drawing dicks in Paint because the teacher is coming. The user is responding with confusion.
The “WinNuke” NetBIOS remote-crash vulnerability was a briefly-entertaining way to troll classmates, but unlocking WinPopup/Windows Chat capability was ultimately more-rewarding.

File Manager

I started to explore the resources on the network. Each pupil had their own networked storage space, but couldn’t access one another’s. But among the directories shared between all students, I found a directory to which I had read-write access.

I created myself a subdirectory and set the hidden bit on it, and started dumping into it things that I wanted to keep on the network8.

By now my classmates were interested in what I was achieving, and I wanted them in on the benefits of my success. So I went back to Word and made a document template that looked superficially like a piece of coursework, but which contained macro code that would connect to the shared network drive and allow the user to select from a series of programs that they’d like to run.

Gradually, compressed over a series of floppy disks, I brought in a handful of games: Commander Keen, Prince of Persia, Wing Commander, Civilization, Wolfenstein 3D, even Dune II. I got increasingly proficient at modding games to strip out unnecessary content, e.g. the sound and music files9, minimising the number of floppy disks I needed to ZIP (or ARJ!) content to before smuggling it in via my shirt pocket, always sure not to be carrying so many floppies that it’d look suspicious.

Screenshot of Windows 3.11 File Manager connected to a network with shares rmnet, shared, and students. Shared contains a hidden directory called 'dan'.
The goldmine moment – for my friends, at least – was the point at which I found a way to persistently store files in a secret shared location, allowing me to help them run whatever they liked without passing floppy disks around the classroom (which had been my previous approach).

In a particularly bold move, I implemented a simulated login screen which wrote the entered credentials into the shared space before crashing the computer. I left it running, unattended, on computers that I thought most-likely to be used by school staff, and eventually bagged myself the network administrator’s password. I only used it twice: the first time, to validate my hypothesis about the access levels it granted; the second, right before I finished school, to confirm my suspicion that it wouldn’t have been changed during my entire time there10.

Are you sure you want to quit?

My single biggest mistake was sharing my new-found power with my classmates. When I made that Word template that let others run the software I’d introduced to the network, the game changed.

When it was just me, asking the question what can I make it do?, everything was fun and exciting.

But now half a dozen other teens were nagging me and asking “can you make it do X?”

This wasn’t exploration. This wasn’t innovation. This wasn’t using my curiosity to push at the edges of a system and its restrictions! I didn’t want to find the exploitable boundaries of computer systems so I could help make it easier for other people to do so… no: I wanted the challenge of finding more (and weirder) exploits!

I wanted out. But I didn’t want to say to my friends that I didn’t want to do something “for” them any more11.

I figured: I needed to get “caught”.

16-bit Windows screenshot with a background image from WarGames. A dialog box asks 'Are you sure you want to quit? If you quit, you will lose the ability to: (a) use network chat tools, (b) play videogames when you should be doing coursework, (c) impress your friends and raise your otherwise-pathetic social status'; the cursor hovers over a 'Yes, I'm out' button.
I considered just using graphics software to make these screenshots… but it turned out to be faster to spin up a network of virtual machines running Windows 3.11 and some basic tools. I actually made the stupid imaginary dialog box you’re seeing.12

I chose… to get sloppy.

I took a copy of some of the software that I’d put onto the shared network drive and put it in my own home directory, this time un-hidden. Clearly our teacher was already suspicious and investigating, because within a few days, this was all that was needed for me to get caught and disciplined13.

I was disappointed not to be asked how I did it, because I was sufficiently proud of my approach that I’d hoped to be able to brag about it to somebody who’d understand… but I guess our teacher just wanted to brush it under the carpet and move on.

Aftermath

The school’s IT admin certainly never worked-out the true scope of my work. My “hidden” files remained undiscovered, and my friends were able to continue to use my special Word template to play games that I’d introduced to the network14. I checked, and the hidden files were still there when I graduated.

The warning worked: I kept my nose clean in computing classes for the remainder of secondary school. But I would’ve been happy to, anyway: I already felt like I’d “solved” the challenge of turning the school computer network to my interests and by now I’d moved on to other things… learning how to reverse-engineer phone networks… and credit card processors… and copy-protection systems. Oh, the stories I could tell15.

Old photograph of Dan, then a teenager, with other teenagers. Dan is labelled 'young hacker, a.k.a. bellend', while another young man is captioned 'classmate who just wanted to play lemmings'.
I “get” it that some of my classmates – including some of those pictured – were mostly interested in the results of my hacking efforts. But for me it always was – and still is – about the journey of discovery.

But I’ll tell you what: 13-ish year-old me ought to be grateful to the RM Nimbus network at my school for providing an interesting system about which my developing “hacker brain” could ask: what can I make it do?

Which remains one of the most useful questions with which to foster a hacker mentality.

Footnotes

1 I first played Game of Life on an Amstrad CPC464, or possibly a PC1512.

2 What is the earliest experience to which I can credit my “hacker mindset”? Tron and WarGames might have played a part, as might have the “hacking” sequence in Ferris Bueller’s Day Off. And there was the videogame Hacker and its sequel (it’s funny to see their influence in modern games). Teaching myself to program so that I could make text-based adventures was another. Dissecting countless obfuscated systems to see how they worked… that’s yet another one: something I did perhaps initially to cheat at games by poking their memory addresses or hexediting their save games… before I moved onto reverse-engineering copy protection systems and working out how they could be circumvented… and then later still when I began building hardware that made it possible for me to run interesting experiments on telephone networks.

Any or all of these datapoints, which took place over a decade, could be interpreted as “the moment” that I became a hacker! But they’re not the ones I’m talking about today. Today… is the story of the RM Nimbus.

3 Whatever happened to Recorder? After it disappeared in Windows 95 I’d occasionally find myself thinking “hey, this would be easier if I could just have the computer watch me and copy what I do a few times.” But it was not to be: Microsoft decided that this level of easy automation wasn’t for everyday folks. Strangely, it wasn’t long after Microsoft dropped macro recording as a standard OS feature that Apple decided that MacOS did need a feature like this. Clearly it’s still got value as a concept!

4 Just to clarify: I put more effort into making animations, which were not part of my schoolwork back when I was a kid. I certainly didn’t put more effort into my education.

5 The computers had been configured to make DOS access challenging: a boot menu let you select between DOS and Windows, but both were effectively nerfed. Booting into DOS loaded an RM-provided menu that couldn’t be killed; the MS-DOS prompt icon was absent from Program Manager and quitting Windows triggered an immediate shutdown.

6 My secondary school didn’t get Internet access during the time I was enrolled there. I was recently trying to explain to one of my kids the difference between “being on a network” and “having Internet access”, and how often I found myself on a network that wasn’t internetworked, back in the day. I fear they didn’t get it.

7 I was in the habit of occasionally hooking up PCs together with null modem cables, but only much later on would I end up acquiring sufficient “thinnet” 10BASE2 kit that I could throw together a network for a LAN party.

8 Initially I was looking to sidestep the space limitation enforcement on my “home” directory, and also to put the illicit software I was bringing in somewhere that couldn’t be trivially traced back to me! But later on this “shared” directory became the repository from which I’d distribute software to my friends, too.

9 The school computers didn’t have soundcards and nobody would have wanted PC speakers beeping away in the classroom while they were trying to play a clandestine videogame anyway.

10 The admin password was concepts. For at least four years.

11 Please remember that at this point I was a young teenager and so was pretty well over-fixated on what my peers thought of me! A big part of the persona I presented was of somebody who didn’t care what others thought of him, I’m sure, but a mask that doesn’t look like a mask… is still a mask. But yeah: I had a shortage of self-confidence and didn’t feel able to say no.

12 Art is weird when your medium is software.

13 I was briefly alarmed when there was talk of banning me from the computer lab for the remainder of my time at secondary school, which scared me because I was by now half-way through my boring childhood “life plan” to become a computer programmer by what seemed to be the appropriate route, and I feared that not being able to do a GCSE in a CS-adjacent subject could jeopardise that (it wouldn’t have).

14 That is, at least, my friends who were brave enough to carry on doing so after the teacher publicly (but inaccurately) described my alleged offences, seemingly as a warning to others.

15 Oh, the stories I probably shouldn’t tell! But here’s a teaser: when I built my first “beige box” (analogue phone tap hardware) I experimented with tapping into the phone line at my dad’s house from the outside. I carefully shaved off some of the outer insulation of the phone line that snaked down the wall from the telegraph pole and into the house through the wall to expose the wires inside, identified each, and then croc-clipped my box onto it and was delighted to discover that I could make and receive calls “for” the house. And then, just out of curiosity to see what kinds of protections were in place to prevent short-circuiting, I experimented with introducing one to the ringer line… and took out all the phones on the street. Presumably I threw a circuit breaker in the roadside utility cabinet. Anyway, I patched-up my damage and – fearing that my dad would be furious on his return at the non-functioning telecomms – walked to the nearest functioning payphone to call the operator and claim that the phone had stopped working and I had no idea why. It was fixed within three hours. Phew!


Why does SSH send 100 packets per keystroke?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Further analysis on a smaller pcap pointed to these mysterious packets arriving ~20ms apart.

This was baffling to me (and to Claude Code). We kicked around several ideas like:

  • SSH flow control messages
  • PTY size polling or other status checks
  • Some quirk of bubbletea or wish

One thing stood out – these exchanges were initiated by my ssh client (stock ssh installed on MacOS) – not by my server.

In 2023, ssh added keystroke timing obfuscation. The idea is that the speed at which you type different letters betrays some information about which letters you’re typing. So ssh sends lots of “chaff” packets along with your keystrokes to make it hard for an attacker to determine when you’re actually entering keys.

That makes a lot of sense for regular ssh sessions, where privacy is critical. But it’s a lot of overhead for an open-to-the-whole-internet game where latency is critical.

Keystroke timing obfuscation: I could’ve told you that! Although I wouldn’t necessarily have leapt to the possibility of mitigating it server-side by patching-out support for (or at least: the telegraphing of support for!) it; that’s pretty clever.

Altogether this is a wonderful piece demonstrating the whole “engineer mindset”. Detecting a problem, identifying it, understanding it, fixing it, all tied-up in an engaging narrative.

And after playing with his earlier work, ssh tiny.christmas – which itself inspired me to learn a little Bubble Tea/Wish (I’ve got Some Ideas™️) – I’m quite excited to see where this new ssh-based project of Royalty’s is headed!

It Is A War Out There – Take Control of Your Supply Lines with HtDTY

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This post advocates minimizing dependencies in web pages that you do not directly control. It conflates dependencies during build time and dependencies in the browser. I maintain that they are essentially the same thing, that both have the same potential problems, and that the solution is the snappy new acronym HtDTY – Host the Damn Thing Yourself.

If your resources are large enough to cause a problem if you Host the Damn Things Yourself then consider finding ways to cut back on their size. Or follow my related advice – HtDToaSYHaBRW IMCYMbT(P)WDWYD : Host the Damn Thing on a Service You Have A Business Relationship With, It May Cost You Money But They (Probably) Won’t Dick With Your Data.

Host the Damn Thing Yourself (HtDTY) is an excellent suggestion; I’ve been a huge fan of the philosophy for ages, but I like this acronym. (I wish it was pronounceable, but you can’t have everything.)

Andrew’s absolutely right, but I’m not even sure he’s expressed all the ways in which he’s right. Here are my reasons to HtDTY, especially for frontend resources:

  1. Security: As Andrew observes, you can’t protect against supply chain attacks if your supply chain is wide open to exploitation. And I’m glad that he points out that version pinning doesn’t protect you from this (although subresource integrity can; see the example after this list).
  2. Privacy: Similarly, Andrew nailed this one. If you host your fonts on Google Fonts, for example, you’re telling one of the biggest data-harvesting companies on the Internet who’s accessing your website. Don’t do that (in that specific example, google-webfonts-helper is your friend).
  3. Resilience: Every CDN and third-party service you depend upon is another single-point-of-failure. Sure, Azure has much better uptime than your site… but it still goes down and not necessarily at the same times as your site does! And it’s not just about downtime. What if your user’s government poisons the DNS to block the CDN? What if the user’s privacy tools block your CDN’s domain (whether rightly, for the privacy reasons described above, or wrongly)? What if, y’know, you were hosting your images on Imgur but that’s not available in your users’ country? These are all real examples that happen in the real world. Why would you choose to make your site less-reliable by loading jQuery from a CDN rather than just… downloading a copy?
  4. Performance: Andrew rightly deconstructs the outdated argument that CDN caching improves your site’s performance. Edge caching might, in some circumstances, but still has the problems listed above. But this argument can go further than Andrew’s observation that CDNs aren’t that much of a benefit… because sticking to just one domain name means (a) fewer DNS lookups, (b) fewer TLS handshakes, (c) better compression, if e.g. your JavaScript assets are bundled or at least delivered in the same pipeline, and (d) all the benefits of HTTP/2 and HTTP/3, like early hints, pipelining, etc. Nowadays, it can often be faster to not-use a CDN (depending on lots of factors), in addition to all the above benefits.
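
(To illustrate the subresource integrity point from 1., above: SRI means your page pins a cryptographic hash of the exact file it expects, so even a compromised CDN can’t substitute something malicious. A hypothetical example – the hash below is a placeholder, not a real digest:)

<script src="https://cdn.example.com/jquery-3.7.1.min.js"
        integrity="sha384-Base64EncodedHashOfTheExactFileGoesHere"
        crossorigin="anonymous"></script>

Of course, if you’re going to go to the effort of hashing the exact file you depend upon, you might as well just Host the Damn Thing Yourself!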

So yeah: HtDTY. I dig it.

Hive’s Password Policy Makes Me Cry

Hey, let’s set up an account with Hive. After the security scares they faced in the mid-2010s, I’m sure they’ll be competent at making a password form, right?

Welcome to Hive 'choose a password' form, with no password entered, saying 'your password must be at least 8 characters'.
Your password must be at least 8 characters; okay.

Just 8 characters would be a little short in this day and age, I reckon, but what do I care: I’m going to make a long and random one with my password safe anyway. Here we go:

The same form, with the password S1dfCeg7!Ex;C$Ngban9-A entered. The error message now shows 'Your password must be at least 12 characters long, contain at least one uppercase letter, one lowercase letter, one number, and one special character'.
I’ve unmasked the password field so I can show you what I tried. Obviously the password I eventually chose is unrelated to any of my screenshots.

Now my password must be at least 12 characters long, not 8 as previously indicated. That’s still not a problem.

Oh, and must contain at least one of four different character classes: uppercase, lowercase, numbers, and special characters. But wait… my proposed password already does contain all of those things!

The same form, now with the password 1111AAAAaaaa!!!! which is accepted as valid.
Let’s simplify.

The password 1111AAAAaaaa!!!! is valid… but S1dfCeg7!Ex;C$Ngban9-A is not. I guess my password is too strong?

Composition rules are bullshit already. I’d already checked to make sure my surname didn’t appear in the password in case that was the problem (on a few occasions services have forbidden me from using the letter “Q” in random passwords because they think that would make them easier to guess… wot?). So there must be something else amiss. Something that the error message is misleading about…

A normal person might just have used the shit password that Hive accepted, but I decided to dig deeper.

The shit password again, but appended with a semicolon (;), triggers the message.
Using the previously-accepted password again but with a semicolon in it… fails. So clearly the problem is that some special characters are forbidden. But we’re not being told which ones, or that that’s the problem. Which is exceptionally sucky user experience, Hive.

At this point it’s worth stressing that there’s absolutely no valid reason to limit what characters are used in a password. Sometimes well-meaning but ill-informed developers will ban characters like <, >, ' and " out of a misplaced notion that this is a good way to protect against XSS and injection attacks (it isn’t; I’ve written about this before…), but banning ; seems especially obtuse (and inadequately explaining that in an error message is just painfully sloppy). These passwords are going to be hashed anyway (right… right!?) so there’s really no reason to block any character, but anyway…
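
(If you need convincing, here’s a minimal PHP sketch – hypothetical, and nothing to do with Hive’s actual stack – showing that a properly-hashed password is just a fixed-format digest, no matter what characters went into it:)

<?php
  // Whatever characters the user picks, the stored value is a digest:
  $hash = password_hash('S1dfCeg7!Ex;C$Ngban9-A', PASSWORD_DEFAULT);
  echo $hash, "\n";  // e.g. "$2y$10$..." – no semicolons to "worry" about
  var_dump(password_verify('S1dfCeg7!Ex;C$Ngban9-A', $hash)); // bool(true)
?>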

I wondered what special characters are forbidden, and ran a quick experiment. It turns out… it’s a lot:

  • Characters Hive forbids in passwords include: - , . + = " £ ^ # ' ( ) { } * | < > : ` and even the humble space
  • “Special” characters Hive does allow: ! @ $ % & ?

What the fuck, Hive. If you require that users add a “special” character to their password but there are only six special characters you’ll accept (and they don’t even include the most common punctuation characters), then perhaps you should list them when asking people to choose a password!

Or, better yet, stop enforcing arbitrary and pointless restrictions on passwords. It’s not 1999 any more.

The invalid password but with all the special characters transformed into exclamation points to make it valid.
I eventually found a password that would be accepted. Again, it’s not the one shown above, but it’s more than a little annoying that this approach – taking the diversity of punctuation added by my password safe’s generator and swapping them all for exclamation marks – would have been enough to get past Hive’s misleading error message.

Having eventually found a password that worked and submitted it…

Hive error message: 'This activation URL seems to be invalid.'

…it turns out I’d taken too long to do so, so I got treated to a different misleading error message. Clearly the problem was that the CSRF token had expired, but instead they told me that the activation URL was invalid.

If I, a software engineer with a quarter of a century of experience and who understands what’s going wrong, struggle with setting a password on your site… I can’t begin to imagine the kinds of tech support calls that you must be fielding.

Do better, Hive.


Is it possible to allow sideloading *and* keep users safe?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Terence Eden raises some valid points:

I’ve tried to be pragmatic, but there’s something of a dilemma here.

  1. Users should be free to run whatever code they like.
  2. Vulnerable members of society should be protected from scams.

Do we accept that a megacorporation should keep everyone safe at the expense of a few pesky nerds wanting to run some janky code?

Do we say that the right to run free software is more important than granny being protected from scammers?

Do we pour billions into educating users not to click “yes” to every prompt they see?

Do we try and build a super-secure Operating System which, somehow, gives users complete freedom without exposing them to risk?

Do we hope that Google won’t suddenly start extorting developers, users, and society as a whole?

Do we chase down and punish everyone who releases a scam app?

Do we stick an AI on every phone to detect scam apps and refuse to run them if they’re dodgy?

I don’t know the answers to any of these questions and – if I’m honest – I don’t like asking them.

Google’s gradual locking-down of Android bothers me, too. I’ve rooted many of my phones in order to unlock features that I benefit from (as a developer… and as a nerd!), and it’s bugged me on the occasions where I’ve been unable to run, or have had to use complicated workarounds to trick, e.g. a bank’s app. Having gone to the effort to root a phone – which remains outside of the reach of most regular users – I’d be happy to accept an appropriate share of the liability if my mistake, y’know, let a scammer steal all of my money.

That’s the risk you take with any device on which you have root, and it’s why we make it hard to the point of being discouraging. Because you can’t just put up a warning and hope that users will read and understand it, because they won’t. They’ll just click whatever button looks like it’ll get them to the next step without even glancing at the danger signs1.

I’m glad to have been increasingly decoupling myself from Google’s ecosystem, because I’ve been burned by it too. Like Terence, I’ve been hit by “real name” policies that discriminate against people with unusual names or who might be at risk of impersonation2. But I’m not convinced that there’s a good alternative for me to running Android on my mobile devices, at the moment: I really enjoyed Maemo back in the day; what’s the status of Sailfish nowadays?

I get that we need to protect people from dangerous scammy apps. But I’d like to think there’s a middle-ground somewhere between the Doctorowian “it’s your device, you’re responsible for what runs on it” and the growing Apple/Google thinking of “if we don’t have the targetting coordinates of the developer that wrote the code, our OS won’t let you run it”. I’m ready to concede that user education alone hasn’t worked, but there’s got to be a better solution than this, Google.

Footnotes

1 Incidentally, I don’t blame users for this behaviour. Users have absolutely been conditioned, and continue to be conditioned, to click-without-reading. Cookie and privacy banners with dark patterns, EULAs and legal small print are notoriously (and often unnecessarily) long and convoluted, and companies routinely try to blur the line between “serious thing you should really read but we want you not to” and “trivial thing that you don’t need to read; it’s just a formality that we have to say it”.

2 Right now, my biggest fight with Google has come from the fact that lately, it seems like every time I upload a Three Rings demo video to YouTube it gets deleted under their harassment policy for doxxing people… people like “Alan Fakename” from Somewhereville, “Betty Notaperson” from Otherplace, and their friend “Chris McMadeup” who lives at 123 Imaginary Street. The appeals process turns out to be that you click a button to appeal, but don’t get to provide any further information (e.g. to explain that these are clearly-fake people who won’t mind being doxxed on account of the fact that they don’t exist), and then a few hours later you get an email to say “nah, we’re keeping it deleted”. I almost expect the YouTube version of my recent video demonstrating FreeDeedPoll.org.uk will be next to be targetted by this policy for showing me scribbling the purported signature Sam McRealName, formerly known as Jo Genuine-Person.

Note #26931

I got kicked off LinkedIn this week. Apparently there was “suspicious behaviour” on my account. To get back in, I needed to go through Persona’s digital ID check (this, despite the fact that I’ve got a Persona-powered verification on my LinkedIn, less than six months old).

After looping around many times identifying which way up a picture of a dog was and repeatedly photographing myself, my passport, and my driving license, I eventually got back in. Personally, I suspect they just rolled out some Online Safety Act functionality and it immediately tripped over my unusual name.

But let this be a reminder to anybody who (unlike me) depends upon their account in a social network: it can be taken away in a moment and be laborious (or impossible) to get back. If you care about your online presence, you should own your own domain name; simple as that!

Lock All The Computers

I wanted a way to simultaneously lock all of the computers – a mixture of Linux, MacOS and Windows boxen – on my desk, when I’m going to step away. Here’s what I came up with:

There’s optional audio in this video, if you want it.

One button. And everything locks. Nice!

Here’s how it works:

  1. The mini keyboard is just 10 cheap mechanical keys wired up to a CH552 chip. It’s configured to send CTRL+ALT+F13 through CTRL+ALT+F221 when one of its keys is pressed.
  2. The “lock” key is captured by my KVM tool Deskflow (which I migrated to when Barrier became neglected, which in turn I migrated to when I fell out of love with Synergy). It then relays this hotkey across to all currently-connected machines2.
  3. That shortcut is captured by each recipient machine in different ways:
    • The Linux computers run LXDE, so I added a line to /etc/xdg/openbox/rc.xml to set a <keybind> that executes xscreensaver-command -lock (see the snippet just after this list).
    • For the Macs, I created a Quick Action in Automator that runs pmset displaysleepnow as a shell script3, and then connected that via Keyboard Shortcuts > Services.
    • On the Windows box, I’ve got AutoHotKey running anyway, so I just have it run { DllCall("LockWorkStation") } when it hears the keypress.
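
For anybody wanting to replicate the Linux part, the Openbox <keybind> in rc.xml looks something like this (assuming, hypothetically, that the relayed hotkey is CTRL+ALT+F13):

<keybind key="C-A-F13">
  <action name="Execute">
    <command>xscreensaver-command -lock</command>
  </action>
</keybind>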

That’s all there is to it! A magic “lock all my computers, I’m stepping away” button, that’s much faster and more-convenient than locking two to five computers individually.

Footnotes

1 F13 through F24 are absolutely valid “standard” key assignments, of course: it’s just that the vast majority of keyboards don’t have keys for them! This makes them excellent candidates for non-clashing personal-use function keys, but I like to add one or more modifier keys to them as well to be absolutely certain that I don’t interact with things I didn’t intend to!

2 Some of the other buttons on my mini keyboard are mapped to “jumping” my cursor to particular computers (if I lose it, which happens more often than I’d like to admit), and “locking” my cursor to the system it’s on.

3 These boxes are configured to lock as soon as the screen blanks; if yours don’t then you might need a more-sophisticated script.

Google Shared My Phone Number!


Podcast Version

This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.

Earlier this month, I received a phone call from a user of Three Rings, the volunteer/rota management software system I founded1.

We don’t strictly offer telephone-based tech support – our distributed team of volunteers doesn’t keep any particular “core hours” so we can’t say who’s available at any given time – but instead we answer email/Web based queries pretty promptly at any time of the day or week.

But because I’ve called-back enough users over the years, it’s pretty much inevitable that a few probably have my personal mobile number saved. And because I’ve been applying for a couple of interesting-looking new roles, I’m in the habit of answering my phone even if it’s a number I don’t recognise.

Dan sits at his laptop in front of a long conference table where a group of people are looking at a projector screen.
Many of the charities that benefit from Three Rings seem to form the impression that we’re all just sat around in an office, like this. But in fact many of my fellow volunteers only ever see me once or twice a year!

After the first three such calls this month, I was really starting to wonder what had changed. Had we accidentally published my phone number, somewhere? So when the fourth tech support call came through, today (which began with a confusing exchange when I didn’t recognise the name of the caller’s charity, and he didn’t get my name right, and I initially figured it must be a wrong number), I had to ask: where did you find this number?

“When I Google ‘Three Rings login’, it’s right there!” he said.

Google Search results page for 'Three Rings CIC', showing a sidebar with information about the company and including... my personal mobile number and a 'Call' button that calls it!
I almost never use Google Search2, so there’s no way I’d have noticed this change if I hadn’t been told about it.

He was right. A Google search that surfaced Three Rings CIC’s “Google Business Profile” now featured… my personal mobile number. And a convenient “Call” button that connects you directly to it.

'Excuse me' GIF reaction. A white man blinks and looks surprised.

Some years ago, I provided my phone number to Google as part of an identity verification process, but didn’t consent to it being shared publicly. And, indeed, they didn’t share it publicly, until – seemingly at random – they started doing so, presumably within the last few weeks.

Concerned by this change, I logged into Google Business Profile to see if I could change it back.

Screenshot from Google Business Profile, with my phone number and the message 'Your phone number was updated by Google.'.
Apparently Google inserted my personal mobile number into search results for me, randomly, without me asking them to. Delightful.

I deleted my phone number from the business listing again, and within a few minutes it seemed to have stopped being served to random strangers on the Internet. Unfortunately deleting the phone number also made the “Your phone number was updated by Google” message disappear, so I never got to click the “Learn more” link to maybe get a clue as to how and why this change happened.

Last month, high-street bank Halifax posted the details of a credit agreement I have with them to two people who aren’t me. Twice in two months seems suspicious. Did I accidentally click the wrong button on a popup and now I’ve consented to all my PII getting leaked everywhere?

Spoof privacy settings popup, such as you might find on a website, reading: We and our partners work very hard to keep your data safe and secure and to operate within the limitations of the law. It's really hard! Can you give us a break and make it easier for us by consenting for us to not have to do that? By clicking the 'I Agree' button, you consent to us and every other company you do business with to share your personal information with absolutely anybody, at any time, for any reason, forever. That's cool, right?
Don’t you hate it when you click the wrong button. Who reads these things, anyway, right?

Such feelings of rage.

Footnotes

1 Way back in 2002! We’re very nearly at the point where the Three Rings system is older than the youngest member of the Three Rings team. Speaking of which, we’re seeking volunteers to help expand our support team: if you’ve got experience of using Three Rings and an hour or two a week to spare helping to make volunteering easier for hundreds of thousands of people around the world, you should look us up!

2 Seriously: if you’re still using Google Search as your primary search engine, it’s past time you shopped around. There are great alternatives that do a better job on your choice of one or more of the metrics that might matter to you: better privacy, fewer ads (or more-relevant ads, if you want), less AI slop, etc.


Halifax Shared My Credit Agreement!

Remember that hilarious letter I got from British Gas in 2023 where they messed-up my name? This is like that… but much, much worse.

Today, Ruth and JTA received a letter. It told them about an upcoming change to the agreement of their (shared, presumably) Halifax credit card.

Except… they don’t have a shared Halifax credit card. Could it be a scam? Some sort of phishing attempt, maybe, or perhaps somebody taking out a credit card in their names?

I happened to be in earshot and asked to take a look at the letter, and was surprised to discover that all of the other details – the last four digits of the card, the credit limit, etc. – all matched my Halifax credit card.

Carefully-censored letter from Halifax, highlighting the parts that show my correct address, last four digits of my card, and my credit limit... and where it shows a pair of names that are not mine.
Halifax sent a letter to me, about my credit card… but addressed it to… two other people I live with

I spent a little over half an hour on the phone with Halifax, speaking to two different advisors, who couldn’t fathom what had happened or how. My credit card is not (and has never been) a joint credit card, and the only financial connection I have to Ruth and JTA is that I share a mortgage with them. My guess is that some person or computer at Halifax tried to join-the-dots from the mortgage outwards and re-assigned my credit card to them, instead?

Eventually I had to leave to run an errand, so I gave up on the phone call and raised a complaint with Halifax in writing. They’ve promised to respond within… eight weeks. Just brilliant.


Generative AI use and human agency

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

5. If you use AI, you are the one who is accountable for whatever you produce with it. You have to be certain that whatever you produced was correct. You cannot ask the system itself to do this. You must either already be expert at the task you are doing so you can recognise good output yourself, or you must check through other, different means the validity of any output.

9. Generative AI produces above average human output, but typically not top human output. If you overuse generative AI you may produce more mediocre output than you are capable of.

I was also tempted to include in 9 as a middle sentence “Note that if you are in an elite context, like attending a university, above average for humanity widely could be below average for your context.”

In this excellent post, Joanna says more-succinctly what I was trying to say in my comments on “AI Is Reshaping Software Engineering — But There’s a Catch…” a few days ago. In my case, I was talking very-specifically about AI as a programmer’s assistant, and Joanna’s points 5. and 9. are absolutely spot on.

Point 5 is a reminder that, as I’ve long said, you can’t trust an AI to do anything that you can’t do for yourself. I sometimes use a GenAI-based programming assistant, and I can tell you this – it’s really good for:

  • Fancy autocomplete: I start typing a function name, it guesses which variables I’m going to be passing into the function or that I’m going to want to loop through the output or that I’m going to want to return early if the result is false. And it’s usually right. This is smart, and it saves me keypresses and reduces the embarrassment of mis-spelling a variable name1.
  • Quick reference guide: There was a time when I had all of my PHP DateTimeInterface::format character codes memorised. Now I’d have to look them up. Or I can write a comment (which I should anyway, for the next human) that says something like // @returns String a date in the form: Mon 7th January 2023 and when I get to my date(...) statement the AI will already have worked out that the format is 'D jS F Y' for me. I’ll recognise a valid format when I see it, and I’ll be testing it anyway.
  • Boilerplate: Sometimes I have to work in languages that are… unnecessarily verbose. Rather than writing a stack of setters and getters, or laying out a repetitive tree of HTML elements, or writing a series of data manipulations that are all subtly-different from one another in ways that are obvious once they’ve been explained to you… I can just outsource that and then check it2.
  • Common refactoring practices: “Rewrite this Javascript function so it doesn’t use jQuery any more” is a great example of the kind of request you can throw at an LLM. It’s already ingested, I guess, everything it could find on StackOverflow and Reddit and wherever else people go to bemoan being stuck with jQuery in their legacy codebase. It’s not perfect – just like when it’s boilerplating – and will make stupid mistakes3 but when you’re talking about a big function it can provide a great starting point so long as you keep the original code alongside, too, to ensure it’s not removing any functionality!

Other things… not so much. The other day I experimentally tried to have a GenAI help me to boilerplate some unit tests and it really failed at it. It determined pretty quickly, as I had, that to test a particular piece of functionality it would need to mock a function provided by a standard library, but despite nearly a dozen attempts to do so, with copious prompting assistance, it couldn’t come up with a working solution.

Overall, as a result of that experiment, I was less-effective as a developer while working on that unit test than I would have been had I not tried to get AI assistance: once I dived deep into the documentation (and eventually the source code) of the underlying library I was able to come up with a mocking solution that worked, and I can see why the AI failed: it’s quite-possibly never come across anything quite like this particular problem in its training set.

Solving it required a level of creativity and a depth of research that it was simply incapable of, and I’d clearly made a mistake in trying to outsource the problem to it. I was able to work around it because I can solve that problem.

But I know people who’ve used GenAI to program things that they wouldn’t be able to do for themselves, and that scares me. If you don’t understand the code your tool has written, how can you know that it does what you intended? Most developers have a blind spot for testing and will happy-path test their code without noticing if they’ve introduced, say, a security vulnerability owing to their handling of unescaped input or similar… and that’s a problem that gets much, much worse when a “developer” doesn’t even look at the code they deploy.

Security, accessibility, maintainability and performance – among others, I’ve no doubt – are all hard problems that are not made easier when you use an AI to write code that you don’t understand.

Footnotes

1 I’ve 100% had an occasion when I’ve called something $theUserID in one place and then $theUserId in another and not noticed the case difference until I’m debugging and swearing at the computer

2 I’ve described the experience of using an LLM in this way as being a little like having a very-knowledgeable but very-inexperienced junior developer sat next to me to whom I can pass off the boring tasks, so long as I make sure to check their work because they’re so eager-to-please that they’ll choose to assume they know more than they do if they think it’ll briefly impress you.

3 e.g. switching a selector from $(...) to document.querySelector but then failing to switch the trailing .addClass(...) to .classList.add(...) – you know: like an underexperienced but eager-to-please dev!

Reply to Vika, re: Content-Security-Policy

This is a reply to a post published elsewhere. Its content might be duplicated as a traditional comment at the original source.

Vika said:

Had a fight with the Content-Security-Policy header today. Turns out, I won, but not without sacrifices.
Apparently I can’t just insert <style> tags into my posts anymore, because otherwise I’d have to somehow either put nonces on them, or hash their content (which would be more preferrable, because that way it remains static).

I could probably do the latter by rewriting HTML at publish-time, but I’d need to hook into my Markdown parser and process HTML for that, and, well, that’s really complicated, isn’t it? (It probably is no harder than searching for Webmention links, and I’m overthinking it.)

I’ve had this exact same battle.

Obviously the intended way to use nonces in a Content-Security-Policy is to have the nonce generated, injected, and served in a single operation. So in PHP, perhaps, you might do something like this:

<?php
  $nonce = bin2hex(random_bytes(16));
  header("Content-Security-Policy: script-src 'nonce-$nonce'");
?>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>PHP CSP Nonce Test</title>
</head>
<body>
  <h1>PHP CSP Nonce Test</h1>
  <p>
    JavaScript did not run.
  </p>

  <!-- This JS has a valid nonce: -->
  <script nonce="<?php echo $nonce; ?>">
    document.querySelector('p').textContent = 'JavaScript ran successfully.';
  </script>

  <!-- This JS does not: -->
  <script nonce="wrong-nonce">
    alert('The bad guys won!');
  </script>
</body>
</html>
Viewing this page in a browser (with Javascript enabled) should show the text “JavaScript ran successfully.”, but should not show an alertbox containing the text “The bad guys won!”.

But for folks like me – and you too, Vika, from the sounds of things – who serve most of their pages, most of the time, from the cache or from static HTML files… and who add the CSP header on using webserver configuration… this approach just doesn’t work.

I experimented with a few solutions:

  • A long-lived nonce that rotates.
    CSP allows you to specify multiple nonces, so I considered having a rotating nonce that was applied to pages (which were then cached for a period) and delivered by the header… and then a few hours later a new nonce would be generated and used for future page generations and appended to the header… and after the cache expiry time the oldest nonces were rotated-out of the header and became invalid.
  • Dynamic nonce injection.
    I experimented with having the webserver parse pages and add nonces: randomly generating a nonce, putting it in the header, and then basically doing a s/<script/<script nonce="..."/ to search-and-replace it in.

Both of these are terrible solutions. The first one leaves a window of, in my case, about 24 hours during which a successfully-injected script can be executed. The second one effectively allowlists all scripts, regardless of their provenance. I realised that what I was doing was security theatre: seeking to boost my A-rating to an A+-rating on SecurityHeaders.com without actually improving security at all.

But the second approach gave me an idea. I could have a server-side secret that gets search-replaced out. E.g. if I “signed” all of my legitimate scripts with something like <script nonce="dans-secret-key-goes-here" ...> then I could replace s/dans-secret-key-goes-here/actual-nonce-goes-here/ and thus have the best of both worlds: static, cacheable pages, and actual untamperable nonces. So long as I took care to ensure that the pages were never delivered to anybody with the secret key still intact, I’d be sorted!
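
As a minimal sketch of that idea, sticking with PHP (the cache path and the placeholder secret are made up for illustration):

<?php
  // The cached/static HTML "signs" every legitimate script with a placeholder,
  // e.g. <script nonce="dans-secret-key-goes-here">; at delivery time we swap
  // the secret out for a freshly-generated real nonce:
  $nonce = bin2hex(random_bytes(16));
  $html  = file_get_contents('/var/cache/site/page.html');
  $html  = str_replace('dans-secret-key-goes-here', $nonce, $html);

  header("Content-Security-Policy: script-src 'nonce-$nonce'");
  echo $html;
?>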

Alternatively, I was looking into whether Caddy can do something like mod_asis does for Apache: that is, serve a file “as is”, with headers included in the file. That way, I could have the CSP header generated with the page and then saved into the cache, so it’s delivered with the same nonce every time… until the page changes. I’d love more webservers to have an “as is” mode, but I appreciate that might be a big ask (Apache’s mechanism, I suspect, exploits the fact that HTTP/1.0 and HTTP/1.1 literally send headers, followed by two CRLFs, then content… but that’s not what happens in HTTP/2+).

So yeah, I’ll probably do a server-side-secret approach, down the line. Maybe that’ll work for you, too.

UK’s secret Apple iCloud backdoor order is a global emergency, say critics

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In its latest attempt to erode the protections of strong encryption, the U.K. government has reportedly secretly ordered Apple to build a backdoor that would allow British security officials to access the encrypted cloud storage data of Apple customers anywhere in the world.

The secret order — issued under the U.K.’s Investigatory Powers Act 2016 (known as the Snoopers’ Charter) — aims to undermine an opt-in Apple feature that provides end-to-end encryption (E2EE) for iCloud backups, called Advanced Data Protection. The encrypted backup feature only allows Apple customers to access their device’s information stored on iCloud — not even Apple can access it.

Sigh. A continuation of a long-running saga of folks here in the UK attempting to make it easier for police to catch a handful of (stupid) criminals1… at the expense of making millions of people more-vulnerable to malicious hackers2.

If we continue on this path, it’ll only be a short number of years before you see a headline about a national secret, stored by a government minister (in the kind of ill-advised manner we know happens) on iCloud or similar and then stolen by a hostile foreign power who merely needed to bribe, infiltrate, or in the worst-case hack their way into Apple’s datacentres. And it’ll be entirely our own fault.

Meanwhile the serious terrorist groups will continue to use encryption that isn’t affected by whatever “ban” the UK can put into place (Al Qaeda were known to have developed their own wrapper around PGP, for example, decades ago), the child pornography rings will continue to tunnel traffic around whatever dark web platform they’ve made for themselves (I’m curious whether they’re actually being smart or not, but that’s not something I even remotely want to research), and either will still only be caught when they get sloppy and/or as the result of good old-fashioned police investigations.

Weakened and backdoored encryption in mainstream products doesn’t help you catch smart criminals. But it does help smart criminals to catch regular folks.

Footnotes

1 The smart criminals will start using – or more-likely are already using – forms of encryption that aren’t, and can’t be, prevented by legislation. Because fundamentally, cryptography is just maths. Incidentally, I assume you know that you can send me encrypted email that nobody else can read?

2 Or, y’know, abuse of power by police.

Endless SSH Tarpit on Debian

Tarpitting SSH with Endlessh

I had a smug moment when I saw security researcher Rob Ricci and friends’ paper empirically analysing brute-force attacks against SSH “in the wild”.1 It turns out that putting all your SSH servers on “weird” port numbers – which I’ve routinely done for over a decade – remains a pretty-effective way to stop all that unwanted traffic2, whether or not you decide to enhance that with some fail2ban magic.

But then I saw a comment about Endlessh. Endlessh3 acts like an SSH server but then basically reverse-Slow-Loris’s the connecting client, very gradually feeding it an infinitely-long SSH banner and hanging it for… well, maybe 15 seconds or so but possibly up to a week.
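
The trick relies on the fact that the SSH protocol lets a server send any number of “banner” lines before its SSH-2.0-… identification string, so long as none of them begins with “SSH-”. Here’s a toy illustration of the principle in PHP – emphatically not how Endlessh is actually implemented, and tarpitting only one victim at a time where the real thing multiplexes many:

<?php
  // Listen where an SSH server would...
  $server = stream_socket_server('tcp://0.0.0.0:2222', $errno, $errstr);
  while ($victim = @stream_socket_accept($server, -1)) { // -1 = wait indefinitely
    // ...then drip-feed random banner lines (hex never begins "SSH-") and
    // simply never get around to sending the identification string:
    while (@fwrite($victim, bin2hex(random_bytes(16)) . "\r\n") !== false) {
      sleep(10);
    }
    fclose($victim);
  }
?>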

Installing an Endlessh tarpit on Debian 12

I was just setting up a new Debian 12 server when I learned about this. I’d already moved the SSH server port away from the default 224, so I figured I’d launch Endlessh on port 22 to slow down and annoy scanners.

Installation wasn’t as easy as I’d hoped considering there’s a package. Here’s what I needed to do:

  1. Move any existing SSH server to a different port, if you haven’t already, e.g. as shown in the footnotes.
  2. Install the package, e.g.: sudo apt update && sudo apt install -y endlessh
  3. Permit Endlessh to run on port 22: sudo setcap 'cap_net_bind_service=+ep' /usr/bin/endlessh
  4. Modify /etc/systemd/system/multi-user.target.wants/endlessh.service in the following ways:
    1. uncomment AmbientCapabilities=CAP_NET_BIND_SERVICE
    2. comment PrivateUsers=true
    3. change InaccessiblePaths=/run /var into InaccessiblePaths=/var
  5. Reload the modified service: sudo systemctl daemon-reload
  6. Configure Endlessh to run on port 22 rather than its default of 2222: echo "Port 22" | sudo tee /etc/endlessh/config
  7. Start Endlessh: sudo service endlessh start

To test if it’s working, connect to your SSH server on port 22 with your client in verbose mode, e.g. ssh -vp22 example.com and look for banner lines full of random garbage appearing at 10 second intervals.

Screenshot showing SSH connection being established to an Endlessh server, which is returning line after line of randomly-generated text as a banner.

It doesn’t provide any significant security benefit, but you get to enjoy the self-satisfied feeling that you’re trolling dozens of opportunistic script kiddies a day.

Footnotes

1 It’s a good paper in general, if that’s your jam.

2 Obviously you gain very little security by moving to an unusual port number, given that you’re running your servers in “keys-only” (PasswordAuthentication no) configuration mode already, right? Right!? But it’s nice to avoid all the unnecessary logging that wave after wave of brute-force attempts produce.

3 Which I can only assume is pronounced endle-S-S-H, but regardless of how it’s said out loud I appreciate the wordplay of its name.

4 To move your SSH port, you might run something like echo "Port 12345" | sudo tee /etc/ssh/sshd_config.d/unusual-port.conf and restart the service, of course.


Note #25343

As well as the programming tasks I’m working on for Three Rings this International Volunteer Day, I’m also doing a little devops. We’ve got a new server architecture rolling out next week, and I’m tasked with ensuring that the logging on them meets our security standards.

Terminal screenshot showing a directory listing of a logs directory with several gzipped logfiles with different date-stamped suffixes, and the contents of the logrotate configuration file that produced them.

Each server’s on-device logs are retained in date-stamped files for 14 days, but they’re also backed-up offsite daily.

Those bits all seem to be working, so next I need to work out a way to add a notification to our monitoring platform if any server doesn’t successfully push a log to the offsite backup in a timely manner.


Good Food, Bad Authorisation

I was browsing (BBC) Good Food today when I noticed something I’d not seen before: a “premium” recipe, available on their “app only”:

Screenshot showing recipes, one of which is labelled "App only" and "Premium".

I clicked on the “premium” recipe and… it looked just like any other recipe. I guess it’s not actually restricted after all?

Just out of curiosity, I fired up a more-vanilla web browser and tried to visit the same page. Now I saw an overlay and modal attempting1 to restrict access to the content:

Overlay attempting to block content to the page beneath, saying "Try 1 year for just £9.99 and save 81%".

It turns out their entire effort to restrict access to their premium content… is implemented in client-side JavaScript. Even when I did see the overlay and not get access to the recipe, all I needed to do was open my browser’s debugger and run document.body.classList.remove('tp-modal-open'); for(el of document.querySelectorAll('.tp-modal, .tp-backdrop')) el.remove(); and all the restrictions were lifted.

What a complete joke.

Why didn’t I even have to write my JavaScript two-liner to get past the restriction in my primary browser? Because I’m running privacy-protector Ghostery, and one of the services Ghostery blocks by-default is one called Piano. Good Food uses Piano to segment their audience in your browser, but they haven’t backed that by any, y’know, actual security so all of their content, “premium” or not, is available to anybody.

I’m guessing that Immediate Media (who bought the BBC Good Food brand a while back and have only just gotten around to stripping “BBC” out of the name) have decided that an ad-supported model isn’t working and are trying to monetise the site a little differently2. Unfortunately, their attempt to differentiate premium from regular content was sufficiently half-hearted that I’d have glided straight through the paywall without even noticing, had I not wondered why there was a “premium” badge on some of their recipes.

Screenshot from OpenSourceFood.com, circa 2007.
You know what website I miss? OpenSourceFood.com. It went downhill and then died around 2016, but for a while it was excellent.

Recipes probably aren’t considered a high-value target, of course. But I can tell you from experience that sometimes companies make basically this same mistake with much more-sensitive systems. The other year, for example, I discovered (and ethically disclosed) a fault in the implementation of the login forms of a major UK mobile network that meant that two-factor authentication could be bypassed entirely from the client-side.

These kinds of security mistakes are increasingly common on the Web as we train developers to think about the front-end first (and sometimes, exclusively). We need to do better.

Footnotes

1 The fact that I could literally see the original content behind the modal was a bit of a giveaway that they’d only hidden it, not actually protected it in any way.

2 I can see why they’d think that: personally, I didn’t even know there were ads on the site until I did the experiment above: turns out I was already blocking them, too, along with any anti-ad-blocking scripts that might have been running alongside.
