Once You’re Asking the Right Question, You Don’t Need To Ask!

Folks at work have been encouraging me to make more use of generative AI in my workflow1; going beyond my current “fancy autocomplete” use and giving my agents more autonomy. My experience of such “vibe coding” so far has been… mixed2, but I promised I’d revisit it.

One thing that these models are usually effective at is summarisation3. This is valuable if you’re faced with a large and unfamiliar codebase and you’re looking to trace a particular thing but you’re not certain where it is or what it’ll be called. While they’re not always fast, these tools can at least work in the background, which allows the developer to get on with something else while the agent trawls logs, code, and configuration to find and explain a fuzzily-defined thing.

Recently, I had a moment which I thought might be such an instance… but it didn’t turn out quite the way I expected. Here’s the story4:

The broken dev env

I’d been drafted into an established and ongoing project to provide more hands, following a coworker’s departure last week. This project touches parts of our (sprawling, microservices-based) infrastructure that I hadn’t looked at before, so there was a lot I didn’t yet know.

I picked an issue of my former colleague’s that QA had rejected and set out to retrace their steps: to replicate the problem that the QA engineers had identified and, in doing so, learn more about the underlying process. I spun up my development environment and tried to follow the steps.

A popup error message saying "Oops, something went wrong. Please try again."
The process failed… but much earlier than QA had said it would. Clearly my development environment was at fault, or at least not representative of their setup.

But I couldn’t even get as far as their problem before my frontend barfed out an error message. Sigh! Probably there’s some configuration I’ve missed somewhere in the myriad microservices, or else the data I’m testing with isn’t a fair reflection of what they’re doing as-standard.

Following some staff changes, I have no teammates on this side of the Atlantic who could help me decipher this: a “quick question on Slack” wouldn’t get an answer until hours from now. It was time to start debugging!

But… maybe Claude could help? It’s got access to almost all the same code, logs, tools and browser windows I do. I started typing:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why?

Context is key

It’s quite possible that Claude would have gone away, had a “think”, done some tests, and then come back to me with a believable answer. It might even have been correct, and I’d have been able to short-cut my way back to productivity (and I’d have time to make a mug of coffee and finish reading my emails while it did so). Then, I’d just have to check that it was right, make the change, and get on with things.

But I realised that it’d probably work faster (and cheaper, and using less energy) if it had slightly more context from the get-go, so I elaborated. The first thing I’d want to know if I were debugging this is what was actually happening behind the scenes. I dipped into my browser’s Network debugger and extracted the relevant output, adding it to my prompt:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why? The payload POSTed to the server is { content: 'test1', audience: [ 'one' ], status: 'draft' } and the response is an HTTP 500 with the following stack trace: pasted 94 lines

That’s more like it, now I could let it get on with its work. But wait…

Rubberducking

There’s a concept in computer programming called “rubberducking”. The name comes from an anecdote in The Pragmatic Programmer about a developer who, when stuck on a problem, would explain the code line-by-line to a rubber duck. The thinking is that talking-through a problem, even to someone (or something) who doesn’t understand it, can lead the speaker to insights they were otherwise missing.

I’ve done it myself many, many times: recruiting a convenient colleague or friend, talking them through the technical problem I was faced with, and inviting them to ask for greater detail if I seemed to be skimming over anything. I can promise that it works.

A witch is happy and proud of her invention - a rubber duck. She explains to her friend: I just figured that formulating my questions out loud helps me to solve them, and finally that's all I needed.
I discovered Mini Fantasy Theater recently and loved this episode from its backlog.

The panel above is part of a series about a sorceress called Cepper, who’s coerced by her university into using Avian Intelligence (“AI”) – a robotic parrot5 that her headmaster insists is the future of magic. She experiments with it, finds it occasionally useful but more-often frustrating, attempts to implement her own local version but finds that troublesome in different ways, and eventually settles on using an inanimate rubber duck instead. I get it, Cepper!

Let’s put that distraction aside for a moment and get back to the story of my broken development environment.

Clues in the stack trace

The top entry in the stack trace was an unsuccessful call to a different microservice, so I figured I’d pull its logs too, to help steer the AI in the right direction6:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why? The payload POSTed to the server is { content: 'test1', audience: [ 'one' ], status: 'draft' } and the response is an HTTP 500 with the following stack trace: pasted 94 lines. The stack trace suggests that a call is being made to the dojo backend service, where the following error log looks relevant: pasted 9 lines

I haven’t tried it, but I’m pretty confident that the LLM, after much number-crunching and a little warming-up of some datacentre somewhere, would get to the answer. But again, I found something niggling inside me: the second-from-top line in the dojo logs suggested that a connection was being made to a further, deeper microservice.

I should pull its logs too, I figured.

The final puzzle piece

As an aide-mémoire – in a way I’ve taken to doing when taking notes or when talking to AI – I first typed out what I was going to provide. This is useful if, for example, somebody distracts me at a key moment: it means I’ve got a jumping-off point predefined by my past self:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why? The payload POSTed to the server is { content: 'test1', audience: [ 'one' ], status: 'draft' } and the response is an HTTP 500 with the following stack trace: pasted 94 lines. The stack trace suggests that a call is being made to the dojo backend service, where the following error log looks relevant: pasted 9 lines. It’s calling osiris, which says:

I dipped into the directory for osiris, and before I even got to the logs I spotted a problem: that microservice was on an old feature branch. How odd! I switched to the main branch and… everything started working.

The entire event took only a few minutes. I’d find some information, type it into Claude’s input field, realise that more information could be valuable, and repeat.

By the time I’d finished describing the problem, I’d discovered the solution. That’s the essence of successful rubberducking. I didn’t need the AI at all. All I needed was the illusion of something that might be able to help if I just talked through what I was thinking.

I don’t know what the moral is, here.

I wonder if I’d have been as effective had I just typed into my text editor. I suppose I would have, but I wonder if I’d have been motivated to do so in the first place? I’ve tried rubberducking before by talking to an imaginary person, but I’ve never tried typing to one7; maybe I should start?

Footnotes

1 I’m pretty sure every engineering department nowadays has its rabid fanboys, but I’m pleased that for the most part my colleagues take a more-pragmatic and realistic outlook: balancing the potential benefits of LLM-assisted coding with its many shortfalls, downsides, and risks.

2 My experience of vibe-coding in a nutshell: LLMs are great at knocking out the easy 80% of any engineering problem, but often in a way that makes the remaining 20% – already the hard part – harder than it would have been if a human had done the first 80% (especially if it’s the same human and they can bring their learnings with them)… and I’m definitely not the only one who’s found that. I also suspect that the unsatisfying and unimproving task of shepherding a flock of agents to write code and then casually reviewing it is not significantly more-productive (which research backs up) and results in a significantly increased regression rate… but I’m ready to be proven wrong when more studies come out. In short: I continue to think that GenAI isn’t useless, but neither is it necessarily always worthwhile.

3 So long as what you’ve got them summarising is something you can later verify!

4 I’ve taken huge liberties with strict factual accuracy to make this more-readable, as well as to not-expose things I probably oughtn’t. So before you swoop in to criticise my prompt-fu (not that I asked you, but I know there’s somebody out there who’s thinking about doing this right now), please note that none of the text on this page is what I actually wrote to the AI; it’s a figurative example.

5 A literal stochastic parrot, one might say!

6 I’d had an experience just the previous week in which it’d gone off on completely the wrong track, attempting to change code in order to “fix” what was ultimately a configuration or data problem, and so I thought it might be useful to give it some rails to follow, to start with.

7 Except insofar as this AI agent is an “imaginary person”, which is possibly already a step too far in implying personhood for my liking!


Chicory Battlestation

Man, I have missed having a battlestation to work at these last few months. It’s nice to sit at one again, even if it’s only a ‘chicory battlestation’.

Two laptops sit nestled between two large monitors, with a desktop computer on the floor below and a teleprompter/webcam on an arm above, in a brightly-sunlit garden office.


What breaks?

What breaks when one of your developers leaves?

On Friday, I said goodbye to a colleague as she left us after most of a decade with the company. Then this morning, all hell broke loose on some production servers.

DataDog graph marked to show a co-worker's departure on Friday evening being immediately followed by a flurry of errors which then stopped for the weekend before coming back in force on Monday morning.

It turns out that the API key that connected our application to our feature flag management platform was associated with her account, and hadn’t shown up in the exit audit.

Let this be your reminder to go check where, if anywhere, your applications are using person-specific keys where they should be using generic ones!


Chicory, Coffee, and Code

Now that we’ve finished our move into the Chicory House, I have for the first time in over two months been able to set up my preferred coding environment… with a proper monitor on a proper desk with a proper office chair. Bliss!

A white man with blue hair tied up in a ponytail sits at a basic pine desk in a garden office, decorated with wallpaper showing toucans. In front of him are two laptops and a large monitor. He holds a black mug in his hand, as if about to drink from it.


My Salary History

Jeremy Keith posted his salary history last week. I absolutely agree with him that employers exploit the information gap created by opaque salary advertisement, and I think that our industry of software engineering is especially troublesome for this.

So I’m joining him (and others) in choosing to share my salary history. I’ve set up a new page for that purpose, but here’s the summary of its initial state:

Understand

A few understandings and caveats:

  • For most of my career I’ve described myself as a “Full-Stack Web Applications Developer”, but I’ve worked outside of every one of those words and my job titles have often been more like “CMS Developer” or “Senior Engineer (Security)”.
  • My specialisms and “hot areas” are security engineering, web standards, performance, and accessibility.
  • When I worked multiple roles in a year, I’ve tried to capture that, but there’ll be some fuzziness around the edges.
  • The salaries are rounded slightly to make nice readable numbers.
  • I’ve not always worked full-time; all salaries are translated into “full-time equivalent”1.
  • I’ve only included jobs that fit into my software engineering career2.
  • If the table below looks out-of-date then I’ve probably just forgotten to update it. Let me know!

History

Year        | Employer                       | Salary   | Notes
2025 – 2026 | Firstup                        | £80,000  | Remote. + Stock (spread over four years).
2024 – 2025 | Automattic                     | £111,000 | Remote. + Stock (one-off bonus, worth ~£6,000).
2023        | Automattic                     | £103,000 | Remote.
2021 – 2022 | Automattic                     | £98,000  | Remote.
2019 – 2020 | Automattic                     | £89,000  | Remote.
2015 – 2019 | Bodleian Libraries + Freelance | £39,000  | Hybrid.
2011 – 2014 | Bodleian Libraries             | £36,000  | Practices salary transparency! ❤️
2010 – 2011 | SmartData + Freelance          | £26,000  | Remote.
2007 – 2009 | SmartData                      | £24,000  |
2004 – 2006 | SmartData                      | £19,500  |
2002 – 2003 | SmartData                      | £16,500  | Alongside full-time study.
2002        | CTA                            | £18,000  |
2001        | Freelance                      | £4,500   | Ad-hoc and hard to estimate. Alongside full-time study.

What does that look like?

I drew a graph, but I don’t like it. Mostly because I don’t see my salary as a “goal” to aim for or some kind of “score”.

It’s gone up; it’s gone down; but I’ve always been more-motivated by what I’m working on, with whom, and for what purpose than by how much I get paid for it3. But if you want to see:

Graph showing my salary history: the same data as is in the table above.

I’m not sure to what degree my career looks typical or not. But I guess I also don’t care! My motivations are probably different than most (a little-more idealistic, a little-less capitalistic), I’d guess.

Footnotes

1 i.e. what I’d have earned if I had worked full-time

2 That summer back in college that I worked in a factory building striplight fittings doesn’t appear, for example!

3 Pro-tip if you’re looking at my CV and pitching me an opportunity: mention what you expect to pay, sure, but if you’re trying to win me over then tell me about the problems I’ll be solving and how that’ll make the world a better place. That’s how you motivate me to accept your offer!


Garden Office

I’ve lived in a LOT of different places these last few months while we’ve been arranging a place to live for the next six months or so of our house repairs. Each new AirBnB has had its pros and cons (and none of them has felt like “home”).

Two laptops on a glass desk in an attractive garden office/summer house, bathed in bright afternoon sunshine.

But man, I really like the “garden office” at our current one. So nice to work in the sun!

(I don’t like the slow WiFi as much, but yeah… pros and cons!)


Subverting AI Agent Logging with a Git Post-Commit Hook

Last night I was chatting to my friend (and fellow Three Rings volunteer) Ollie about our respective workplaces and their approach to AI-supported software engineering, and it echoed conversations I’ve had with other friends. Some workplaces, it seems, are leaning so-hard into AI-supported software development that they’re berating developers who seem to be using the tools less than their colleagues!

That’s a problem for a few reasons, principal among them that AI does not make you significantly faster but does make you learn less1. I stand by the statement that AI isn’t useless, and I’ve experimented with it for years. But I certainly wouldn’t feel very comfortable working somewhere that told me I was underperforming if, say, my code contributions were less-likely than the average to be identifiably “written by an AI”.

Even if you’re one of those folks who swears by your AI assistant, you’ve got to admit that they’re not always the best choice.

Copilot review of some code on GitHub, in which it's telling me that I should have included an .agent-logs/... file in which my AI agent describes how it helped, but I'm responding to say that 'shockingly' I wrote it without the help of AI, and telling Copilot to shut up.
I ran into something a little like what Ollie described when an AI code reviewer told me off for not describing how my AI agent assisted me with the code change… when no AI had been involved: I’d written the code myself.2

I spoke to another friend, E, whose employers are going in a similar direction. E joked that at current rates they’d have to start tagging their (human-made!) commits with fake AI agent logs in order to persuade management that their level of engagement with AI was correct and appropriate.3

Supposing somebody like Ollie or E or anybody else I spoke to did feel the need to “fake” AI agent logs in order to prove that they were using AI “the right way”… that sounds like an excuse for some automation!

I got to thinking: how hard could it be to add a git hook that added an AI agent’s “logging” to each commit, as if the work had been done by a robot?4

Turns out: pretty easy…

Animation showing a terminal. The developer switches to a branch, adds two modifications, and commits them. Afterwards, the log and filesystem show that a log file has been created crediting (fictional) AI bot 'frantic' with the change.
To try out my idea, I made two changes to a branch. When I committed, imaginary AI agent ‘frantic’ took credit, writing its own change log. Also: asciinema + svg-term remains awesome.

Here’s how it works (with source code!). After you make a commit, the post-commit hook creates a file in .agent-logs/, named for your current branch. Each commit results in a line being appended to that file to say something like [agent] first line of your commit message, where agent is the name of the AI agent you’re pretending that you used (you can even configure it with an array of agent names and it’ll pick one at random each time: my sample code uses the names agent, stardust, and frantic).

There’s one quirk in my code. Git hooks only get the commit message (the first line of which I use as the imaginary agent’s description of what it did) after the commit has taken place. Were a robot really used to write the code, it’d have updated the file already by this point. So my hook has to do an --amend commit, to retroactively fix what was already committed. And to do that without triggering itself and getting into an infinite loop, it needs to use a temporary environment variable. Ignoring that, though, there’s nothing particularly special about this code. It’s certainly more-lightweight, faster-running, and more-accurate than a typical coding LLM.
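If you don’t fancy clicking through to the source, here’s a simplified sketch of the idea in Python – not the actual hook, which may differ in its details; the environment variable name and the .log file suffix in particular are my own inventions:

```python
#!/usr/bin/env python3
# Illustrative post-commit hook: credits an imaginary AI agent with each
# commit. Save as .git/hooks/post-commit and mark it executable.
import os
import random
import subprocess

AGENT_NAMES = ["agent", "stardust", "frantic"]  # fake agents to "credit"

# post-commit hooks run *after* the commit and can't be skipped with
# --no-verify, so a temporary environment variable (the name here is my
# own assumption) stops the --amend below from re-triggering this hook
# in an infinite loop.
if os.environ.get("AGENT_LOG_AMEND_IN_PROGRESS"):
    raise SystemExit(0)

def git(*args: str) -> str:
    """Run a git command and return its trimmed stdout."""
    return subprocess.run(
        ["git", *args], check=True, capture_output=True, text=True
    ).stdout.strip()

branch = git("rev-parse", "--abbrev-ref", "HEAD").replace("/", "-")
subject = git("log", "-1", "--pretty=%s")  # first line of the commit message

# Append "[<agent>] <commit subject>" to a log file named for the branch.
os.makedirs(".agent-logs", exist_ok=True)
log_path = os.path.join(".agent-logs", f"{branch}.log")
with open(log_path, "a", encoding="utf-8") as log:
    log.write(f"[{random.choice(AGENT_NAMES)}] {subject}\n")

# Retroactively fold the log file into the commit that just happened.
subprocess.run(["git", "add", log_path], check=True)
subprocess.run(
    ["git", "commit", "--amend", "--no-edit"],
    check=True,
    env={**os.environ, "AGENT_LOG_AMEND_IN_PROGRESS": "1"},
)
```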

Sure, my hook doesn’t attempt to write any of the code for you; it just makes it look like an AI did. But in this instance: that’s a feature, not a bug!

Footnotes

1 That research comes from Anthropic. Y’know, the company who makes Claude, one of the most-popular AIs used by programmers.

2 Do I write that much like an AI? Relevant XKCD.

3 Using “proportion of PRs that used AI” as a metric for success seems to me to be just slightly worse than using “number of lines of code produced”. And, as this blog post demonstrates, the former can be “gamed” just as effectively as the latter (infamously) could.

4 Obviously – and I can’t believe I have to say this – lying to your employer isn’t a sensible long-term strategy, and instead educating them on what AI is (if anything) and isn’t good for in your workflow is a better solution in the end. If you read this blog post and actually think for a moment hey, I should use this technique, then perhaps there’s a bigger problem you ought to be addressing!


To really foul things up you need an AI

Today, an AI review tool used by my workplace reviewed some code that I wrote, and incorrectly claimed that it would introduce a bug because a global variable I created could “be available to multiple browser tabs” (that’s not how browser JavaScript works).

Just in case I was mistaken, I explained to the AI why I thought it was wrong, and asked it to explain itself.

To do so, the LLM wrote a PR to propose adding some code to use our application’s save mechanism to pass the data back, via the server, and to any other browser tab, thereby creating the problem that it claimed existed.

This isn’t even the most-efficient way to create this problem. localStorage would have been better.

So in other words, today I watched an AI:
(a) claim to have discovered a problem (that doesn’t exist),
(b) when challenged, attempt to create the problem (that wasn’t needed), and
(c) do so in a way that was suboptimal.

Humans aren’t perfect. A human could easily make one of these mistakes. Under some circumstances, a human might even have made two of these mistakes. But to make all three? That took an AI.

What’s the old saying? “To err is human, but to really foul things up you need a computer.”

Mocking SharePoint

Highlight of my workday was debugging an issue that turned out to be nothing like what the reporter had diagnosed.

The report suggested that our system was having problems parsing URLs with colons in the pathname, suggesting perhaps an encoding issue. It wasn’t until I took a deep dive into the logs that I realised that this was a secondary characteristic of many URLs found in customers’ SharePoint installations. And many of those URLs get redirected. And SharePoint often uses relative URLs when it sends redirections. And it turned out that our systems’ redirect handler… wasn’t correctly handling relative URLs.
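I can’t share the actual fix, but the shape of the bug is easy to sketch. Here’s an illustration in Python (the function name, URLs, and paths are all invented for the example): correct handling means resolving the Location header against the URL that produced the redirect, rather than trusting it to be absolute:

```python
# Illustrative only: the bug class is "treating a relative Location
# header as if it were absolute". urljoin() resolves it against the
# URL that was redirected instead.
from urllib.parse import urljoin

def resolve_redirect(request_url: str, location: str) -> str:
    """Return the absolute URL a redirect points at.

    Works whether `location` is absolute ("https://..."), host-relative
    ("/sites/..."), or path-relative ("../Doc.aspx").
    """
    return urljoin(request_url, location)

# A SharePoint-flavoured example: note the colons in the path, which
# were the red herring in the original report.
print(resolve_redirect(
    "https://example.sharepoint.com/:w:/r/sites/team/Doc.aspx",
    "/sites/team/_layouts/15/Doc.aspx",
))  # -> https://example.sharepoint.com/sites/team/_layouts/15/Doc.aspx
```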

It all turned into a hundred-line automated test to mock SharePoint and demonstrate the problem… followed by a tiny two-line fix to the actual code. And probably the most-satisfying part of my workday!

Peripheral Vision

As I lay in bed the other night, I became aware of an unusually-bright LED, glowing in the corner of my room1. Lying still in the dark, I noticed that looking directly at the light meant that I couldn’t see it… but when I looked straight ahead – not at it – I could make it out.

Animated illustration showing how an eyeball that rotates to face a light source can have that light obstructed by an intermediary obstacle, but when it looks "away" some of the light can hit the pupil as a consequence of its curved shape now appearing "above the horizon" of the obstacle.
In my bedroom the obstruction was the corner of my pillow, not a nondescript black rectangle. Also: my eyeball was firmly within my skull and not floating freely in a white void.

This phenomenon seems to be most-pronounced when you’re using a single eye to look at something small and pointlike (like an LED), and when there’s an obstacle closer to your eye than to the thing you’re looking at. But it’s still a little spooky2.

It’s strange how sometimes you might be less-able to see something that you’re looking directly at… than something that’s only in your peripheral vision.

I’m now at six months since I started working for Firstup.3 And as I continue to narrow my focus on the specifics of the company’s technology, processes, and customers… I’m beginning to lose sight of some of the things that were in my peripheral vision.

Dan, a white man with blue hair, wears headphones and a grey 'Firstup' hoodie, holding a 'Firstup'-branded shoebox.
I’ve not received quite so many articles of branded clothing and other swag from my new employer as I did from my previous, but getting useful ‘swag’ still feels cool.

I’m a big believer in the idea that folks who are new to your group (team, organisation, whatever) have a strange superpower that fades over time: the ability to look at “how you work” as an outsider and bring new ideas. It requires a certain boldness to not just accept the status quo but to ask “but why do we do things this way?”. Sure, the answer will often be legitimate and unchallengeable, but by using your superpower and raising the question you create a chance of valuable change.

That superpower has a sweet spot. A point at which a person knows enough about their new role that they can answer the easy questions, but hasn’t yet become so accustomed to the “quirks” that they can’t see them any longer. The point at which your peripheral vision still reveals where there’s room for improvement, because you’re not yet so-focussed on the routine that you overlook the objectively-unusual.

I feel like I’m close to that sweet spot, right now, and I’m enjoying the opportunity to challenge some of Firstup’s established patterns. Maybe there are things I’ve learned or realised over the course of my career that might help make my new employer stronger and better? Whether or not that turns out to be the case, I’m enjoying poking at the edges to find out!

Footnotes

1 The LED turned out to be attached to a laptop charger that was normally connected in such a way that it wasn’t visible from my bed.

2 Like the first time you realise that you have a retinal blind spot and that your brain is “filling in” the gaps based on what’s around it, like Photoshop’s “smart remove” tool is running within your head.

3 You might recall that I wrote about my incredibly-efficient experience of the recruitment process at Firstup.


Why I Am So Tired [Video]

This post is also available as an article. So if you'd rather read a conventional blog post of this content, you can!

This is the video version of a joke that’s also available as a blog post and as a podcast episode. Watch it here, or find it:

Why I Am So Tired


Podcast Version

This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.

This post is also available as a video. If you'd prefer to watch/listen to me talk about this topic, give it a look.

I am tired. For a couple of years I’ve been blaming it on iron-poor blood, lack of vitamins, diet, and a dozen other maladies. But now I’ve found out the real reason: I’m tired because I’m overworked.

The population of the UK is 69 million1, of which the latest census has 37 million “of working age”2.

According to the latest statistics, 4,215,913 are unemployed3, leaving 32,784,087 people to do all the work.

19.2 million are in full-time education4, 856,211 in the armed forces5, and collectively central, regional, and local government employs 4.987 million6. This leaves just 12,727,876 to do all of the real work.

Long term disabilities affect 6.9 million7. 393,000 are on visas that prohibit them from working8, and 108,0859 are working their way through the asylum process.

Of the remaining 339,791 people, a hundred thousand are in prison10 and 239,789 are in hospital11.

That leaves just two people to do all the work that keeps this country on its feet.

You and me.

And you’re sitting reading this.

This joke originally appeared aeons ago. I first saw it in a chain email in around 199612, when I adapted it from a US-centric version to a more British one and re-circulated it among some friends… taking the same kinds of liberties with the numbers that are required to make the gag work.

And now I’ve updated it with some more-recent population statistics13.

Footnotes

1 Source: Provisional population estimate for the UK: mid-2025, Office for National Statistics.

2 Source: Working age population, gov.uk.

3 Source: Unemployment, Office for National Statistics.

4 Source: Statista for all the children, plus FE students from Education and training statistics for the UK, gov.uk, with some rounding.

5 Source: Hansard, here, plus other sources from the same time period.

6 Source: this informative article.

7 Source: UK disability statistics: Prevalence and life experiences, House of Commons Library.

8 Source: Reason for international migration, international students update: May 2025, Office for National Statistics.

9 Source: How many people claim asylum in the UK?, gov.uk.

10 Source: Prison population: weekly estate figures 2025, gov.uk.

11 Source: Bed Availability and Occupancy, Hansard Library.

12 In fact, I rediscovered it while looking through an old email backup from 1997, which inspired this blog post.

13 Using the same dodgy arithmetic, cherry-picking, double-counting, wild over-estimations, and hand-waving nonsense. Obviously this is a joke. Oh god, is somebody on the satire-blind Internet of 2026 going to assume any of these numbers are believable? (They’re not.) Or think I’m making some kind of political point? (I’m not.) What a minefield we live in, nowadays.

Two Days Offline

This week, I spent two days on a shoestring Internet connection, and it was pretty shit.

A silver van branded 'Zero Loss' has its doors open in a rainy road, as seen from the window of a nearby house, over its garden.
I’m not saying that these telecoms engineers, who were doing something in some of the nearby utility cabinets at the very moment our Internet connection dropped, were responsible… but it’d make an amusing irony of their company name – Zero Loss – if they were.

As you might anticipate, we run a complicated network at our house, and so when my connection dropped a quarter of an hour into the beginning of three and a half hours of scheduled meetings on a busy afternoon, my first thought was to check that everything was working locally. Internal traffic all seemed to be going the right way, so then I checked the primary router and discovered that the problem was further upstream. I checked our fibre terminator, and sure enough: it said it wasn’t getting a signal.

I checked the status page for our ISP – no reported problems. So I called them up. I was pleased that (after I relayed what tests I’d done so far) they treated me like a network specialist rather than somebody who needed hand-holding and we skipped over the usual “have you tried turning it off and on again” and got straight to some diagnosis and scheduling an engineer for the next day. That’d do.

A child leans out of an open window into a rainy garden, holding her phone outdoors to watch a YouTube video which shows the words 'I GUESS' over a blue-tinted forest background.
Our village has pretty weak cellular reception, and what little there is struggles to penetrate our walls, some of which are made of stone. And so for a little while, “leaning out of the window” was the only way to get Internet access while (mostly) dodging the rain.

The end of a workday being ruined was a bit of a drag, but for Ruth it was definitely worse, as she was overseeing a major project the following morning (from 5am!) and so needed to arrange for emergency out-of-hours access to her office for the next day to be able to make it work. As for me: I figured I’d be back online by lunchtime, and working a little into the evening would give me a rare opportunity for an increased overlap with my team – many of whom are on the Pacific coast of the US – so it’d all work out.

The engineer arrived the next morning, just as a storm hit. He traced the problem, waited for the rain to ease off, then stomped off up the street to get it fixed. Only a matter of time now, I thought.

But nope: he came back to say that the fault lay somewhere under the road, in a spot he couldn’t access by himself: it’d need a team of two engineers to get down there and fix it, and they wouldn’t be able to come… until tomorrow.

So I went up to the attic to work, which is just about the only place in the house where – by balancing my phone against a window – I can consistently tether at 4G/5G. Well… semi-consistently. Inconsistently enough to be immensely frustrating.

A phone, displaying a tethering configuration screen, is propped on top of several decks of cards on a cluttered window ledge.
Earlier efforts to tether from downstairs were even less successful.

There’s this thing, I find: no Internet access is annoying, but tolerable.

Slow Internet access is similar.

But intermittent Internet access is, somehow, a nightmare. Applications hang or fail in unpredictable ways, their developers not having planned for the possibility that the connection they detected when they were opened might come and go at random. Shitty modern “web applications” that expect to download multiple megabytes of JavaScript before they work show skeleton loaders and dehydrated <div>s that might one day grow up to be something approximating a button, link, or image. It’s just generally a pretty crap experience.

It’s funny how we got so dependent upon the Internet. 26+ years ago, I used to write most of my Web-destined PHP and Perl code “offline”! I’d dial-up to the Internet to download documentation or upload code, then work from my memory, from books, and what I’d saved from the Web. Can you imagine asking a junior Web developer to do that today?

A Gigaclear van sits parked on a gravel driveway.
The second team of engineers were fortunate enough to arrive on a less-torrential day.

In a second ironic twist, a parcel arrived for me during our downtime which contained new network hardware with which I planned to eliminate a couple of WiFi weak spots at the edges of our house. The new hardware worked perfectly and provided a wonderful improvement to signal strength between our computers… but of course not to computers outside of the network.

There’s another interesting thing that’s changed over the decades. When I first started installing (bus!) networks, there was no assumption that the network would necessarily provide Internet access. The principal purpose of the network was to connect the computers within the LAN to one another. This meant that staff could access one another’s files more easily and make use of a shared printer without walking around carrying floppy disks, for example… or could frag one another at Doom and Quake at the LAN parties that I’d sometimes run from my mum’s living room!

But nowadays, if you connect to a network (whether wired or wireless) there’s an expectation that it’ll provide Internet access. So much so, that if you join a wireless network using your mobile phone and it doesn’t provide Internet access, your phone probably won’t route any traffic across it unless you specifically say that it should. That’s a reasonable default, these days, but it’s an annoyance when – for example – I wanted my phone to continue using Syncthing to back up my photos to my NAS even though the network that my NAS was on would no longer provide Internet access to my phone!

The second team of engineers quickly found and repaired a break in the fibre – apparently it was easier than the first engineer had expected – and normalcy returned to our household.

But for a couple of days, there, I was forcibly (and unpleasantly) reminded about how the world has changed since the time that “being on a network” wasn’t assumed to be synonymous with “has Internet access”.
