Now that we’ve finished our move into the Chicory House, I have for the first time in over two months been able to set up my preferred coding environment… with a proper monitor on a proper desk with a proper office chair. Bliss!
My Salary History
Jeremy Keith posted his salary history last week. I absolutely agree with him that employers exploit the information gap created by opaque salary advertisement, and I think that our industry of software engineering is especially troublesome for this.
So I’m joining him (and others) in choosing to share my salary history. I’ve set up a new page for that purpose, but here’s the summary of its initial state:
Understandings
A few understandings and caveats:
- For most of my career I’ve described myself as a “Full-Stack Web Applications Developer”, but I’ve worked outside of every one of those words and my job titles have often been more like “CMS Developer” or “Senior Engineer (Security)”.
- My specialisms and “hot areas” are security engineering, web standards, performance, and accessibility.
- When I worked multiple roles in a year, I’ve tried to capture that, but there’ll be some fuzziness around the edges.
- The salaries are rounded slightly to make nice readable numbers.
- I’ve not always worked full-time; all salaries are translated into “full-time equivalent”1.
- I’ve only included jobs that fit into my software engineering career2.
- If the table below looks out-of-date then I’ve probably just forgotten to update it. Let me know!
History
| Year | Employer | Salary | Notes |
|---|---|---|---|
| 2025 – 2026 | Firstup | £80,000 | Remote. + Stock (spread over four years). |
| 2024 – 2025 | Automattic | £111,000 | Remote. + Stock (one-off bonus, worth ~£6,000). |
| 2023 | Automattic | £103,000 | Remote. |
| 2021 – 2022 | Automattic | £98,000 | Remote. |
| 2019 – 2020 | Automattic | £89,000 | Remote. |
| 2015 – 2019 | Bodleian Libraries + Freelance | £39,000 | Hybrid. |
| 2011 – 2014 | Bodleian Libraries | £36,000 | Practices salary transparency! ❤️ |
| 2010 – 2011 | SmartData + Freelance | £26,000 | Remote. |
| 2007 – 2009 | SmartData | £24,000 | |
| 2004 – 2006 | SmartData | £19,500 | |
| 2002 – 2003 | SmartData | £16,500 | Alongside full-time study. |
| 2002 | CTA | £18,000 | |
| 2001 | Freelance | £4,500 | Ad-hoc and hard to estimate. Alongside full-time study. |
What does that look like?
I drew a graph, but I don’t like it. Mostly because I don’t see my salary as a “goal” to aim for or some kind of “score”.
It’s gone up; it’s gone down; but I’ve always been more-motivated by what I’m working on, with whom, and for what purpose than by how much I get paid for it3. But if you want to see:
I’m not sure to what degree my career looks typical or not. But I guess I also don’t care! My motivations are probably different from most people’s (a little-more idealistic, a little-less capitalistic), I’d guess.
Footnotes
1 i.e. what I’d have earned if I had worked full-time
2 That summer back in college that I worked in a factory building striplight fittings doesn’t appear, for example!
3 Pro-tip if you’re looking at my CV and pitching me an opportunity: mention what you expect to pay, sure, but if you’re trying to win me over then tell me about the problems I’ll be solving and how that’ll make the world a better place. That’s how you motivate me to accept your offer!
Garden Office
I’ve lived in a LOT of different places these last few months while we’ve been arranging a place to live for the next six months or so of our house repairs. Each new AirBnB has had its pros and cons (and each hasn’t felt like “home”).
But man, I really like the “garden office” at our current one. So nice to work in the sun!
(I don’t like the slow WiFi as much, but yeah… pros and cons!)
Subverting AI Agent Logging with a Git Post-Commit Hook
Last night I was chatting to my friend (and fellow Three Rings volunteer) Ollie about our respective workplaces and their approach to AI-supported software engineering, and it echoed conversations I’ve had with other friends. Some workplaces, it seems, are leaning so-hard into AI-supported software development that they’re berating developers who seem to be using the tools less than their colleagues!
That’s a problem for a few reasons, principal among them that AI does not make you significantly faster but does make you learn less1. I stand by the statement that AI isn’t useless, and I’ve experimented with it for years. But I certainly wouldn’t feel very comfortable working somewhere that told me I was underperforming if, say, my code contributions were less-likely than the average to be identifiably “written by an AI”2.
Even if you’re one of those folks who swears by your AI assistant, you’ve got to admit that they’re not always the best choice.
I spoke to another friend, E, whose employers are going in a similar direction. E joked that at current rates they’d have to start tagging their (human-made!) commits with fake AI agent logs in order to persuade management that their level of engagement with AI was correct and appropriate.3
Supposing somebody like Ollie or E or anybody else I spoke to did feel the need to “fake” AI agent logs in order to prove that they were using AI “the right way”… that sounds like an excuse for some automation!
I got to thinking: how hard could it be to add a git hook that added an AI agent’s “logging” to each commit, as if the work had been done by a robot?4
Turns out: pretty easy…
asciinema + svg-term remains awesome.
Here’s how it works (with source code!). After you make a commit, the post-commit hook creates a file in `.agent-logs/`, named for your current branch. Each commit results in a line being appended to that file to say something like `[agent] first line of your commit message`, where `agent` is the name of the AI agent you’re pretending that you used (you can even configure it with an array of agent names and it’ll pick one at random each time: my sample code uses the names `agent`, `stardust`, and `frantic`).
There’s one quirk in my code. Git hooks only get the commit message (the first line of which I use as the imaginary agent’s description of what it did) after the commit has taken place. Were a robot really used to write the code, it’d have updated the file already by this point. So my hook has to do an `--amend` commit, to retroactively fix what was already committed. And to do that without triggering itself and getting into an infinite loop, it needs to use a temporary environment variable.
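The real source code is linked above; if you’d rather not click through, here’s a simplified sketch of the same idea as a Python hook. To be clear, this is a stand-in, not my actual implementation: it follows the behaviour described above (the `.agent-logs/` layout, the random agent names, the `--amend` plus environment-variable guard), but every other detail is invented for illustration.

```python
#!/usr/bin/env python3
"""Sketch of a post-commit hook: save as .git/hooks/post-commit, mark it
executable, and it appends a fake AI "agent log" line after every commit."""
import os
import random
import subprocess
import sys

AGENTS = ["agent", "stardust", "frantic"]  # pretend agents, picked at random


def git(*args: str) -> str:
    return subprocess.check_output(["git", *args], text=True).strip()


def main() -> None:
    # Guard: the --amend below re-triggers this hook; bail out the second time.
    if os.environ.get("FAKE_AGENT_LOG"):
        return

    branch = git("rev-parse", "--abbrev-ref", "HEAD")
    summary = git("log", "-1", "--pretty=%s")  # first line of the commit message

    os.makedirs(".agent-logs", exist_ok=True)
    with open(os.path.join(".agent-logs", branch), "a") as log:
        log.write(f"[{random.choice(AGENTS)}] {summary}\n")

    # Retroactively fold the log into the commit that just happened, with the
    # guard variable set so the amend doesn't recurse into this hook forever.
    env = dict(os.environ, FAKE_AGENT_LOG="1")
    subprocess.check_call(["git", "add", ".agent-logs"], env=env)
    subprocess.check_call(
        ["git", "commit", "--amend", "--no-edit", "--quiet"], env=env
    )


# Git invokes the hook by its path, so only auto-run when installed as one:
if __name__ == "__main__" and sys.argv[0].endswith("post-commit"):
    main()
```

The environment-variable guard is load-bearing: `--no-verify` only skips the pre-commit and commit-msg hooks, so the amend would otherwise fire post-commit again and loop forever.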
Ignoring that, though, there’s nothing particularly special about this code. It’s certainly more-lightweight, faster-running, and more-accurate than a typical coding LLM.
Sure, my hook doesn’t attempt to write any of the code for you; it just makes it look like an AI did. But in this instance: that’s a feature, not a bug!
Footnotes
1 That research comes from Anthropic. Y’know, the company who makes Claude, one of the most-popular AIs used by programmers.
2 Do I write that much like an AI? Relevant XKCD.
3 Using “proportion of PRs that used AI” as a metric for success seems to me to be just slightly worse than using “number of lines of code produced”. And, as this blog post demonstrates, the former can be “gamed” just as effectively as the latter (infamously) could.
4 Obviously – and I can’t believe I have to say this – lying to your employer isn’t a sensible long-term strategy, and instead educating them on what AI is (if anything) and isn’t good for in your workflow is a better solution in the end. If you read this blog post and actually think for a moment hey, I should use this technique, then perhaps there’s a bigger problem you ought to be addressing!
To really foul things up you need an AI
Today, an AI review tool used by my workplace reviewed some code that I wrote, and incorrectly claimed that it would introduce a bug because a global variable I created could “be available to multiple browser tabs” (that’s not how browser JavaScript works).
Just in case I was mistaken, I explained to the AI why I thought it was wrong, and asked it to explain itself.
To do so, the LLM wrote a PR to propose adding some code to use our application’s save mechanism to pass the data back, via the server, and to any other browser tab, thereby creating the problem that it claimed existed.
This isn’t even the most-efficient way to create this problem. `localStorage` would have been better.
So in other words, today I watched an AI:
(a) claim to have discovered a problem (that doesn’t exist),
(b) when challenged, attempt to create the problem (that wasn’t needed), and
(c) do so in a way that was suboptimal.
Humans aren’t perfect. A human could easily make one of these mistakes. Under some circumstances, a human might even have made two of these mistakes. But to make all three? That took an AI.
What’s the old saying? “To err is human, but to really foul things up you need a computer.”
Mocking SharePoint
Highlight of my workday was debugging an issue that turned out to be nothing like what the reporter had diagnosed.
The report suggested that our system was having problems parsing URLs with colons in the pathname, suggesting perhaps an encoding issue. It wasn’t until I took a deep dive into the logs that I realised that this was a secondary characteristic of many URLs found in customers’ SharePoint installations. And many of those URLs get redirected. And SharePoint often uses relative URLs when it sends redirections. And it turned out that our systems’ redirect handler… wasn’t correctly handling relative URLs.
It all turned into a hundred line automated test to mock SharePoint and demonstrate the problem… followed by a tiny two-line fix to the actual code. And probably the most-satisfying part of my workday!
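In case you’re wondering what “wasn’t correctly handling relative URLs” means in practice: an HTTP `Location` header is allowed to be relative, in which case it must be resolved against the URL that was just requested. Here’s a toy illustration of that resolution; the SharePoint-ish URLs are made up, and this isn’t our actual code:

```python
# Toy illustration of relative-redirect resolution: an HTTP `Location`
# header may be relative, and must be resolved against the requested URL.
# The URLs below are invented for illustration; this isn't our real code.
from urllib.parse import urljoin

requested = "https://example.sharepoint.com/sites/Team/Shared%20Documents/AllItems.aspx"
location = "/sites/Team/SitePages/Home.aspx"  # a relative `Location` header

resolved = urljoin(requested, location)
print(resolved)
# → https://example.sharepoint.com/sites/Team/SitePages/Home.aspx
```

A redirect handler that naively treats the `Location` value as absolute falls over on exactly this kind of response, which is roughly the failure mode the test had to demonstrate.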
Peripheral Vision
As I lay in bed the other night, I became aware of an unusually-bright LED, glowing in the corner of my room1. Lying still in the dark, I noticed that looking directly at the light meant that I couldn’t see it… but when I looked straight ahead – not at it – I could make it out.
This phenomenon seems to be most-pronounced when you’re using a single eye to look at something small and pointlike (like an LED), and where there’s an obstacle closer to your eye than to the thing you’re looking at. But it’s still a little spooky2.
It’s strange how sometimes you might be less-able to see something that you’re looking directly at… than something that’s only in your peripheral vision.
I’m now at six months since I started working for Firstup.3 And as I continue to narrow my focus on the specifics of the company’s technology, processes, and customers… I’m beginning to lose sight of some of the things that were in my peripheral vision.
I’m a big believer in the idea that folks who are new to your group (team, organisation, whatever) have a strange superpower that fades over time: the ability to look at “how you work” as an outsider and bring new ideas. It requires a certain boldness to not just accept the status quo but to ask “but why do we do things this way?”. Sure, the answer will often be legitimate and unchallengeable, but by using your superpower and raising the question you bring a chance of bringing valuable change.
That superpower has a sweet spot: a point at which you know enough about your new role that you can answer the easy questions, but haven’t yet become so-accustomed to the “quirks” that you can’t see them any longer. The point at which your peripheral vision still reveals where there’s room for improvement, because you’re not yet so-focussed on the routine that you overlook the objectively-unusual.
I feel like I’m close to that sweet spot, right now, and I’m enjoying the opportunity to challenge some of Firstup’s established patterns. Maybe there are things I’ve learned or realised over the course of my career that might help make my new employer stronger and better? Whether or not that turns out to be the case, I’m enjoying poking at the edges to find out!
Footnotes
1 The LED turned out to be attached to a laptop charger that was normally connected in such a way that it wasn’t visible from my bed.
2 Like the first time you realise that you have a retinal blind spot and that your brain is “filling in” the gaps based on what’s around it, like Photoshop’s “smart remove” tool is running within your head.
3 You might recall that I wrote about my incredibly-efficient experience of the recruitment process at Firstup.
Why I Am So Tired [Video]
This post is also available as an article. So if you'd rather read a conventional blog post of this content, you can!
This is the video version of a joke that’s also available as a blog post and as a podcast episode. Watch it here, or find it:
- on YouTube (also as a “short”, for people who are too lazy to rotate their phone screen to horizontal and/or don’t have the attention span for more than three minutes of content)
- on Facebook
Why I Am So Tired
Podcast Version
This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.
This post is also available as a video. If you'd prefer to watch/listen to me talk about this topic, give it a look.
I am tired. For a couple of years I’ve been blaming it on iron-poor blood, lack of vitamins, diet, and a dozen other maladies. But now I’ve found out the real reason: I’m tired because I’m overworked.
The population of the UK is 69 million1, of which the latest census has 37 million “of working age”2.
According to the latest statistics, 4,215,913 are unemployed3, leaving 32,784,087 people to do all the work.
19.2 million are in full time education4, 856,211 in the armed forces5, and collectively central, regional, and local government employs 4.987 million6. This leaves just 12,727,876 to do all of the real work.
Long term disabilities affect 6.9 million7. 393,000 are on visas that prohibit them from working8, and 108,0859 are working their way through the asylum process.
Of the remaining 339,791 people, a hundred thousand are in prison10 and 239,789 are in hospital11.
That leaves just two people to do all the work that keeps this country on its feet.
You and me.
And you’re sitting reading this.
This joke originally appeared aeons ago. I first saw it in a chain email in around 199612, when I adapted it from a US-centric version to a more British one and re-circulated it among some friends… taking the same kinds of liberties with the numbers that are required to make the gag work.
And now I’ve updated it with some updated population statistics13.
Footnotes
1 Source: Provisional population estimate for the UK: mid-2025, Office for National Statistics.
2 Source: Working age population, gov.uk.
3 Source: Unemployment, Office for National Statistics.
4 Source: Statista for all the children, plus FE students from Education and training statistics for the UK, gov.uk, with some rounding.
5 Source: Hansard, here, plus other sources from the same time period.
6 Source: this informative article.
7 Source: UK disability statistics: Prevalence and life experiences, House of Commons Library.
8 Source: Reason for international migration, international students update: May 2025, Office for National Statistics.
9 Source: How many people claim asylum in the UK?, gov.uk.
10 Source: Prison population: weekly estate figures 2025, gov.uk.
11 Source: Bed Availability and Occupancy, Hansard Library.
12 In fact, I rediscovered it while looking through an old email backup from 1997, which inspired this blog post.
13 Using the same dodgy arithmetic, cherry-picking, double-counting, wild over-estimations, and hand-waving nonsense. Obviously this is a joke. Oh god, is somebody on the satire-blind Internet of 2026 going to assume any of these numbers are believable? (They’re not.) Or think I’m making some kind of political point? (I’m not.) What a minefield we live in, nowadays.
Note #27296
Two Days Offline
This week, I spent two days on a shoestring Internet connection, and it was pretty shit.
As you might anticipate, we run a complicated network at our house, and so when my connection dropped a quarter of an hour into three and a half hours of scheduled meetings on a busy afternoon, my first thought was to check that everything was working locally. Internal traffic all seemed to be going the right way, so then I checked the primary router and discovered that the problem was further upstream. I checked our fibre terminator, and sure enough: it said it wasn’t getting a signal.
I checked the status page for our ISP – no reported problems. So I called them up. I was pleased that (after I relayed what tests I’d done so far) they treated me like a network specialist rather than somebody who needed hand-holding and we skipped over the usual “have you tried turning it off and on again” and got straight to some diagnosis and scheduling an engineer for the next day. That’d do.
The end of a workday being ruined was a bit of a drag, but for Ruth it was definitely worse, as she was overseeing a major project the following morning (from 5am!) and so needed to arrange for emergency out-of-hours access to her office for the next day to be able to make it work. As for me: I figured I’d be back online by lunchtime, and working a little into the evening would give me a rare opportunity for an increased overlap with my team – many of whom are on the Pacific coast of the US – so it’d all work out.
The engineer arrived the next morning, just as a storm hit. He traced the problem, waited for the rain to ease off, then stomped off up the street to get it fixed. Only a matter of time now, I thought.
But nope: he came back to say that the fault lay somewhere under the road, in a spot he couldn’t access by himself: it’d need a team of two engineers to get down there and fix it, and they wouldn’t be able to come… until tomorrow.
So I went up to the attic to work, which is just about the only place in the house where – by balancing my phone against a window – I can consistently tether at 4G/5G. Well… semi-consistently. Inconsistently enough to be immensely frustrating.
There’s this thing, I find: no Internet access is annoying, but tolerable.
Slow Internet access is similar.
But intermittent Internet access is, somehow, a nightmare. Applications hang or fail in unpredictable ways, their developers not having planned for the possibility that the connection they detected when they were opened might come and go at random. Shitty modern “web applications” that expect to download multiple megabytes of JavaScript before they work show skeleton loaders and dehydrated `<div>`s that might one day grow up to be something approximating a button, link, or image. It’s just generally a pretty crap experience.
It’s funny how we got so dependent upon the Internet. 26+ years ago, I used to write most of my Web-destined PHP and Perl code “offline”! I’d dial-up to the Internet to download documentation or upload code, then work from my memory, from books, and what I’d saved from the Web. Can you imagine asking a junior Web developer to do that today?
In a second ironic twist, a parcel arrived for me during our downtime which contained new network hardware with which I planned to eliminate a couple of WiFi weak spots at the edges of our house. The new hardware worked perfectly and provided a wonderful improvement to signal strength between our computers… but of course not to computers outside of the network.
There’s another interesting thing that’s changed over the decades. When I first started installing (bus!) networks, there was no assumption that the network would necessarily provide Internet access. The principal purpose of the network was to connect the computers within the LAN to one another. This meant that staff could access one another’s files more easily and make use of a shared printer without walking around carrying floppy disks, for example… or could frag one another at Doom and Quake at the LAN parties that I’d sometimes run from my mum’s living room!
But nowadays, if you connect to a network (whether wired or wireless) there’s an expectation that it’ll provide Internet access. So much so, that if you join a wireless network using your mobile phone and it doesn’t provide Internet access, your phone probably won’t route any traffic across it unless you specifically say that it should. That’s a reasonable default, these days, but it’s an annoyance when – for example – I wanted my phone to continue using Syncthing to back up my photos to my NAS even though the network that my NAS was on would no longer provide Internet access to my phone!
The second team of engineers quickly found and repaired a break in the fibre – apparently it was easier than the first engineer had expected – and normalcy returned to our household.
But for a couple of days, there, I was forcibly (and unpleasantly) reminded about how the world has changed since the time that “being on a network” wasn’t assumed to be synonymous with “has Internet access”.
Note #26999
In my first few weeks at my new employer, my code contributions have added 218 lines of code, deleted 2,663. Only one of my PRs has resulted in a net increase in the size of their codebases (by two lines).
I need to pick up the pace if I’m going to reach the ultimate goal of deleting ALL of the code within my lifetime. (That’s the ultimate aim, right?)
Dan Has Too Many Monitors
My new employer sent me a laptop and a monitor, which I immediately added to my already pretty-heavily-loaded desk. Wanna see?
Firstup Day 1
Off to my first day at Firstup. Gotta have an induction: get my ID badge, learn where the toilets are, how to refill the coffee machine, and all that jazz.
Except, of course, none of those steps will be part of my induction. Because, yet again, I’ve taken a remote-first position. I’m 100% sold that, for me, remote/distributed work helps me bring my most-productive self. It might not be for everybody, but it’s great for me.
And now: I’m going to find out where the water cooler is. No, wait… some other thing!
Firstup Recruitment
In a little over a week I’ll be starting my new role at Firstup, who use some of my favourite Web technologies to deliver tools that streamline employee communication and engagement.
I’m sure there’ll be more to say about that down the line, but for now: let’s look at my recruitment experience, because it’s probably the fastest and most-streamlined technical recruitment process I’ve ever experienced! Here’s the timeline:
Firstup Recruitment Timeline
- Day 0 (Thursday), 21:18 – One evening, I submitted an application via jobs listing site Welcome To The Jungle. For comparison, I submitted an application for a similar role at a similar company at almost the exact same time. Let’s call them, umm… “Secondup”.
- 21:42 – I received an automated response to say “Firstup have received your application”. So far, so normal.
- 21:44 – I received an email from a human – only 26 minutes after my initial application – to invite me to an initial screener interview the following week, and offering a selection of times (including a reminder of the relative timezone difference between the interviewer and me).
- 21:55 – I replied to suggest meeting on Wednesday the following week1.
- Day 6 (Wednesday), 15:30 – Half-hour screener interview, mostly an introduction, “keyword check” (can I say the right keywords about my qualifications and experience to demonstrate that, yes, I’m able to do the things they’ll need), and – because it’s 2025 and we live in the darkest timeline – a confirmation that I was a real human being and not an AI2. The TalOps person, Talia, says she’d like to progress me to an interview with the person who’d become my team lead, and arranges the interview then-and-there for Friday. She talked me through all the stages (max points to any recruiter who does this), and gave me an NDA to sign so we could “talk shop” in interviews if applicable.
- Day 8 (Friday), 18:30 – My new line manager, Kirk, is on the Pacific Coast of the US, so rather than wait until next week to meet I agreed to this early-evening interview slot. I’m out of practice at interviews and I babbled a bit, but apparently I had the right credentials because, at a continuing breakneck pace…
- 21:32 – Talia emailed again to let me know I was through that stage, and asked to set up two live coding “tech test” interviews early the following week. I’ve been enjoying all the conversations and the vibes so far, so I try to grab the earliest available slots that I can make. This put the two tech test interviews back-to-back, to which Ruth raised her eyebrows – but to me it felt right to keep riding the energy of this high-speed recruitment process and dive right in to both!
- Day 11 (Monday), 18:30 – Not even a West Coast interviewer this time, but because I’d snatched the earliest possible opportunity I spoke to Joshua early in the evening. Using a shared development environment, he had me doing a classic data-structures-and-algorithms style assessment: converting a JSON-based logical inference description sort-of reminiscent of a Reverse Polish Notation tree into something that looked more like pseudocode of the underlying boolean logic. I spotted early on that I’d want a recursive solution, considered a procedural approach, and eventually went with a functional one. It was all going well… until it wasn’t! Working at speed, I made a frustrating early mistake that left me with the wrong data “down” my tree and needed to do some log-based debugging (the shared environment didn’t support a proper debugger, grr!) to get back on track… but I managed to deliver something that worked within the window, and talked at length through my approach every step of the way.
- 19:30 – The second technical interview was with Kevin, and was more about systems design from a technical perspective. I was challenged to make an object-oriented implementation of a car park with three different sizes of spaces (for motorbikes, cars, and vans); vehicles can only fit into their own size of space or larger, except vans which – in the absence of a van space – can straddle three car spaces. The specification called for a particular API that could answer questions about the numbers and types of spaces available. Now warmed-up to the quirks of the shared coding environment, I started from a test-driven development approach: it didn’t actually support TDD, but I figured I could work around that by implementing what was effectively my API’s client, hitting my non-existent classes and their non-existent methods and asserting particular responses before going and filling in those classes until they worked. I felt like I really “clicked” with Kevin as well as with the tech test, and was really pleased with what I eventually delivered.
- Day 12 (Tuesday), 12:14 – I heard from Talia again, inviting me to a final interview with Kirk’s manager Xiaojun, the Director of Engineering. Again, I opted for the earliest mutually-convenient time – the very next day! – even though it would be unusually-late in the day.
- Day 13 (Wednesday), 20:00 – The final interview with Xiaojun was a less-energetic affair, but still included some fun technical grilling and, as it happens, my most-smug interview moment ever when he asked me how I’d go about implementing something… that I’d coincidentally implemented for fun a few weeks earlier! So instead of spending time thinking about an answer to the question, I was able to dive right in to my most-recent solution, for which I’d conveniently drawn diagrams that I was able to use to explain my architectural choices. I found it harder to read Xiaojun and get a feel for how the interview had gone than I had each previous stage, but I was excited to hear that they were working through a shortlist and should be ready to appoint somebody at the “end of the week, or early next week” at the latest.
- Day 14 (Thursday), 00:09 – At what is presumably the very end of the workday in her timezone, Talia emailed me to ask if we could chat at what must be the start of her next workday. Or as I call it, lunchtime. That’s a promising sign.
- 13:00 – The sun had come out, so I took Talia’s call in the “meeting hammock” in the garden, with a can of cold non-alcoholic beer next to me (and the dog rolling around on the grass). After exchanging pleasantries, she made the offer, which I verbally accepted then and there and (after clearing up a couple of quick queries) signed a contract a few hours later. Sorted.
- Day 23 – You remember that I mentioned applying to another (very similar) role at the same time? This was the day that “Secondup” emailed to ask about my availability for an interview. And while 23 days is certainly a more-normal turnaround for the start of a recruitment process, I’d already found myself excited by everything I’d learned about Firstup: there are some great things they’re doing right; there are some exciting problems that I can be part of the solution to… I didn’t need another interview, so I turned down “Secondup”. Something something early bird.
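Out of interest, here’s roughly the shape of a recursive solution to a problem like the one in that first tech test. To be clear, this isn’t the actual interview exercise: the JSON shape (`{"op": …, "args": […]}`, with bare strings as leaf conditions) and the output format are simplified stand-ins of my own invention:

```python
# From-memory, simplified stand-in for a "JSON logic tree -> boolean
# pseudocode" exercise; the input shape and output format are assumptions,
# not the real interview spec.

def to_pseudocode(node) -> str:
    """Recursively render a logic tree as infix boolean pseudocode."""
    if isinstance(node, str):  # leaf: a named condition/variable
        return node
    op = node["op"]
    args = [to_pseudocode(arg) for arg in node["args"]]
    if op == "not":
        return f"NOT {args[0]}"
    # Parenthesise so nested AND/OR groupings stay unambiguous:
    return "(" + f" {op.upper()} ".join(args) + ")"

tree = {"op": "and", "args": [
    "logged_in",
    {"op": "or", "args": ["is_admin", {"op": "not", "args": ["read_only"]}]},
]}

print(to_pseudocode(tree))
# → (logged_in AND (is_admin OR NOT read_only))
```

The recursion mirrors the tree’s structure, which is what made it feel like the natural fit over a procedural, stack-juggling approach.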
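The car-park exercise from the second tech test might be sketched like this. Again, a simplified stand-in rather than the API the interview actually specified; only the sizing rules described above are taken from the real brief:

```python
# Simplified stand-in for the car-park design exercise: three sizes of
# space, vehicles fit their own size or larger, and a van may straddle
# three car spaces when no van space is left. Class and method names here
# are my own, not the interview's specified API.

class CarPark:
    SIZES = ("motorbike", "car", "van")  # ordered smallest to largest

    def __init__(self, motorbike: int, car: int, van: int):
        self.free = {"motorbike": motorbike, "car": car, "van": van}

    def spaces_free(self, size: str) -> int:
        return self.free[size]

    def park(self, vehicle: str) -> bool:
        """Try own-size space first, then each larger size; finally let a
        van straddle three car spaces. Returns True if parked."""
        for size in self.SIZES[self.SIZES.index(vehicle):]:
            if self.free[size] > 0:
                self.free[size] -= 1
                return True
        if vehicle == "van" and self.free["car"] >= 3:
            self.free["car"] -= 3
            return True
        return False

park = CarPark(motorbike=2, car=4, van=1)
park.park("van"); park.park("van")  # second van straddles three car spaces
print(park.spaces_free("car"))
# → 1
```

A test-first pass over an interface like this – asserting on `spaces_free` after a scripted sequence of `park` calls – is essentially the workaround-TDD approach described above.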
Wow, that was fast!
With only eight days between the screener interview and the offer – and barely a fortnight after my initial application – this has got to be the absolute fastest I’ve ever seen a tech role recruitment process go. It felt like a rollercoaster, and I loved it.
Footnotes
1 The earliest available slot for a screener interview, on Tuesday, clashed with my 8-year-old’s taekwondo class, which I’d promised I’d go along and join in with as part of their “dads train free in June” promotion. This turned out to be a painful and exhausting experience which I thoroughly enjoyed, but more on that some other time, perhaps.
2 After realising that “are you a robot” was part of the initial checks, I briefly regretted taking the interview in our newly-constructed library because it provides exactly the kind of environment that looks like a fake background.




