About twenty years ago, after a tumultuous life, Big.McLargeHuge – the shared server of several other Abnibbers and me – finally and fatally kicked the bucket. I spun up its replacement, New.McLargeHuge, on hosting company DreamHost, and this blog (and many other sites) moved over to it1.
Wow, I’d forgotten half of these websites existed.
I only stayed with DreamHost for a few years before switching to Bytemark, with whom I was a loyal customer right up until a few years
ago2, but in that time I took advantage of DreamHost’s “Refer & Earn” program, which
allowed me to create referral codes that, if redeemed by others who went on to become paying customers, would siphon off a fraction of the profits as a “kickback” against my server
bills. Neat!3
DreamHost’s referrals had a certain “pyramid scheme” feel in that you could get credit for the people referred by the people you referred.
A year or so after I switched to Bytemark, DreamHost decided I owed them money: probably because of a
“quirk” in their systems. I disagreed with their analysis, so I ignored their request. They “suspended” my account (which I wasn’t using anyway), and that was the end of it.
Right?
But the referral fees continued to trickle in. For the last seventeen years, I’ve received a monthly email advising me that my account has been credited, off the back of a
referral.
I have no explanation as to why the amount of the referral reward fluctuates, but I can only assume that it’s the result of different people on different payment schedules?
About once a year I log in and check the balance. I was quite excited to discover that, at current rates, they’d consider me “paid-up” for my (alleged) debt by around Spring 2026!
I had this whole plan that I’d write a blog post about it when the time came. It could’ve been funny!
But it’s not to be: DreamHost emailed me last night to tell me that they’re killing their “Refer & Earn” program; replacing it with something different-but-incompatible (social media’s
already having a grumble about this, I gather).
So I guess this is the only blog post you’ll get about “that time DreamHost decided I owed them money and I opted to pay them back in my referral fees over the course of eighteen
years”.
No big loss.
Footnotes
1 At about the same time I moved Three
Rings over from its previous host, Easily, to DreamHost too, in order to minimise the number of systems I had to keep an eye on. Oh, how different things are now, when I’ve
got servers and domain registrations and DNS providers all over the damn place!
2 Bytemark have rapidly gone downhill since their acquisition by Iomart a while back, IMHO.
3 Nowadays, this blog (and several of my other projects) is hosted by Linode, whose acquisition by Akamai seems not to have caused any problems, so that’s fab.
I may have raved about other concept albums in the meantime (this one, for example…), but The Signal and the Noise still makes my top 101.
I’ve listened to it twice this week, and I still love it.
But I probably love it differently than I used to.
Spy Numbers / One Time Pad remains my favourite pair of tracks on the album, as it always was: like so much of Andy’s music it tells a story that feels almost
like it belongs to a parallel universe… but that’s still relatable and compelling and delightful. And a fun little bop, too.
But In Potential, which I initially declared “a little weaker than the rest” of the album, has grown on me immensely over the course of the last decade. It presents an
optimistic, humanistic conclusion to the album that I look forward to every time. After John Frum Will Return and Checker Charlie open the album in a way that
warns us, almost prophetically, about the dangers of narrow target-lock thinking and AI dependence2, In Potential provides a beautiful and hopeful introspective
about humanity and encourages an attitude of… just being gentle and forgiving with ourselves, I guess.
So yeah, the whole thing remains fantastic. And better yet: Andy announced about six weeks ago that all of his
music is now available under a free/pay-what-you-like model, so if you missed it the first time around, now’s your opportunity to play catch-up!3
This post is also available as a video. If you'd prefer to watch/listen to me
talk about this topic, give it a look.
1979
The novelisation of The Hitch-Hiker’s Guide to the Galaxy came out in 1979, just a smidge before I was born. There’s a well-known scene in the second chapter featuring Ford
Prefect, an alien living on Earth, distracting his human friend Arthur Dent. Arthur is concerned about the imminent demolition of his house by a wrecking crew, and Ford takes him
to the pub to get him drunk, in anticipation of the pair attempting to hitch a lift on an orbiting spacecraft that’s about to destroy the planet:
“Six pints of bitter,” said Ford Prefect to the barman of the Horse and Groom. “And quickly please, the world’s about to end.”
The barman of the Horse and Groom didn’t deserve this sort of treatment, he was a dignified old man. He pushed his glasses up his nose and blinked at Ford Prefect. Ford ignored him
and stared out of the window, so the barman looked instead at Arthur who shrugged helplessly and said nothing.
So the barman said, “Oh yes sir? Nice weather for it,” and started pulling pints.
He tried again.
“Going to watch the match this afternoon then?”
Ford glanced round at him.
“No, no point,” he said, and looked back out of the window.
“What’s that, foregone conclusion then you reckon sir?” said the barman. “Arsenal without a chance?”
“No, no,” said Ford, “it’s just that the world’s about to end.”
“Oh yes sir, so you said,” said the barman, looking over his glasses this time at Arthur. “Lucky escape for Arsenal if it did.”
Ford looked back at him, genuinely surprised.
“No, not really,” he said. He frowned.
The barman breathed in heavily. “There you are sir, six pints,” he said.
Arthur smiled at him wanly and shrugged again. He turned and smiled wanly at the rest of the pub just in case any of them had heard what was going on.
None of them had, and none of them could understand what he was smiling at them for.
A man sitting next to Ford at the bar looked at the two men, looked at the six pints, did a swift burst of mental arithmetic, arrived at an answer he liked and grinned a stupid
hopeful grin at them.
“Get off,” said Ford, “They’re ours,” giving him a look that would have made an Algolian Suntiger get on with what it was doing.
Ford slapped a five-pound note on the bar. He said, “Keep the change.”
“What, from a fiver? Thank you sir.”
There’s a few great jokes there, but I’m interested in the final line. Ford buys six pints of bitter, pays with a five-pound note, and says “keep the change”, which surprises the
barman. Presumably this is as a result of Ford’s perceived generosity… though of course what’s really happening is that Ford has no use for Earth money any longer; this point is
hammered home for the barman and nearby patrons when Ford later buys four packets of peanuts, also asking the barman to keep the change from a fiver.
Beer’s important, but you also need to know where your towel is.
We’re never told exactly what the barman would have charged Ford. But looking at the history of average UK beer prices and assuming that the story is set in 1979, we can
assume that the pints will have been around 34p each1,
so around £2.04 for six of them. So… Ford left a 145% tip for the beer2.
1990
By the time I first read Hitch-Hikers, around 1990, this joke was already dated. By then, an average pint of bitter would set you back £1.10. I didn’t have a good
awareness of that, being as I was well-underage to be buying myself alcohol! But I clearly had enough of an awareness that my dad took the time to explain the joke… that is, to point
out that when the story was written (and is presumably set), six pints would cost less than half of five pounds.
But by the mid-nineties, when I’d found a friend group who were also familiar with the Hitch-Hikers… series, we’d joke about it. Like pointing out that by then if
you told the barman to keep the change from £5 after buying six pints, the reason he’d express surprise wouldn’t be because you’d overpaid…
In his defence, Ford’s an alien and might not fully understand human concepts of inflation. Or sarcasm.
1998
Precocious drinker that I was, by the late nineties I was quite aware of the (financial) cost of drinking.
Sure, this seems like a responsible amount of alcohol for a party thrown by a couple of tearaway teenagers. Definitely nothing going to go wrong here, no siree.
And so when it was announced that a new denomination of coin – the £2 coin –
would enter general circulation3
I was pleased to announce how sporting it was of the government to release a “beer token”.
With the average pint of beer at the time costing around £1.90 and a still cash-dominated economy, the “beer token” was perfect! And in my case, it lasted: the bars I was
drinking at in the late 1990s were in the impoverished North, and were soon replaced with studenty bars on the West coast of Wales, both of which allowed the price of a pint to do
battle with inflationary forces for longer than might have been expected elsewhere in the country. The “beer token” that was the £2 coin was a joke that kept on giving for some time.
The one thing I always hated about the initial design for the bimetallic £2 coin was – and this is the nerdiest thing in the world with which to take issue – the fact that it had a
ring of 19 cogs to represent British industry. But if you connect a circuit of an odd number of cogs… it won’t function. Great metaphor, there. Photo
courtesy of the late Andy Fogg, used under a Creative Commons license.
2023
As the cost of living rapidly increased circa 2023, the average price of a pint of beer in the UK finally got to the point where, rounded to the nearest whole pound, it was closer to £5
than it was to £44.
And while we could moan and complain about how much things cost nowadays, I’d prefer to see this as an opportunity. An opportunity for a new beer token: a general-release
of the £5 coin. We already have some defined characteristics that fit: a large,
heavy coin, about twice the weight of the £2 coin, with a copper/nickel lustre and struck from engravings with thick, clear lines.
And the design basically writes itself. I give you… the Beer Token of the 2020s:
Wouldn’t this be much more-satisfying to give to a barman than a plasticky note or a wave of a contactless card or device?
It’s time for the beer token to return, in the form of the £5 coin. Now is the time… now is the last time, probably… before cash becomes such a rarity that little thought
is evermore given to the intersection of its design and utility. And compared to a coin that celebrates industry while simultaneously representing a dysfunctional machine, this is a
coin that Brits could actually be proud of. It’s a coin that tourists would love to take home with them, creating a satisfying new level of demand for the sinking British
Pound that might, just might, prop up the economy a little, just as here at home they support those who prop up the bar.
I know there must be a politician out there who’s ready to stand up and call for this new coin. My only fear is that it’s Nigel Fucking Farage… at which point I’d be morally compelled
to reject my own proposal.
But for now, I think I’ll have another drink.
Footnotes
1 The recession of the 1970s brought high inflation that caused the price of beer to
rocket, pretty much tripling in price over the course of the decade. Probably Douglas Adams didn’t anticipate that it’d more-than-double again over the course of the 1980s
before finally slowing down somewhat… at least until tax
changes in 2003 and the aftermath of the 2022 inflation rate spike!
2 We do know that the four packets of peanuts Ford bought later were priced at 7p
each, so his tip on that transaction was a massive 1,686%: little wonder the barman suddenly started taking more-seriously Ford’s claims about the imminent end of the world!
3 There were commemorative £2 coins of a monometallic design floating around already, of
course, but – being collectible – these weren’t usually found in circulation, so I’m ignoring them.
4 Otherwise known as “two beer tokens”, of course. As in “Bloody hell, 2022, why does a
pint of draught cost two beer tokens now?”
Do you remember when your domestic ISP – Internet Service Provider – used to be an Internet Services Provider? They
were only sometimes actually called that, but what I mean is: when ISPs provided more than one Internet service? Not just connectivity, but… more.
One of the first ISPs I subscribed to had a “standard services” list longer than most modern ISPs’ complete services lists!
ISPs twenty years ago
It used to just be expected that your ISP would provide you with not only an Internet connection, but also some or all of:
I don’t remember which of my early ISPs gave me a free license for HoTMetaL Pro, but I was very appreciative of it at the time.
ISPs today
The ISP I hinted at above doesn’t exist any more, after being bought out and bought out and bought out by a series of owners. But I checked the Website of the current owner to see what
their “standard services” are, and discovered that they are:
Optional 4G backup connectivity (for an extra fee)
A voucher for 3 months access to a streaming service3
The connection is faster, which is something, but we’re still talking about the “baseline” for home Internet access then-versus-now. Which feels a bit galling, considering that (a)
you’re clearly, objectively, getting fewer services, and (b) you’re paying more for them – a cheap basic home Internet subscription today, after accounting
for inflation, seems to cost about 25% more than it did in 2000.4
Are we getting a bum deal?
Not every BBS or ISP would ever come to support the blazing speeds of a 33.6kbps modem… but when you heard the distinctive scream of its negotiation at close to the Shannon Limit of
the piece of copper dangling outside your house… it felt like you were living in the future.
Would you even want those services?
Some of them were great conveniences at the time, but perhaps not-so-much now: a caching server, FTP site, or IRC node in the building right at the end of my
dial-up connection? That’s a speed boost that was welcome over a slow connection to an unencrypted service, but is redundant and ineffectual today. And if you’re still using a
fax-to-email service for any purpose, then I think you have bigger problems than your ISP’s feature list!
Some of them were things I wouldn’t have recommended that you depend on, even then: tying your email and Web hosting to your connectivity provider traded
one set of problems for another. A particular joy of an email address, as opposed to a postal address (or, back in the day, a phone number), is that it isn’t tied to where
you live. You can move to a different town or even to a different country and still have the same email address, and that’s a great thing! But it’s not something you can
guarantee if your email address is tied to the company you dial-up to from the family computer at home. A similar issue applies to Web hosting, although for a true traditional “personal
home page” (a little information about yourself, and your bookmarks) it would be fine.
But some of them were things that were actually useful and I miss: honestly, it’s a pain to have to use a third-party service for newsgroup
access, which used to be so-commonplace that you’d turn your nose up at an ISP that didn’t offer it as standard. A static IP being non-standard on fixed connections is a sad reminder
that the ‘net continues to become less-participatory, more-centralised, and just generally more watered-down and shit: instead of your connection making you “part of” the Internet,
nowadays it lets you “connect to” the Internet, which is a very different experience.5
A page like this used to be absolutely standard on the Website6
of any ISP worth its salt.
Yeah, sure, you can set up a static site (unencumbered by any opinionated stack) for free on GitHub Pages, Neocities, or wherever, but the barrier to entry has been raised
by just enough that, doubtless, there are literally millions of people who would have taken that first step… but didn’t.
And that makes me sad.
Footnotes
1 ISP-provided shared FTP servers would also frequently provide locally-available copies
of Internet software essentials for a variety of platforms. This wasn’t just a time-saver – downloading Netscape Navigator from your ISP rather than from half-way across the world was
much faster! – it was also a way to discover new software, curated by people like you: a smidgen of the feel of a well-managed BBS, from the comfort of your local ISP!
2 ISP-provided routers are, in my experience, pretty crap 50% of the time… although
they’ve been improving over the last decade as consumers have started demanding that their WiFi works well, rather than just works.
3 These streaming services vouchers are probably just a loss-leader for the streaming
service, who know that you’ll likely renew at full price afterwards.
4 Okay, in 2000 you’d also have had to pay per-minute for the price of the
dial-up call… but that money went to BT (or perhaps Mercury or KCOM), not to your ISP. But my point still stands: in a world where technology has in general gotten cheaper
and backhaul capacity has become underutilised, why has the basic domestic Internet connection gotten less feature-rich and more-expensive? And often with worse
customer service, to boot.
5 The problem of your connection not making you “part of” the Internet is multiplied if
you suffer behind carrier-grade NAT, of course. But it feels like if we actually cared enough to commit to rolling out IPv6 everywhere we could obviate the need for that particular
turd entirely. And yet… I’ll bet that the ISPs who currently use it will continue to do so, even as they offer IPv6 addresses as-standard, because they buy into their own idea that
it’s what their customers want.
6 I think we can all be glad that we no longer write “Web Site” as two separate words, but
you’ll note that I still usually correctly capitalise Web (it’s a proper noun: it’s the Web, innit!).
Some time in the last 25 years, ISPs stopped saying they made you “part of” the Internet, just that they’d help you “connect to” the Internet.
Most people don’t need a static IP, sure. But when ISPs stopped offering FTP and WWW hosting as a standard feature (shit though it often was), they became part of the tragic process by
which the Internet became centralised, and commoditised, and corporate, and just generally watered-down.
The amount of effort to “put something online” didn’t increase by a lot, but it increased by enough that millions probably missed-out on the opportunity to create
their first homepage.
I wanted a way to simultaneously lock all of the computers – a mixture of Linux, MacOS and Windows boxen – on my desk, when I’m
going to step away. Here’s what I came up with:
There’s optional audio in this video, if you want it.
One button. And everything locks. Nice!
Here’s how it works:
The mini keyboard is just 10 cheap mechanical keys wired up to a CH552 chip. It’s configured to send CTRL+ALT+F13 through
CTRL+ALT+F221
when one of its keys is pressed.
The “lock” key is captured by my KVM tool Deskflow (to which I migrated when Barrier became neglected; I’d migrated to Barrier, in turn, when I fell out of love with Synergy). It then relays
this hotkey across to all currently-connected machines2.
That shortcut is captured by each recipient machine in different ways:
The Linux computers run LXDE, so I added a line to /etc/xdg/openbox/rc.xml to set a <keybind> that executes xscreensaver-command
-lock.
For the Macs, I created a Quick Action in Automator that runs pmset displaysleepnow as a shell script3, and then connected that via
Keyboard Shortcuts > Services.
On the Windows box, I’ve got AutoHotKey running anyway, so I just have it run { DllCall("LockWorkStation") } when it hears
the keypress.
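For the Linux machines, the rc.xml addition might look something like this (a sketch, not my exact config: I’m assuming the hotkey being sent is CTRL+ALT+F13, which openbox writes as C-A-F13):

```xml
<!-- In /etc/xdg/openbox/rc.xml, inside the <keyboard> section -->
<keybind key="C-A-F13">
  <action name="Execute">
    <command>xscreensaver-command -lock</command>
  </action>
</keybind>
```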
That’s all there is to it! A magic “lock all my computers, I’m stepping away” button, that’s much faster and more-convenient than locking two to five computers individually.
Footnotes
1 F13 through F24 are absolutely valid “standard” key assignments,
of course: it’s just that the vast majority of keyboards don’t have keys for them! This makes them excellent candidates for non-clashing personal-use function keys, but I like to
append one or more modifier keys to them as well to be absolutely certain that I don’t interact with things I didn’t intend to!
2 Some of the other buttons on my mini keyboard are mapped to “jumping” my cursor to
particular computers (if I lose it, which happens more often than I’d like to admit), and “locking” my cursor to the system it’s on.
3 These boxes are configured to lock as soon as the screen blanks; if yours don’t then you
might need a more-sophisticated script.
The tl;dr is: the court ruled that (a) piracy for the purpose of training an LLM is still piracy, so there’ll be a separate case about the fact that Anthropic did not pay for copies of
all the books their model ingested, but (b) training a model on books and then selling access to that model, which can then produce output based on what it has “learned” from those
books, is considered transformative work and therefore fair use.
Compelling arguments have been made both ways on this topic already, e.g.:
Some folks are very keen to point out that it’s totally permitted for humans to read, and even memorise, entire volumes, and then use what they’ve learned when they
produce new work. They argue that what an LLM “does” is not materially different from an impossibly well-read human.
By way of counterpoint, it’s been observed that such a human would still be personally liable if the “inspired” output they subsequently created was derivative
to the point of violating copyright, but we don’t yet have a strong legal model for assessing AI output in the same way. (BBC News article about Disney & Universal vs. Midjourney is going to be very interesting!)
Furthermore, it might be impossible to conclusively determine that the way GenAI works is fundamentally comparable to human thought. And that’s the thing that got
me thinking about this particular thought experiment.
A moment of philosophy
Here’s a thought experiment:
Suppose I trained an LLM on all of the books of just one author (plus enough additional language that it was able to meaningfully communicate). Let’s take Stephen King’s 65 novels and
200+ short stories, for example. We’ll sell access to the API we produce.
I suppose it’s possible that Stephen King was already replaced long ago with an AI that was instructed to churn out horror stories about folks in isolated Midwestern locales being
harassed by a pervasive background evil?
The output of this system would be heavily-biased by the limited input it’s been given: anybody familiar with King’s work would quickly spot that the AI’s mannerisms echoed his writing
style. Appropriately prompted – or just by chance – such a system would likely produce whole chapters of output that would certainly be considered to be a substantial infringement of
the original work, right?
If I make KingLLM, I’m going to get sued, rightly enough.
But if we accept that (and assume that the U.S. District Court for the Northern District of California would agree)… then this ruling on Anthropic would carry a curious implication.
That if enough content is ingested, the operation of the LLM in itself is no longer copyright infringement.
Which raises the question: where is the line? What size of corpus must a system be trained upon before its processing must necessarily be considered transformative
of its inputs?
Clearly, trying to answer that question leads to a variant of the sorites paradox. Nobody can ever say that, for example, an input of twenty million words
is enough to make a model transformative but just one fewer and it must be considered to be perpetually ripping off what little knowledge it has!
But as more of these copyright holder vs. AI company cases come to fruition, it’ll be interesting to see where courts fall. What is fair use and what is infringing?
And wherever the answers land, I’m sure there’ll be folks like me coming up with thought experiments that sit uncomfortably in the grey areas that remain.
What can I possibly say about Bored Gay Werewolf, which caught my attention
with the garish colours of its front cover when I saw it in Waterstones and whose blurb suggested that it might, perhaps, be a queer fantasy romp with a Buffy-esque sense of
humour?
Werewolf? Sure, it’s got a few of those. There’s even a bit of fun, offbeat humour each time the protagonist reflects on their curious monthly cycle and tries to work
out whether they attacked or even killed anybody this time around. But mostly it’s not a story about a werewolf: it’s a story about a slacker who gets suckered into a pyramid scheme, with
just a hint of lycanthropy around the fringes.
Gay? I mean: the protagonist’s gay, and many of their friends are queer… and while the representation is good, sexuality doesn’t feel like it’s a particularly
significant issue to the storyline. I enjoyed the parallels that were drawn between Brian’s coming-out as gay versus his (for most of the story) closeted werewolf nature – which even
though I saw them coming from the first chapter onwards were still well-presented – but apart from that it almost felt like gayness wasn’t a central theme to the story. A smidge of
homophobia, some queer culture references, and a throwaway Grindr hookup with a closeted MSM dude do not contribute enough homosexuality to justify “gay” being the largest, pinkest word
on a novel’s cover, if you ask me.
Bored? I was, at some points in the book, but I’m not convinced that’s what was intended. The pacing’s a little inconsistent: a long and
drawn-out description of an exercise routine overshadows an exploration of the impact of werewolf super-senses, for example. And a long-foreshadowed fight scene finale feels like it’s
over in an instant (with a Van Helsing ex Machina twist that felt simultaneously like the brakes being slammed on and a set-up for an inevitable sequel).
I sound pretty negative about it, I’m sure. But it’s not actually bad. It’s just not actually good, either. It’s a passable, middle-of-the-road time-filler
with an interesting hook, a few funny set pieces (I laughed out loud a couple of times, for sure), and a set of misfit characters who spend most of the book feeling a little…
incomplete? Though it’s possible that latter point’s at-least partially deliberate, as this is without a doubt a “Gen-Z Grows Up” story. Maybe if I were younger and didn’t yet have my
shit together the story would appeal better.
I was updating my CV earlier this week in anticipation of applying for a handful of interesting-looking roles1
and I was considering quite how many different tech stacks I claim significant experience in, nowadays.
There are languages I’ve been writing in every single week for the last 15+ years, of course, like PHP, Ruby, and JavaScript. And my underlying fundamentals are solid.
But is it really fair for me to be able to claim that I can code in Java, Go, or Python: languages that I’ve not used commercially within the last 5-10 years?
What kind of developer writes the same program six times… for a tech test they haven’t even been asked to do? If you guessed “Dan”, you’d be correct!
Obviously, I couldn’t just let that question lie2.
Let’s find out!
I fished around on Glassdoor for a bit to find a medium-sized single-sitting tech test, and found a couple of different briefs that I mashed together to create this:
In an object-oriented manner, implement an LRU (Least-Recently Used) cache:
The size of the cache is specified at instantiation.
Arbitrary objects can be put into the cache, along with a retrieval key in the form of a string. Using the same string, you can get the objects back.
If a put operation would increase the number of objects in the cache beyond the size limit, the cached object that was least-recently accessed (by either a
put or get operation) is removed to make room for it.
putting a duplicate key into the cache should update the associated object (and make this item most-recently accessed).
Both the get and put operations should resolve within constant (O(1)) time.
Add automated tests to support the functionality.
My plan was to implement a solution to this challenge, in as many of the languages mentioned on my CV as possible in a single sitting.
But first, a little Data Structures & Algorithms theory:
The Theory
Simple case with O(n) complexity
The simplest way to implement such a cache might be as follows:
Use a linear data structure like an array or linked list to store cached items.
On get, iterate through the list to try to find the matching item.
If found: move it to the head of the list, then return it.
On put, first check if it already exists in the list as with get:
If it already exists, update it and move it to the head of the list.
Otherwise, insert it as a new item at the head of the list.
If this would increase the size of the list beyond the permitted limit, pop and discard the item at the tail of the list.
It’s simple, elegant and totally the kind of thing I’d accept if I were recruiting for a junior or graduate developer. But we can do better.
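Sketched in Python (one of the languages I’d go on to implement in), the linear approach might look like this – class and method names here are illustrative, not taken from the final repo:

```python
class SimpleLRUCache:
    """Naive LRU cache: a plain list, ordered most- to least-recently accessed."""

    def __init__(self, size):
        self.size = size
        self.items = []  # list of (key, value) pairs; index 0 is the "head"

    def get(self, key):
        for i, (k, v) in enumerate(self.items):          # O(n) scan
            if k == key:
                self.items.insert(0, self.items.pop(i))  # move to the head
                return v
        return None

    def put(self, key, value):
        for i, (k, _) in enumerate(self.items):          # O(n) scan for a duplicate key
            if k == key:
                self.items.pop(i)
                break
        self.items.insert(0, (key, value))               # insert at the head
        if len(self.items) > self.size:
            self.items.pop()                             # discard the tail
```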
The problem with this approach is that it fails the requirement that the methods “should resolve within constant (O(1)) time”3.
Of particular concern is the fact that any operation might need to traverse the list in order to move the just-accessed item to its head4. Let’s try another design:
Achieving O(1) time complexity
Here’s another way to implement the cache:
Retain cache items in a doubly-linked list, with a pointer to both the head and tail
Add a hash map (or similar language-specific structure) for fast lookups by cache key
On get, check the hash map to see if the item exists.
If so, return it and promote it to the head (as described below).
On put, check the hash map to see if the item exists.
If so, promote it to the head (as described below).
If not, insert it at the head by:
Updating the prev of the current head item and then pointing the head to the new item (which will have the old head item as its
next), and
Adding it to the hash map.
If the number of items in the hash map would exceed the limit, remove the tail item from the hash map, point the tail at the tail item’s prev, and
unlink the expired tail item from the new tail item’s next.
To promote an item to the head of the list:
Follow the item’s prev and next to find its siblings and link them to one another (removes the item from the list).
Point the promoted item’s next to the current head, and the current head‘s prev to the promoted item.
Point the head of the list at the promoted item.
Looking at a plate of pointer-spaghetti makes me strangely hungry.
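Here’s how that design might be sketched in Python (again, names are my own for illustration, not lifted from the repo):

```python
from typing import Any, Optional


class Node:
    """A doubly-linked list node holding one cached key/value pair."""

    def __init__(self, key: str, value: Any):
        self.key = key
        self.value = value
        self.prev: Optional["Node"] = None
        self.next: Optional["Node"] = None


class LRUCache:
    """LRU cache: a hash map for O(1) lookups, a doubly-linked list for O(1) promotion/eviction."""

    def __init__(self, size: int):
        self.size = size
        self.map: dict = {}
        self.head: Optional[Node] = None  # most-recently accessed
        self.tail: Optional[Node] = None  # least-recently accessed

    def get(self, key: str) -> Any:
        node = self.map.get(key)
        if node is None:
            return None
        self._promote(node)
        return node.value

    def put(self, key: str, value: Any) -> None:
        node = self.map.get(key)
        if node is not None:
            node.value = value      # duplicate key: update in place...
            self._promote(node)     # ...and make it most-recently accessed
            return
        node = Node(key, value)
        self.map[key] = node
        self._link_at_head(node)
        if len(self.map) > self.size:       # over capacity: expire the tail
            expired = self.tail
            del self.map[expired.key]
            self.tail = expired.prev
            if self.tail is not None:
                self.tail.next = None

    def _promote(self, node: Node) -> None:
        if node is self.head:
            return
        # Unlink the node by joining its siblings to one another...
        if node.prev is not None:
            node.prev.next = node.next
        if node.next is not None:
            node.next.prev = node.prev
        if node is self.tail:
            self.tail = node.prev
        # ...then re-link it at the head.
        node.prev = None
        self._link_at_head(node)

    def _link_at_head(self, node: Node) -> None:
        node.next = self.head
        if self.head is not None:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node
```

Every operation is a constant number of pointer updates plus a hash-map lookup, so both get and put resolve in O(1) time.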
It’s important to realise that this alternative implementation isn’t better. It’s just different: the “right” solution depends on the use-case5.
The Implementation
That’s enough analysis and design. Time to write some code.
Turns out that if you use enough different languages in your project, GitHub begins to look like it wants to draw a rainbow.
Picking a handful of the more-useful languages on my CV6,
I opted to implement in:
Ruby (with RSpec for testing and Rubocop for linting)
PHP (with PHPUnit for testing)
TypeScript (running on Node, with Jest for testing)
Java (with JUnit for testing)
Go (which isn’t really an object-oriented language but acts a bit like one, amirite?)
Python (probably my weakest language in this set, but which actually ended up with quite a tidy solution)
Naturally, I open-sourced everything if you’d like to see for yourself. It all works, although if you’re actually in need of such a
cache for your project you’ll probably find an alternative that’s at least as good (and more-likely to be maintained!) in a third-party library somewhere!
What did I learn?
This was actually pretty fun! I might continue to expand my repo by doing the same challenge with a few of the other languages I’ve used professionally at some point or
another7.
And there are a few takeaways I got from this experience:
Lesson #1: programming more languages can make you better at all of them
As I went along, one language at a time, I ended up realising improvements that I could make to earlier iterations.
For example, when I came to the TypeScript implementation, I decided to use generics so that the developer can specify what kind of objects they want to store in the cache,
rather than just a generic Object, and better benefit type-safety. That’s when I remembered that Java supports generics, too, so I went back and used them there as well.
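By way of illustration – a simplified sketch using Python’s typing module, not the actual interface from my repo – the same generics idea looks like this:

```python
from typing import Dict, Generic, Optional, TypeVar

V = TypeVar("V")  # the type of value the cache will store


class Cache(Generic[V]):
    """A type-parameterised cache: holds values of one caller-chosen type,
    rather than bare, untyped objects."""

    def __init__(self) -> None:
        self._items: Dict[str, V] = {}

    def put(self, key: str, value: V) -> None:
        self._items[key] = value

    def get(self, key: str) -> Optional[V]:
        return self._items.get(key)


# The type checker now knows this cache holds ints, not arbitrary objects:
cache: Cache[int] = Cache()
cache.put("answer", 42)
```

A static checker like mypy would flag `cache.put("answer", "forty-two")` as a type error, which is exactly the safety benefit generics bought me in the TypeScript and Java versions.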
In the same way as speaking multiple (human) languages or studying linguistics can help unlock new ways of thinking about your communication, being able to think in terms of multiple
different programming languages helps you spot new opportunities. When in 2020 PHP 8 added nullsafe operators, union types, and
named arguments, I remember feeling confident using them from day one because those features were already familiar to me from Ruby8, TypeScript9, and Python10,
respectively.
Lesson #2: even when I’m rusty, I can rely on my fundamentals
I’ve applied for a handful of jobs now, but if one of them had invited me to a pairing session on a language I’m rusty on (like Java!) I might’ve felt intimidated.
But it turns out I needn’t have been! With my solid fundamentals and a handful of other languages under my belt, I understand when I need to step away from the code editor and hit
the API documentation. Turns out, I’m in a good position to demo any of my language skills.
I remember when I was first learning Go, I wanted to make use of a particular language feature that I didn’t know whether it had. But because I’d used that feature in Ruby, I knew what
to search for in Go’s documentation to find out whether it was supported and, if so, what the syntax was (it wasn’t, as it happens)11.
Lesson #3: structural rules are harder to gearshift than syntactic ones
Switching between six different languages while writing the same application was occasionally challenging, but not in the ways I expected.
I’ve had plenty of experience switching programming languages mid-train-of-thought before. Sometimes you just have to flit between the frontend and backend of your application!
But this time around I discovered: changes in structure are apparently harder for my brain than changes in syntax. E.g.:
Switching in and out of Python’s indentation caught me out at least once (it might’ve gone better if I’d taken the time to install the language’s tools into my text editor first!).
Switching from a language without enforced semicolon line ends (e.g. Ruby, Go) to one with them (e.g. Java, PHP) had me make the compiler sad several times.
This gets even tougher when not writing the language but writing about the language: my first pass at the documentation for the Go version somehow ended up with
Ruby/Python-style #-comments instead of Go/Java/TypeScript-style //-comments; whoops!
I’m guessing that the part of my memory that looks after a language’s keywords, how a method header is structured, and which equals sign to use for assignment versus comparison… is
stored in a different part of my brain than the bit that keeps track of how a language is laid out?12
Okay, time for a new job
I reckon it’s time I got back into work, so I’m going to have a look around and see if there’s any roles out there that look exciting to me.
If you know anybody who’s looking for a UK-based, remote-first, senior+, full-stack web developer with 25+ years’ experience and more languages than you can shake a stick at… point them at my CV, would you?
Footnotes
1 I suspect that when most software engineers look for a new job, they filter to the
languages and frameworks they feel they’re strongest at. I do a little of that, I suppose, but I’m far more-motivated by culture, sector, product and environment than I am by the shape
of your stack, and I’m versatile enough that technology specifics can almost come second. So long as you’re not asking me to write VB.NET.
2 It’s sort-of a parallel to how I decided to check
the other week that my Gutenberg experience was sufficiently strong that I could write standard ReactJS, too.
3 I was pleased to find a tech test that actually called for an understanding of algorithm
growth/scaling rates, so I could steal this requirement for my own experiment! I fear that sometimes, in their drive to be pragmatic and representative of “real work”, recruiters
overlook the value of a comprehension of computer science fundamentals.
4 Even if an algorithm takes the approach of creating a new list with the
inserted/modified item at the top, that’s still just a very-specific case of insertion sort when you think about it, right?
5 The second design will be slower at writing but faster at
reading, and will scale better as the cache gets larger. That sounds great for a read-often/write-rarely cache, but your situation may differ.
6 Okay, my language selection was pretty arbitrary. But if I’d also come up with
implementations in Perl, and C#, and Elixir, and whatever else… I’d have been writing code all day!
7 So long as I’m willing to be flexible about the “object-oriented” requirement, there are
even more options available to me. Probably the language that I last wrote longest ago would be Pascal: I wonder how much of that I remember?
8 Ruby’s safe navigation/“lonely” operator has done the same thing as PHP’s nullsafe operator since 2015.
9 TypeScript got union types back in 2015, and apart from them being more-strictly-enforced they’re basically identical to
PHP’s.
10 Did you know that Python has had keyword arguments since its very first public release,
way back in 1994? How did it take so many other interpreted languages so long to catch up?
11 The feature was the three-way comparison or “spaceship operator”, in case you were wondering.
12 I wonder if anybody’s ever laid a programmer in an MRI machine while they code? I’d
be really interested to see if different bits of the brain light up when coding in functional programming languages than in procedural ones, for example!
Earlier this month, I received a phone call from a user of Three Rings, the volunteer/rota management
software system I founded1.
We don’t strictly offer telephone-based tech support – our distributed team of volunteers doesn’t keep any particular “core hours” so we can’t say who’s available at any given
time – but instead we answer email/Web based queries pretty promptly at any time of the day or week.
But because I’ve called-back enough users over the years, it’s pretty much inevitable that a few probably have my personal mobile number saved. And because I’ve been applying for a couple of
interesting-looking new roles, I’m in the habit of answering my phone even if it’s a number I don’t recognise.
Many of the charities that benefit from Three Rings seem to form the impression that we’re all just sat around in an office, like this. But in fact many of my fellow
volunteers only ever see me once or twice a year!
After the first three such calls this month, I was really starting to wonder what had changed. Had we accidentally published my phone number, somewhere? So when the fourth tech support
call came through, today (which began with a confusing exchange when I didn’t recognise the name of the caller’s charity, and he didn’t get my name right, and I initially figured it
must be a wrong number), I had to ask: where did you find this number?
“When I Google ‘Three Rings login’, it’s right there!” he said.
I almost never use Google Search2,
so there’s no way I’d have noticed this change if I hadn’t been told about it.
He was right. A Google search that surfaced Three Rings CIC’s “Google Business Profile” now featured… my personal mobile number. And a convenient “Call” button that connects you
directly to it.
Some years ago, I provided my phone number to Google as part of an identity verification process, but didn’t consent to it being shared publicly. And, indeed, they
didn’t share it publicly, until – seemingly at random – they started doing so, presumably within the last few weeks.
Concerned by this change, I logged into Google Business Profile to see if I could edit it back.
Apparently Google inserted my personal mobile number into search results for me, randomly, without me asking them to. Delightful.
I deleted my phone number from the business listing again, and within a few minutes it seemed to have stopped being served to random strangers on the Internet. Unfortunately deleting
the phone number also made the “Your phone number was updated by Google” message disappear, so I never got to click the “Learn more” link to maybe get a clue as to how and why this
change happened.
Don’t you hate it when you click the wrong button? Who reads these things, anyway, right?
Such feelings of rage.
Footnotes
1 Way back in 2002! We’re very nearly at the point where the Three Rings
system is older than the youngest member of the Three Rings team. Speaking of which, we’re seeking volunteers to help expand our support team: if you’ve got experience of
using Three Rings and an hour or two a week to spare helping to make volunteering easier for hundreds of thousands of people around the world, you should look us up!
2 Seriously: if you’re still using Google Search as your primary search engine, it’s past
time you shopped around. There are great alternatives that do a better job on your choice of one or more of the metrics that might matter to you: better privacy, fewer ads (or
more-relevant ads, if you want), less AI slop, etc.
Here in the UK, ice cream vans will usually play a tune to let you know they’re set up and selling1.
So when you hear Greensleeves (or, occasionally, Waltzing Matilda), you know it’s time to go and order yourself a ninety-nine.
Imagine my delight, then, when I discover this week that ice cream vans aren’t the only services to play such jaunty tunes! I was sat with work colleagues outside İlter’s Bistro on Meşrutiyet Cd. in Istanbul, enjoying a beer, when a van
carrying water pulled up and… played a little song!
And then, a few minutes later – as if part of the show for a tourist like me – a flatbed truck filled with portable propane tanks pulled up. Y’know, the kind you might use to heat a
static caravan. Or perhaps a gas barbeque if you only wanted to have to buy a refill once every five years. And you know what: it played a happy little jingle, too. Such joy!
In Istanbul, people put out their empty water bottles to be swapped-out for full ones by the water delivery man2.
My buddy Cem, who’s reasonably local to the area, told me that this was pretty common practice. The propane man, the water man, etc. would
all play a song when they arrived in your neighbourhood so that you’d be reminded that, if you hadn’t already put your empties outside for replacement, now was the time!
And then Raja, another member of my team, observed that in his native India, vegetable delivery trucks also play a song so you know they’re arriving. Apparently the tune they
play is as well-standardised as the British ice cream van’s. All of the deliveries he’s aware of across his home city of Chennai play the same piece of music, so that you know it’s them.
Raja didn’t have a photo to share (and why would he? it’s not like I have a photo of the guy who comes to refill the gas tank behind my
house!3), so I found this stock pic which sounds a bit like what
he described. Photo courtesy Aiden Jones, used under a CC-By-SA license.
It got me thinking: what other delivery services might benefit from a recognisable tune?
Bin men: I’ve failed to put the bins out in time frequently enough, over the course of my life, that a little jingle to remind me to do so would be welcome4!
(My bin men often don’t come until after I’m awake anyway, so as long as they don’t turn the music on until after say 7am they’re unlikely to be a huge inconvenience to anybody,
right?) If nothing else, it’d cue me in to the fact that they were passing so I’d remember to bring the bins back in again afterwards.
Fish & chip van: I’ve never made use of the mobile fish & chip van that tours my village once a week, but I might be more likely to if it announced its arrival with a
recognisable tune.
I’m thinking a chorus of Baby Shark would get everybody’s attention.
Milkman: I’ve a bit of a gripe with our milkman. Despite promising to deliver before 07:00 each morning, they routinely turn up much later. It’s particularly
troublesome when they come at about 08:40 while I’m on the school run, which breaks my routine sufficiently that it often results in the milk sitting unseen on the porch until I think
to check much later in the day. Like the bin men, it’d be a convenience if, on running late, they at least made their presence in my village more-obvious with a happy little ditty!
Emergency services: Sirens are boring. How about if blue light services each had their own song? Perhaps something thematic: instead of going nee-naw-nee-naw, you’d
hear, say, de-do-do-do-de-dah-dah-dah
and instantly know that you were hearing The Police.
Evri: Perhaps there’s an appropriate piece of music that says “the courier didn’t bother to ring your doorbell, so now your parcel’s hidden in your recycling box”?
Just a thought.
Anyway: the bottom line is that I think there’s an untapped market for jolly little jingles for all kinds of delivery services, and Turkey and India are clearly both way ahead
of the UK. Let’s fix that!
Footnotes
1 It’s not unheard of for cruel-but-clever parents to try to teach their young
children that the ice cream van plays music only to let you know it’s sold out of ice cream. A devious plan, although one I wasn’t smart (or evil?) enough to try for
myself.
3 My gas delivery man should also have his own song, of course. Perhaps an instrumental
cover of Burn Baby Burn?
4 Perhaps bin men could play Garbage Truck by Sex Bob-Omb/Beck? That seems kinda
fitting. Although definitely not what you want to be woken up with if they turn the speakers on too early…
The news has, in general, been pretty terrible lately.
Like many folks, I’ve worked to narrow the focus of the things that I’m willing to care deeply about, because caring about many things is just too difficult when, y’know, nazis
are trying to destroy them all.
I’ve got friends who’ve stopped consuming news media entirely. I’ve not felt the need to go so far, and I think the reason is that I already have a moderately-disciplined
relationship with news. It’s relatively easy for me to regulate how much I’m exposed to all the crap news in the world and stay focussed and forward-looking.
The secret is that I get virtually all of my news… through my feed reader (some of it pre-filtered, e.g. my de-crappified BBC News feeds).
I use FreshRSS and I love it. But really: any feed reader can improve your relationship with
the Web.
Without a feed reader, I can see how I might feel the need to “check the news” several times a day. Pick up my phone to check the time… glance at the news while I’m there… you know how
to play that game, right?
But with a feed reader, I can treat my different groups of feeds like… periodicals. The news media I subscribe to get collated in my feed reader and I can read them once, maybe twice
per day, just like a daily newspaper. If an article remains unread for several days then, unless I say otherwise, it’s configured to be quietly archived.
My current events are less like a firehose (or sewage pipe), and more like a bottle of (filtered) water.
Categorising my feeds means that I can see what my friends are doing almost-immediately, but I don’t have to be disturbed by anything else unless I want to be. Try getting that
from a siloed social network!
Maybe sometimes I see a new breaking news story… perhaps 12 hours after you do. Is that such a big deal? In exchange, I get to apply filters of any kind I like to the news I read, and I
get to read it as a “bundle”, missing (or not missing) as much or as little as I like.
On a scale from “healthy media consumption” to “endless doomscrolling”, proper use of a feed reader is way towards the healthy end.
If you stopped using feeds when Google tried to kill them, maybe it’s time to think again. The ecosystem’s alive and well, and having a one-stop place where you can
enjoy the parts of the Web that are most-important to you, personally, in an ad-free, tracker-free, algorithmic-filtering-free space that you can make your very own… brings a
special kind of peace that I can highly recommend.
The W3C‘s WebDX Community Group this week announced that they’ve reached a milestone with their web-features project. The project is an effort to catalogue browser support for Web features, to establish an
understanding of the baseline feature set that developers can rely on.
That’s great, and I’m in favour of the initiative. But I wonder about graphs like this one:
The graph shows the increase over time in the number of features available on the Web, broken down by how widely they are implemented across the browser corpus.
The shape of that graph sort-of implies that… more features is better. And I’m not entirely convinced that’s true.
Does “more” imply “better”?
Don’t get me wrong, there are lots of Web features that are excellent. The kinds of things where it’s hard to remember how I did without them. CSS grids are for many purposes an
improvement on flexboxes; flexboxes were massively better than floats; and floats were an enormous leap forwards compared to using tables for layout! The “new” HTML5 input types are
wonderful, as are the revolutionary native elements for video, audio, etc. I’ll even sing the praises of some of the new JavaScript APIs (geolocation, web share, and push are
particular highlights).
But it’s not some kind of universal truth that “more features means better developer experience”. It’s already the case, for example, that getting started as a Web developer is
harder than it once was, and I’d argue harder than it ought to be. There exist complexities nowadays that are barriers to entry. Like the places where the promise of a
progressively-enhanced Web has failed (they’re rare, but they exist). Or the sheer plethora of features that come with caveats to their use that simply must be learned (yes, you need a
<meta name="viewport">; no, you can’t rely on JS to produce content).
Meanwhile, there are technologies that were standardised, and that we did need, but that never took off. The <keygen> element never got
implemented into the then-dominant Internet Explorer (there were other implementation problems too, but this one’s the killer). This made it functionally useless, which meant that its
standard never evolved and grew. As a result, its implementation in other browsers stagnated and it was eventually deprecated. Had it been implemented properly and iterated on, we
could’ve had something like WebAuthn over a decade earlier.
Which I guess goes to show that “more features is better” is only true if they’re the right features. Perhaps there’s some way of tracking the changing landscape of developer
experience on the Web that doesn’t simply enumerate a baseline of widely-available features? I don’t know what it is, though!
A simple web
Mostly, the Web worked fine when it was simpler. And while some of the enhancements we’ve seen over the decades are indisputably an advancement, there are also plenty of places
where we’ve let new technologies lead us astray. Third-party cookies appeared as a naive consequence of first-party ones, but came to be used to undermine everybody’s privacy. Dynamic
DOM manipulation started out as a clever idea to help with things like form validation and now a significant number of websites can’t even show their images – or sometimes their text –
unless their JavaScript code gets downloaded and interpreted successfully.
Were you reading this article on Medium, you’d have downloaded ~5MB of data including 48 JS files and had 7 cookies set, just so you could… have most of the text covered with
popovers? (for comparison, reading it here takes about half a megabyte, and the cookies are optional… and delicious)
A blog post, news article, or even an eCommerce site or social networking platform doesn’t need the vast majority of the Web’s “new” features. Those features are important for some Web
applications, but most of the time, we don’t need them. But somehow they end up being used anyway.
Whether or not the use of unnecessary new Web features is a net positive to developer experience is debatable. But it’s certainly not often to the benefit of user experience.
And that’s what I care about.
Last month my pest of a dog destroyed my slippers, and it was more-disruptive to my life than I would have anticipated.
Look what you did, you troublemaker.
Sure, they were just a pair of slippers1, but they’d
become part of my routine, and their absence had an impact.
Routines are important, and that’s especially true when you work from home. After I first moved to Oxford and started doing entirely remote work for the first time, I found the transition challenging2.
To feel more “normal”, I introduced an artificial “commute” into my day: going out of my front door and walking around the block in the morning, and then doing the same thing in reverse
in the evening.
My original remote working office, circa 2010.
It turns out that in the 2020s my slippers had come to serve a similar purpose – “bookending” my day – as my artificial commute had over a decade earlier. I’d slip them on when I was at
my desk and working, and slide them off when my workday was done. With my “work” desk being literally the same space as my “not work” desk, the slippers were a psychological reminder of
which “mode” I was in. People talk about putting on “hats” as a metaphor for different roles and personas they hold, but for me… the distinction was literal footwear.
And so after a furry little monster (who for various reasons hadn’t had her customary walk yet that day and was probably feeling a little frustrated) destroyed my slippers… it actually
tripped me up3. I’d be doing
something work-related and my feet would go wandering, of their own accord, to try to find their comfortable slip-ons, and when they failed, my brain would be briefly tricked
into glancing down to look for them, momentarily breaking my flow. Or I’d be distracted by something non-work-related and fail to get back into the zone without the warm, toe-hugging
reminder of what I should be doing.
It wasn’t a huge impact. But it wasn’t nothing either.
The bleppy little beast hasn’t expressed an interest in my replacement slippers, yet. Probably because they’re still acquiring the smell of my feet, which I’m guessing is
what interested her in the first place.
So I got myself a new pair of slippers. They’re a different design, and I’m not so keen on the lack of an enclosed heel, but they solved the productivity and focus problem I was facing.
It’s strange how such a little thing can have such a big impact.
Oh! And d’ya know what? This is my hundredth blog post of the year so far! Coming on only the 73rd day of the year, this is my fastest run at
#100DaysToOffload yet (my previous best was last year, when I managed the same on 22 April). 73 is exactly a fifth of 365, so… I guess I’m on
track for a mammoth 500 posts this year? Which would be my second-busiest blogging year ever, after 2018. Let’s see how I get on…4
Footnotes
1 They were actually quite a nice pair of slippers. JTA got them for me as a gift a few years back, and they lived either on my feet or under my desk ever since.
2 I was working remotely for a company where everybody else was working
in-person. That kind of hybrid setup is a lot harder to do “right”, as many companies in this post-Covid-lockdowns age have discovered, and it’s understandable that I found it
somewhat isolating. I’m glad to say that the experience of working for my current employer – who are entirely distributed –
is much more-supportive.
3 Figuratively, not literally. Although I would probably have literally tripped
over had I tried to wear the tattered remains of my shredded slippers!
I’d already read every prior book published by the excellent Becky Chambers, but this (and its sequel) had been sitting on my to-read list for some time,
and so while I’ve been ill and off work these last few days, I felt it would be a perfect opportunity to
pick it up. I’ve spent most of this week so far in bed, often drifting in and out of sleep, and a lightweight novella that I could dip in and out of over the course of a day felt like
the ideal comfort.
I couldn’t have been more right, as the very first page gave away. My friend Ash described the experience of reading it (and
its sequel) as being “like sitting in a warm bath”, and I see where they’re coming from. True to form, Chambers does a magnificent job of spinning a believable utopia: a world that acts
like an idealised future while still being familiar enough for the reader to easily engage with it. The world of Wild-Built is inhabited by humans whose past saw them come
together to prevent catastrophic climate change and peacefully move beyond their creation of general-purpose AI, eventually building for themselves a post-scarcity economy based on
caring communities living in harmony with their ecosystem.
Writing a story in a utopia has sometimes been seen as challenging, because without anything to strive for, what is there for a protagonist to strive against? But
Wild-Built has no such problem. Written throughout with a close personal focus on Sibling Dex, a city monk who decides to uproot their life to travel around the various
agrarian lands of their world, a growing philosophical theme emerges: once one’s needs have been met, how does one identify with one’s purpose? Deprived of the struggle to climb
some Maslowian pyramid, how does a person freed of their immediate needs (unless they choose to take unnecessary risks: we hear of hikers who die exploring the uncultivated
wilderness Dex’s people leave to nature, for example) define their place in the world?
Aside from Dex, the other major character in the book is Mosscap, a robot whom they meet by a chance encounter on the very edge of human civilisation. Nobody has seen a robot for
centuries, since such machines became self-aware and, rather than consign them to slavery, the humans set them free (at which point they vanished to go do their own thing).
To take a diversion from the plot, can I just share for a moment a few lines from an early conversation between Dex and Mosscap, in which I think the level of mutual interpersonal
respect shown by the characters mirrors the utopia of the author’s construction:
…
“What—what are you? What is this? Why are you here?”
The robot, again, looked confused. “Do you not know? Do you no longer speak of us?”
“We—I mean, we tell stories about—is robots the right word? Do you call yourself robots or something else?”
“Robot is correct.”
…
“Okay. Mosscap. I’m Dex. Do you have a gender?”
“No.”
“Me neither.”
These two strangers take the time in their initial introduction to ensure they’re using the right terms for one another: starting with those relating to their… let’s say
species… and then working towards pronouns (Dex uses they/them, which seems to be widespread and commonplace but far from universal in their society; Mosscap uses it/its, which
provides for an entire discussion on the nature of objectship and objectification in self-identity). It’s queer as anything, and a delightful touch.
In any case: the outward presence of the plot revolves around a question that the robot has been charged to find an answer to: “What do humans need?” The narrative theme of self-defined
purpose and desires is both a presenting and a subtextual issue, and it carries through every chapter. The entire book is as much a thought experiment as it is a
novel, but it doesn’t diminish in the slightest from the delightful adventure that carries it.
Dex and Mosscap go on to explore the world, to learn more about it and about one another, and crucially about themselves and their place in it. It’s charming and wonderful and uplifting
and, I suppose, like a warm bath: comfortable and calming and centering. And it does an excellent job of setting the stage for the second book in the series, which we’ll get to presently…