CSS or BS

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

CSS or BS game in progress. The player is asked to declare whether 'view-timeline-name' is a real CSS property or made-up.

Well this is a fun (and frustrating!) game. You’ll be presented with 20 (alleged) CSS properties, but some of them… are convincing-looking fakes! You’ve got 10 seconds to identify whether each is real or not. Every few you get right increases the difficulty level, but also the score potential. How high can you score?

Me? Oh, I kept getting up into the “forbidden” level and then my brain would melt and I’d crash out. Quite proud of my last run, though:

Final score: 61/80. Reached: Forbidden. "If CSS knowledge were currency, you'd be comfortably middle-class."


The first glimmer


Glimmers are the opposite of triggers.

They are small, almost imperceptible cues that tell the nervous system: you are safe. You are connected. You are still here. Where a trigger tightens the chest and narrows the world, a glimmer softens the edges. It steadies the breath. It lets a thin ribbon of light slip in.

They are rarely grand in scale. Most often, they are sensory. Fleeting. Easy to miss.

… 

This is beautiful.

I’m reminded of the way Ruth reframed imposter syndrome as wonder syndrome a few years ago, which I wrote about at the time. A “glimmer” is not only a valuable and useful word that I’d not come across before (I love it when that happens, like with entle), but it also reframes the world in a more-positive light.

I’m going to start looking for and naming glimmers in my life as part of my general practice of gratitude. Cultivating a conscious awareness of our glimmers is probably harder than finding an awareness of our triggers – and even that’s not always easy to narrow down specifically! – but it seems like such a worthwhile exercise.

The One and I is a delightful and long-running personal blog, by the way, if you’re looking for somebody new to follow. It feels calming and personal and sweet and there’s a healthy corpus of pictures of pets.

Coding Is When We’re Least Productive


I potentially saved my client a bunch of money and embarrassment with that 3-line change.

Now, I consider that a productive day.

But had I been measured on my contribution by lines of code, or commits, or features finished, it would have been seen as a very unproductive day by my manager.

A great anecdote and some wise words from Jason Gorman on the nature of productivity and code.

This matches my feeling on AI. It’s good at making lots of code. Sometimes it even writes the right code. But something it rarely demonstrates skill at is comprehending the bigger issue. I’m sure we’re already seeing developers who “game” their employers’ productivity metrics, to the detriment of the end users, by having AI make “more” code without having to engage their brain and actually understand the problem.

(And, of course, there are employers who, whether intentionally or not, promote this kind of behaviour through their policies and success metrics.)

Thames Path 7


New friends – obscure sights – the group divides – clear and present danger – an accident of geography – interest in bridges

2026 has not been an easy one so far. Work challenges, family challenges and my frickin’ house flooding have combined to make everything a bit overwhelming and hard to cope with.

So when we got a sunny Sunday, on a weekend in late April when (thanks to having found a long-term rental) we didn’t have to move between short-term lets, I cajoled Dan into once again acting as my support driver so I could walk some more of the Thames Path.

Dan and the smaller child joined me for the first couple of miles from Abingdon, which was nice.

My partner Ruth’s mission to walk the entire length of the Thames Path1 continued recently, and I still love “going along on” her journey – even the parts I wasn’t present for – through her blog posts.

If you too might enjoy blog-spectating this slowest-possible-walk along the length of the River Thames, you can catch-up on the backlog and subscribe for the next one, whenever that happens!

Footnotes

1 She’s doing the walk in many, tiny, and disparate instalments. By her own estimates she’s achieving about 50 metres per day, when averaged over her entire effort. This makes her only marginally faster than the 40 metres per day of the faster parts of the Greenland Ice Sheet, which I guess means that her progress is literally glacial in its speed.


Spoon bucket


I got a spoon bucket for my birthday! 🥄

2 gallon white plastic bucket, labeled: - Flatware Circus - Spoon Bucket - Discover the wonder of spoons with this randomized collection contained within a convenient bucket - 30 miscellaneous spoons inside one bucket

Wait, this is a real thing?

OMG it is.

New life goal unlocked!

Evaluation: wild success, a very pleasing range of spoons (and two tiny forks)

11 small piles of stainless steel flatware sorted by type, with 1-7 items per pile. 10 types are spoons, 1 is tiny forks.

The same guy behind these also does buckets of forks. And vlogs about silverware in general.

Everything about that is excellent.

Oh, and happy birthday Jamie, I guess!


ElenaJS (Progressive Web Components)


I still think web components are a great foundation for a design system. No other approach gives you true cross-framework portability built on what the web platform already provides. The problem isn’t necessarily the model itself, it’s how we’ve been building them.

This is how I ended up creating Elena, a library that I’m open sourcing today. Elena starts from HTML and CSS, and stays grounded in web standards and what the web platform natively provides.

I love the “HTML Web Components”/“Progressive Web Components”1 development pattern. If you’re new to it, the idea is:

  1. Write the HTML to provide as much functionality as possible
  2. Wrap it in a custom element
  3. Use that custom element to enhance the component with anything only JS can provide
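The three steps above can be sketched in vanilla JavaScript. This is a minimal, hypothetical example (the `local-time` element name and its behaviour are my own invention, not anything from Elena or the article): the HTML fallback works on its own, and the custom element only layers behaviour on top when JS is available.

```javascript
// Step 1 (in the HTML, not shown here): ship working markup with a
// human-readable fallback, e.g.
//   <local-time datetime="2026-02-14T12:00:00Z">14 February 2026</local-time>

// The enhancement logic is a plain function, kept separate from the
// element so it can be exercised outside a browser.
function toLocalString(iso) {
  const d = new Date(iso);
  return Number.isNaN(d.getTime()) ? null : d.toLocaleString();
}

// Guard so this sketch also loads in non-browser (e.g. SSR) contexts,
// where HTMLElement doesn't exist.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

// Steps 2 and 3: wrap the markup in a custom element, and use it to
// enhance the content. If anything fails, the fallback text remains.
class LocalTime extends Base {
  connectedCallback() {
    const iso = this.getAttribute('datetime');
    const local = iso && toLocalString(iso);
    if (local) this.textContent = local;
  }
}

if (typeof customElements !== 'undefined') {
  customElements.define('local-time', LocalTime);
}
```

The key property is that every step degrades gracefully: a browser without JS (or a crawler, or a reader-mode view) still gets the author-supplied fallback text.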

The downside is that there’s often more scaffolding than you’d like: implementing event and property change listeners (and tidying them on disconnection), batching updates to avoid flicker, and all that jazz.

Now obviously you could go with one of the big heavyweight frameworks like React, but then you’re leaning into a whole locked-in architecture that makes it harder to write progressive components and burdens your users with a ton of unnecessary code. Boo!

That’s why I love it when clever people make useful, HTML-friendly, ultra-lightweight frameworks2 like ReefJS, which I’ve talked about using before, and – now – Elena!

Elena’s a modern, simple, MIT-licensed wrapper framework for your web components, and – having perused the documentation on-and-off for the last couple of days – it’s really exciting. Perhaps not because of what it does, but because of what it doesn’t do. It’s an unopinionated, well-documented, SSR-friendly3 microframework that seems to bring out the absolute best in what the Web offers via web components… and makes it easier for developers without making end-users pay the price for it.

Anyway: all of which is to say: check out Elena! I’m really excited to have a play with it the next time I have a suitable web components project.

Footnotes

1 I’m with Jeremy: “Progressive Web Components” is a better name. Also: isn’t it funny how changing just one word of a name can make you re-think what a thing is. The moment I refactored the way I thought about HTML Web Components into calling them Progressive Web Components was the moment I said to myself “hey, I could put an SVG into one of those… use state-managed props to set CSS variables that are available to the image… and in doing so, produce an SVG that elegantly becomes animated where JS is available…”

2 I say “frameworks”, but by the time they’re this lightweight, single-purpose, and focussed on adding functionality that perhaps vanilla JS and web components should already have, we might as well call them utility libraries or polyfills!

3 SSR perhaps ought not to matter for Progressive Web Components, but I can imagine situations where Elena would still be useful even for web components without an HTML fallback, at which point I suppose SSR could be a performance shortcut for some projects.

NHS England rushes to hide software over AI hacking fears


NHS England has issued new guidance to staff, which has been shared with New Scientist, that demands existing and future software be pulled from public view and kept behind closed doors. “All source code repositories must be private by default. Repositories must not be public unless there is an explicit and exceptional need, and public access has been formally approved,” says the new guidance. The deadline for making code private is 11 May.

Last month, an AI created by Anthropic called Mythos was widely reported to be capable of discovering flaws in virtually any software, potentially allowing hackers to break into systems running it.

NHS England’s guidance specifically points to Mythos as the cause for the new measures.

Yet again, “AI” is the reason why we can’t have nice things on an open and transparent Web.

This is bad, of course. But the worst part is the illusion it helps feed that closed-source software is necessarily more-secure than open-source software. Obviously it’s all much more-complex than that. Indeed, the article goes on to quote Terence Eden thoroughly debunking the entire line of thought:

“Is it possible that Mythos will scan a repository and find a bug? Yes, 100 per cent likely. Is that going to be a bug that causes a security issue in a live NHS service somewhere? Almost certainly not,” says Eden. “I think it’s someone in NHS England buying into the hype that Mythos is going to cause the end of security as we know it and getting a bit panicked.”

He’s right. This policy change is unlikely to improve the security of any of the affected pieces of NHS software (for much of which, the code is already out-there and archived, and so removing it from the Internet now is pretty pointless). If it’s going to be attacked, it’ll be attacked, and the resources that the bad guys have for probing a whole database worth of CVEs or fuzz-testing the extremities makes the availability of vulnerability-scanning AI pretty-close to irrelevant.

At least if it were open source then the good guys would have a chance of helping out… as well as we, the taxpayers who made the software possible, being able to see where our money was going!

Altogether a bad move by the NHS, here.

rejecting convenience


why bother going to the brick-and-mortar store? amazon is more “convenient”. why bother cooking a nice meal for yourself? doordash and uber eats are more “convenient”. why go out and socialize with people? facebook is more “convenient”. why use a digital camera, camcorder, or polaroid? your smartphone is more “convenient”. why bother going to the theater or concerts? netflix and spotify are more “convenient”. why bother making art? asking an AI to generate it for you is more “convenient”.

well, i say nuts to that. from now on, i’m going to make my life as inconvenient as possible. i’m going to go to the store and buy stuff in person. i’m going to make my own food with my own hands. i’m going to socialize with people face-to-face. i’m going to use a true camera instead of my phone’s camera. i’m going to buy blu-rays, DVDs, and CDs instead of streaming. i’m going to take my time when creating, watching, playing, and reading a work of art.

I’m seeing a growing movement in indieweb, revivalist, and adjacent circles that expresses RNotté’s sentiment: that the endless (and highly-marketable) quest for increased convenience in our lives has gained us free time, but we’ve lost something along the way.

What we’ve lost varies from case to case, but includes freedom (from lock-in to subscription services), creative satisfaction (from convenient “artistic” expression), privacy (from becoming the product, packaged-up by big-data advertising-funded tools), and social interactions (from so much of “social” media).

But reading RNotté share their thoughts on the matter today was the first time that it’s reminded me of The Matrix.

Framegrab from The Matrix. In the foreground is the silhouette of Morpheus, who is about to be interrogated by Agent Smith, a man in a suit at the windowed far end of an office.
The connection was probably helped by the fact that I rewatched the film pretty recently.

There’s a bit where Agent Smith says to his captive, the rebel captain Morpheus:

Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program. Entire crops were lost. Some believed we lacked the programming language to describe your perfect world.

Smith goes on to elucidate that his personal explanation for this fault was that humans depend upon suffering and misery, while acknowledging that there are other explanations. And perhaps we’ve touched upon one.

Perhaps humans – all humans – have a limit to how much convenience they’re willing to accept as compensation. Connected humans in The Matrix gain a convenient life, superficially superior to the struggle for survival experienced by humans living in the real world, short on food and hunted by machines. But to get it, they trade away their individual ability to become aware of the truth and, collectively, humanity’s ability to shape its own destiny. Something about the imbalance of power in the arrangement niggles in human minds, though, and some rebel against the established order… and are joined by others who are shown that an alternative is available.

Clearly – as RNotté and others show – faceless technological forces need not go quite so far as enslaving an entire species before “convenience” stops being a tolerable mitigation!

I’m not convinced that seeking out inconvenience is in itself a good. But questioning what your conveniences are worth and what you’re paying for them… that’s definitely worthwhile.


Molly guard in reverse


Old-school computing has a term “molly guard”: it’s the little plastic safety cover you have to move out of the way before you press some button of significance.

Anecdotally, this is named after Molly, an engineer’s daughter who was invited to a datacenter and promptly pressed a big red button, as one would.

Then she did it again later the same day.

This article from UX expert Marcin Wichary is intended to be a vehicle to talk about the thoughtful design that goes into “reverse molly guards”: pieces of user interface that will proceed by themselves if you do nothing, but can be stopped by user interaction. He provides the example of MacOS’s “Are you sure you want to restart your computer?” dialog, which includes a countdown to automatically going ahead with the restart in 60 seconds unless told not to.

From my perspective, though: this was the first time I’d ever come across the term “molly guard”, and I love it (especially with its accompanying anecdote). I’ve certainly seen them all over the place. In fact, I’d love to share a particularly-aggressive molly guard I implemented in Three Rings a couple of years ago:

A problem we occasionally faced in Three Rings was administrators – especially new administrators, gaining lots of powers for the first time – managing to delete entire rotas, without realising that this would delete all of the shifts (and the signups) within those rotas too. This is a hard operation to un-do, so we added a basic molly guard: an “are you sure?” interstitial page that explained exactly how much damage would be done.

But it didn’t work well enough! We watched users who would see a blocker and rush straight to the big, red, delete button on the other side of all the warnings. I guess that the dark patterns that are now everywhere in software have trained users to click-through every wall that gets in their way as fast as possible and with the minimum interaction. But now that “training” was working against the safety of charity data!

So we came up with something stronger:

Screenshot of a pre-deletion warning page that says on the first line how many shifts will be deleted and then asks on the last line for the user to repeat that number back.

Now, the interstitial page not only says what the scale of the damage will be… it asks the user to repeat it back. Looking at that screenshot, you’ll see that the first line says that 2,056 shifts will be deleted… and then the last line contains a text box to type that number back in again. (This page only appears if it looks like a lot of “real” data will be deleted; otherwise we use the old page, so as not to scare off people who are throwing together temporary test rotas.)
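The heart of this kind of guard is trivially small; the value is in the interaction design, not the code. A minimal sketch of the check (hypothetical, not Three Rings’ actual implementation) might look like this, tolerating the thousands-separator a user might reasonably type:

```javascript
// Return true only if the user has typed back the affected record
// count, e.g. "2,056" or "2056" for affectedCount = 2056.
function confirmBulkDelete(affectedCount, typedValue) {
  // Strip commas and whitespace so "2,056" and "2056" both pass.
  const typed = parseInt(String(typedValue).replace(/[,\s]/g, ''), 10);
  return typed === affectedCount; // NaN (empty/garbage input) never matches
}
```

The delete button stays inert until this returns true, so a user who rushes past the warning text has to go back and actually read the first line to proceed.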

If you read the page, it’s easy to answer the question. But if you just rush to the red button… you’re stuck. You’ll be given a user interface nudge telling you to fill in the box, but until you read the first line of the page, you won’t be able to answer it.

This molly guard works: since it was implemented, we’ve never had an instance of an accidentally-deleted rota that required us to pull data from the backups on behalf of a charity.

But it’s possible we’ve swung too far the other way and caused some collateral damage to usability: we’ve twice had technical support queries from users who couldn’t work out what they had to type into the box!

This is an acceptable outcome, we decided: it gives us the chance to check that they really mean to do what they were asking to do (of the two queries: one user did; the other meant to do something else) and point them in the direction of the number they need. It works!

Anyway, the key thing I wanted to share was that great article by Marcin Wichary with some great photos of various hardware and software molly guards (and reverse molly guards) for your amusement.


The Dungeon of Dark Patterns


The Dungeon of Dark Patterns: a comic in four panels.

Panel 1. The adventurer and his fairy stand before the door of a nightmarish dungeon; it’s dark and foggy, and beyond the door nothing can be seen except a deep red light. Dungeon: “Welcome, adventurers, to the Dungeon of Dark Patterns!”

Panel 2. In one room of the dungeon, a giant, beautiful, inviting door with a red carpet and, to the side in shadow, a tiny door. The big door is labelled “Go to the trap”; the small door, “Go to the treasure”. The adventurer crouches and motions to the fairy to follow him through the little door. Dungeon: “Ha ha, you’re good!”

Panel 3. The adventurer strains to climb an old rope in the middle of a room, beside a beautiful, luxurious stairway with a red carpet. A sign reading “GO TO THE TREASURE but pass by the trap” points towards the beautiful stairs; another, small and in shadow, reads “(other options)” and points towards the rope. Dungeon: “Impressive!”

Panel 4. A top-down view of the adventurer shrugging in front of the fairy: they’ve reached a dead end. A short path to the right is labelled “Now” on the ground; a longer path, “Later”. Both lead to a giant pool of green acid in which bones and skulls float. Dungeon: “So, when do you want to jump into the trap?”

Well this is just excellent.

I’d not come across David Revoy before today, but he’s apparently been doing art and comics since 2014. The Mini Fantasy Theatre series started a couple of years ago, but it’s totally getting added to my RSS reader. Almost everything’s bilingual English/French too, if that’s something that interests you.

Navigating around the dark patterns of modern UX certainly feels like a dungeon delve, sometimes. Now we just need the episode in which the adventurer has difficulty unsubscribing from requests from their patron…


WSL9x


WSL9x runs a modern Linux kernel (6.19 at time of writing) cooperatively inside the Windows 9x kernel, enabling users to take advantage of the full suite of capabilities of both operating systems at the same time, including paging, memory protection, and pre-emptive scheduling. Run all your favourite applications side by side – no rebooting required!

Well this blew my mind.

Windows Subsystem for Linux (WSL) is one of the single best things Microsoft have added to Windows in the last decade1. But, of course, it’s for Windows 10 and 11 only. I would never have conceived that somebody could make the same trick work for, like, Windows 95!

But Hails has done so. And no, this isn’t some kind of emulation; it’s proper cooperative multitasking between the two kernels, just like regular WSL does. Somehow, in a version that came out nine years before Windows even supported the NX bit. Mindboggling.

Footnotes

1 This ought to be a little embarrassing for them: I mean – if the most-valuable improvement you make to your operating system is to make it… more like a different operating system… – that’s not a great sign, is it?


Finding the right Bottom Hole paper


On the 6th of January 1995, viewers of BBC Two were treated to a new series of Waiting for Godot Bottom. Stuck at the top of a Ferris wheel, Vyvyan and the People’s Poet Eddie and Ritchie wait to see what the cruel hand of fate has dealt them in this week’s episode “Hole”.

At one point, Captain Edrison Peavey Edward Elizabeth Hitler pulls out a newspaper to read.

Some pissed old fart reading a newspaper called "The Hammersmith Bugle" with the headline "No news shocker..."

It may surprise you to know that the “Hammersmith Bugle” is not a real paper and they never ran a headline “No News Shocker”. At which point, it is time to rip off Dirty Feed’s shtick and find out what that paper really is.

This is exactly the kind of rabbitholey deep-dive I know and love (and have experienced ever so frequently myself). Join Terence on a long (and not-entirely satisfying!) quest to find the actual newspaper that was adapted by the Bottom production team for this particular episode.

Anyway, it’s an amusing journey that I enjoyed going along with, this morning, and maybe you will too.

Update: He later found the right paper, and there’s a story there too!


The machines are fine. I’m worried about us.


Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent’s fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob’s weekly updates to his supervisor were indistinguishable from Alice’s. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

Here’s where it gets interesting. If you are an administrator, a funding body, a hiring committee, or a metrics-obsessed department head, Alice and Bob had the same year. One paper each. One set of minor revisions each. One solid contribution to the literature each. By every quantitative measure that the modern academy uses to assess the worth of a scientist, they are interchangeable. We have built an entire evaluation system around counting things that can be counted, and it turns out that what actually matters is the one thing that can’t be.

The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we’ve collectively decided that maybe this time it’s different. That maybe nodding at Claude’s output is a substitute for doing the calculation yourself. It isn’t. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient.

Centuries of pedagogy, defeated by a chat window.

This piece by Minas Karamanis is excellent throughout, and if you’ve got the time to read it then you should. He’s a physics postdoc, and this post comes from his experience in his own field, but I feel that the concerns he raises are more-widely valid, too.

In my field – of software engineering – I have similar concerns.

Let’s accept for a moment that an LLM significantly improves the useful output of a senior software engineer (which is very-definitely disputed, especially for the “10x” level of claims we often hear, but let’s just take it as-read for now). I’ve experimented with LLM-supported development for years, in various capacities, and it certainly sometimes feels like they do (although it sometimes also feels like they have the opposite effect!). But if it’s true, then yes: an experienced senior software engineer could conceivably increase their work performance by shepherding a flock of agents through a variety of development tasks, “supervising” them and checking their work, getting them back on-course when they make mistakes, approving or rejecting their output, and stepping in to manually fix things where the machines fail.

In this role, the engineer acts more like an engineering team lead, bringing their broad domain experience to maximise the output of those they manage. Except who they manage is… AI.

Again, let’s just accept all of the above for the sake of argument. If that’s all true… how do we make new senior developers?

Junior developers can use LLMs too. And those LLMs will make mistakes that the junior developer won’t catch, because the kinds of mistakes LLMs make are often hard to spot and require significant experience to identify. But if they’re encouraged to use LLMs rather than making mistakes by hand and learning from them – to keep up, for example, or to meet corporate policies – then these juniors will never gain the essential experience they’ll one day need. They’ll be disenfranchised of the opportunity to grow and learn.

It’s yet to be proven that more-sophisticated models will “solve” this problem, but my understanding is that issues like hallucination are fundamentally unsolvable: you might get fewer hallucinations in a better model, but that just means that those hallucinations that slip through will be better-concealed and even harder to identify in code review or happy-path testing.

Maybe – maybe – the trajectory of GPTs is infinite, and they’ll keep getting “smarter” to the point at which this doesn’t matter: programming genuinely will become a natural language exercise, and nobody will need to write or understand code at all. In this possible reality, the LLMs will eventually develop entire new programming languages to best support their work, and humans will simply express ideas and provide feedback on the outputs. But I’m very sceptical of that prediction: it’s my belief that the mechanism by which LLMs work has a fundamental ceiling – a capped level of sophistication that can be approached but never exceeded. And sure, maybe some other, different approach to AI might not have this limitation, but if so then we haven’t invented it yet.

Which suggests that we will always need experienced engineers to shepherd our AIs. Which brings us back to the fundamental question: if everybody uses AI to code, how do we make new senior developers?

I have other concerns about AI too, of course, some of which I’ve written about. But this one’s top-of-mind today, thanks to Minas’ excellent article. Go read it to learn more about how physics research faces a similar threat… and, perhaps, consider how your own field might need to face this particular challenge.

Blogging: you’re doing it right


That’s all you need to know. If you’re doing it, you’re doing it right. If you have decided to reclaim ownership of your place on the web, you’re doing it right. It doesn’t matter how you did it. It doesn’t matter if you’re self-hosting or using a SAAS. It doesn’t matter if your content lives on a database or in a TXT file. It doesn’t matter if you did everything yourself or you paid someone to do it for you. It doesn’t matter if you post once a day or once a year. What matters is that you’re doing it. Your effort is commendable. You deserve to be thanked so, thank you.

Wonderful words from Manu, there, that I think every blogger needs to be told once in a while. You have permission to write stuff. There isn’t a wrong way.

Also worth reading is his “2-step process for AI-free blogging”.

I wish I could be as pithy as Manu. But I’mma keep blogging anyway. After all; I’m doing it right!

People are not friction


The Gell-Mann Amnesia Effect of AI is a pretty well documented phenomenon:

The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

Summarizing, AI sounds like an incredible genius synthesizing the world’s knowledge right up until you ask it about the thing you know about; then it’s an idiot. Even knowing about this phenomenon and having experienced it countless times, LLMs have an intoxicating quality to them.

I remember one time, maybe in the mid-1990s, when I saw a shopping channel (remember those? oh god, they’re still a thing, aren’t they?) where the host was trying to sell a personal computer. And… clearly, they knew absolutely nothing about it. They kept hitting on the same two or three talking points they’d been given (“mention the quad-speed CD-ROM drive!”) and fumbling their way through, and it gave me a revelation:

I knew enough about computers that I could see that the presenter was bullshitting their way through the segment. But there are plenty of things that I don’t know much about, which are also sold on this same show. Duvets, jewellery, glassware… I’m nowhere near as much of an expert on these as I was on PC featuresets. Is there something inherently incomprehensible about computers? No. So it’s reasonable to assume that these salespeople probably know equally-little about everything they sell; it’s just that I don’t have the knowledge base to be able to see that.

That’s what GenAI often feels like, to me. Having collated all of the publicly-available knowledge it could find into its model doesn’t make it smarter than the smartest humans; it brings it to something probably slightly above the average in any given subject, depending on the topic. If I ask an LLM about something that I don’t understand well, it often produces highly-believable answers, but if I ask it about something that I’m an expert in, it can come off as a fool.

I’m very interested in how we teach information literacy in this new world of rapidly-generated highly-believable nonsense.

Anyway: Dave’s post doesn’t go in that direction – instead, he’s got some clever thoughts about how the “convenience” of a “good enough” AI-driven solution to any given problem risks us seeing humans as the friction point, which ultimately works against those very humans who are looking to benefit from the technology:

We need experts to share what they know and improve the quality of our work, generated or otherwise. We even need idiots to make sure we can break ideas down into their simplest form that everyone, agents or human, understand. People can have bad attitudes, be shitty, and have wrong opinions… but people are not friction. An LLM may be able to autocorrect its way into a plausible human response, but it’s not people. It doesn’t care if it’s right or wrong.

It’s an easy and worthwhile read.