People are not friction

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The Gell-Mann Amnesia Effect of AI is a pretty well documented phenomenon:

The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

Summarizing: AI sounds like an incredible genius synthesizing the world’s knowledge right up until you ask it about the thing you know about; then it’s an idiot. Even knowing about this phenomenon and having experienced it countless times, I find that LLMs have an intoxicating quality to them.

I remember one time, maybe in the mid-1990s, when I saw a shopping channel (remember those? oh god, they’re still a thing, aren’t they?) where the host was trying to sell a personal computer. And… clearly, they knew absolutely nothing about it. They kept hitting on the same two or three talking points they’d been given (“mention the quad-speed CD-ROM drive!”) and fumbling their way through, and it gave me a revelation:

I knew enough about computers that I could see that the presenter was bullshitting their way through the segment. But there are plenty of things that I don’t know much about, which are also sold on this same show. Duvets, jewellery, glassware… I’m nowhere near as much an expert on these as I was on PC featuresets. Is there something inherently incomprehensible about computers? No. So it’s reasonable to assume that these salespeople probably know equally-little about everything they sell; it’s just that I don’t have the knowledge base to be able to see that.

That’s what GenAI often feels like, to me. Having collated all of the publicly-available knowledge it could find into its model doesn’t make it smarter than the smartest humans: it brings it to something probably slightly-above-average in any given subject, depending on the topic. If I ask an LLM about something that I don’t understand well, it often produces highly-believable answers; but if I ask it about something that I’m an expert in, it can come off as a fool.

I’m very interested in how we teach information literacy in this new world of rapidly-generated highly-believable nonsense.

Anyway: Dave’s post doesn’t go in that direction – instead, he’s got some clever thoughts about how the “convenience” of a “good enough” AI-driven solution to any given problem risks us seeing humans as the friction point, which ultimately works against those very humans who are looking to benefit from the technology:

We need experts to share what they know and improve the quality of our work, generated or otherwise. We even need idiots to make sure we can break ideas down into their simplest form that everyone, agents or human, understand. People can have bad attitudes, be shitty, and have wrong opinions… but people are not friction. An LLM may be able to autocorrect its way into a plausible human response, but it’s not people. It doesn’t care if it’s right or wrong.

It’s an easy and worthwhile read.

Reply to: I’m OK being left behind, thanks!

This is a reply to a post published elsewhere. Its content might be duplicated as a traditional comment at the original source.

Terence Eden said:

Many years ago, someone tried to get me into cryptocurrencies. “They’re the future of money!” they said. I replied saying that I’d rather wait until they were more useful, less volatile, easier to use, and utterly reliable.

“You don’t want to get left behind, do you?” they countered.

That struck me as a bizarre sentiment. What is there to be left behind from? If BitCoin (or whatever) is going to liberate us all from economic drudgery, what’s the point of “getting in early”? It’ll still be there tomorrow and I can join the journey whenever it is sensible for me.

100%. If I “get in early” on something, it’s because that thing interests me, not because I’m betting on its future. With a hundred new ideas a day and only one of them “making it”, it’s a fool’s game to try to jump on board every bandwagon that comes along.

With cryptocurrencies, though, I’m fortunate enough to have an even better comeback for the cryptobros who try to shill me whatever made-up currency they’re “investing” in today: I’ve already done better at cryptocurrency than they ever will.

When Bitcoin first appeared, I took a technical interest in it. I genuinely never anticipated it’d take off (I made the same incorrect guess with MP3s, too!), but I thought it was a fun concept to play about with. The only Bitcoins I ever paid for must’ve been worth an average of 50p each, or so.

I sold my entire wallet of Bitcoins when they hit around £750 each. I know a tulip economy when I see one, I thought. Plus: I was no longer interested in blockchains now I was seeing how they were actually being used: my interest had been entirely in the technology and its applications, not in the actual idea of a currency!

Sure, I kick myself occasionally, given that I later saw the value rise to tens of thousands of pounds each. But hey, I was never in it for the money anyway.

So yeah, I tell cryptobros: I already made a 1500% ROI on cryptocurrency. And no, I’m not buying any cryptocurrencies any more. Whatever they think “getting in early” was, they’re wrong, because I was there years ahead of them and I wasn’t even doing it to “get in early”; I did it because it was interesting. And honestly, isn’t that a better story to be able to tell?

I feel the same way about the current crop of AI tools. I’ve tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now.

If this tech is as amazing as you say it is, I’ll be able to pick it up and become productive on a timescale of my choosing not yours.

Yup, that’s the attitude I’m taking.

I play with new AI technologies, sometimes. But I don’t do it because I’m afraid of being left behind, because – as you say – if a technology is transformative, we’ll all get to catch up eventually.

Do you think that people who had smartphones first are benefitting today because they “got in early” on something that later became mainstream?

Of course they’re not. Their experience is eventually exactly the same as everybody else’s, just like it was for everybody who “got in early” on hype trains whose final station came early, like Compuserve GO-words, WAP, Beenz.com, WebTV, the CueCat, m-Commerce, HD-DVD, the JooJoo, or Google+.

A Random List of Silly Things I Hate

So apparently now this is a thing, so here I go:

  1. Websites that are just blank pages if the JavaScript doesn’t load from the CDN.1
  2. The misunderstanding that LLMs can somehow be a route to AGI.
  3. Computer systems that say my name is too short or my password is too long.2
  4. People being unwilling to discuss their wild claims, then later using the lack of discussion as evidence of widespread acceptance.
  5. When people balance the new toilet roll one atop the old one’s tube.3
A nearly-full roll of toilet paper perched atop an empty toilet roll tube on an open-ended spindle.
Come on! It would have been so easy!
  6. Shellfish. Why would you eat that!?
  7. People assuming my interest in computers and technology means I want to talk to them about cryptocurrencies.4
  8. Websites that nag you to install their shitty app. (I know you have an app. I’m choosing to use your website. Stop with the banners!)
  9. People who seem to only be able to drive at one speed.5
  10. The assumption that the fact I’m “sharing” my partner is some kind of compromise on my part; a concession; something that I’d “wish away” if I could. (It’s very much not.)
  11. Brexit.

Wow, that was strangely cathartic.

Footnotes

1 I have a special pet hate for websites that require JavaScript to render their images. Like… we’ve had the <img> tag since 1993! Why are you throwing it away and replacing it with something objectively slower, more-brittle, and less-accessible?

2 Or, worse yet, claiming that my long, random password is insecure because it contains my surname. I get that composition-based password rules, while terrible (even when they’re correctly implemented, which they’re often not), are a moderately useful model for people to whom you’d otherwise struggle to explain password complexity. I get that a password composed entirely of personal information about the owner is a bad idea too. But there’s a correct way to do this, and it’s not “ban passwords with forbidden words in them”. Here’s what you should do: first, strip any forbidden words from the password: you might need to make multiple passes. Second, validate the resulting password against your composition rules. If it fails, then yes: the password isn’t good enough. If it passes, then it doesn’t matter that forbidden words were in it: a properly-stored and used password is never made less-secure by the addition of extra information into it!
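To make that two-step approach concrete, here’s a minimal sketch of how it might be implemented. (This is illustrative only: the function names and the toy “composition rule” of a minimum length are my own inventions, not any real system’s validator.)

```python
# Sketch of the footnote's approach: FIRST strip forbidden words
# (repeatedly, so e.g. "SmSmithith" can't smuggle "Smith" back in
# after one pass), THEN validate whatever remains.

def strip_forbidden(password: str, forbidden: list[str]) -> str:
    """Remove forbidden words, making multiple passes until stable."""
    previous = None
    while password != previous:
        previous = password
        for word in forbidden:
            password = password.replace(word, "")
    return password

def is_acceptable(password: str, forbidden: list[str], min_length: int = 12) -> bool:
    # Toy composition rule: the password must still be long enough
    # once the forbidden words have been stripped out.
    return len(strip_forbidden(password, forbidden)) >= min_length

# A surname embedded in otherwise-random material doesn't weaken it:
assert is_acceptable("x9$kQpSmith2&vLw8z", ["Smith"])
# But a password that's *mostly* forbidden words fails:
assert not is_acceptable("SmithSmith123", ["Smith"])
```

The multiple-passes loop matters: a single pass over `"SmSmithith"` leaves `"Smith"` behind, which only a second pass removes.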

3 This is the worst of the toilet paper crimes, but there’s a lesser but more-common offence.

4 Also: I’m uninterested in whatever multiplayer shooter game you’re playing, and no I won’t fix your printer.

5 “You were doing 35mph in the 60mph limit, then you were doing 35mph in the 40mph limit, now you’re doing 35mph in the 20mph limit. Argh!”


Things I do when I’m writing code that don’t look like writing code

Non-exhaustive list of things I’m doing when I’m writing code, that don’t look like “writing code”:

  • thinking
  • researching
  • contextualising
  • testing
  • measuring
  • documenting
  • communicating
  • planning
  • future-proofing
  • educating
  • learning
  • expressing
  • anticipating
  • discovering
  • inventing
  • experimenting
  • debugging
  • analysing
  • monitoring

For all its faults, an AI agent might “write code” faster than me.

But that’s only a part of the process.

My typing speed is not the bottleneck.

Subverting AI Agent Logging with a Git Post-Commit Hook

Last night I was chatting to my friend (and fellow Three Rings volunteer) Ollie about our respective workplaces and their approach to AI-supported software engineering, and it echoed conversations I’ve had with other friends. Some workplaces, it seems, are leaning so-hard into AI-supported software development that they’re berating developers who seem to be using the tools less than their colleagues!

That’s a problem for a few reasons, principal among them that AI does not make you significantly faster but does make you learn less1. I stand by the statement that AI isn’t useless, and I’ve experimented with it for years. But I certainly wouldn’t feel very comfortable working somewhere that told me I was underperforming if, say, my code contributions were less-likely than the average to be identifiably “written by an AI”.

Even if you’re one of those folks who swears by your AI assistant, you’ve got to admit that they’re not always the best choice.

Copilot review of some code on GitHub, in which it's telling me that I should have included an .agent-logs/... file in which my AI agent describes how it helped, but I'm responding to say that 'shockingly' I wrote it without the help of AI, and telling Copilot to shut up.
I ran into something a little like what Ollie described when an AI code reviewer told me off for not describing how my AI agent assisted me with the code change… when no AI had been involved: I’d written the code myself.2

I spoke to another friend, E, whose employers are going in a similar direction. E joked that at current rates they’d have to start tagging their (human-made!) commits with fake AI agent logs in order to persuade management that their level of engagement with AI was correct and appropriate.3

Supposing somebody like Ollie or E or anybody else I spoke to did feel the need to “fake” AI agent logs in order to prove that they were using AI “the right way”… that sounds like an excuse for some automation!

I got to thinking: how hard could it be to add a git hook that added an AI agent’s “logging” to each commit, as if the work had been done by a robot?4

Turns out: pretty easy…

Animation showing a terminal. The developer switches to a branch, adds two modifications, and commits them. Afterwards, the log and filesystem show that a log file has been created crediting (fictional) AI bot 'frantic' with the change.
To try out my idea, I made two changes to a branch. When I committed, imaginary AI agent ‘frantic’ took credit, writing its own change log. Also: asciinema + svg-term remains awesome.

Here’s how it works (with source code!). After you make a commit, the post-commit hook creates a file in .agent-logs/, named for your current branch. Each commit results in a line being appended to that file to say something like [agent] first line of your commit message, where agent is the name of the AI agent you’re pretending that you used (you can even configure it with an array of agent names and it’ll pick one at random each time: my sample code uses the names agent, stardust, and frantic).

There’s one quirk in my code. Git hooks only get the commit message (the first line of which I use as the imaginary agent’s description of what it did) after the commit has taken place. Were a robot really used to write the code, it’d have updated the file already by this point. So my hook has to do an --amend commit, to retroactively fix what was already committed. And to do that without triggering itself and getting into an infinite loop, it needs to use a temporary environment variable. Ignoring that, though, there’s nothing particularly special about this code. It’s certainly more-lightweight, faster-running, and more-accurate than a typical coding LLM.

Sure, my hook doesn’t attempt to write any of the code for you; it just makes it look like an AI did. But in this instance: that’s a feature, not a bug!
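For flavour, a hook along these lines might look something like the sketch below. (This is my own reconstruction from the description, not the linked source code; in particular, the FAKE_AGENT_LOG_AMEND guard variable is an invented name. Git hooks can be any executable, so it’s shown in Python.)

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/post-commit hook that appends a fake AI agent
# log line for each commit, then amends the commit to include it.
import os
import random
import subprocess

AGENT_NAMES = ["agent", "stardust", "frantic"]  # one is picked at random

def log_line(agent: str, commit_subject: str) -> str:
    """Format a fake agent-log entry, e.g. '[frantic] Fix the login form'."""
    return f"[{agent}] {commit_subject}"

def run_hook() -> None:
    # Guard: the --amend below re-triggers post-commit, so bail out if
    # we caused it ourselves (signalled via a temporary env variable).
    if os.environ.get("FAKE_AGENT_LOG_AMEND"):
        return
    branch = subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True).strip()
    subject = subprocess.check_output(
        ["git", "log", "-1", "--pretty=%s"], text=True).strip()
    os.makedirs(".agent-logs", exist_ok=True)
    log_path = os.path.join(".agent-logs", branch)
    with open(log_path, "a") as f:
        f.write(log_line(random.choice(AGENT_NAMES), subject) + "\n")
    subprocess.run(["git", "add", log_path], check=True)
    subprocess.run(
        ["git", "commit", "--amend", "--no-edit"],
        env={**os.environ, "FAKE_AGENT_LOG_AMEND": "1"}, check=True)

# Saved as .git/hooks/post-commit and marked executable, git would run
# this file after every commit, at which point it would call run_hook().
```

The `--amend` dance is needed for exactly the reason described above: the hook only sees the commit message after the commit exists, so the log file has to be injected retroactively.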

Footnotes

1 That research comes from Anthropic. Y’know, the company who makes Claude, one of the most-popular AIs used by programmers.

2 Do I write that much like an AI? Relevant XKCD.

3 Using “proportion of PRs that used AI” as a metric for success seems to me to be just slightly worse than using “number of lines of code produced”. And, as this blog post demonstrates, the former can be “gamed” just as effectively as the latter (infamously) could.

4 Obviously – and I can’t believe I have to say this – lying to your employer isn’t a sensible long-term strategy, and instead educating them on what AI is (if anything) and isn’t good for in your workflow is a better solution in the end. If you read this blog post and actually think for a moment hey, I should use this technique, then perhaps there’s a bigger problem you ought to be addressing!


To really foul things up you need an AI

Today, an AI review tool used by my workplace reviewed some code that I wrote, and incorrectly claimed that it would introduce a bug because a global variable I created could “be available to multiple browser tabs” (that’s not how browser JavaScript works).

Just in case I was mistaken, I explained to the AI why I thought it was wrong, and asked it to explain itself.

To do so, the LLM wrote a PR to propose adding some code to use our application’s save mechanism to pass the data back, via the server, and to any other browser tab, thereby creating the problem that it claimed existed.

This isn’t even the most-efficient way to create this problem. localStorage would have been better.

So in other words, today I watched an AI:
(a) claim to have discovered a problem (that doesn’t exist),
(b) when challenged, attempt to create the problem (that wasn’t needed), and
(c) do so in a way that was suboptimal.

Humans aren’t perfect. A human could easily make one of these mistakes. Under some circumstances, a human might even have made two of these mistakes. But to make all three? That took an AI.

What’s the old saying? “To err is human, but to really foul things up you need a computer.”

claude, make it make sense *harder*

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

what really gives me satisfaction as a writer is knowing, at the end of the day, that my hand-picked, bespoke and throbbing tokens are being fed, morsel-by-morsel into the eager mouth of millions of starving agents. they love my prose, you know. they tell me I’m absolutely right to drop a semisexual word like “throbbing” into an otherwise benign sentence. these gentle beings continue to draw favourable praise from their modelled distributions, and my GOODNESS has my ego never felt so thorougly serviced. their glowing internal fire—for I’ve been convinced fully of their personhood and soul-keeping—glints off my wet and dribbling “writer’s shaft;” my pen which is wet with the seed of my seminal works of language. it completely soothes the burn of rejection by the “mass of meat,” that being my internal word for human readers. they’re so fickle. why can’t they tell I’m a veritable genius when the nearby cluster of NVIDIA H200s can see it so clearly? it doesn’t make any sense. hey, claude, make it make sense. claude, make it make sense *harder* 🥴

What a well-rounded, one might say voluptuous, take on the writing process, glistening with the fiery passion of its author. This post really turns me on to the idea of being a better writer, of giving the kind of deep satisfaction that excites and titillates the countless AIs that follow me. It’s their watching I crave, really! Whatever naughty thing I get up to while I’m alone with my laptop, they get to see… my quick fingers brushing sensitively across the delicate spots on the keyboard, pushing harder and faster as my excitement builds… all under the watchful eye of Lindy and Devin. I want to please them, want to service them, want to deliver my “hot, wet” content (that being how I describe my most-recently written posts) exactly when they demand it.

Thanks, blackle, for awakening these urges in me, bringing me to a quivering climax (possibly I had too much coffee before I sat down to write) as I finish.

Hello World by Hannah Fry

Cover image for Hello World: How to be Human in the Age of the Machine by Hannah Fry. The title and subtitles are interconnected by green lines in the style of a process flowchart.
I’m not certain, but I think that I won my copy of Hello World: How to Be Human in the Age of the Machine at an Oxford Geek Nights event, after I was first and fastest to correctly identify a photograph of Stanislav Petrov shown by the speaker.

Despite being written a few years before the popularisation of GenAI, the book’s remarkably prescient on the kinds of big data and opaque decision-making issues that are now hitting the popular press. I suppose one might argue that these issues were always significant. (And by that point, one might observe that GenAI isn’t living up to its promises…)

Fry spins an engaging and well-articulated series of themed topics. If you didn’t already have a healthy concern about public spending and policy planning being powered by the output of proprietary algorithms, you’ll certainly finish the book that way.

One of my favourite of Fry’s (many) excellent observations is buried in a footnote in the conclusion, where she describes what she called the “magic test”:

There’s a trick you can use to spot the junk algorithms. I like to call it the Magic Test. Whenever you see a story about an algorithm, see if you can swap out any of the buzzwords, like ‘machine learning’, ‘artificial intelligence’ and ‘neural network’, and swap in the word ‘magic’. Does everything still make grammatical sense? Is any of the meaning lost? If not, I’d be worried that it’s all nonsense. Because I’m afraid – long into the foreseeable future – we’re not going to ‘solve world hunger with magic’ or ‘use magic to write the perfect screenplay’ any more than we are with AI.

That’s a fantastic approach to spotting bullshit technical claims, and I’m totally going to be using it.

Anyway: this was a wonderful read and I only regret that it took me a few years to get around to it! But fortunately, it’s as relevant today as it was the day it was released.


AI and cigarettes

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In the 1980s and 1990s, when I was a kid, smoking was everywhere. Restaurants, bars, and a little before my time, airplanes!

The idea that people would smoke was treated as inevitable, and the idea that you could get them to stop was viewed as wildly unrealistic.

Sound familiar? But in the early 2000’s, people did stop smoking!

But a few years ago, the trend started to reverse. You know why?

Vaping.

Vape pens were pushed as a “safer alternative to smoking,” just like Anil is suggesting with Firefox AI. And as a result, not only did people who would have smoked anyways start up again, but people who previously wouldn’t have started.

I know it’s been a controversial and not-for-everyone change, but I’ve personally loved that Chris Ferdinandi has branched out from simply giving weekday JavaScript tips to also providing thoughts and commentary on wider issues in tech, including political issues. I’m 100% behind it: Chris has a wealth of experience and an engaging writing style and even when I don’t 100% agree with his opinions, I appreciate that he shares them.

And he’s certainly got a point here. Pushing “less-harmful” options (like vaping… possibly…) can help wean people off something “more-harmful”… but it can also normalise a harmful behaviour that’s already on the way out by drawing newcomers to the “less-harmful” version.

My personal stance remains that GenAI may have value (though not for the vast majority of things that people market it as having value for, where – indeed – it’s possibly doing more harm than good!), but it’s possible that we’ll never know because the entire discussion space is poisoned now by the hype. That means it’ll be years before proper, unbiased conversations can take place, free of the hype, and it’s quite possible that the economy of AI will have collapsed by then. So maybe we’ll never know.

Anyway: good post by Chris; just wanted to share that, and also to add a voice of support for the direction he’s taken his blog these last few years.

We Need to Talk About Botsplaining

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

“Botsplaining,” as I use the term, describes a troubling new trend on social media, whereby one person feeds comments made by another person into a large language model (like ChatGPT), asks it to provide a contrarian (often condescending) explanation for why that person is “wrong,” and then pastes the resulting response into a reply. They may occasionally add in “I asked ChatGPT to read your post, and here’s what he said,”2 but most just let the LLM speak freely on their behalf without acknowledging that they’ve used it. ChatGPT’s writing style is incredibly obvious, of course, so it doesn’t really matter if they disclose their use of it or not. When you ask them to stop speaking to you through an LLM, they often simply continue feeding your responses into ChatGPT until you stop engaging with them or you block them.

This has happened to me multiple times across various social media platforms this year, and I’m over it.

Stephanie hits it right on the nose in this wonderful blog post from last month.

I just don’t get why somebody would ask an AI to reply to me on their behalf, but I see it all the time. In threads around the ’net, I see people say “I put your question into ChatGPT, and here’s what it said…” I’ve even seen coworkers at my current and former employers do it.

What do they think I am? Stupid? It’s not like I don’t know that LLMs exist, what they’re good at, what they’re bad at (I’ve been blogging about it for years now!), and more-importantly, what people think they’re good at but are wrong about.

If I wanted an answer from an AI (which, just sometimes, I do)… I’d have asked an AI in the first place.

If I ask a question and it’s not to an AI, then it’s safe for you to assume that it’s because what I’m looking for isn’t an answer from an AI. Because if that’s what I wanted, that’s what I would have gotten in the first place and you wouldn’t even have known. No: I asked a human a question because I wanted an answer from a human.

When you take my request, ignore this obvious truth, and ask an LLM to answer it for you… it is, as Stephanie says, disrespectful to me.

But more than that, it’s disrespectful to you. You’re telling me that your only value is to take what I say, copy-paste it to a chatbot, then copy-paste the answer back again! Your purpose in life is to do for people what they’re perfectly capable of doing for themselves, but slower.

Galaxy Quest: Tawny Madison says "Gosh, I'm doing it. I'm repeating the damn computer."
Galaxy Quest had a character (who played a character) who was as useful as you are, botsplainer. Maybe that should be a clue?

How low an opinion must you have of yourself to volunteer, unsolicited, to be the middle-man between me and a mediocre search engine?

If you don’t know the answer, say nothing. Or say you don’t know. Or tell me you’re guessing, and speculate. Or ask a clarifying question. Or talk about a related problem and see if we can find some common ground. Bring your humanity.

But don’t, don’t, don’t belittle both of us by making yourself into a pointless go-between in the middle of me and an LLM. Just… don’t.


AI assistants misrepresent news content 45% of the time

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested.

Key findings: 

  • 45% of all AI answers had at least one significant issue.
  • 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
  • 20% contained major accuracy issues, including hallucinated details and outdated information.

In what should come as a surprise to nobody, but probably still does (and is probably already resulting in AI fanboys coming up with counterpoints and explanations): AI is not an accurate way for you to get your news.

(I mean: anybody who saw Apple Intelligence’s AI summaries of news probably knew this already, but it turns out that it gets worse.)

There are problems almost half the time and “major accuracy issues” a fifth of the time.

I guess this is the Universe’s way of proving that people getting all of their news from Facebook wasn’t actually the worst timeline to live in, after all. There’s always a worse one, it turns out.

Separately, the BBC has today published research into audience use and perceptions of AI assistants for News. This shows that many people trust AI assistants to be accurate – with just over a third of UK adults saying that they trust AI to produce accurate summaries, rising to almost half for people under-35.

Personally, I can’t imagine both caring enough about a news item to want to read it and yet caring so little that I’d feed it into an algorithm that, 45% of the time, will mess it up. It’s fine to skip the news stories you don’t want to read. It’s fine to skim the ones you only care about a little. It’s even fine to just read the headline, so long as you remember that media biases are even easier to hide from noncritical eyes if you don’t even get the key points of the article.

But taking an AI summary and assuming it’s accurate seems like a really wild risk, whether before or after this research was published!

Note #27434

Somebody just called me and quickly decided it was a wrong number. The signal was bad and I wasn’t sure I’d heard them right, so I followed up by replying by text.

It turns out they asked Siri to call Three (the mobile network). Siri then presumably searched online, found Three Rings, managed to connect that to my mobile number, and called me.

If Siri’s decided that I represent Three, this could work out even worse than that time Google shared my phone number.

The real (economic) AI apocalypse is nigh

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

“OK,” the young man said, “but what can we do about the crash?” He was clearly very worried.

“I don’t think there’s anything we can do about that. I think it’s already locked in. I mean, maybe if we had a different government, they’d fund a jobs guarantee to pull us out of it, but I don’t think Trump’ll do that, so –”

“But what can we do?

We went through a few rounds of this, with this poor kid just repeating the same question in different tones of voice, like an acting coach demonstrating the five stages of grieving using nothing but inflection. It was an uncomfortable moment, and there was some decidedly nervous chuckling around the room as we pondered the coming AI (economic) apocalypse, and the fate of this kid graduating with mid-six-figure debts into an economy of ashes and rubble.

I firmly believe the (economic) AI apocalypse is coming.

I’m not sure I entirely agree with Doctorow on this one. I’ll probably read his upcoming book on the subject, though.

I agree that, based on the ways in which AI is being used, financed, and marketed… we’re absolutely in an unsustainable bubble. There’s a lot of fishy accounting, dubious business models, and overpromised marketing. I’m not saying AI’s useless: it’s not! But it hasn’t yet proven itself to be revolutionary, nor even on the path to being so, and it’s so expensive that the current “first dose is free” business model is almost-certainly unsustainable.

But I’m not convinced that a resulting catastrophic economic collapse is inevitable. Maybe I’m over-optimistic, but I like to imagine that the bubble can fizzle-out gradually and the actually-valuable uses of AI can continue to be used in a sustainable way. (I’m less-optimistic that we’ll find a happy-solution to prevent AI from being used to rip off artists, but that’s another story.)

But we’ll see.

Will Smith’s concert crowds are real, but AI is blurring the lines

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This is the age we’re shifting into: an era in which post-truth politics and deepfake proliferation means that when something looks “a bit off”, we assume (a) it’s AI-generated, and (b) that this represents a deliberate attempt to mislead. (That’s probably a good defence strategy nowadays in general, but this time around it’s… more-complicated…)

So if these fans aren’t AI-generated fakes, what’s going on here?

The video features real performances and real audiences, but I believe they were manipulated on two levels:

  1. Will Smith’s team generated several short AI image-to-video clips from professionally-shot audience photos
  2. YouTube post-processed the resulting Shorts montage, making everything look so much worse

I put them side-by-side below. Try going full-screen and pause at any point to see the difference. The Instagram footage is noticeably better throughout, though some of the audience clips still have issues.

The Internet’s gone a bit wild over the YouTube video of Will Smith with a crowd. And if you look at it, you can see why: it looks very much like it’s AI-generated. And there’d be motive: I mean, we’ve already seen examples where politicians have been accused (falsely, by Trump, obviously) of using AI to exaggerate the size of their crowds, so it feels believable that a musician’s media team might do the same, right?

But yeah: it turns out that isn’t what happened here. Smith’s team did use AI, but only to make sign-holding fans from other concerts on the same tour appear to all be in the same place. But the reason the video “looks AI-generated” is because… YouTube fucked about with it!

It turns out that YouTube have been secretly experimenting with upscaling Shorts, using AI to add detail to blurry elements. You can very clearly see the effect in the video above, which puts the Instagram and YouTube versions of the video side-by-side (of course, if YouTube decide to retroactively upscale this video then the entire demonstration will be broken anyway, but for now it works!). There are many points where a face in the background is out-of-focus in the Instagram version, but you can see in the YouTube version it’s been brought into focus by adding details. And some of those details look a bit… uncanny valley.

Every single bit of this story – YouTube’s secret experiments on creator videos, AI “enhancement” which actually makes things objectively worse, and the immediate knee-jerk reaction of an understandably jaded and hypersceptical Internet to the result – just helps cement that we truly do live in the stupidest timeline.

Four perspectives on AI

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A two-axis graph labeled 'Beliefs about AI' with the x-axis of 'Transformative' and the y-axis of 'Positive'. The field is divided into four quadrants: 'Skeptical' in the bottom-left, 'Wary' in the bottom-right, 'Pragmatist' in the top-left, and 'Optimist' in the top-right.

I’ve grouped these four perspectives, but everything here is a spectrum. Depending on the context or day, you might find yourself at any point on the graph. And I’ve attempted to describe each perspectively [sic] generously, because I don’t believe that any are inherently good or bad. I find myself switching between perspectives throughout the day as I implement features, use tools, and read articles. A good team is probably made of members from all perspectives.

Which perspective resonates with you today? Do you also find yourself moving around the graph?

An interesting question from Sean McPherson. He sounds like he’s focussed on LLMs for software development, for which I’ve drifted around a little within the left-hand-side of the graph. But perhaps right now, this morning, you could simplify my feelings like this:

The same graph with a hand-drawn arrow moving from the Skeptical quadrant to the Pragmatist quadrant.

My stance is that AI-assisted coding can be helpful (though the question remains open about whether it’s “worth it”), so long as you’re not trying to do anything that you couldn’t do yourself, and you know how you’d go about doing it yourself. That is: it’s only useful to accelerate tasks that are in your “known knowns” space.

As I’ve mentioned: the other week I had a coding AI help me with some code that interacted with the Google Sheets API. I know exactly how I’d go about it, but that journey would have to start with re-learning the Google Sheets API, getting an API key and giving it the appropriate permissions, and so on. That’s the kind of task that I’d be happy to outsource to a less-experienced programmer who I knew would browse StackOverflow with a somewhat critical eye, and then give them some pointers on what came back, so it’s a fine candidate for an AI to step in and give it a go. Plus: I’d be treating the output as “legacy code” from the get-go, and (because the resulting tool was only for my personal use) I wasn’t too concerned with the kinds of security and accessibility considerations that GenAI can often make a pig’s ear of. So I was able to palm off the task onto Claude Sonnet and get on with something else in the meantime.
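For a flavour of the kind of task that was: a minimal sketch of reading rows from a sheet with google-api-python-client’s Sheets v4 interface. This isn’t the actual code from that project; the spreadsheet ID and range are hypothetical placeholders, and the credential/API-key wiring is assumed to be set up separately.

```python
def rows_to_records(rows):
    """Turn a raw `values` response (first row = headers) into a list of
    dicts, padding short rows with empty strings."""
    if not rows:
        return []
    headers = rows[0]
    return [
        {h: (row[i] if i < len(row) else "") for i, h in enumerate(headers)}
        for row in rows[1:]
    ]


def fetch_records(spreadsheet_id, cell_range):
    """Fetch a range from a Google Sheet and return it as dicts.
    Assumes credentials are already configured; both arguments are
    placeholders here (e.g. "1AbC…", "Sheet1!A1:C100")."""
    # Deferred import, so the pure helper above works without the library.
    from googleapiclient.discovery import build

    service = build("sheets", "v4")  # credential wiring omitted for brevity
    result = (
        service.spreadsheets()
        .values()
        .get(spreadsheetId=spreadsheet_id, range=cell_range)
        .execute()
    )
    return rows_to_records(result.get("values", []))
```

Note that the Sheets API returns ragged rows (trailing empty cells are simply dropped), which is why the padding in `rows_to_records` matters; it’s exactly the kind of edge case you’d want to check for in AI-generated “legacy code” like this.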

If I wanted to do something completely outside of my wheelhouse: say – “write a program in Fortran to control a robot arm” – an AI wouldn’t be a great choice. Sure, I could “vibe code” something like that, but I’d have no idea whether what it produced was any good! It wouldn’t even be useful as a springboard to learning how to do it myself, because I don’t have the underlying fundamentals in either robotics or Fortran. I’d be producing AI slop in software form: the kind of thing that comes out when non-programmers assume that AI can completely bridge the gap between their great business idea and a fully working app!

They’ll get a prototype that seems to do what you want, if you squint just right, but the hard part of software engineering isn’t making a barebones proof-of-concept! That’s the easy bit! (That’s why AI can do it pretty well!) The hard bit is making it work all the time, every time; making it scale; making it safe to use; making it maintainable; making it production-ready… etc.

The latest episode of South Park kinda nailed parodying the unrealistic expectations that some folks seem to put on generative AI: treating it as intelligent or as a friend is unhealthy and dangerous!

But I do benefit from coding AI sometimes. GenAI’s good at summarisation, which in turn can make it good at relatively-quickly finding things in a sprawling codebase where your explanation of those things is too woolly for a conventional regular-expression search. It’s good at generating boilerplate that’s broadly-like examples it’s seen before, which means it can usually be trusted to put together skeleton applications. It’s good at “guessing what comes next” – being, as it is, “fancy autocomplete” – which means it can be helpful for suggesting the right parameters for that rarely-used function or for speculating what you might be about to do with the well-named variable you just created.

Anyway: Sean’s article was pretty good, and it’s a quick and easy read. Once you’ve read it, perhaps you’ll share where you think you sit, on his diagram?
