Musical Transients

Musical Transients from Psynwav¹ is without a doubt the best mashup/mixtape-album I’ve heard since Neil Cicierega’s Mouth Moods (which I’ve listened to literally hundreds of times since its release in 2017). Well done, Psynwav.

It’s possible, of course… that my taste in music is not the same as your taste in music, and that’s fine.

Footnotes

¹ If you’ve heard of Psynwav already it’s probably thanks to 2021’s Slamilton, which is probably the best Space Jam/Hamilton crossover soundtrack ever made.

Four perspectives on AI

A two-axis graph labeled 'Beliefs about AI' with the x-axis of 'Transformative' and the y-axis of 'Positive'. The field is divided into four quadrants: 'Skeptical' in the bottom-left, 'Wary' in the bottom-right, 'Pragmatist' in the top-left, and 'Optimist' in the top-right.

I’ve grouped these four perspectives, but everything here is a spectrum. Depending on the context or day, you might find yourself at any point on the graph. And I’ve attempted to describe each perspectively [sic] generously, because I don’t believe that any are inherently good or bad. I find myself switching between perspectives throughout the day as I implement features, use tools, and read articles. A good team is probably made of members from all perspectives.

Which perspective resonates with you today? Do you also find yourself moving around the graph?

An interesting question from Sean McPherson. He sounds like he’s focussed on LLMs for software development, for which I’ve drifted around a little within the left-hand side of the graph. But perhaps right now, this morning, you could simplify my feelings like this:

The same graph with a hand-drawn arrow moving from the Skeptical quadrant to the Pragmatist quadrant.

My stance is that AI-assisted coding can be helpful (though the question remains open about whether it’s “worth it”), so long as you’re not trying to do anything that you couldn’t do yourself, and you know how you’d go about doing it yourself. That is: it’s only useful to accelerate tasks that are in your “known knowns” space.

As I’ve mentioned: the other week I had a coding AI help me with some code that interacted with the Google Sheets API. I know exactly how I’d go about it, but that journey would have to start with re-learning the Google Sheets API, getting an API key and giving it the appropriate permissions, and so on. That’s the kind of task that I’d be happy to outsource to a less-experienced programmer who I knew would bring a somewhat critical eye for browsing StackOverflow, and then give them some pointers on what came back, so it’s a fine candidate for an AI to step in and give it a go. Plus: I’d be treating the output as “legacy code” from the get-go, and (because the resulting tool was only for my personal use) I wasn’t too concerned with the kinds of security and accessibility considerations that GenAI can often make a pig’s ear of. So I was able to palm off the task onto Claude Sonnet and get on with something else in the meantime.

If I wanted to do something completely outside of my wheelhouse: say – “write a program in Fortran to control a robot arm” – an AI wouldn’t be a great choice. Sure, I could “vibe code” something like that, but I’d have no idea whether what it produced was any good! It wouldn’t even be useful as a springboard to learning how to do that, because I don’t have the underlying fundamentals in robotics nor Fortran. I’d be producing AI slop in software form: the kind of thing that comes out when non-programmers assume that AI can completely bridge the gap between their great business idea and a fully working app!

The latest episode of South Park kinda nailed parodying the unrealistic expectations that some folks seem to put on generative AI: treating it as intelligent or as a friend is unhealthy and dangerous!

Such folks will get a prototype that seems to do what they want, if they squint just right, but the hard part of software engineering isn’t making a barebones proof-of-concept! That’s the easy bit! (That’s why AI can do it pretty well!) The hard bit is making it work all the time, every time; making it scale; making it safe to use; making it maintainable; making it production-ready… etc.

But I do benefit from coding AI sometimes. GenAI’s good at summarisation, which in turn can make it good at relatively-quickly finding things in a sprawling codebase where your explanation of those things is too-woolly to use a conventional regular expression search. It’s good at generating boilerplate that’s broadly-like examples it’s seen before, which means it can usually be trusted to put together skeleton applications. It’s good at “guessing what comes next” – being, as it is, “fancy autocomplete” – which means it can be helpful for prompting you for the right parameters for that rarely-used function or for speculating what you might be about to do with the well-named variable you just created.

Anyway: Sean’s article was pretty good, and it’s a quick and easy read. Once you’ve read it, perhaps you’ll share where you think you sit, on his diagram?

Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing more and more things — adding more and more context, i.e. more and more packages.

  • LLM: Problem? Add more context.
  • NPM: Problem? There’s a package for that.

As I’m typing this, I’m thinking of that image of the evolution of the Raptor engine, where it evolved in simplicity:

Photograph of three versions of the raptor engine, each one getting progressively simplified in mechanical parts.

This stands in contrast to my working with LLMs, which often wants more and more context from me to get to a generative solution:

Photograph of three versions of the raptor engine, but the image is reversed showing the engine get progressively complicated in mechanical parts over time. Each engine represents an LLM prompt.

Jim Nielsen speaks to my experience, here. Because a programming LLM is simply taking inputs (all of your code, plus your prompt), transforming them through statistical analysis, and then producing an output (replacement code), it struggles with refactoring for simplicity unless very-carefully controlled. “Vibe coding” is very much an exercise in adding hacks upon hacks… like the increasingly-ludicrous epicycles introduced by proponents of geocentrism in its final centuries before the heliocentric model became fully accepted.

Geocentric representation of the apparent motion of the Sun, Mercury, and Venus from the Earth, based on 15th century diagrams. It consists of many looping spirals approaching and then withdrawing from the Earth as they orbit around it.
This mess used to be how many perfectly smart people imagined the movements of the planets. When observations proved it couldn’t be right, they’d just add more complexity to catch the edge cases.

I don’t think that AIs are useless as a coding tool, and I’ve successfully used them to good effect on several occasions. I’ve even tried “vibe coding”, about which I fully agree with Steve Krouse’s observation that “vibe code is legacy code”. Being able to knock out something temporary, throwaway, experimental, or for personal use only… while I work on something else… is pretty liberating.

For example: I couldn’t remember the Google Sheets API and didn’t want to re-learn it from the sprawling documentation site, but wanted a quick personal tool to manipulate such a sheet from a remote system. I was able to have an AI knock up what I needed while I cooked dinner for the kids, paying only enough attention to check in on its work. Is it accessible? Is it secure? Is it performant? Is it maintainable? I can’t answer any of those questions, and so as a professional software engineer I have to reasonably assume the answer to all of them is “no”. But its only user is me, it does what I needed it to do, and I didn’t have to shift my focus from supervising children and a pan in order to throw it together!

Anyway: Jim hits the nail on the head here, as he so often does.

Riding With Strangers: California Hitchhikers in the 1970s

“For years, starting in the late ‘70s, I was taking pictures of hitchhikers. A hitchhiker is someone you may know for an hour, or a day, or, every so often, a little longer, yet, when you leave them, they’re gone. If I took a picture, I reasoned, I’d have a memory. I kept a small portfolio of photos in the car to help explain why I wanted to take their picture. This helped a lot. It also led me to look for hitchhikers, so that I could get more pictures.

“I almost always had a camera… I finally settled on the Olympus XA – a wonderful little pocket camera. (I’ve taken a picture of the moon rising with this camera.) One time I asked a chap if I could take a photo, and he said, “You took my picture a few years ago.” I showed him the album and he picked himself out. “That’s me,” he said, pointing…”

Hitch-hiking isn’t remotely as much a thing today as it was 50 years ago, but even if it were, it wouldn’t be so revolutionary to, say, take a photo of everybody you give a ride to. We’re all carrying cameras all the time, and the price of taking a snap is basically nothing.

But Doug Biggert – who died in 2023 – doing this with an analogue camera as he drove around California from 1973 onwards? That’s quite something. Little wonder he had to explain his project to his passengers (helped, later on, by carrying a copy of the photo album he’d collected so far that he could show them).

A really interesting gallery with a similarly-compelling story. Also: man – look at the wear-and-tear on his VW Bug!

Hetero Awesome… Hijacked

I somehow missed this “most punk rock thing ever” moment the other month. If you did too, let me catch you up:

For a weekend in June, in what was clearly designed to act as a protest to Pride month events, human turd/bar owner Mark Fitzpatrick decided to put on a “straight pride” festival in Boise, Idaho. Called “Hetero Awesome Fest”, it was as under-subscribed as perhaps it ought to be (having a similar turnout to the “world’s smallest pride parade”). And that would have made it a non-story, except for the moment when local singer-songwriter Daniel Hamrick got up to perform his set:

I can’t begin to fathom the courage it takes to get on-stage in front of an ultra-conservative crowd (well, barely a crowd…) in a right-leaning US state to protest their event by singing a song about a trans boy. But that’s exactly what Hamrick did. After perhaps catching spectators off-guard by taking the “masculine-telegraphing” step of drawing attention to part of his army uniform, the singer swiftly switched outfits to show off a “Keep Canyon County Queer” t-shirt, slipped on a jacket with various Pride-related patches, and then immediately launched into Boy, a song lamenting the persecution of a trans child by their family and community.

Needless to say, this was the first, last, and only song Daniel Hamrick got to play at Hetero Awesome Fest. But man, what a beautiful protest!

(There are other videos online that aren’t nabbed from the official event feed and so don’t cut-out abruptly.)

Repost #27029

I still get that powerful feeling that anything is possible when I open a web browser — it’s not as strong as it was 20 years ago, but it’s still there.

This. This is the optimistic side of the coin represented by the things I’ve been expressing in notes like this, articles like that, and reposts like the other.

As cynical as you can get at the state of the Web right now… as much as it doesn’t command the level of inspirational raw potential of “anything is possible” that it might have once… it’s still pretty damn magical, and we should lean into that.

The rise of Whatever

A freaking excellent longread by Eevee (Evelyn Woods), lamenting the direction of popular technological progress and general enshittification of creator culture. It’s ultimately uplifting, I feel, but it’s full of bitterness until it gets there. I’ve pulled out a couple of highlights to try to get you interested, but you should just go and read the entire thing:

And so the entire Web sort of congealed around a tiny handful of gigantic platforms that everyone on the fucking planet is on at once. Sometimes there is some sort of partitioning, like Reddit. Sometimes there is not, like Twitter.

That’s… fine, I guess. Things centralize. It happens. You don’t get tubgirl spam raids so much any more, at least.

But the centralization poses a problem. See, the Web is free to look at (by default), but costs money to host. There are free hosts, yes, but those are for static things getting like a thousand visitors a day, not interactive platforms serving a hundred million. That starts to cost a bit. Picture logs being shoveled into a steam engine’s firebox, except it’s bundles of cash being shoveled into… the… uh… website hole.

I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”. I don’t want to know how much code wasn’t actually written by anyone. I don’t want to hear how many of my colleagues think Whatever is equivalent to their own output.

I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”

This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?

Nobody seems to appreciate that if you can make a computer do something entirely on its own, then that becomes the baseline.

Do things. Make things. And then put them on your website so I can see them.

Clearly this all ties in to stuff that I’ve been thinking, lately. Expect more posts and reposts in this vein, I guess?

You MUST listen to RFC 2119

With thanks to Ruth for sharing this with me:

RFC 2119 establishes language around requirement levels. Terms like “MUST”, “MUST NOT”, “SHOULD”, and “SHOULD NOT” are helpful when coordinating with engineers. I reference it a lot for work, as I create a lot of accessible component specifications.

Because of this familiarity—and because I’m an ass—I fired back in Discord:

I want to hire a voice actor to read 2119 in the most over the top, passive-aggressive way possible
wait, this is an achievable goal oh no

It turns out you can just pay people to do things.

I found a voice actor and hired them with the task of “Reading this very dry technical document in the most over-the-top sarcastic, passive-aggressive, condescending way possible. Like, if you think it’s too much, take that feeling, ignore it, and crank things up one more notch.”

RFC 2119 is one of the few RFCs I can identify by number alone, too. That and RFCs 1945 and 1866, for some reason, and RFC 2822 (and I guess, by proxy, 822) because I’ve had to implement its shitty date format more times than I’d like to count.

But anyway: if you’ve ever wanted to hear a (sarcastic, passive aggressive) dramatic reading of RFC 2119, Eric – and the actor he found – have got you covered!

A Castle Built From Random Rooms

A Castle Built From Random Rooms is a work in progress/early access/demo version of a full game that’ll probably never exist. But if it does exist, it will be basically the same as this, but on a grander scale, and include the following features:

   – over a hundred random rooms instead of about ten
   – character jobs and descriptions that actually add individualised effects/skills/starting equipment and so on
   – special pre-chosen characters with particularly challenging stats levels for extra difficult challenges
   – more stats! more items! more use of the stats and items within different rooms to create different outcomes!
   – high scores and loot rankings and possibly even achievements of some kind
   – less bugs (aspiration)
   – decent endings (stretch goal)

What the game almost certainly won’t ever have:

   – any semblance of quality or coherence
   – sound and/or music
   – monetary success

I saw and first played this ages ago, after its initial release was mentioned on Metafilter Projects last year. In case you missed it first time around, you can give it a go now!

It’s a Twine-like choose-your-own-adventure, but with the rooms randomly shuffled each time, in sort-of a semi-roguelite way. There’s some imaginative work in this. And the art style is wonderful!

Citing Lobachevsky

The recent death of Tom Lehrer has resulted in all manner of interesting facts and anecdotes about the man being published around the Internet, but perhaps my favourite was the tale about how, while working for the NSA in 1957, he snuck an Easter Egg into a research paper… that went undetected for nearly 60 years:

I worked as a mathematician at the NSA during the second Obama administration and the first half of the first Trump administration. I had long enjoyed Tom Lehrer’s music, and I knew he had worked for the NSA during the Korean War era.

The NSA’s research directorate has an electronic library, so I eventually figured, what the heck, let’s see if we can find anything he published internally! And I found a few articles I can’t comment on. But there was one unclassified article – “Gambler’s Ruin With Soft-Hearted Adversary”.

The paper was co-written by Lehrer and R. E. Fagen, published in January 1957. The mathematical content is pretty interesting, but that’s not what stuck out to me when I read it. See, the paper cites FIVE sources throughout its body. But the bibliography lists SIX sources. What’s the leftover?

Academic bibliography, citing six papers related principally to probability. The third paper, highlighted for the picture, is: Lobachevsky, "Analytic and Algebraic Topology of Locally Euclidean Metrizations of Infinitely Differentiable Riemannian Manifolds" (Unpublished)

So I sent an email to the NSA historians. And I asked them: hey, when was this first noticed, and how much of a gas did people think it was? Did he get in trouble for it? That sort of stuff.

The answer came back: “We’ve never heard of this before. It’s news to us.”

In November of 2016, nearly 60 years after the paper was published internally, I had discovered the joke.

Bozhe moi!

Very Tom Lehrer to hide a joke so well that nobody would even notice it for most of six decades, while undermining and subverting bureaucratic government processes.

And if you somehow don’t “get” the joke, Wikipedia can both explain and let you listen to the relevant song.

ArtificialCast

Type-safe transformation powered by inference.

ArtificialCast is a lightweight, type-safe casting and transformation utility powered by large language models. It allows seamless conversion between strongly typed objects using only type metadata, JSON schema inference, and prompt-driven reasoning.

Imagine a world where Convert.ChangeType() could transform entire object graphs, infer missing values, and adapt between unrelated types – without manual mapping or boilerplate.

ArtificialCast makes that possible.

Features

  • Zero config – Just define your types.
  • Bidirectional casting – Cast any type to any other.
  • Schema-aware inference – Auto-generates JSON Schema for the target type.
  • LLM-powered transformation – Uses AI to “fill in the blanks” between input and output.
  • Testable & deterministic-ish – Works beautifully until it doesn’t.

As beautiful as it is disgusting, this C# is fully-functional and works exactly as described… and yet you really, really should never use it (which its author will tell you, too).

Casting is the process of transforming a variable of one type into one of another. So for example you might cast the number 3 into a string and get "3" (though of course this isn’t the only possible result: "00000011" might also be a valid representation, depending on the circumstances¹).
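
By way of illustration, here’s that example in C# (the language ArtificialCast is written in); the binary string is the kind of “depends on the circumstances” result I mean:

    int number = 3;
    string simple = number.ToString();                            // "3"
    string binary = Convert.ToString(number, 2).PadLeft(8, '0');  // "00000011"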

Casting between complex types defined by developers is harder and requires some work. Suppose you have a User model with attributes like “username”, “full name”, “hashed password”, “email address” etc., and you want to convert your users into instances of a new model called Customer. Some of the attributes will be the same, some will be absent, and some will be… different (e.g. perhaps a Customer has a “first name” and “last name” instead of a “full name”, and it’s probably implemented wrong to boot).

The correct approach is to implement a way to cast one as the other.
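
A minimal sketch of what that might look like in C#, using the field names from the example above (the naive name-splitting is deliberate: it’s exactly the kind of judgement call that ought to be a reviewed, tested human decision, not a statistical guess):

    public record User(string Username, string FullName, string HashedPassword, string Email);
    public record Customer(string FirstName, string LastName, string Email);

    public static class UserConversions
    {
        // An explicit, hand-written mapping: every field's fate is a
        // deliberate decision that can be code-reviewed and unit-tested.
        public static Customer ToCustomer(this User user)
        {
            var parts = user.FullName.Split(' ', 2);  // naive: real names are messier than this
            return new Customer(
                FirstName: parts[0],
                LastName: parts.Length > 1 ? parts[1] : "",
                Email: user.Email
            );
        }
    }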

The very-definitely incorrect approach is to have an LLM convert the data for you. And that’s what this library provides.

ArtificialCast is a demonstration of what happens when overhyped AI ideas are implemented exactly as proposed – with no shortcuts, no mocking, and no jokes.

It is fully functional. It passes tests. It integrates into modern .NET workflows. And it is fundamentally unsafe.

This project exists because:

  • AI-generated “logic” is rapidly being treated as production-ready.
  • Investors are funding AI frameworks that operate entirely on structure and prompts.
  • Developers deserve to see what happens when you follow that philosophy to its logical conclusion.

ArtificialCast is the result.

It works. Until it doesn’t. And when it doesn’t, it fails in ways that look like success. That’s the danger.

I’ve played with AI in code a few times. There are some tasks it’s very good at, like summarising and explaining (when the developer before you didn’t leave a sufficiency of quality comments). There are some tasks it can be okay at, with appropriate framing and support: like knowing its way around unfamiliar-to-you but well-documented APIs².

But if you ask an AI to implement an entire product or even just a significant feature from scratch, unsupervised, you’re at risk of rapidly hitting the realm of Heisenbugs, security vulnerabilities, and enormous redundancies.

This facetious example – of using AI as a universal typecasting engine – helps hammer that point home, and I love it.

Footnotes

¹ How to cast basic types isn’t entirely standardised: PHP infamously casts the string "0" as false when it’s coerced into a boolean, which virtually no other programming language does, for example.

² The other week, I had a GenAI help me write some code that writes to a Google Sheets document, because I was fuzzy on the API and knew the AI would pick it up faster than me while I wrote the code “around” it.

DOGWALK

Well this was adorable.

Blender Studio’s official game project is a short casual interactive story. Play a big, adorable dog traversing through winter woods and help out a little kid decorate a snowman with colorful items hidden in the environment.

Animation from DOGWALK featuring a papercraft scene of a large hairy dog pulling a young child through a wintery forest.

DOGWALK is a free, open-source Godot game for Windows, macOS and Linux, produced and given to the world by Blender Studio as a way of showcasing some of their video lessons. The beautiful, playful “papercraft” models were made by making actual hand-painted paper models of the assets, unfolding them, scanning them, and then re-folding the maps back into in-game assets, which is an amazing and imaginative approach.

It was released a little over a week ago, and it’s a short but adorable little game.

It’s also free on Steam, if that’s your preference.

Adding a feature because ChatGPT incorrectly thinks it exists

Our scanning system wasn’t intended to support this style of notation. Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks — until I messed around with ChatGPT myself and got this:

Screenshot of ChatGPT telling users to enter this ASCII tab into soundslice.com

Turns out ChatGPT is telling people to go to Soundslice, create an account and import ASCII tab in order to hear the audio playback. So that explains it!

With ChatGPT’s inclination to lie about the features of a piece of technology, it was only a matter of time before a frustrated developer actually added a feature that ChatGPT had imagined, just to stop users from becoming dissatisfied when they tried to use nonexistent tools that ChatGPT told them existed.

And this might be it! This could be the very first time that somebody’s added functionality based on an LLM telling people the feature existed already.

Adrian Holovaty runs a tool that can “read” scanned sheet music and provide a digital representation to help you learn how to play it. But after ChatGPT started telling people that his tool could also read ASCII-formatted guitar tablature, he went and implemented it!
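
If you’ve not come across it, ASCII tab is a plain-text guitar notation: one line per string, with numbers marking which fret to hold down. A generic, made-up fragment looks something like this:

    e|-------0---0-----|
    B|-----1---1-------|
    G|---2-------2-----|
    D|-0---------------|
    A|-----------------|
    E|-----------------|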

His blog post’s got more details, and it’s worth a read. This could be a historic moment that we’ll look back on!

Zip It – Finding File Similarity Using Compression Utilities

This was an enjoyable video. Nothing cutting-edge, but a description of an imaginative use of an everyday algorithm – DEFLATE, which is what powers most of the things you consider “ZIP files” – to do pattern-matching and comparison between two files. The tl;dr is pretty simple:

  • Lossless compression works by spotting repetition, and replacing repeated content with short back-references to where it appeared before.
  • Therefore, the reduction-in-size from compressing a file is an indicator of the amount of repetition within it.
  • Therefore, comparing the compressed size of a pair of files (concatenated) against the compressed sizes of each file alone is indicative of their similarity, because the greatest compression gains come from repetition of data that is shared across both files (see the sketch after this list).
  • This can be used, for example, to compare the same document written in two languages as an indication of the similarity of the languages to one another, or to compare the genomes of two organisms as an indication of their genetic similarity (and therefore how closely-related they are).
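
This measure is essentially what the literature calls “normalised compression distance”. Here’s a minimal sketch in C# of how you might compute it (the class and sample strings are my own invention; real comparisons want inputs long enough for DEFLATE’s repetition-spotting to kick in):

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    class ZipSimilarity
    {
        // Compress some bytes with DEFLATE and return the compressed length.
        static long CompressedSize(byte[] data)
        {
            using var buffer = new MemoryStream();
            using (var deflate = new DeflateStream(buffer, CompressionLevel.Optimal, leaveOpen: true))
                deflate.Write(data, 0, data.Length);
            return buffer.Length;
        }

        // Normalised compression distance: ~0 = very similar, ~1 = unrelated.
        static double Distance(string a, string b)
        {
            long ca  = CompressedSize(Encoding.UTF8.GetBytes(a));
            long cb  = CompressedSize(Encoding.UTF8.GetBytes(b));
            long cab = CompressedSize(Encoding.UTF8.GetBytes(a + b));
            return (double)(cab - Math.Min(ca, cb)) / Math.Max(ca, cb);
        }

        static void Main()
        {
            string original  = "the cat sat on the mat and the dog sat on the log";
            string similar   = "the cat sat on the mat and the dog lay on the rug";
            string unrelated = "colourless green ideas sleep furiously at midnight";

            Console.WriteLine(Distance(original, similar));    // noticeably smaller...
            Console.WriteLine(Distance(original, unrelated));  // ...than this
        }
    }

With strings this short the numbers are noisy, but the ordering should hold; with whole documents or genomes, the signal gets much stronger.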

I love it when somebody finds a clever and novel use for an everyday tool.

Historical Tech Tree

Screenshot from HistoricalTechTree.com, highlighting how the invention of electromagnetic induction in 1831 led to the development of the dynamo and alternating current, amidst a collection of other 19th-century technological developments and a mess of connecting lines.

This wonderful project, released six weeks ago, attempts the impossible challenge of building a Civilization-style tech tree: one chronicling the development and interplay of all of the actual technological innovations humanity has ever made. Even in its inevitably-incomplete state, it’s inspiring and informative. Or, as Open Culture put it:

Our civilization has made its way from stone tools to robotaxis, mRNA vaccines, and LLM chatbots; we’d all be better able to inhabit it with even a slightly clearer idea of how it did so.