QEF for the geohound and me. My GPSr can’t connect to the Internet and I’m not retyping a URL into my phone, so I can’t comment on the hint, but the cache itself is fine. SL, TFTC.
Coming from Leigh Loop 8 the geohound and I figured we had an easy and direct run here, but intolerable mud at the East end of the field path drove us out onto the road and back before
we could get to this cache. Even here, conditions were boggy, and crossing and re-crossing the path to pin down our GPS coordinates was a drag! But we found it in the end pretty easily,
although its logbook is so wet as to almost be unsignable! TFTC.
Since our house flooded a few weeks ago we’ve been living out of a series of short holiday lets, waiting for the floors to dry out and the insurance company to assess the damage before,
hopefully, we can secure a slightly longer tenancy somewhere while repairs are done.
Right now we’re staying in New Yatt. I’ve cached a little around here before, but never this loop, so this morning the geopup and I came out for a walk and to begin this loop… at its
end!
After some initial difficulty getting a GPS fix we eventually found a good-looking hiding place. Despite this being the first find of this cache and the terrible recent weather, the
cache and its log are in fine condition. SL, TFTC!
Spring’s coming, and with it a stack of cachers either new or else returning from hibernation. Therefore, I’m briefly checking in on my geocaches to ensure they’re in good condition.
Today I peeped at this one, and found it happy and well and ready to be hunted. Go find it!
Nineteen days after my house flooded, causing extensive damage on the ground floor, the insurance
company has finally accepted the claim and is willing to pay for our temporary accommodation in the meantime (a few days in a hotel; a few days with friends, though that part’s not
paid-for; four weeks across two different holiday lets), although we’re still waiting for their thumbs-up on a proposal for a ~6-month let of a house to live in while our floors are replaced and our kitchen rebuilt and whatnot.
Meanwhile, yesterday a surveyor came around and looked at all of our walls. Everything still feels like it’s taking a very long time. I appreciate that insurance companies are a maze of
bureaucracy and procedure, but from “this side” of the table – living and working out of strange places, never really feeling “unpacked” but without it being a holiday – it’s all a bit
of a drag!
Last night I was chatting to my friend (and fellow Three Rings volunteer) Ollie about our respective
workplaces and their approach to AI-supported software engineering, and it echoed conversations I’ve had with other friends. Some workplaces, it seems, are leaning so-hard into
AI-supported software development that they’re berating developers who seem to be using the tools less than their colleagues!
That’s a problem for a few reasons, principal among them that AI does not
make you significantly faster but does make you learn less.1 I stand by the statement that AI isn’t useless, and I’ve experimented with it for years. But I certainly wouldn’t feel very comfortable
working somewhere that told me I was underperforming if, say, my code contributions were less-likely than the average to be identifiably “written by an AI”.
Even if you’re one of those folks who swears by your AI assistant, you’ve got to admit that they’re not always the best choice.
I ran into something a little like what Ollie described when an AI code reviewer told me off for not describing how my AI agent assisted me with the code change… when no AI had been
involved: I’d written the code myself.2
I spoke to another friend, E, whose employers are going in a similar direction. E joked that at current rates they’d have to start tagging their (human-made!) commits with fake
AI agent logs in order to persuade management that their level of engagement with AI was correct and appropriate.3
Supposing somebody like Ollie or E or anybody else I spoke to did feel the need to “fake” AI agent logs in order to prove that they were using AI “the right way”… that sounds
like an excuse for some automation!
I got to thinking: how hard could it be to add a git hook that added an AI agent’s “logging” to each commit, as if the work had been done by a
robot?4
Turns out: pretty easy…
To try out my idea, I made two changes to a branch. When I committed, imaginary AI agent ‘frantic’ took credit, writing its own change log. Also: asciinema + svg-term remains awesome.
Here’s how it works (with source code!). After you make a commit, the post-commit hook creates a file in
.agent-logs/, named for your current branch. Each commit results in a line being appended to that file to say something like [agent] first line of your commit
message, where agent is the name of the AI agent you’re pretending that you used (you can even configure it with an array of agent names and it’ll pick one at
random each time: my sample code uses the names agent, stardust, and frantic).
There’s one quirk in my code. Git hooks only get the commit message (the first line of which I use as the imaginary agent’s description of what it did) after the commit has
taken place. Were a robot really used to write the code, it’d have updated the file already by this point. So my hook has to do an --amend commit, to
retroactively fix what was already committed. And to do that without triggering itself and getting into an infinite loop, it needs to use a temporary environment variable.
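For a flavour of how that fits together, here’s a minimal, self-contained sketch of the idea – not my actual source code, which is linked above; the environment variable name `FAKE_AGENT_LOG` and the demo scaffolding (a throwaway repo) are invented for this illustration:

```shell
#!/bin/sh
# Sketch: a post-commit hook that appends a fake "[agent] commit summary"
# line to .agent-logs/<branch>, then amends that file into the commit.
# Demo scaffolding: do it all in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git config user.email demo@example.com
git config user.name "Demo User"

cat > .git/hooks/post-commit <<'HOOK'
#!/bin/sh
# Recursion guard: the --amend below re-triggers this very hook.
[ -n "$FAKE_AGENT_LOG" ] && exit 0

set -- agent stardust frantic        # pool of imaginary agent names
shift $(( $(date +%s) % 3 ))         # crude pseudo-random pick
agent=$1

branch=$(git rev-parse --abbrev-ref HEAD)
summary=$(git log -1 --pretty=%s)    # first line of the commit message

mkdir -p .agent-logs
printf '[%s] %s\n' "$agent" "$summary" >> ".agent-logs/$branch"

# Fold the log file into the commit that just happened, with the
# guard variable set so we don't loop forever.
git add ".agent-logs/$branch"
FAKE_AGENT_LOG=1 git commit --amend --no-edit -q
HOOK
chmod +x .git/hooks/post-commit

echo 'hello' > file.txt
git add file.txt
git commit -q -m "Add a file"
cat ".agent-logs/main"               # e.g. "[frantic] Add a file"
```

Because `post-commit` hooks can’t be bypassed with `--no-verify`, the environment variable really is the simplest way to stop the amend from spiralling into infinite recursion.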
Ignoring that, though, there’s nothing particularly special about this code. It’s certainly more-lightweight, faster-running, and more-accurate than a typical coding LLM.
Sure, my hook doesn’t attempt to write any of the code for you; it just makes it look like an AI did. But in this instance: that’s a feature, not a
bug!
Footnotes
1 That research comes from Anthropic. Y’know, the company who makes Claude, one of the
most-popular AIs used by programmers.
3 Using “proportion of PRs that used AI” as a metric for success seems to me to be just
slightly worse than using “number of lines of code produced”. And, as this blog post demonstrates, the
former can be “gamed” just as effectively as the latter (infamously) could.
4 Obviously – and I can’t believe I have to say this – lying to your employer isn’t a
sensible long-term strategy, and instead educating them on what AI is (if anything) and isn’t good for in your workflow is a better solution in the end. If you read this blog post and
actually think for a moment hey, I should use this technique, then perhaps there’s a bigger problem you ought to be addressing!
Hurrah! I just made my first successful submission to Curious Cones, a weblog collecting photographs of traffic cones spotted in unusual places.
I spotted this cone while the younger child and I took a walk to the next-nearest village to our temporary accommodation, in order to find a geocache, tag some benches for OpenBenches, and have a cafe brunch.
Anyway: if you’re not following Curious Cones, it’s exactly as delightful as you might expect.
Despite being relatively ‘local’ – only half a mile away, and fans of the show – it took until this morning before the family and I actually came up to visit Clarkson’s Farm. We’re
even-more-local now, after a flood made our house uninhabitable and we moved, temporarily, to a holiday let just up the road.
This morning we visited Diddly Squat for a round of sausage sandwiches, after which the youngest child and I decided to make a run for this nearby geocache. The kid put his finger right
on the cache before I’d even had a chance to take a look for it!
Log signed, and a car full of Hawkstone Lager acquired, we were all done. TFTC!
So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made
when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses. For example, back then, by default an entire podcast episode would be downloaded
to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even
watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to
support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that
requires podcasters to pay a fee or get permission.
The problem, though, is that Apple is only allowing these new video streams to be served by a small number of pre-approved
commercial providers that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on
anildash.com and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast, I don’t have to ask anyone’s permission, tell anyone
about it, or agree to anyone’s terms of service.
…
When I started my pointless podcast, I didn’t need anybody else’s infrastructure or permission. Podcasts are, in the vein of the Web itself (and thanks at
least in part to my former coworker Dave Winer), distributed and democratised.
All you need to host a podcast is an RSS file and some audio files. You can put them onto your shared VM. You can put them onto your homelab. You can put them onto a
GitHub Pages site. You can put them onto a Neocities site. Or you can shell out for a commercial host and distribute your content across a global network of CDNs, for maximum
performance! All of these are podcasts, and they’re all equal from a technical perspective.1
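To show just how low that bar is, here’s roughly what a complete, minimal podcast feed looks like – every title, URL, and date below is invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Pointless Podcast</title>
    <link>https://example.com/podcast/</link>
    <description>A podcast hosted from plain static files.</description>
    <item>
      <title>Episode 1</title>
      <!-- The enclosure is what makes this feed a podcast: -->
      <enclosure url="https://example.com/podcast/ep1.mp3"
                 length="12345678" type="audio/mpeg"/>
      <guid>https://example.com/podcast/ep1.mp3</guid>
      <pubDate>Mon, 03 Mar 2025 09:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>
```

Serve that file and the MP3 from anywhere at all, and any podcast app in the world can subscribe to it.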
Video podcasts could be the same. Even if – as Apple suggest – HLS is to be mandatory for their player2,
that doesn’t necessitate a big corporate third-party provider. Having an “allowlist” of people who can host your podcast’s video is gatekeeping.
Also, it’s… not really podcasting any more. It’s been pointed out before that “platform-exclusive” podcasts (I’m looking at you, Spotify) are not truly podcasts: if it’s not an RSS feed
plus some audio files, it’s not a podcast, it’s lightly sparkling audio.
Can the same analogy be used for a podcast player? Is a player that only supports content (in this case, video content) if it’s hosted by a particular partner…
not-a-podcast-player? Either way, it’s pretty embarrassing for Apple of all people to turn their back on what makes a podcast a podcast.
Footnotes
1 Technically, my podcast is just a collection of posts that share a ClassicPress “tag”;
ClassicPress gives me “RSS feed of a particular tag” for free, so all I needed to do was (a) add metadata to point to my MP3 files (b) and use a hook to inject the relevant
<enclosure> element into the feed. There are almost certainly plugins that could have done it for me, but it’s a simple task so I just wrote a few lines of PHP code
and called it a day!
2 This is… fine, I guess. I’d rather that an <enclosure> be
more-agnostic. If I only want to provide a single fat MP4 file, then it’s up to my listeners to say “this is shit, why can’t I stream this on my cellular data, I’m gonna listen/watch
somebody else instead”. But even if HLS is required, that’s not difficult: I talked about how to do it last year while
demonstrating variable-aspect-ratio videos (in vanilla HTML)!
Unbelievably muddy today! Took me so long to pick my way along the boggy path (pictured) that this’ll probably be my only cache of the day. Still, one more for my LOL collection!
I made a massively multiplayer snake game that’s accessible over ssh. Play by running ssh snakes.run in your terminal…
I’d been speculating for the last month or so what Nolen Royalty had been working on recently that had required
such high performance out of the SSH protocol, but now we know: it’s massively multiplayer snake.
My personal best length is in the region of 180, but I wouldn’t dare risk taking a shortcut when I’m at those lofty heights.
There’s a philosophical thing here that makes it feel different from probably any other Snake game you’ve ever played: it’s a thin client. All of the program
runs on the server.
Even slither.io is a split-client. The server is responsible for game state, but rendering the graphics based upon that state runs in JavaScript code in the browser. Not so
with snakes.run. The entire output is delivered as terminal rendering instructions.
It’s a reminder of how computing used to be. Dumb terminals sharing a mainframe that would do all of the processing, with only basic I/O being delegated to the terminals,
was the dominant way to use computers in the 1960s through 1970s, until the growth of microcomputer technologies made it cost-efficient to make “thick” clients: powerful computers that
would sit right on your desk.
It took decades until long-distance networks came to the scale and performance that we see today: fast enough that worldwide thin-client architectures are once again
possible. Nowadays, the closest thing most people use to a thin client… is their Web browser, passing messages back and forth. If you’re playing a real-time multiplayer game, though,
you expect the processing to be shared between your computer (for low latency graphical output) and the server (for state management and prevention of cheating).
snakes.run turns that expectation on its head. And that’s pretty cool.
Samsung have been showing off pre-release versions of their new Galaxy S26 range. It’s all pretty same-old predictable
changes (and I’m still not really looking for anything to replace my now-five-year-old mobile anyway!), but one feature in particular – one that they’re not even mentioning in their
marketing copy – seemed interesting and innovative.
You know those polarising filters you can use to try to stop people shoulder-surfing? Samsung have come up with a software-controlled one.
Demos show the feature being used to black out the screen at a 15°+ angle when entering a PIN or password, but also show how it can be configured on an app-by-app basis to e.g. black out
notifications so that only the person right in front of the screen can see them.
I assume that this black magic is facilitated by an additional layer between the screen and the glass, performing per-pixel selective polarisation in the same way as a monochrome LCD
display might. But the fact that each pixel can now show two images – one to a user directly ahead, superimposed with another (monochrome) one to users with an offset
viewing angle – is what interests me: my long-cultivated “hacker mentality” wants to ask “what can I make that do?”
Does the API of this (of this or of any similar or future screens?) provide enough control to manipulate the new layer? And is its resolution identical to that of the underlying screen?
Could “spoilers”, instead of being folded-away behind a <details>/<summary> or ROT13-encoded, say “tilt to reveal” and provide
a physicality to the mechanism of exposure?
Could diagrams embed their own metadata annotations: look at a blueprint from the side to see descriptions, or tilt your phone to see the alt-text on an image?
Can the polarisation layer be expanded to provide a more-sophisticated privacy overlay, such as a fake notification in place of a real one, to act as a honeypot?
Is there sufficient control over the angle of differentiation that a future screen could use eye tracking to produce a virtual lenticular barrier, facilitating a novel kind of
autostereoscopic 3D display that works – like a hologram – from any viewing angle?
I doubt I’m buying one of these devices. But I’m very curious about all of these questions!