Google turns to nuclear to power AI data centres

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

“The grid needs new electricity sources to support AI technologies,” said Michael Terrell, senior director for energy and climate at Google.

“This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone.”

The deal with Google “is important to accelerate the commercialisation of advanced nuclear energy by demonstrating the technical and market viability of a solution critical to decarbonising power grids,” said Kairos executive Jeff Olson.

Sigh.

First, something lighthearted-if-it-wasn’t-sad. Google’s AI is, of course, the thing that comes up with gems like this:

Google AI, confidently stating that the difference between a sauce and a dressing is that sauces add flavour and texture to dishes, while dressings are used to protect wounds. It goes on to say that a dressing should be large enough to cover a wound: a standard serving size is two tablespoons.
I’ve actually never seen Google do this shit, because I was fortunate enough to have dropped Google Search as my primary search engine long ago, but it hilari-saddens1 me to see it anyway. Screenshot courtesy @devopscats@toot.cat.
But here’s the thing: the optimist in me wants to believe that when the current fad for LLMs passes, we might – if we’re lucky – come out the other side with some fringe benefits in the form of technological advancements.

Western nations have, in general, been under-investing in new nuclear technologies2, instead continuing to operate ageing second-generation reactors for longer and longer timescales3 while flip-flopping over whether or not to construct a new fleet. It sickens me to say so, but if investment by tech companies is what’s needed to unlock the next-generation power plants, and those plants can keep running after LLMs have had their day and gone back to being a primarily academic consideration… then that’s fine by me.

Of course, it’s easy to find plenty of much more-pessimistic viewpoints too. The other week, I had a dream in which we determined the most-likely identity of the “great filter”: a hypothetical resolution to the Fermi paradox which posits that the reason we don’t see evidence of extraterrestrial life is that there’s some common barrier to the development of spacefaring civilisations that most fail to pass. In the dream, we decided that the most-likely cause was energy hunger: that over time, an advancing civilisation would inevitably develop an increasingly energy-hungry series of egoistic technologies (cryptocurrencies, LLMs, whatever comes next…) and, fuelled by the selfish, individualistic forces that ironically allowed it to compete and evolve to this point, destroy its habitat and/or its sources of power and collapse. I woke from the dream thinking that there might be a short story to be written there, from the perspective of some future human looking back on the chain of technologies that ultimately led to our energy-hungry downfall, but I never got around to writing it.

I think I’ll try to keep a hold of the optimistic viewpoint, for now: that the snake-that-eats-its-own-tail that is contemporary AI will fizzle out of mainstream relevance, but not before big tech makes big investments in next-generation nuclear, renewable, and energy storage technologies. That’d be nice.

Footnotes

1 Hilari-saddening: when you laugh at something until you realise quite how sad it is.

2 I’m a big fan of nuclear power – as I believe that all informed environmentalists should be – as both a stop-gap to decarbonising energy production and potentially as a relatively-clean long-term solution for balancing grids.

3 Consider for example Hartlepool Nuclear Power Station, which supplies 2%-3% of the UK’s electricity. Construction began in the 1960s and was supposed to run until 2007. Which was extended to 2014 (by which point it was clearly showing signs of ageing). Which was extended to 2019. Which was extended to 2024. It’s still running. The site’s approved for a new reactor but construction will probably be a decade-long project and hasn’t started, sooo…


London Transport 25

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Girl on the Net is a popular sex blogger, so this is a link to a SFW page on an otherwise NSFW site. If your only concern is seeing or hearing sexy things or somebody looking over your shoulder and thinking that’s what you’re doing, go ahead and read it. But if you’re connected through the monitored corporate firewall of a sex-negative employer, you might want to read on a different device…

25 different forms of London transport in a day

Hello! My name is Sarah and I love London transport. Because I am a very cool and interesting person, for a long time I’ve thought it might be fun to see how many different types of London transport I can take in a 24-hour period: bus, tram, tube, train, ferry, etc. The trip outlined below took me (and the lucky man I invited on this date) on 25 different forms of London transport in one single day. It criss-crossed the city from East through North to West, then South, Central, South East and back to where we started. I’m sharing the itinerary because this turned out to be a phenomenal adventure, and I thought others might like to give it a go.

The rules

The rules for the challenge were:

  1. Transport must be transport, not a ride, i.e. it must take me from A to B. So the London Eye doesn’t count, but the cable car definitely does.
  2. Each form of transport must continue the journey. So no going from A to B then immediately back again. The journey can meander, but it must keep moving forwards.
  3. No form of transport can be taken more than once. Changes on a single line are OK (for instance, if traveling on the DLR from Greenwich to Bank requires a change at Canary Wharf, that’s fine, or if you’re on a bus that gets taken out of service you can get on the next one) but repeated trips on the same form of transport aren’t allowed (you can’t take one double-decker bus in the morning then another in the afternoon). The exceptions to this are: walking; escalators; stairs. We’ll be using these a lot.

This sounds like a ludicrously fun adventure and a great use of a date for anybody who can find an even-remotely similarly-transport-obsessed partner.

The one thing this wonderful post is lacking is a map. Oh, and maybe a GPX file, but that’s a much bigger ask. Really I just want the map, to help me visualise the route. Maybe with the different forms of transport colour-coded or something? Okay, okay, now I’m asking a lot again.

Just go read it, it’s a fun London romp.

Calm after the storm

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Regarding the alignment offer at Automattic that resulted in roughly one in every twelve-or-thirteen Automatticians being paid to leave, my colleague Rosie writes of her experience of the week of the offer and our subsequent week in Mexico:

I never thought about taking the offer, but last week took a toll on all of us. It was a weird and sad week. So the Woo DM worked not only as it usually does, a week to bond with colleagues, have fun and collaborate in person. It was also one hundred times more energizing than it usually is. It had that little taste of “we are here because we believe in this. LFG!!!”. A togetherness that feels special. We could talk, discuss, and share our concerns, opinions, memories and new ideas for the future of Woo and WordPress.

That’s a good summary of the week, I feel. It was weird and sad, especially to begin with, but it grew into something that was energising and hopeful. There was, in particular, a certain solidarity, of us being the ones who stayed. It’s great to be reminded that my experience is shared.

Whether or not somebody chose to stay for the same reason as me, or as Rosie, it felt like a bonding experience to be among those who made that same decision. I’m glad we got to have this meetup (even though I’m feeling a bit run-down by a combination of exhaustion, jetlag, and – principally – some kind of stomach bug I’ve contracted somewhere along the way, ugh).

EXIF geodata, and what if you took a picture on the moon?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A conversation about staying private and stripping EXIF tags on blogs led to shdwcat asking the question “what would happen if you took a picture on the moon?”

However, I figured we could do better than “a point high above the Earth.” If you could state the coordinate system, you should be able to list an actual point on the moon.

It’s a fun question. Sure, you need to shoot down the naysayers who, like colin, rightly point out that you couldn’t reasonably expect to get a GPS/GNSS signal on the moon, but still.

GPS on Earth

GPS (and most other GNSS technologies) fundamentally work by the principle of trilateration. Here’s the skinny of what happens when your GPS receiver – whether that’s your phone, smartwatch, SatNav, or indeed digital camera – needs to work out where on Earth it is:

  1. It listens for the signal that’s transmitted from the satellites. This is already an amazing feat of engineering given that the signal is relatively quiet and it’s being transmitted from around 20,000km above the surface of the planet1.
  2. The signal fundamentally says, for example, “Hi, I’m satellite #18, and the time is [some time].” Assuming your GPS receiver doesn’t contain an atomic clock2, it listens for the latest time – i.e. the closest satellite, whose signal has had the least distance to travel (the signals “only” travel at the speed of light) – and assumes that this satellite’s time represents the actual time at your location.
  3. Your GPS receiver keeps listening until it’s found three more satellites, and compares the times that they claim it is. Using this, and knowing the speed of light, it’s able to measure the distance to each of those three satellites. The satellites themselves are on reasonably-stable orbits, so as long as you’ve been installing your firmware updates at least once every 5-10 years, your device knows where those satellites are expected to be.
  4. If you know your distance from one satellite, in 3D space, you know that your location is on the surface of an imaginary sphere with a radius of that distance, centered on the satellite. Once you’ve measured your distance from a second satellite, you know that you must be at a point where those two spheres intersect: i.e. somewhere on a circle. With a third satellite found and the distance measured, you’re able to cut that down to just two points (of which one is likely to be about 40,000km into space, so it’s probably not that one3).
  5. Your device will keep finding more passing satellites, measuring and re-measuring, and refining/averaging your calculated location for maximum accuracy.

So yeah: that tiny computer on top of your camera or within your wristwatch? It’s comparing, to minuscule precision, measurements of signals travelling at the speed of light from spacecraft as far away as half the circumference of the planet, while compensating for not being a timekeeping device accurate enough to do so on its own and working around the time dilation resulting from the effect of general relativity on the satellites4.
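If you fancy seeing the geometry of step 4 without the rest of the engineering, here’s a minimal sketch in Python – my own toy illustration with made-up round numbers, not anything a real receiver does – that takes three satellite positions and your measured distance to each, and finds the two candidate points where the spheres intersect:

```python
# A toy illustration of trilateration (step 4 above): given the known
# positions of three satellites and your measured distance to each, find
# the two candidate points where the three spheres intersect. Real GNSS
# receivers also solve for their own clock bias, but the geometry is the same.
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Return the two points that lie at distance r1/r2/r3 from p1/p2/p3."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    # Build an orthonormal frame with p1 at the origin and p2 along the x-axis.
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = np.dot(ex, p3 - p1)
    ey = (p3 - p1 - i * ex) / np.linalg.norm(p3 - p1 - i * ex)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = np.dot(ey, p3 - p1)
    # Solve for the intersection point's coordinates in that frame.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))  # two mirror-image solutions
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# Made-up example: three "satellites" ~26,571km from the Earth's centre, a
# receiver on a 6,371km sphere, and perfectly-measured distances to each.
receiver = np.array([6371.0, 0.0, 0.0])
satellites = [np.array([26571.0, 0.0, 0.0]),
              np.array([0.0, 26571.0, 0.0]),
              np.array([0.0, 0.0, 26571.0])]
distances = [np.linalg.norm(receiver - s) for s in satellites]
candidate_a, candidate_b = trilaterate(*satellites, *distances)
print(candidate_a, candidate_b)
```

One of the two printed candidates lands back on the 6,371km “Earth” sphere where we put the receiver; the other is the implausible point far out in space that footnote 3 jokes about.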

GPS on Luna

Supposing you could pick out GPS signals from Earth orbit while standing on the surface of the moon (which – again – you probably can’t, especially if you were on the far side of the moon, where you wouldn’t get a view of the Earth at all). Could it work?

I can’t see why not. You’d want to recalibrate your GPS receiver to assume that the “time” satellite – the one with the latest-apparent clock – was much further away than an Earth-based GPS receiver would assume (and therefore that the real time is later still than it appears to be): a difference in the order of 1.3 seconds, which is a long time in terms of GNSS calculation.
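That 1.3-second figure is just the Earth–Moon distance divided by the speed of light. Here’s a quick back-of-the-envelope check, using round numbers of my own rather than anything from the original question:

```python
# Rough light-travel delay from a GPS satellite in Earth orbit to the moon.
# Round figures (my assumptions): Earth-Moon distance ~384,400km and a GPS
# orbital radius of ~26,600km, so the nearest/farthest satellites are about
# that much closer to / further from you than the moon's distance.
SPEED_OF_LIGHT_KM_S = 299_792.458
EARTH_MOON_KM = 384_400
GPS_ORBIT_RADIUS_KM = 26_600

nearest = (EARTH_MOON_KM - GPS_ORBIT_RADIUS_KM) / SPEED_OF_LIGHT_KM_S
farthest = (EARTH_MOON_KM + GPS_ORBIT_RADIUS_KM) / SPEED_OF_LIGHT_KM_S
print(f"{nearest:.2f}s to {farthest:.2f}s")  # roughly 1.19s to 1.37s

# For comparison, a receiver on Earth's surface is only ~20,200-25,800km
# from the satellites it can see: a delay of well under a tenth of a second.
```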

Again, once you had distance measurements from three spatial satellites you’d be able to pinpoint your location, to within some sphere of uncertainty, to one of two points. One would be on the moon (where you know that you are5), and the other would be a similar distance away beyond the far side of the Earth. That’s a good start. And additional satellites could help narrow it down even more.

You might even be able to get a sightline to more satellites than is typically possible on Earth, not being limited by the curvature of the Earth beneath you, nor surrounded by relatively-large Earth features like mountains, buildings, trees, and unusually-tall humans. It’s feasible.

How about… LPS?

If we wanted to go further – and some day, if we aim to place permanent human settlements on the moon, we might – then we might consider a Lunar Positioning System: a network of a dozen or so orbiters whizzing around the moon to facilitate accurate positioning on its surface. They’d want to be in low orbits to avoid the impact of tidal forces from the much-larger nearby Earth, and with no atmosphere to scrape against there’s little harm in that.

By the time you’re doing that, though, you might as well ditch trilateration and use the Doppler effect, Transit-style. It works great in low orbits, but its accuracy on Earth was always limited by the fact that you can’t make the satellites fly low enough without getting atmospheric drag. There’s no such limitation on the moon. Maybe that’s the way forward.

Maybe far-future mobile phones and cameras will support satellite positioning and navigation networks on both Earth and Luna. And maybe then we’ll start seeing EXIF metadata spanning both the WGS-84 datum and the LRO-ME datum.

Footnotes

1 That’s still only about a twentieth of the way to the moon, by the way. But there are other challenging factors, like our atmosphere and all of the obstructions both geographic and human-made that litter our globe.

2 Protip: it doesn’t.

3 Satnav voice: “After falling for forty thousand kilometres, you will reach your destination.”

4 The relativistic effects on GPS satellites cannot be overstated. Without compensation, GPS accuracy would drift by up to 10km for every day that the satellites were in orbit: within 5½ years, I reckon, they’d be useless for anything more than telling you which hemisphere you were in!
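For anyone who wants to check the arithmetic: the commonly-quoted figure is that an uncorrected GPS satellite clock would run fast by around 38 microseconds per day (gravitational time dilation outweighing the slowing from its orbital speed); multiplied by the speed of light, that’s a ranging error in the same ballpark as the figure above:

```python
# Back-of-the-envelope check on the figures in footnote 4, using the
# commonly-quoted ~38 microseconds/day of net uncorrected clock drift
# for a GPS satellite (an assumption of mine, not a figure from the post).
SPEED_OF_LIGHT_M_S = 299_792_458
DRIFT_SECONDS_PER_DAY = 38e-6

error_km_per_day = SPEED_OF_LIGHT_M_S * DRIFT_SECONDS_PER_DAY / 1000
print(f"~{error_km_per_day:.1f} km of ranging error per day")  # ~11.4 km

# ~10km/day over 5.5 years is roughly half the Earth's circumference:
print(f"~{10 * 365.25 * 5.5:,.0f} km after 5.5 years")  # ~20,089 km
```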

5 If you’re on the moon and don’t know it, you have a whole different problem.

What the heck is going on with WordPress?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Let’s play a little game. 😉

Look at the following list of words and try to find the intruder:

  • wp-activate.php
  • wp-admin
  • wp-blog-header.php
  • wp_commentmeta
  • wp_comments
  • wp-comments-post.php
  • wp-config-sample.php
  • wp-content
  • wp-cron.php
  • wp engine
  • wp-includes
  • wp_jetpack_sync_queue
  • wp_links
  • wp-links-opml.php
  • wp-load.php
  • wp-login.php
  • wp-mail.php
  • wp_options
  • wp_postmeta
  • wp_posts
  • wp-settings.php
  • wp-signup.php
  • wp_term_relationships
  • wp_term_taxonomy
  • wp_termmeta
  • wp_terms
  • wp-trackback.php
  • wp_usermeta
  • wp_users

What are these words?

Well, all the ones that contain an underscore _ are names of the WordPress core database tables. All the ones that contain a dash - are WordPress core file or folder names. The one with a space is a company name…

A smart (if slightly tongue-in-cheek) observation by my colleague Paolo, there. The rest of his article’s cleverer and worth-reading if you’re following the WordPress Drama (but it’s pretty long!).

The Duck Song 4

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Cast your mind back to 15½ years ago, when the Internet was delighted by The Duck Song, a stupid adaptation of an already-ancient joke, presented as a song for a child and accompanied by some MS Paint-grade animation. It was catchy, though, and before long everybody had it stuck in their heads.

Over the subsequent year it was followed by The Duck Song 2 and The Duck Song 3, each in a similar vein but with a different accompanying joke. There’s sort-of an ongoing narrative – a story arc – that spans the three, as the foils of the first and second are introduced to one another in the third in a strange duck-related meet-cute.

And then there was nothing for… well, almost 14 years. The creators went on to do other things, and we all assumed that this series was completed (unlike for example the Wave Hello trilogy I mentioned the other day, which is clearly supposed to get one more part, and is overdue!). That’s fine, of course. Things are allowed to finish, contrary to what many American TV execs seem to think.

Then last year, we got a seasonal treat in the form of The Christmas Duck Song. It felt like a non-canonical spinoff, though, not a true “fourth Duck Song”. Like the Star Wars Holiday Special. Except good. Its appearance wasn’t taken as heralding a return of duck songs.

But perhaps it should’ve, because earlier this year we got The Duck Song 4! Yet again, it retells a stupid joke – in this case, an especially silly and immature one – but man, it feels like an old friend coming home. Welcome back, Duck Song.

I don’t think I’ve done justice to it, though. Perhaps the Hillsdale Collegian manages to in their article, which implores:

Permit yourself to be entranced by the magnificence of the animation, the piquancy of the wordplay, the splendorous yet seductive simplicity of the G-C-D chord progression. Let the duck, like Virgil in Dante’s “Divine Comedy,” be your guide — lean into the quotidian but sempiternal question of whether the man at the lemonade stand has any grapes. Consider the irritation of the man at the stand and ask yourself if the wrath of Achilles is really that much more disastrous. Admire the cunning of the duck’s questioning — was Socrates so very different?

Yeah, that’s about right.

Transparency, Contribution, and the Future of WordPress

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The people who make the most money in WordPress are not the people who contribute the most (Matt / Automattic really is one of the exceptions here, as I think we are). And this is a problem. It’s a moral problem. It’s just not equitable.

I agree with Matt about his opinion that a big hosting company such as WPEngine should contribute more. It is the right thing to do. It’s fair. It will make the WordPress community more egalitarian. Otherwise, it will lead to resentment. I’ve experienced that too.

In my opinion, we all should get a say in how we spend those contributions [from companies to WordPress]. I understand that core contributors are very important, but so are the organizers of our (flagship) events, the leadership of hosting companies, etc. We need to find a way to have a group of people who represent the community and the contributing corporations.

Just like in a democracy. Because, after all, isn’t WordPress all about democratizing?

Now I don’t mean to say that Matt should no longer be project leader. I just think that we should more transparently discuss with a “board” of some sorts, about the roadmap and the future of WordPress as many people and companies depend on it. I think this could actually help Matt, as I do understand that it’s very lonely at the top.

With such a group, we could also discuss how to better highlight companies that are contributing and how to encourage others to do so.

Some wise words from Joost de Valk, and it’s worth reading his full post if you’re following the WP Engine drama but would rather be focussing on looking long-term towards a better future for the entire ecosystem.

I don’t know whether Joost’s solution is optimal, but it’s certainly worth considering his ideas if we’re to come up with a new shape for WordPress. It’s good to see that people are thinking about the bigger picture here, rather than just about wherever we find ourselves at the resolution of this disagreement between Matt/Automattic/the WordPress Foundation and WP Engine.

Thinking bigger is admirable. Thinking bigger is optimistic. Thinking bigger is future-facing.

Calculating the Ideal “Sex and the City” Polycule

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I’ve never been even remotely into Sex and the City. But I can’t help but love that this developer was so invested in the characters and their relationships that when he asked himself “couldn’t all this drama and heartache have been simplified if these characters were willing to consider polyamorous relationships rather than serial monogamy?”1, he did the maths to optimise his hypothetical fanfic polycule:

Juan Pablo Sarmiento

As if his talk at !!Con 2024 wasn’t cool enough, he open-sourced the whole thing, so you’re free to try the calculator online for yourself or expand upon or adapt it to your heart’s content. Perhaps you disagree with his assessment of the relative relationship characteristics of the characters2: tweak them and see what the result is!

Or maybe Sex and the City isn’t your thing at all? Well adapt it for whatever your fandom is! How I Met Your Mother, Dawson’s Creek, Mamma Mia and The L-Word were all crying out for polyamory to come and “fix” them3.

Perhaps if you’re feeling especially brave you’ll put yourself and your circles of friends, lovers, metamours, or whatever into the algorithm and see who it matches up. You never know, maybe there’s a love connection you’ve missed! (Just be ready for the possibility that it’ll tell you that you’re doing your love life “wrong”!)

Footnotes

1 This is a question I routinely find myself asking of every TV show that presents a love triangle as a fait accompli resulting from an even moderately-complex who’s-attracted-to-whom.

2 Clearly somebody does, based on his commit “against his will” that increases Carrie and Big’s validatesOthers scores and reduces Big’s prioritizesKindness.

3 I was especially disappointed with the otherwise-excellent The L-Word, which did have a go at an ethical non-monogamy storyline but bungled the “ethical” at every hurdle while simultaneously reinforcing the “insatiable bisexual” stereotype. Boo! Anyway: maybe on my next re-watch I’ll feed some numbers into Juan’s algorithm and see what comes out…

How to Monetize a Blog

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Maintaining a blog can be a lot of work. A single article can take weeks of research, drafting and editing, collecting and producing included materials, etc. It’s not unusual to seek some form of compensation for it, and those rewards require initiative. With a good monetization strategy, it can become a fairly lucrative venture.

So let’s talk about monetizing a blog, starting with the most obvious and perhaps easiest avenue: display advertising.

A content creator with an established audience can leverage that audience and sell ad space on their blog. Here’s an example:

Can you climb the leaderboards and become the most illustrious viscount of paper towels in the realm?

I’m not sure I have words for how awesome this blog post is. If you’ve ever wanted to monetise your blog and are considering an ad-driven model, this should absolutely be the first (and perhaps last) thing you read on the subject.

If you’re not convinced that Tyler is an appropriate authority to speak on this subject, I highly suggest you visit their other site that’s got a wealth of useful tips, PutAToothpickInTheChargingPortDoctorsHateThatShit.christmas. Yes, really.

AI posts on social media are the chicken nuggets of human interaction

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Perhaps inspired by my resharing of Thomas‘s thoughts about the biggest problem in AI (tl;dr: he thinks it’s nomenclature; I agree that’s a problem but I don’t know if it’s the biggest issue), Ruth posted some thoughts to LinkedIn that I think are quite well-put:

I was going to write about something else but since LinkedIn suggested I should get AI to do it for me, here’s where I currently stand on GenAI.

As a person working in computing, I view it as a tool that is being treated as a silver bullet and is probably self-limiting in its current form. By design, it produces average code. Most companies prior to having access to cheap average code would have said they wanted good code. Since the average code produced by the tools is being fed back into those tools, mathematically this can’t lead anywhere good in terms of quality.

However, as a manager in tech I’m really alarmed by it. If we have tools to write code that is ok but needs a lot of double checking, we might be tempted to stop hiring people at that level. There already aren’t enough jobs for entry level programmers to feed the talent pipeline, and this is likely to make it worse. I’m not sure where the next generation of great programmers are supposed to come from if we move to an ecosystem where the junior roles are replaced by Copilot.

I think there’s a lot of potential for targeted tools to speed up productivity. I just don’t think GenAI is where they should come from.

This is an excellent explanation of no fewer than four of the big problems with “AI” as we’re seeing it marketed today:

  1. It produces mediocre output (more on that below!),
  2. It’s a snake that eats its own tail,
  3. It’s treated as a silver bullet, and
  4. By pricing out certain types of low-tier knowledge work, it damages the pipeline for training higher-tiers of those knowledge workers (e.g. if we outsource all first-level tech support to chatbots, where will the next generation of third-level tech support come from, if they can’t work their way up the ranks, learning as they go?)

Let’s stop and take a deeper look at the “mediocre output” claim. Ruth’s right, but if you don’t already understand why generative AI does this, it’s worth a little bit of consideration about the reason for it… and the consequences of it:

Mathematically-speaking, that’s exactly what you would expect for something that is literally statistically averaging content, but that still comes as a surprise to people.

Bear in mind, of course, that there are plenty of topics in which the average person is less-knowledgeable than the average of the content that was made available to the model. For example, I know next to nothing about fertiliser application in large-scale agriculture. ChatGPT has doubtless ingested a lot of literature about it, and if I ask it what fertiliser I should use for a field of black beans in silty soil in the UK, it delivers me a confident-sounding answer:

ChatGPT screenshot. I ask 'I'm planting a field of black beans in silty soil in the UK in Spring. What fertiliser should I use to maximise my yield?' and it responds with ~560 words suggesting 30-40 kg/ha of phosphorus (P) and 60-70 kg/ha of potassium (K) at planting, among other things.
Who knows if this answer is right, of course! If the answer mattered to me – because I was about to drill my field – I’d have to do my own research to check, by which point I might as well have just done the research in the first place. If all I cared about was a quick sense-check to an answer I already knew, and it didn’t matter too much, this might be okay output. (It’s pretty verbose and repeats itself a lot, like it’s learned how to talk from YouTube tutorials: I’m surprised it didn’t finish by exhorting me to like and subscribe!)

When LLMs produce exceptional output (I use the term exceptional in the sense of unusual and not-average, not to mean “good”), it appears more-creative and interesting but is even more-likely to be riddled with fanciful hallucinations.

There’s a fine line in getting the creativity dial set just right, and even when you do there’s no guarantee of accuracy, but the way in which many chatbots are told to talk makes them sound authoritative on basically every subject. When you know it’s lying, that’s easy. But people don’t always use LLMs for subjects they’re knowledgeable about!

ChatGPT defined several words - 1. Quantifiable: Something that can be measured or expressed in numerical terms. 2. No cap: A slang term meaning "no lie" or "I'm being truthful." 3. Erinaceiophobia: An irrational fear of hedgehogs. 4. Undercontrastised: A medical term referring to an image, usually from a scan, that lacks sufficient contrast for clear diagnosis. (I made this word up, but ChatGPT defined it anyway!). 5. Ethology: The scientific study of animal behavior, particularly in natural environments.
I asked ChatGPT to define five words for me. Two (“quantifiable” and “ethology”) are real words that somebody might have trouble with. One (“no cap”) is a slang term. One (“erinaceiophobia”) is a logically-sound construction from the Latin name for the biological family that hedgehogs belong to and the Greek suffix that’s applied to irrational fears. ChatGPT came up with perfectly reasonable definitions of all of these. But it also confidently defined “undercontrastised”, a word I made up and which I can’t find used anywhere at all!

In my example above, a more-useful robot would have stated that it didn’t know the answer to the question rather than, y’know, lying. But the nature of the statistical models used by LLMs means that they can’t know what they don’t know: they don’t have a “known unknowns” space.

Regarding the “damages the training pipeline” claim: I’m undecided on whether or not I agree with Ruth. She might be on to something there, but I’m not sure. Needs more thought before I commit to an opinion on that one.

Ruth followed-up to say:

Oh, and an addendum to this – as a human, I find the proliferation of AI tools in spaces that are all about creating connections with other humans deeply concerning. I saw a lot of job applications through Otta at my previous role, and they were all kind of the same – I had no sense of the person behind the averaged out CV I was looking at. We already have a huge problem with people presenting inauthentic versions of themselves on social media which makes it harder to have genuine interactions, smoothing off the rough edges of real people to get something glossy and processed is only going to make this worse.

AI posts on social media are the chicken nuggets of human interaction and I’d rather have something real every time.

Emphasis mine… because that’s a fantastic metaphor. Content generated where a generative AI is trying to “look human” is so-often bland, flat, and unexciting: a mass-produced, most-basic form of social sustenance. So yeah: chicken nuggets.

Photo of chicken nuggets with "AI" written on each of them.
Ironically, I might’ve gotten a better picture here if I’d asked AI to draw this for me, because I couldn’t find any really unappetising-looking McDonalds-grade chicken nuggets on the stock photography site I used.

The biggest problem with AI…

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The biggest problem with “AI” is probably that it’s used as label for two completely different things:

1. Specialized neural networks trained to do highly specific tasks (e.g. cancer screening) which often work reasonably well as a tool to support human experts

2. Generative AI which thoroughly produces the most mid bullshit

It doesn’t help that neither are intelligent in any way, they’re both statistical pattern matching.

Fundamentally, Thomas seems to be arguing that the biggest problem with AI is how it’s marketed, or how things-that-are-called-AI are marketed as AI. Also that LLMs, by producing a statistical average of their input data, produce output that’s pretty average (which is, of course, statistically what you’d expect)1.

I’m not sure he’s right: the energy footprint and the copyright issues of generative AI might be the biggest problems. But maybe.

Footnotes

1 That’s not entirely true, of course: sometimes they produce output that’s wild and random, but we describe those as “hallucinations” and for many purposes they’re even worse. At least “mid bullshit” can be useful if you’re specifically looking to summarise existing content (and don’t mind fact-checking it later if it’s important): y’know, the thing people use Wikipedia for.

Golf

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Runners will talk about how much they enjoy the feeling of wind in their hair. Boxers won’t shut up about the grace and art of their profession. Even soccer players can be moved to wax poetical about how enjoyable it is to be part of a truly great game.

But all golfers ever talk about is how little golf they hope to play. A typical pre-match interview will go something like this:

Some guy in a blazer: Great to have you here with us, what are your goals for the first round this morning.

Golfer: Well today I hope to play as little golf as possible. Mathematically speaking the course could be done in 18 shots but that is probably physically impossible. But ideally as close to 18 as I can get. Any additional golf is bad.

Blazer: What is your strategy for avoiding the golf.

Golfer: I have a guy who follows me around to help share the burden of all this damn golf. He is going to help me out by suggesting ways to avoid playing any more golf than we have to. Of course, I pay him but his real motivation is to bring this sorry excuse for a pastime to the speediest conclusion.

Blazer: Better you than me, but good luck out there.

Excellent observations about golf by Andrew. Read the whole thing: it’s not long. If you don’t already follow everything on sheep.horse, you might consider doing so: when it’s not this funny, it doubly makes-up for it by being poignant. I like to think of Andrew as being the kind of blogger I would be like if I were more-focussed on quality than quantity.

Where Should Visual Programming Go?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Level 3: Diagrams are code

This is what the endgame should be IMO. Some things are better represented as text. Some are best understood visually. We should mix and match what works best on a case-by-case basis. Don’t try to visualize simple code. Don’t try to write code where a diagram is better.

One of the attempts was Luna. They tried dual representation: everything is code and diagram at the same time, and you can switch between the two:

But this way, you are not only getting benefits of both ways, you are also constrained by both text and visual media at the same time. You can’t do stuff that’s hard to visualize (loops, recursions, abstractions) AND you can’t do stuff that’s hard to code.

Interesting thoughts from Niki (and from Sebastian Bensusan) on how diagrams and code might someday be intertwined as first class citizens (but not in the gross ways you might have come across in the past when people have tried to sell you on “visual programming”).

As Niki wrote about what he calls levels 2 and 3 of the concept – in which diagrams and code are intrinsically linked – I found myself thinking about Twine, a programming language (or framework? or tool?… not sure how best to describe or define it!) intended for making interactive “choose your own adventure”-style hypertext fiction.

Screenshot showing the Twine 2 IDE, with a story map alongside a scene description.

Twine’s sort-of a level 2 implementation of visual programming: the code (scene descriptions) is mostly what’s responsible for feeding the diagram. But that’s not entirely true: it’s possible to create new nodes in your story graph in a completely visual way, and then dip into them to edit their contents and imply how they link to others.

It’s possible that the IF engine community – who are working to lower the barriers to programming in order to improve accessibility for people who are fiction authors first, developers second – are ahead of the curve in the area of visual programming. Consider for example how Inform’s automated test framework graphs the permutations you (or your human testers) try, and allows you to “bless” (turn into assertions) the results so that regression testing becomes a visually-automated affair:

Inform 7's IDE, showing regression testing using the visual tree of the sample game Onyx.


Write Websites

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Enbies and gentlefolk of the class of ‘24:

Write websites.

If I could offer you only one tip for the future, coding would be it. The long term benefits of coding websites remains unproved by scientists, however the rest of my advice has a basis in the joy of the indie web community’s experiences. I will dispense this advice now:

Enjoy the power and beauty of PHP; or never mind. You will not understand the power and beauty of PHP until your stack is completely jammed. But trust me, in 20 years you’ll look back at your old sites and recall in a way you can’t grasp now, how much possibility lay before you and how simple and fast they were. JS is not as blazingly fast as you imagine.

Don’t worry about the scaling; or worry, but know that premature scalability is as useful as chewing bubble gum if your project starts cosy and small. The real troubles on the web are apt to be things that never crossed your worried mind; if your project grows, scale it up on some idle Tuesday.

Code one thing every day that amuses you.

Well that’s made my day.

I can’t say I loved Baz Luhrmann’s Everybody’s Free To Wear Sunscreen. I’m not sure it’s possible for anybody who lived through it being played to death in the late 1990s: a period of history when a popular song was basically inescapable. Also, it got parodied a lot. I must’ve seen a couple of dozen different parodies of varying quality in the early 2000s.

But it’s been long enough that I was, I guess, ready for one. And I couldn’t conceive of a better topic.

Y’see: the very message of the value of personal websites is, like Sunscreen, a nostalgic one. When I try to sell people on the benefits of a personal digital garden or blog, I tend to begin by pointing out that the best time to set up your own website is… like 20+ years ago.

But… the second-best time to start a personal website is right now. With cheap and free static hosting all over the place (and more-dynamic options not much-more expensive) and domain names still as variably-priced as they ever were, the biggest impediment is the learning curve… which is also the fun part! Siloed social media is either eating its own tail or else fighting to adapt to once again be part of a more-open Web, and there’s nothing that says “I’m part of the open Web” like owning your own online identity, carving out your own space, and expressing yourself there however you damn well like.

As always, this is a drum I’ll probably beat until I die, so feel free to get in touch if you want some help getting set up on the Web.

The Eyebrow Painter

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

There are a whole bunch of things that could be the source for the name, e.g. where we found most of their work (The Dipylon Master) or the potter with whom they worked (the Amasis Painter), a favourite theme (The Athena Painter), the Museum that ended up with the most famous thing they did (The Berlin Painter) or a notable aspect of their style. Like, say, The Eyebrow Painter.

Guess what kind of pottery the Eyebrow Painter made?

Collage of three Hellenic plates decorated with fish. The fish all have strange-looking eyebrows!

AristotelianComplacency

Just excellent.

A frowning fish, painted onto a plate, surely makes for the best funerary offering.
