Generative AI use and human agency

5. If you use AI, you are the one who is accountable for whatever you produce with it. You have to be certain that whatever you produced was correct. You cannot ask the system itself to do this. You must either already be expert at the task you are doing so you can recognise good output yourself, or you must check through other, different means the validity of any output.

9. Generative AI produces above average human output, but typically not top human output. If you overuse generative AI you may produce more mediocre output than you are capable of.

I was also tempted to include in 9 as a middle sentence “Note that if you are in an elite context, like attending a university, above average for humanity widely could be below average for your context.”

In this excellent post, Joanna says more-succinctly what I was trying to say in my comments on “AI Is Reshaping Software Engineering — But There’s a Catch…” a few days ago. In my case, I was talking very-specifically about AI as a programmer’s assistant, and Joanna’s points 5. and 9. are absolutely spot on.

Point 5 is a reminder that, as I’ve long said, you can’t trust an AI to do anything that you can’t do for yourself. I sometimes use a GenAI-based programming assistant, and I can tell you this – it’s really good for:

  • Fancy autocomplete: I start typing a function name, it guesses which variables I’m going to be passing into the function or that I’m going to want to loop through the output or that I’m going to want to return early if the result is false. And it’s usually right. This is smart, and it saves me keypresses and reduces the embarrassment of mis-spelling a variable name1.
  • Quick reference guide: There was a time when I had all of my PHP DateTimeInterface::format character codes memorised. Now I’d have to look them up. Or I can write a comment (which I should anyway, for the next human) that says something like // @returns String a date in the form: Mon 7th January 2023 and when I get to my date(...) statement the AI will already have worked out that the format is 'D jS F Y' for me. I’ll recognise a valid format when I see it, and I’ll be testing it anyway.
  • Boilerplate: Sometimes I have to work in languages that are… unnecessarily verbose. Rather than writing a stack of setters and getters, or laying out a repetitive tree of HTML elements, or writing a series of data manipulations that are all subtly-different from one another in ways that are obvious once they’ve been explained to you… I can just outsource that and then check it2.
  • Common refactoring practices: “Rewrite this Javascript function so it doesn’t use jQuery any more” is a great example of the kind of request you can throw at an LLM. It’s already ingested, I guess, everything it could find on StackOverflow and Reddit and wherever else people go to bemoan being stuck with jQuery in their legacy codebase. It’s not perfect – just like when it’s boilerplating – and will make stupid mistakes3, but when you’re talking about a big function it can provide a great starting point so long as you keep the original code alongside, too, to ensure it’s not removing any functionality! (There’s a sketch of the kind of rewrite I mean just below this list.)
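
To give a concrete (and entirely made-up) flavour of that last one, here’s roughly the before-and-after I mean – the selectors and class names are invented purely for illustration, not lifted from any real codebase:

    // Before: legacy jQuery – the sort of thing an LLM has seen a thousand times over.
    $('.expand-button').on('click', function () {
      $(this).closest('.panel').addClass('expanded');
    });

    // After: the same behaviour in vanilla JavaScript. A decent starting point, but you
    // still diff it against the original to check nothing's been quietly dropped.
    document.querySelectorAll('.expand-button').forEach((button) => {
      button.addEventListener('click', () => {
        button.closest('.panel').classList.add('expanded');
      });
    });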

Other things… not so much. The other day I experimentally tried to have a GenAI help me to boilerplate some unit tests and it really failed at it. It determined pretty quickly, as I had, that to test a particular piece of functionality I’d need to mock a function provided by a standard library, but despite nearly a dozen attempts to do so, with copious prompting assistance, it couldn’t come up with a working solution.

Overall, as a result of that experiment, I was less-effective as a developer while working on that unit test than I would have been had I not tried to get AI assistance: once I dived deep into the documentation (and eventually the source code) of the underlying library I was able to come up with a mocking solution that worked, and I can see why the AI failed: it’s quite-possibly never come across anything quite like this particular problem in its training set.

Solving it required a level of creativity and a depth of research that it was simply incapable of, and I’d clearly made a mistake in trying to outsource the problem to it. I was only able to work around its failure because I could solve the problem myself.

But I know people who’ve used GenAI to program things that they wouldn’t be able to do for themselves, and that scares me. If you don’t understand the code your tool has written, how can you know that it does what you intended? Most developers have a blind spot for testing and will happy-path test their code without noticing if they’ve introduced, say, a security vulnerability owing to their handling of unescaped input or similar… and that’s a problem that gets much, much worse when a “developer” doesn’t even look at the code they deploy.
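
By way of a tiny, hypothetical illustration of the kind of thing happy-path testing sails straight past (the #greeting element and the name parameter are invented for the example):

    // Reads a visitor-supplied value from the query string...
    const params = new URLSearchParams(window.location.search);
    const name = params.get('name') ?? 'friend';

    // ...and injects it as markup. The happy path ("?name=Dan") looks fine, but a
    // crafted value like ?name=<img src=x onerror=alert(1)> executes script: classic XSS.
    document.querySelector('#greeting').innerHTML = `Hello, ${name}!`;

    // Treating the input as text rather than markup avoids the problem entirely.
    document.querySelector('#greeting').textContent = `Hello, ${name}!`;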

Security, accessibility, maintainability and performance – among others, I’ve no doubt – are all hard problems that are not made easier when you use an AI to write code that you don’t understand.

Footnotes

1 I’ve 100% had an occasion when I’ve called something $theUserID in one place and then $theUserId in another and not noticed the case difference until I’m debugging and swearing at the computer.

2 I’ve described the experience of using an LLM in this way as being a little like having a very-knowledgeable but very-inexperienced junior developer sat next to me to whom I can pass off the boring tasks, so long as I make sure to check their work because they’re so eager-to-please that they’ll choose to assume they know more than they do if they think it’ll briefly impress me.

3 e.g. switching a selector from $(...) to document.querySelector but then failing to switch the trailing .addClass(...) to .classList.add(...) – you know: like an underexperienced but eager-to-please dev!

AI Is Reshaping Software Engineering — But There’s a Catch…

I don’t believe AI will replace software developers, but it will exponentially boost their productivity. The more I talk to developers, the more I hear the same thing—they’re now accomplishing in half the time what used to take them days.

But there’s a risk… Less experienced developers often take shortcuts, relying on AI to fix bugs, write code, and even test it—without fully understanding what’s happening under the hood. And the less you understand your code, the harder it becomes to debug, operate, and maintain in the long run.

So while AI is a game-changer for developers, junior engineers must ensure they actually develop the foundational skills—otherwise, they’ll struggle when AI can’t do all the heavy lifting.

Comic comparing 'Devs Then' to 'Devs Now'. The 'Devs Then' are illustrated as muscular men, with captions 'Writes code without AI or Stack Overflow', 'Builds entire games in Assembly', 'Crafts mission-critical code fo [sic] Moon landing', and 'Fixes memory leaks by tweaking pointers'. The 'Devs Now' are illustrated with badly-drawn, somewhat-stupid-looking faces and captioned 'Googles how to center a div in 2025?', 'ChatGPT please fix my syntax error', 'Cannot exit vim', and 'Fixes one bug, creates three new ones'.

Eduardo picks up on something I’ve been concerned about too: that the productivity boost afforded to junior developers by AI does not provide them with the necessary experience to be able to continue to advance their skills. GenAI for developers can be a dead end, from a personal development perspective.

That’s a phenomenon not unique to AI, mind. The drive to have more developers be more productive on day one has for many years led to an increase in developers who are hyper-focused on a very specific, narrow technology to the exclusion even of the fundamentals that underpin it.

When somebody learns how to be a “React developer” without understanding enough about HTTP to explain which bits of data exist on the server-side and which are delivered to the client, for example, they’re at risk of introducing security problems. We see this kind of thing a lot!

There’s absolutely nothing wrong with not-knowing-everything, of course (in fact, knowing where the gaps around the edges of your knowledge are and being willing to work to fill them in, over time, is admirable, and everybody should be doing it!). But until they learn, a developer that lacks a comprehension of the fundamentals on which they depend needs to be supported by a team that “fills the gaps” in their knowledge.

AI muddies the water because it appears to fulfil the role of that supportive team. But in reality it’s just regurgitating code synthesised from the fragments it’s read in the past without critically thinking about it. That’s fine if it’s suggesting code that the developer understands, because it’s like… “fancy autocomplete”, which they can accept or reject based on their understanding of the domain. I use AI in exactly this way many times a week. But when people try to use AI to fill the “gaps” at the edge of their knowledge, they neither learn from it nor do they write good code.

I’ve long argued that as an industry, we lack a pedagogical base: we don’t know how to teach people to do what we do (this is evidenced by the relatively high drop-out rate on computer science courses, the popular opinion that one requires a particular way of thinking to be a programmer, and the fact that sometimes people who fail to learn programming through one paradigm are suddenly able to do so when presented with a different one). I suspect that AI will make this problem worse, not better.

Get Ready with Me: Techfluencer Edition

WordPress.com (via YouTube)

WTF did I just watch?

It’s possible I don’t understand social media any more. To be fair, it’s possible that I never did.

This is something between absurd and hilarious. Aside from the 100 year plan (which is fascinating, and I keep meaning to share my thoughts on), I’m not sure what it’s supposed to be advertising. Maybe it’s trying to showcase how cool it is to work with Automattic? (It’s not… exactly like it’s depicted in the video. But I’d be lying if I said that fewer than 50% of my meetings this week have included a discussion on snack foods, so maybe we are, I guess, at least a little eccentric.)

I think I understand what it’s parodying. And that’s fun. But… wow. You don’t see many videos like this attached to a corporate YouTube account, do you? Kudos for keeping the Internet fun and weird, WordPress.com.

Frustrating At Scale

Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.

This. This is what I needed to be reminded, today.

When somebody complains that the Web is hard to scale, they’re already working against the grain of the Web.

At its simplest – and the way we used to use it – a website is a collection of .html files, one of which might have a special name so the webserver knows to serve it by default.

Writing HTML is punk rock. A “platform” is the tool of the establishment.

Thanks, Mia.

Queers make the world a safer place

A straight white guy friend was complaining about not being able to find any gaming groups for WoW that weren’t full of MAGA assholes. He said he keeps joining guilds with older (60+) casual gamers like himself because he can’t keep up with the kids, and he’ll start to make friends, but then they will reveal themselves to be Trump-lovers. He asked, “What am I doing wrong?”

This was about 3 months ago. Now, he tells me he joined a guild labeled as LGBTQ-friendly and has made several new cool friends.

He mentioned that there are many women and PoC in the group too, and “Everyone’s so nice on dungeon runs, telling people they did a good job and being supportive, sharing loot.”

I didn’t tell him that this is what the whole world would be like without patriarchal toxic masculinity, because I think he figured it out himself.

I’ve plucked out the highlights, but the deeper moral is in the full anecdote. I especially loved “…furries are like lichen…”. 😆

Bloomberg’s Terms

While perfectly legal, it is remarkable that to read a Bloomberg article, you must first agree to binding arbitration and waive your class action rights.

A pop-up notification indicating that the terms have been updated. The message states that by accepting, users agree to the updated Terms of Service, which includes an arbitration provision and class action waiver. It also mentions the processing of user information as described in the Privacy Policy, including potential sharing with third parties about the use of Bloomberg.com. A button labeled "Accept" is provided for users to acknowledge the terms.

I don’t often see dialog boxes like this one. In fact, if I go to the URL of a Bloomberg.com article, I don’t see any popups: nothing about privacy, nothing about cookies, nothing about terms of service, nothing about only being allowed to read a limited number of articles without signing up an account. I just… get… the article.

The reason for this is, most-likely, because my web browser is configured, among other things, to:

  • Block all third-party Javascript (thanks, uBlock Origin’s “advanced mode”), except on domains where they’re explicitly allowed (and even then with a few exceptions: thanks, Ghostery),
  • Delete all cookies 30 seconds after I navigate away from a domain, except for domains that are explicitly greylisted/allowlisted (thanks, Cookie-AutoDelete), and
  • Resist other fingerprinting methods as best I can (thanks, Enhanced Tracking Protection).

But here’s the thing I’ve always wondered: if I don’t get to see a “do you accept our terms and conditions?” popup, is it still enforceable?

Obviously, one could argue that by using my browser in a non-standard configuration that explicitly results in the non-appearance of “consent” popups that I’m deliberately turning a blind eye to the popups and accepting them by my continued use of their services1. Like: if I pour a McDonalds coffee on my lap having deliberately worn blinkers that prevent me reading the warning that it’s hot, it’s not McDonalds’ fault that I chose to ignore their helpful legally-recommended printed warning on the cup, right?2

But I’d counter that if a site chooses to rely on Javascript hosted by a third party in order to ask for consent, but doesn’t rely on that same third-party in order to provide the service upon which consent is predicated, then they’re setting themselves up to fail!

The very nature of the way the Internet works means that you simply can’t rely on the user successfully receiving content from a CDN. There are all kinds of reasons my browser might not get the Javascript required to show the consent dialog, and many of them are completely outside of the visitor’s control: maybe there was a network fault, or CDN downtime, or my browser’s JS engine was buggy, or I have a disability and the technologies I use to mitigate its impact on my Web browsing experience means that the dialog isn’t read out to me. In any of these cases, a site visitor using an unmodified, vanilla, stock web browser might visit a Bloomberg article and read it without ever being asked to agree to their terms and conditions.

Would that be enforceable? I hope you’ll agree that the answer is: no, obviously not!

It’s reasonably easy for a site to ensure that consent is obtained before providing services based on that consent. Simply do the processing server-side, ask for whatever agreement you need, and only then provide services. Bloomberg, like many others, choose not to do this because… well, it’s probably a combination of developer laziness and search engine optimisation. But my gut feeling says that if it came to court, any sensible judge would ask them to prove that the consent dialog was definitely viewed by and clicked on by the user, and from the looks of things: that’s simply not something they’d be able to do!
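
For the avoidance of doubt about what I mean by “do the processing server-side”, here’s a minimal, purely-illustrative sketch using Node and Express (I’ve no idea what Bloomberg actually run; the route names and session handling are invented for the example):

    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(session({ secret: 'example-only', resave: false, saveUninitialized: true }));

    // Consent is recorded against the server-side session, so the site holds its own
    // record that the visitor agreed *before* any content was handed over...
    app.post('/accept-terms', (req, res) => {
      req.session.acceptedTerms = true;
      res.redirect('/');
    });

    // ...and articles simply aren't served until that record exists. No third-party
    // JavaScript, CDN hiccup, or adblocker can make the dialog silently disappear.
    app.get('/article/:slug', (req, res) => {
      if (!req.session.acceptedTerms) {
        return res.status(403).send('Please accept our terms and conditions to continue.');
      }
      res.send(`Here is the article: ${req.params.slug}`);
    });

    app.listen(3000);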

tl;dr: if you want to fight with Bloomberg and don’t want to go through their arbitration, simply say you never saw or never agreed to their terms and conditions – they can’t prove that you did, so they’re probably unenforceable (assuming you didn’t register for an account with them or anything, of course). This same recommendation applies to many, many other websites.

Footnotes

1 I’m confident that if it came down to it, Bloomberg’s lawyers would argue exactly this.

2 I see the plaintiff’s argument that the cups were flimsy and obviously her injuries were tragic, of course. But man, the legal fallout and those “contents are hot” warnings remain funny to this day.

The 55 Words you Can’t Say in Faster Payments

Step aside, George Carlin! Sam Easterby-Smith – who works at The Co-Operative Bank – wants to share with the world the 55 words you can’t say in a UK faster payments reference (assuming your bank follows the regulator’s recommendations):

So you know, this list is provided by Pay.uk the uk’s payment systems regulator. This is their idea of how to protect people from abusive content sent via the payment system.

Of course (a) all abusive messages must contain one of these English words, spelled correctly and (b) people are not in any way creative.

We’ve called it out and they are making us do it anyway.

  • bastard
  • beef curtains
  • bellend
  • clunge
  • cunt
  • dickhead
  • fuck
  • minge
  • motherfucker
  • prick
  • punani
  • pussy
  • shit
  • twat
  • bukkake
  • cocksucker
  • nonce
  • rapey
  • skank
  • slag
  • slut
  • wanker
  • whore
  • fenian
  • kufaar
  • kafir
  • kike
  • yid
  • batty boy
  • bum boy
  • faggot
  • fudge-packer
  • gender bender
  • homo
  • lesbo
  • lezza
  • muff diver
  • retard
  • spastic
  • spakka
  • spaz
  • window licker
  • gippo
  • gyppo
  • golliwog
  • nigger
  • nigaa
  • nig-nog
  • paki
  • raghead
  • sambo
  • wog
  • blow Job
  • clit
  • wank

Excellent.

Mobile phone, held in a white person's hand, showing an online banking screenshot: a payment to John Smith is being configured, with the reference set as "Minge fuck slag".

The big takeaway here, for me, is that it’s okay to send you money and call you a “dick head” (so long as I put a space between the words), “fuckface”, or “shitbag”, or talk about a “blowjob” (so long as I don’t put a space between the words).

But if I send you money to pay “for the bastard sword” that you’re selling then that’s a problem.

UK’s secret Apple iCloud backdoor order is a global emergency, say critics

In its latest attempt to erode the protections of strong encryption, the U.K. government has reportedly secretly ordered Apple to build a backdoor that would allow British security officials to access the encrypted cloud storage data of Apple customers anywhere in the world.

The secret order — issued under the U.K.’s Investigatory Powers Act 2016 (known as the Snoopers’ Charter) — aims to undermine an opt-in Apple feature that provides end-to-end encryption (E2EE) for iCloud backups, called Advanced Data Protection. The encrypted backup feature only allows Apple customers to access their device’s information stored on iCloud — not even Apple can access it.

Sigh. A continuation of a long-running saga of folks here in the UK attempting to make it easier for police to catch a handful of (stupid) criminals1… at the expense of making millions of people more-vulnerable to malicious hackers2.

If we continue on this path, it’ll only be a few short years before you see a headline about a national secret, stored by a government minister (in the kind of ill-advised manner we know happens) on iCloud or similar and then stolen by a hostile foreign power who merely needed to bribe, infiltrate, or in the worst-case hack their way into Apple’s datacentres. And it’ll be entirely our own fault.

Meanwhile the serious terrorist groups will continue to use encryption that isn’t affected by whatever “ban” the UK can put into place (Al Qaeda were known to have developed their own wrapper around PGP, for example, decades ago), the child pornography rings will continue to tunnel traffic around whatever dark web platform they’ve made for themselves (I’m curious whether they’re actually being smart or not, but that’s not something I even remotely want to research), and either will still only be caught when they get sloppy and/or as the result of good old-fashioned police investigations.

Weakened and backdoored encryption in mainstream products doesn’t help you catch smart criminals. But it does help smart criminals to catch regular folks.

Footnotes

1 The smart criminals will start using – or, more-likely, will already be using – forms of encryption that aren’t, and can’t be, prevented by legislation. Because fundamentally, cryptography is just maths. Incidentally, I assume you know that you can send me encrypted email that nobody else can read?

2 Or, y’know, abuse of power by police.

‘All Americans legally female’: Trump invites mockery with sloppy executive order

Obviously all of the 118 executive orders President Trump signed into effect on 20 January fall somewhere on the spectrum between fucking ridiculous and tragically fascist. But there’s a moment of joy to be taken in the fact that now, by Presidential executive order, one could argue that all Americans are legally female:

One of Trump’s order is titled “Defending Women from Gender Ideology Extremism and Restoring Biological Truth to the Federal Government.” In the definition, the order claims, “‘Female’ means a person belonging, at conception, to the sex that produces the large reproductive cell.” It then says, “’Male’ means a person belonging, at conception, to the sex that produces the small reproductive cell.”

What critics point out is the crucial phrase “at conception.” According to the Associated Press, the second “order declares that the federal government would recognize only two immutable sexes: male and female. And they’re to be defined based on whether people are born with eggs or sperm, rather than on their chromosomes, according to details of the upcoming order.”

So yeah, here’s the skinny: Trump and team wanted to pass an executive order that declared that (a) there are only two genders, and (b) it’s determined biologically and can be ascertained at birth. Obviously both of those things are categorically false, but that’s not something that’s always stopped lawmakers in the past (I’m looking at you, Indiana’s 1897 bill to declare Pi to be 3.2 exactly…).

But the executive order is not well thought-out (well duh). Firstly, it makes the unusual and somewhat-complicated choice of declaring that a person’s gender is determined by whether they carry sperm or egg cells. And secondly – and this is the kicker – it insists that the final and absolute point at which gender becomes fixed is… conception (which again, isn’t quite true, but in this particular legal definition it’s especially problematic…).

At conception, you consisted of exactly one cell. An egg cell. Therefore, under US law, all Americans ever conceived were – at the point at which their gender became concrete – comprised only of egg cells, and thus are legally female. Every American is female. Well done, Trump.

Obviously I’m aware that this is not what Mrs. Trump intended when she signed this new law into effect. But as much as I hate her policies I’d be a hypocrite if I didn’t respect her expressed gender identity, which is both legally-enforceable and, more-importantly, self-declared. As a result, you’ll note that I’ve been using appropriate feminine pronouns for her in this post. She’s welcome to get in touch with me if she uses different pronouns and I’ll respect those, too.

(I’m laughing on the outside, but of course I’m crying on the inside. I’m sorry for what your President is doing to you, America. It really sucks.)

Dark Patterns Detective

This was fun. A simple interactive demonstration of ten different dark patterns you’ve probably experienced online. I might use it as a vehicle for talking about such deceptive tactics with our eldest child, who’s now coming to an age where she starts to see these kinds of things.

Screenshot showing a basket containing a 'premium package' including several optional add-ons, with no obvious way to remove those add-ons.

After I finished exploring the dark patterns shown, I decided to find out more about the author and clicked the link in the footer, expecting to be taken to their personal web site. But instead, ironically, I came to a web page on a highly-recognisable site that’s infamous for its dark patterns: 🤣

LinkedIn screenshot showing not one but two popups to try to encourage me to log in to see more content.

Maybe Later

Maybe Later

“Install update” Maybe later.

“Sign in to access this content” No.

“It’s better in the app!” Whose fault is that?

“We completely redesigned this thing you need to do your job for no good reason” Got it.

“Disable any adblocker.” Absolutely not.

I don’t know if I’m supposed to read this as a poem, but I did, and I love it. It speaks to me. It speaks of my experience of using (way too much of) the Web nowadays, enshittified as it is.

(This toot about the evolution of videogaming seems almost like a sequel. Less like a poem, though.)

But yeah, I run a fine-tuned setup on most of my computers that works for me… by working against most of the way the Web seems to expect me to use it, these days. I block all third-party JavaScript and cookies by default (and drop first-party cookies extremely quickly). I use plugins to quietly reject consent banners, suppress soft paywalls, and so on. And when I come across sites that don’t work that way, I make a case-by-case decision on whether to use them at all (if you hide some features in your “app” only, I just don’t use those features).

Sure, there are probably half a dozen websites that you might use that I can’t. But in exchange I use a Web that’s fast, clean, and easy-to-read.

And just sometimes: when I’m on somebody else’s computer and I see an ad, or a cookie consent banner, or a “log in to keep reading” message, or a website weighed down and crawling because of the dozens of tracking scripts, or similar… I’m surprised to remember that these things actually exist, and wonder for a moment how people who do see them all the time cope with them!

Sigh.

Anyway: this was an excellent poem, assuming it was supposed to be interpreted as being a poem. Otherwise, it was an excellent whatever-it-is.

Recreational programming

the world needs more recreational programming.
like, was this the most optimal or elegant way to code this?

no, but it was the most fun to write.

Yes. This.

As Baz Luhrmann didn’t say, despite the implications of this video: code one thing every day that amuses you.

There is no greater way to protest the monetisation of the Web, the descent into surveillance capitalism, and the monoculture of centralised social media silos… than to create things just for the hell of it. Maybe that’s Kirby eating a blog post. Maybe that’s whatever slippy stuff Lu put out this week. Maybe it’s a podcast exclusively about random things that interest one person.

The pre-corporate Web was fun and weird. Nowadays, keeping the Internet fun and weird is relegated to a counterculture.

But making things (whether code, or writing, or videos, or whatever) “just because” is a critical part of that counterculture. It’s beautiful art flying in the face of rampant commercialism. The Web provides a platform where you can share what you create: it doesn’t have to be good, or original, or world-changing… there’s value in just creating and giving things away.

(And, of course, the weirder the better.)

A Friend Used AI to Wish me Happy Birthday

“I was sincere! I wanted to tell you happy Birthday but I wanted to have AI do it.”

“Why?” I shot back, instantly annoyed.

“Because I didn’t know how to make it lengthy. Plus, it’s just easier.”

I felt as if I’d been punched in the gut. I just sat there, stunned. The last sentence repeating itself in my head.

It’s just easier. It’s just easier. It’s. Just. Easier.

Robert shares his experience of receiving a birthday greeting from a friend that had clearly been written by an AI. The friend’s justification was that they’d wanted to make the message longer, more easily. But the end result was a sour taste in the recipient’s mouth.

There’s a few things wrong here. First is the assumption by the greeting’s author (and perhaps a reflection on society in general) that a longer message automatically implies more care and consideration than a shorter one. But that isn’t necessarily true (and it certainly doesn’t extend to artificially stretching a message, like you’re being paid by the word or something).

A second problem was falling back on the AI for this task in the first place. If you want to tell somebody you’re thinking of them, tell somebody you’re thinking of them. Putting an LLM between you and them introduces an immediate barrier: like telling your personal assistant to tell your friend that you’re thinking of them. It weakens the connection.

And by way of a slippery slope, you can imagine (and the technology has absolutely been there for some time now) a way of hooking up your calendar so that an AI would automatically send a birthday greeting to each of your friends, when their special day comes around, perhaps making reference to the last thing they wrote online or the last message they sent to you, by way of personalisation. By which point: why bother having friends at all? Just stick with the AI, right? It’s just easier.

Ugh.

Needless to say: like Robert, I’d far rather you just said a simple “happy birthday” than asked a machine to write me a longer, more seemingly-thoughtful message. I care more about humans than about words.

(It’s not my birthday for another month, mind.)

kirby vs. this blog post

A cute use of Javascript to make a fun post more-fun, helping to keep the Internet fun and weird.

I want to do more crap like this.

But meanwhile, I should show this post to my 8-year-old, who recently finished playing through a Kirby game and won’t stop talking about it. He might appreciate it, but perhaps in a different way to me.

Balance bikes are just better

if [the option of a balance bike] isn’t available, you can convert a normal bike into a balance bike by removing the pedals and lowering the seat. Once the kid has learned how to balance as they roll, add the pedals, raise the seat, and watch them go.

An excellent suggestion from fellow RSS Club member Sean McP (he’s been full of those lately; I’ve been enjoying encouraging drivers through our village to slow down by smiling and waving, too).

Like Sean, I learned to ride a bike using training wheels (“stabilisers” on this side of the pond). Unlike him, I didn’t have any trouble with them, and so when I came to hear about balance bikes as an alternative learning approach I figured they were just two different approaches to the same thing.

But when our eldest learned using stabilisers, she really struggled, and only eventually “got it” with an un-stabilised bike and lots and lots of practice. It’s true what Sean says: for most children, learning to balance atop a bicycle is harder than learning to pedal and/or steer, so that’s the bit we should be focussing on.

Our youngest is (finally) ready and keen to learn to cycle, and so I’m thinking that when I get him his first bike (maybe for Christmas!) I’ll get him one that, were I to put the seat into its lowest position and remove the pedals, he could use as a balance bike for a day or two to get the feel of the thing before re-attaching them and letting him try the full experience.