What If the EU Never Existed?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Video framegrab showing an alternate history map of Europe in mid-to-late 20th century. West Germany has formed an economic union with several neighbouring states, while France and Italy have elected Communist governments and are within the Soviet sphere of influence.

This video, which I saw on Nebula but which is also available on YouTube, explores a hypothetical alternate history in which the Schuman Plan/European Coal & Steel Community never happened, and the knock-on effects lead to no EU and an ultimately more-fragmented, more-complicated Europe of the late 20th/early 21st century.

Obviously it’s highly-speculative and you could easily come up with your own alternative alternative history! But the Twilight Struggle player in me as well as the alternate history lover (and, of course, European Union fan) especially loves the way this story is told.

It’s worth remembering that for the last half-millennium or more, the default state of Europe has been for its nations to be fighting one another: if not outright war then at least aggressive economic and political rivalry. The post-WWII era gave Europe perhaps its longest-ever period of relative peace, and that’s great enough that all of the other benefits of a harmonised and cooperative union are just icing on the cake.

EU Made Simple is a fantastic channel in general, and I’d recommend you give it a look. It ties news and history in with its creator’s outlook, but it’s always clear which bits are opinion, and it’s delightfully bitesized. For Europeans-in-exile in this post-Brexit age, it’s hopeful and happy, and I like it.

Happy Europe Day, one and all.

LayoffBot – eliminating the human in human resources

Illustration of the 'LayoffBot process': 1. Schedules casual 1:1. Our next gen AI schedules the dreaded "quick chat" for Friday at 4:55 PM, ensuring a ruined weekend. 2. Conducts Layoff. Our AI delivers the news with the emotional depth of a toaster while recording reactions for management entertainment. 3. Completes Paperwork. Instantly cuts off all access, calculates the minimum legal severance, and sends a pre-written reference that says 'they worked here'.

It was a bit… gallows humour… for a friend to share this website with me, but it’s pretty funny.

And also: a robot that “schedules a chat” to eject you from your job and then “delivers the news with the emotional depth of a toaster” might still have been preferable to an after-hours email to my personal address to let me know that I’d just had my last day! Maybe I’m old-fashioned, but there’s some news that email isn’t the medium for, right?

Reposts of spicy takes on Automattic leadership and silly jokes about redundancy will cease soon and normal bloggy content will resume, I’m sure.

The call is coming from inside the house

A master class in how not to lead through crisis is continuing at Automattic and it’s rough to watch.

After Matt Mullenweg sets off a firestorm by going “scorched earth” on WP Engine in September 2024 (if you don’t know the story) there’s been one dodgy decision after another but through it all Mullenweg and his executive leadership team continue to blame the lack of success and growth on staffers.

  • October 2024: Make two “alignment offers” to root out anyone willing to speak sense to the madness of all the wpdrama. 159 walk in the first and 25-30 in the second. Allege that the people who chose to leave were low performers and now that everyone left is aligned things should be better.
  • January 10: Matt Mullenweg states “There are no layoffs plans at Automattic, in fact we’re hiring fairly aggressively and have done a number of acquisitions since this whole thing started, and have several more in the pipeline.”
  • April 2: Automattic decimates its engineering staff in a 280 person layoff with a paltry severance compared to alignment offers. Leaving many feeling like suckers for showing loyalty.
  • April 8: The engineering lead admits that just this week they are beginning to see the gaps in engineering due to the layoffs.
  • April 9: CFO accidentally sends a Slack message to the entire company that not only do engineers need to, but all of Automattic needs to “Get aligned, get productive, deliver or move on”.

Slack message from Mark Davies, reading: Great message for all of Automattic, not just engineering. Get aligned, get productive, deliver or move on. It would be good to draft a comprehensive note like the above that is tailored for all employees. I suspect it will be well received and adopted immediately. However, we'd need to actually create the mantra/details/commitments from us as leaders. / Mr. @grierson let me know I sent the above note to the entire company. Somehow, I thought it was the ELT channel. But still relevant and something we should create and act on. For clarity, something the ELT should create and communicate.

The call is coming from inside the house. The C-suite is failing to get real alignment because “leadership” at WordPress.com, Automattic, et al. is not, and has not been, leading. They haven’t come up with an executable commercial plan for the company in nearly a decade. Any time they get close to doing so a shiny object appears and **poof** on to something else.

A scathing take from Kellie Peterson, who was Head of Domains at Automattic until 2023. There’s lots more/similar spiciness from her on Bluesky, for those inclined to such things.

I’m not sure whether I agree with everything Kellie asserts, but I’ve certainly been concerned about the direction of management for the last year or more. Obviously I’d be biased, speaking as one of the “suckers” who showed loyalty in October only to get axed in April.

But for a while now it had felt like my reasons for staying were entirely about my love for (a) my team, a full half of whom got laid off at the same time as me anyway, and (b) WordPress and the open source space it represents, which of course Automattic’s been distancing itself from.

(Incidentally – and speaking of open source – I’m quite enjoying the freedom to contribute to ClassicPress, which previously might have been frowned-upon by my employers. I’ve not got a first PR out yet, but I’m hoping to soon.)

So yeah… while I might not agree with all of Kellie’s sentiments (here and elsewhere)… I increasingly find I have the clarity to agree with many of them. Automattic seems to be a ship on fire, right now, and I really feel for my friends and former colleagues still aboard what must be an increasingly polarised environment, seemingly steering hard towards profits over principles.

Why is there a “small house” in IBM’s Code page 437?

There’s a small house (⌂) in the middle of IBM’s infamous character set Code Page 437. “Small house”—that’s the official IBM name given to the glyph at code position 0x7F, where a control character for “Delete” (DEL) should logically exist. It’s cute, but a little strange. I wonder, how did it get there? Why did IBM represent DEL as a house, of all things?

Code Page 437 table, highlighting the character 'small house' at 0x7F

It probably ought to be no surprise that I, somebody who’s written about the beauty and elegance of the ASCII table, would love this deep dive into the specifics of the unusual graphical representation of the DEL character in IBM Code Page 437.

It’s highly accessible, so even if you’ve only got a passing interest in, I don’t know, text encoding or typography or the history of computing, it’s a great read.

Scots Fire Brigade Union demand new legal protections for people with more than one romantic partner

In the fight for equal representation for polyamorous relationships, polyamorists may have a strange and unlikely ally in… the Scottish Fire Brigade Union:

Scotland’s Fire Brigade Union (FBU) has been blasted after calling for more legal protections for Scots who have more than one romantic partner. Members of the group, which is meant to campaign to protect firefighters, want to boost the legal rights of polyamorous people.

I love that a relatively mainstream union is taking seriously this issue that affects only a tiny minority of the population, but I have to wonder… why? What motivates such interest? Are Scottish fire brigades all secretly in a big happy polycule together? (That’d be super cute.)

Anyway: good for them, good for us, good all round at a time with a bit of a shortage of good news.

My on-again-off-again relationship with AI assistants

Sean McPherson, whom I’ve been following ever since he introduced me to the Five-Room Dungeons concept, said:

There is a lot of smoke in the work-productivity AI space. I believe there is (probably) fire there somewhere. But I haven’t been able to find it.

I find AI assistants useful, just less so than other folks online. I’m glad to have them as an option but am still on the lookout for a reason to pay $20/month for a premium plan. If that all resonates and you have some suggestions, please reach out. I can be convinced!

I’m in a similar position to Sean. I enjoy GitHub Copilot, but not enough that I would pay for it out of my own pocket (like him, I get it for free; in my case because I’m associated with a few eligible open source projects). I’ve been experimenting with Cursor and getting occasionally good results, but again: I wouldn’t have paid for it myself (but my employer is willing to do so, even just for me to “see if it’s right for me”, which is nice).

I think this is all part of what I was complaining about yesterday, and what Sean describes as “a lot of smoke”. There’s so much hype around AI technologies that it takes real effort to see through it all to the actual use-cases that exist in there, somewhere. And that’s the effort required before you even begin to grapple with questions of cost, energy usage, copyright ethics and more. It’s a really complicated space!

Bored of it

Every article glorifying it.

Every article vilifying it.

Every pub conversation winding up talking about it.

People incessantly telling you how they use it.

I feel dirty using it.

You know what I’m talking about, even though I’ve not mentioned it.

If you don’t know what “it” is without the rest of the context, maybe read the rest of Paul’s poem. I’ll wait.

As you might know, I remain undecided on the value of GenAI. It produces decidedly middle-of-the-road output, which while potentially better than the average human isn’t better than the average specialist in any particular area. It’s at risk of becoming a snake-eating-its-own-tail as slop becomes its own food. It “hallucinates”, of course. And I’m concerned about how well it acts as a teacher to potential new specialists in their field.

There are things it does well-enough, and much faster than a human, that it’s certainly not useless: indeed, I’ve used it for a variety of things from the practical to the silly to the sneaky, and many more activities besides 1. I routinely let an LLM suggest autocompletion, and I’ve experimented with having it “code for me” (with the caveat that I’m going to end up re-reading it all anyway!).

But I’m still not sure whether, on the balance of things, GenAI represents a net benefit. Time will tell, I suppose.

And like Paul, I’m sick of “the pervasive, all encompassing nature of it”. I never needed AI integration in NOTEPAD.EXE before, and I still don’t need it now! Not everything needs to be about AI, just because it’s the latest hip thing. Remember when everybody was talking about how everything belonged on the blockchain (it doesn’t): same energy. Except LLMs are more-accessible to more-people, thanks to things like ChatGPT, so the signal-to-noise ratio in the hype machine is much, much worse. Nowadays, you actually have to put significant effort in if you want to find the genuinely useful things that AI does, amongst all of the marketing crap that surrounds it.

Footnotes

1 You’ll note that I specifically don’t make use of it for writing any content for this blog: the hallucinations and factual errors you see here are genuine organic human mistakes!

It’s not cheating if you write the video game solver yourself

I didn’t know how to solve the puzzle, but I did know how to write a computer program to solve it for me. That would probably be even more fun, and I could argue that it didn’t actually count as cheating. I didn’t want the solution to reveal itself to me before I’d had a chance to systematically hunt it down, so I dived across the room to turn off the console.

I wanted to have a shower but I was worried that if I did then inspiration might strike and I might figure out the answer myself. So I ran upstairs to my office, hit my Pomodoro timer, scrolled Twitter to warm up my brain, took a break, made a JIRA board, Slacked my wife a status update, no reply, she must be out of signal. Finally I fired up my preferred assistive professional tool. Time to have a real vacation.

Obviously, I’d be a fan of playing your single-player video game any damn way you like. But beyond that, I see Robert’s point: some puzzles are just as much (or more) fun to solve by writing a program as they are to solve by hand. Digital jigsaws would be an obvious and ongoing example, for me. I’ve also enjoyed “solving” Hangman (not strictly a single-player game, but my “solution” isn’t really applicable to human opponents anyway) and Mastermind (which is single-player, in my personal opinion – fight me! – because the codemaster doesn’t technically have anything “real” to do; their only purpose is to hold secret information). And I never got into Sudoku, principally because I found implementing a solver much more fun than being a solver.
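
A Mastermind solver of the kind I mean can be surprisingly small. Here’s a toy sketch of my own (not from Robert’s post): keep every code that’s still consistent with all the feedback so far, and always guess one of the survivors. Four pegs, six colours, numbered 0–5.

```javascript
// Score a guess against a secret: "exact" pegs are the right colour in the
// right position; "near" pegs are the right colour in the wrong position.
function score(secret, guess) {
  let exact = 0;
  const secretRest = [], guessRest = [];
  for (let i = 0; i < secret.length; i++) {
    if (secret[i] === guess[i]) exact++;
    else { secretRest.push(secret[i]); guessRest.push(guess[i]); }
  }
  let near = 0;
  for (const g of guessRest) {
    const j = secretRest.indexOf(g);
    if (j !== -1) { near++; secretRest.splice(j, 1); }
  }
  return { exact, near };
}

// Enumerate every possible code (6^4 = 1296 of them by default).
function allCodes(pegs = 4, colours = 6) {
  if (pegs === 0) return [[]];
  return allCodes(pegs - 1, colours).flatMap(tail =>
    Array.from({ length: colours }, (_, c) => [c, ...tail])
  );
}

// Solve by candidate elimination; returns the sequence of guesses made.
function solve(secret) {
  let candidates = allCodes();
  const guesses = [];
  for (;;) {
    const guess = candidates[0]; // naive strategy: first surviving candidate
    guesses.push(guess);
    const feedback = score(secret, guess);
    if (feedback.exact === secret.length) return guesses;
    // Discard every code that couldn't have produced this feedback.
    candidates = candidates.filter(c => {
      const s = score(c, guess);
      return s.exact === feedback.exact && s.near === feedback.near;
    });
  }
}
```

Note that the codemaster’s entire job is the `score` function, which is purely mechanical – which is rather my point about who gets to have the fun.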

Anyway: Robert’s post shows that he’s got too much time on his hands when his wife and kids are away, and it’s pretty fun.

It is as if you were on your phone

Being on your phone all the time and while also not being on your phone all the time has never been more important.

“It is as if you were on your phone” is a phone-based experience for pretending to be on your phone without needing to be on your phone. All from the comfort of your phone.

Relax and blend in with familiar gestures and realistic human behaviour.

When I tried this fun and experimental game, I was struck by a feeling of deja vu. Was this really new? It felt ever so familiar.

Turns out, it draws a lot of inspiration from its 2016 prequel, It is as if you were playing chess. Which I’d completely forgotten about until just now.

It really is almost as if I were on my phone.

Anyway, It is as if you were on your phone is… well, it’s certainly a faithful simulation of what it would be like to be on my phone. If you saw me, you’d genuinely think that I was on my phone, even though in reality I was just playing It is as if you were on your phone on my phone. That’s how accurate it is.

Give it a go on your phone and see if you agree.

Generative AI use and human agency

5. If you use AI, you are the one who is accountable for whatever you produce with it. You have to be certain that whatever you produced was correct. You cannot ask the system itself to do this. You must either already be expert at the task you are doing so you can recognise good output yourself, or you must check through other, different means the validity of any output.

9. Generative AI produces above average human output, but typically not top human output. If you overuse generative AI you may produce more mediocre output than you are capable of.

I was also tempted to include in 9 as a middle sentence “Note that if you are in an elite context, like attending a university, above average for humanity widely could be below average for your context.”

In this excellent post, Joanna says more-succinctly what I was trying to say in my comments on “AI Is Reshaping Software Engineering — But There’s a Catch…” a few days ago. In my case, I was talking very-specifically about AI as a programmer’s assistant, and Joanna’s points 5 and 9 are absolutely spot on.

Point 5 is a reminder that, as I’ve long said, you can’t trust an AI to do anything that you can’t do for yourself. I sometimes use a GenAI-based programming assistant, and I can tell you this – it’s really good for:

  • Fancy autocomplete: I start typing a function name, and it guesses which variables I’m going to be passing into the function, or that I’m going to want to loop through the output, or that I’m going to want to return early if the result is false. And it’s usually right. This is smart, and it saves me keypresses and reduces the embarrassment of mis-spelling a variable name1.
  • Quick reference guide: There was a time when I had all of my PHP DateTimeInterface::format character codes memorised. Now I’d have to look them up. Or I can write a comment (which I should anyway, for the next human) that says something like // @returns String a date in the form: Mon 7th January 2023 and when I get to my date(...) statement the AI will already have worked out that the format is 'D jS F Y' for me. I’ll recognise a valid format when I see it, and I’ll be testing it anyway.
  • Boilerplate: Sometimes I have to work in languages that are… unnecessarily verbose. Rather than writing a stack of setters and getters, or laying out a repetitive tree of HTML elements, or writing a series of data manipulations that are all subtly-different from one another in ways that are obvious once they’ve been explained to you… I can just outsource that and then check it2.
  • Common refactoring practices: “Rewrite this Javascript function so it doesn’t use jQuery any more” is a great example of the kind of request you can throw at an LLM. It’s already ingested, I guess, everything it could find on StackOverflow and Reddit and wherever else people go to bemoan being stuck with jQuery in their legacy codebase. It’s not perfect – just like when it’s boilerplating – and will make stupid mistakes3 but when you’re talking about a big function it can provide a great starting point so long as you keep the original code alongside, too, to ensure it’s not removing any functionality!
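
As a sketch of what that last kind of rewrite involves, here’s a hypothetical menu-toggling example of my own (the document object is passed in as a parameter purely so the snippet can run outside a browser):

```javascript
// Original jQuery version:
//   function openMenu() { $('#menu').addClass('open'); }

// Vanilla rewrite: note that BOTH halves change – $(...) becomes
// document.querySelector(...) AND .addClass(...) becomes .classList.add(...).
function openMenu(doc) {
  doc.querySelector('#menu').classList.add('open');
}

// The half-finished rewrite footnote 3 describes swaps only the selector:
//   document.querySelector('#menu').addClass('open');
// which throws a TypeError, because plain DOM elements have no addClass method.
```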

Other things… not so much. The other day I experimentally tried to have a GenAI help me to boilerplate some unit tests and it really failed at it. It determined pretty quickly, as I had, that to test a particular piece of functionality it needed to mock a function provided by a standard library, but despite nearly a dozen attempts to do so, with copious prompting assistance, it couldn’t come up with a working solution.

Overall, as a result of that experiment, I was less-effective as a developer while working on that unit test than I would have been had I not tried to get AI assistance: once I dived deep into the documentation (and eventually the source code) of the underlying library I was able to come up with a mocking solution that worked, and I can see why the AI failed: it’s quite-possibly never come across anything quite like this particular problem in its training set.

Solving it required a level of creativity and a depth of research that it was simply incapable of, and I’d clearly made a mistake in trying to outsource the problem to it. I was able to work around it because I can solve that problem.

But I know people who’ve used GenAI to program things that they wouldn’t be able to do for themselves, and that scares me. If you don’t understand the code your tool has written, how can you know that it does what you intended? Most developers have a blind spot for testing and will happy-path test their code without noticing if they’ve introduced, say, a security vulnerability owing to their handling of unescaped input or similar… and that’s a problem that gets much, much worse when a “developer” doesn’t even look at the code they deploy.

Security, accessibility, maintainability and performance – among others, I’ve no doubt – are all hard problems that are not made easier when you use an AI to write code that you don’t understand.

Footnotes

1 I’ve 100% had an occasion when I’ve called something $theUserID in one place and then $theUserId in another and not noticed the case difference until I’m debugging and swearing at the computer.

2 I’ve described the experience of using an LLM in this way as being a little like having a very-knowledgeable but very-inexperienced junior developer sat next to me to whom I can pass off the boring tasks, so long as I make sure to check their work because they’re so eager-to-please that they’ll choose to assume they know more than they do if they think it’ll briefly impress you.

3 e.g. switching a selector from $(...) to document.querySelector but then failing to switch the trailing .addClass(...) to .classList.add(...) – you know: like an underexperienced but eager-to-please dev!

AI Is Reshaping Software Engineering — But There’s a Catch…

I don’t believe AI will replace software developers, but it will exponentially boost their productivity. The more I talk to developers, the more I hear the same thing—they’re now accomplishing in half the time what used to take them days.

But there’s a risk… Less experienced developers often take shortcuts, relying on AI to fix bugs, write code, and even test it—without fully understanding what’s happening under the hood. And the less you understand your code, the harder it becomes to debug, operate, and maintain in the long run.

So while AI is a game-changer for developers, junior engineers must ensure they actually develop the foundational skills—otherwise, they’ll struggle when AI can’t do all the heavy lifting.

Comic comparing 'Devs Then' to 'Devs Now'. The 'Devs Then' are illustrated as muscular men, with captions 'Writes code without AI or Stack Overflow', 'Builds entire games in Assembly', 'Crafts mission-critical code fo [sic] Moon landing', and 'Fixes memory leaks by tweaking pointers'. The 'Devs Now' are illustrated with badly-drawn, somewhat-stupid-looking faces and captioned 'Googles how to center a div in 2025?', 'ChatGPT please fix my syntax error', 'Cannot exit vim', and 'Fixes one bug, creates three new ones'.

Eduardo picks up on something I’ve been concerned about too: that the productivity boost afforded to junior developers by AI does not provide them with the necessary experience to be able to continue to advance their skills. GenAI for developers can be a dead end, from a personal development perspective.

That’s a phenomenon not unique to AI, mind. The drive to have more developers be more productive on day one has for many years led to an increase in developers who are hyper-focused on a very specific, narrow technology, to the exclusion even of the fundamentals that underpin it.

When somebody learns how to be a “React developer” without understanding enough about HTTP to explain which bits of data exist on the server-side and which are delivered to the client, for example, they’re at risk of introducing security problems. We see this kind of thing a lot!
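A contrived sketch of that server/client boundary mistake (every name here is invented for illustration): an API handler that serialises a whole server-side record delivers its secrets to every client, while a safer one picks out only the fields the client is entitled to see.

```javascript
// Invented example record: some of these fields must never leave the server.
const userRecord = {
  id: 42,
  displayName: 'Alice',
  email: 'alice@example.com',
  passwordHash: 'bcrypt-hash-goes-here', // server-side only!
  resetToken: 'secret-token',            // server-side only!
};

// Naive handler: everything that "exists on the server-side" gets
// "delivered to the client", secrets and all.
function naiveApiResponse(user) {
  return JSON.stringify(user);
}

// Safer handler: explicitly allowlist the client-visible fields.
function safeApiResponse(user) {
  const { id, displayName } = user;
  return JSON.stringify({ id, displayName });
}
```

A developer who understands that an HTTP response is the boundary between the two worlds reaches for the second version instinctively; one who only knows “React fetches my data” may never notice the difference.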

There’s absolutely nothing wrong with not-knowing-everything, of course (in fact, knowing where the gaps around the edges of your knowledge are and being willing to work to fill them in, over time, is admirable, and everybody should be doing it!). But until they learn, a developer who lacks a comprehension of the fundamentals on which they depend needs to be supported by a team that “fills the gaps” in their knowledge.

AI muddies the water because it appears to fulfil the role of that supportive team. But in reality it’s just regurgitating code synthesised from the fragments it’s read in the past without critically thinking about it. That’s fine if it’s suggesting code that the developer understands, because it’s like… “fancy autocomplete”, which they can accept or reject based on their understanding of the domain. I use AI in exactly this way many times a week. But when people try to use AI to fill the “gaps” at the edge of their knowledge, they neither learn from it nor do they write good code.

I’ve long argued that as an industry, we lack a pedagogical base: we don’t know how to teach people to do what we do (this is evidenced by the relatively high drop-out rate on computer science courses, the popular opinion that one requires a particular way of thinking to be a programmer, and the fact that sometimes people who fail to learn programming through one paradigm are suddenly able to do so when presented with a different one). I suspect that AI will make this problem worse, not better.

Get Ready with Me: Techfluencer Edition

WordPress.com (via YouTube)

WTF did I just watch?

It’s possible I don’t understand social media any more. To be fair, it’s possible that I never did.

This is something between absurd and hilarious. Aside from the 100 year plan (which is fascinating, and I keep meaning to share my thoughts on), I’m not sure what it’s supposed to be advertising. Maybe it’s trying to showcase how cool it is to work with Automattic? (It’s not… exactly like it’s depicted in the video. But I’d be lying if I said that fewer than 50% of my meetings this week have included a discussion of snack foods, so maybe we are, I guess, at least a little eccentric.)

I think I understand what it’s parodying. And that’s fun. But… wow. You don’t see many videos like this attached to a corporate YouTube account, do you? Kudos for keeping the Internet fun and weird, WordPress.com.

Frustrating At Scale

Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.

This. This is what I needed to be reminded of, today.

When somebody complains that the Web is hard to scale, they’re already working against the grain of the Web.

At its simplest – and the way we used to use it – a website is a collection of .html files, one of which might have a special name so the webserver knows to put it first.

Writing HTML is punk rock. A “platform” is the tool of the establishment.

Thanks, Mia.

Queers make the world a safer place

A straight white guy friend was complaining about not being able to find any gaming groups for WoW that weren’t full of MAGA assholes. He said he keeps joining guilds with older (60+) casual gamers like himself because he can’t keep up with the kids, and he’ll start to make friends, but then they will reveal themselves to be Trump-lovers. He asked, “What am I doing wrong?”

This was about 3 months ago. Now, he tells me he joined a guild labeled as LGBTQ-friendly and has made several new cool friends.

He mentioned that there are many women and PoC in the group too, and “Everyone’s so nice on dungeon runs, telling people they did a good job and being supportive, sharing loot.”

I didn’t tell him that this is what the whole world would be like without patriarchal toxic masculinity, because I think he figured it out himself.

I’ve plucked out the highlights, but the deeper moral is in the full anecdote. I especially loved “…furries are like lichen…”. 😆

Bloomberg’s Terms

While perfectly legal, it is remarkable that to read a Bloomberg article, you must first agree to binding arbitration and waive your class action rights.

A pop-up notification indicating that the terms have been updated. The message states that by accepting, users agree to the updated Terms of Service, which includes an arbitration provision and class action waiver. It also mentions the processing of user information as described in the Privacy Policy, including potential sharing with third parties about the use of Bloomberg.com. A button labeled "Accept" is provided for users to acknowledge the terms.

I don’t often see dialog boxes like this one. In fact, if I go to the URL of a Bloomberg.com article, I don’t see any popups: nothing about privacy, nothing about cookies, nothing about terms of service, nothing about only being allowed to read a limited number of articles without signing up for an account. I just… get… the article.

The reason for this is, most-likely, because my web browser is configured, among other things, to:

  • Block all third-party Javascript (thanks, uBlock Origin’s “advanced mode”), except on domains where they’re explicitly allowed (and even then with a few exceptions: thanks, Ghostery),
  • Delete all cookies 30 seconds after I navigate away from a domain, except for domains that are explicitly greylisted/allowlisted (thanks, Cookie-AutoDelete), and
  • Resist other fingerprinting methods as best I can (thanks, Enhanced Tracking Protection).

But here’s the thing I’ve always wondered: if I don’t get to see a “do you accept our terms and conditions?” popup, is it still enforceable?

Obviously, one could argue that by using my browser in a non-standard configuration that explicitly results in the non-appearance of “consent” popups that I’m deliberately turning a blind eye to the popups and accepting them by my continued use of their services1. Like: if I pour a McDonalds coffee on my lap having deliberately worn blinkers that prevent me reading the warning that it’s hot, it’s not McDonalds’ fault that I chose to ignore their helpful legally-recommended printed warning on the cup, right?2

But I’d counter that if a site chooses to rely on Javascript hosted by a third party in order to ask for consent, but doesn’t rely on that same third-party in order to provide the service upon which consent is predicated, then they’re setting themselves up to fail!

The very nature of the way the Internet works means that you simply can’t rely on the user successfully receiving content from a CDN. There are all kinds of reasons my browser might not get the Javascript required to show the consent dialog, and many of them are completely outside of the visitor’s control: maybe there was a network fault, or CDN downtime, or my browser’s JS engine was buggy, or I have a disability and the technologies I use to mitigate its impact on my Web browsing experience means that the dialog isn’t read out to me. In any of these cases, a site visitor using an unmodified, vanilla, stock web browser might visit a Bloomberg article and read it without ever being asked to agree to their terms and conditions.

Would that be enforceable? I hope you’ll agree that the answer is: no, obviously not!

It’s reasonably easy for a site to ensure that consent is obtained before providing services based on that consent. Simply do the processing server-side, ask for whatever agreement you need, and only then provide services. Bloomberg, like many others, choose not to do this because… well, it’s probably a combination of developer laziness and search engine optimisation. But my gut feeling says that if it came to court, any sensible judge would ask them to prove that the consent dialog was definitely viewed by and clicked on by the user, and from the looks of things: that’s simply not something they’d be able to do!
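That server-side approach could be as simple as this minimal sketch (the cookie name and return values are my own invention; a real implementation would record the agreement against an account or session):

```javascript
// Decide server-side what to render: no third-party JavaScript needs to
// load successfully for the user to be asked for consent.
function routeRequest(cookies) {
  if (cookies['terms_accepted_v1'] === 'yes') {
    // The server has proof of agreement on file, so serve the article.
    return 'serve-article';
  }
  // Otherwise render the consent form itself: consent is obtained
  // before, not after, the service that's predicated on it.
  return 'serve-consent-page';
}
```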

tl;dr: if you want to fight with Bloomberg and don’t want to go through their arbitration, simply say you never saw or never agreed to their terms and conditions – they can’t prove that you did, so they’re probably unenforceable (assuming you didn’t register for an account with them or anything, of course). This same recommendation applies to many, many other websites.

Footnotes

1 I’m confident that if it came down to it, Bloomberg’s lawyers would argue exactly this.

2 I see the plaintiff’s argument that the cups were flimsy and obviously her injuries were tragic, of course. But man, the legal fallout and those “contents are hot” warnings remain funny to this day.
