Deprecate React

I’m keeping an eye out for my next career move (want to hire me?). Off the back of that I’ve been brushing up on the kinds of skills that I might be asked to showcase in any kind of “tech test”.

Not the kind of stuff I can do with one hand tied behind my back1, but the things for which I’d enjoy feeling a little more-confident2. Stuff that’s on my CV that I’ve done and can do, but where I’d like to check before somebody asks me about it in an interview.

React? Sure, I can do that…

LinkedIn, GlassDoor, and bits of the Fediverse are a gold mine for the kinds of things that people are being asked to demonstrate in tech tests these days. Like this post:

On LinkedIn, Avantika Raj shares a coding question asked during their React Developer interview with Volkswagon Software Solutions. It reads: Create a traffic light component with green, yellow, and red lights. On clicking a button, the light should change. Initially, it should show green. After 2 minutes, it should automatically switch to red for 30 seconds, then yellow for 10 seconds, and repeat this cycle continuously.
I’d describe myself as a “stack-agnostic senior/principal full-stack/backend web developer/security engineer”3, and so this question – which feels like it’s a filter for a junior developer with a React specialisation – isn’t really my wheelhouse. Which makes it a perfect excuse for an hour of playing about with React.

My recent React experience has mostly involved Gutenberg blocks and WordPress theme components. This seemed like an excuse to check that I can wrangle a non-WordPress React stack.

Animated GIF showing traffic lights changing through their phases on-demand or on a toggleable timer.
This isn’t particularly sophisticated. I added customisable durations for each light, but otherwise it’s pretty basic.

Half an hour later, I’d proven to myself that yes, I could throw together a fresh application with React DOM and implement some React components, pass state around and whatnot.
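If you’re curious, the heart of such a component is only a handful of lines. Here’s a minimal sketch of the kind of thing the question is fishing for – not my actual implementation, which also has configurable durations and a toggleable timer:

import { useEffect, useState } from 'react';

// Phase order and timings straight from the question: green for 2 minutes,
// red for 30 seconds, yellow for 10 seconds, then repeat.
const PHASES = [
  { colour: 'green',  duration: 120000 },
  { colour: 'red',    duration: 30000  },
  { colour: 'yellow', duration: 10000  },
];

export default function TrafficLight() {
  const [phase, setPhase] = useState(0);
  const advance = () => setPhase(p => (p + 1) % PHASES.length);

  // Move to the next phase automatically when the current one's time is up.
  useEffect(() => {
    const timer = setTimeout(advance, PHASES[phase].duration);
    return () => clearTimeout(timer);
  }, [phase]);

  return (
    <div className="traffic-light">
      {PHASES.map((p, i) => (
        <div key={p.colour} className={`light ${p.colour} ${i === phase ? 'on' : 'off'}`} />
      ))}
      <button onClick={advance}>Change light</button>
    </div>
  );
}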

Time to move on to the next thing, right? That’s what a normal person would do.

But that’s not the kind of person I am.

Let’s reimplement this as Web Components

What I found myself thinking was… man, this is chunky. React is… not the right tool for this job.

(Or, increasingly, any job. But I’ll get back to that.)

A minified production build of my new component and its dependencies came in at 202kB (62.3kB compressed). That feels pretty massive for something that does so-little. So as an experiment, I re-implemented my new React component as a vanilla JS Web Component using a custom element. Identical functionality, but no third-party library dependencies. Here’s what I got:

This one’s interactive. Press a button or two!

The Web Component version of this control has no dependency chain and uses no JSX, and so it has no transpilation step: the source version is production-ready. You could minify it, but modern HTTP compression makes the impact of that negligible anyway: the whole thing weighs in at 19.5kB (5.2kB compressed) without minification.
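In case you’re wondering what the vanilla version’s shape looks like, here’s a stripped-down sketch of the approach – again, not my actual component, which exposes the durations for customisation:

// A custom element: same phase order and timings as the React sketch above,
// but with no dependencies and no build step.
class TrafficLight extends HTMLElement {
  static phases = [
    { colour: 'green',  duration: 120000 },
    { colour: 'red',    duration: 30000  },
    { colour: 'yellow', duration: 10000  },
  ];

  connectedCallback() {
    this.phase = 0;
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        .light { width: 2em; height: 2em; border-radius: 50%; background: #333; }
        .light.on.green  { background: green; }
        .light.on.red    { background: red; }
        .light.on.yellow { background: gold; }
      </style>
      <div class="light green"></div>
      <div class="light red"></div>
      <div class="light yellow"></div>
      <button>Change light</button>
    `;
    this.shadowRoot.querySelector('button').addEventListener('click', () => this.advance());
    this.render();
  }

  advance() {
    this.phase = (this.phase + 1) % TrafficLight.phases.length;
    this.render();
  }

  render() {
    clearTimeout(this.timer);
    const current = TrafficLight.phases[this.phase];
    this.shadowRoot.querySelectorAll('.light').forEach(light =>
      light.classList.toggle('on', light.classList.contains(current.colour))
    );
    // Schedule the automatic change to the next phase.
    this.timer = setTimeout(() => this.advance(), current.duration);
  }

  disconnectedCallback() { clearTimeout(this.timer); }
}

customElements.define('traffic-light', TrafficLight);

Drop a <traffic-light></traffic-light> element into your HTML and you’re done: no bundler, no transpiler, no node_modules.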

And while I appreciate of course that there’s much more to JavaScript complexity and performance than file sizes… and beyond that I appreciate that there’s a lot more to making great components than the resulting bundle size… it’s hard to argue that delivering the same functionality (and less fragility) in a twelfth of the payload isn’t significant.

Composite screenshots showing the Chrome performance metrics and Network download sizes for the React and Web Components versions of my traffic lights. LCP - React 0.06s, Web Components 0.04s. INP - React 16ms, Web Components 8ms. Transferred - React 62.3kB (compressed), 202kB (uncompressed), in 37ms, Web Components 5.2kB (compressed), 19.5kB (uncompressed), in 22ms.
By any metric you like, the Web Components version outperforms the React version of my traffic light component. And while it’s a vastly-simplified example, it scales. Performance is a UX concern, and if you favour “what we’re familiar with” over “what’s best for our users”, that has to be a conscious choice.

But there’s a bigger point here:

React is the new jQuery

I’m alarmed by the fact that I’m still seeing job ads for “React developers”, with little more requirement than an ability to “implement things in React”.

From where I’m sitting, React is the new jQuery. It:

  • Was originally built to work around missing or underdeveloped JavaScript functionality
    • e.g. React’s components prior to Web Components
    • e.g. jQuery’s manipulation prior to document.querySelectorAll
  • Continued to be valuable as a polyfill and as standard middleware while that functionality became commonplace
  • No longer provides enough value to be worth using in a new project
    • And yet somehow gets added “out of habit” for many years

If you’ve got a legacy codebase with lots of React in it, you’re still going to need React for a while. Just like how you’re likely to continue to need jQuery for a while until you can tidy up all those edge-cases where you’re using it.

(You might even be locked-in to using both React and jQuery for some time, if say you’ve got a plugin architecture that demands backwards-compatibility: I’m looking at you, WordPress!)

But just as you’re already (hopefully) working to slowly extricate your codebases from any now-unnecessary jQuery dependencies they have… you should be working on an exit plan for your React code, too. It’s done its time; it’s served its purpose: now it’s just a redundant dependency making your bundles cumbersome and harder to debug.

Everything React gives you on the client-side – components, state/hooks, routing4, etc. – is possible (and easy) in modern JavaScript supported in all major browsers. And if you still really want an abstraction layer, there are plenty of options (and they’re all a lot lighter than React!).
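To take routing as an example (and remembering footnote 4: I’d rather you did it on the server anyway), a bare-bones sketch using the History API needs no library at all:

// Intercept local link clicks, update the address bar, and render without a
// full page load. The render function here is a placeholder for your own.
const render = path => {
  document.querySelector('main').textContent = `You are now at ${path}`;
};

document.addEventListener('click', event => {
  const link = event.target.closest('a[href^="/"]');
  if (!link) return;
  event.preventDefault();
  history.pushState({}, '', link.href);
  render(new URL(link.href).pathname);
});

// Keep the back/forward buttons working too.
window.addEventListener('popstate', () => render(location.pathname));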

The bottom line is, I suppose…

You shouldn’t be hiring “React developers”!

If you’re building a brand new project, you shouldn’t be using React. It should be considered deprecated.

If you’ve got an existing product that depends on React… you should be thinking about how you’ll phase it out over time. And with that in mind, you want to be hiring versatile developers. They’ll benefit from some experience with React, sure, but unless they can also implement for the modern Web of tomorrow, they’ll just code you deeper into your dependency on React.

It’s time you started recruiting “Front-End Developers (React experience a plus)”. Show some long-term thinking! Or else the Web is going to move on without you, and in 5-10 years you’ll struggle to recruit people to maintain your crumbling stack.

You can download all my code and try it for yourself, if you like. The README has lots more information/spicy rants, and the whole thing’s under a public domain license so you can do whatever you like with it.

Footnotes

1 Exploiting or patching an injection vulnerability, optimising an SQL query, implementing a WordPress plugin, constructing a CircleCI buildchain, expanding test coverage over a Rubygem, performing an accessibility audit of a web application, extending a set of high-performance PHP-backed REST endpoints, etc. are all – I’d hope! – firmly in the “hold my beer” category of tech test skills I’d ace, for example. But no two tech stacks are exactly alike, so it’s possible that I’ll want to brush up on some of the adjacent technologies that are in the “I can do it, but I might need to hit the docs pages” category.

2 It’s actually refreshing to be learning and revising! I’ve long held that I should learn a new programming language or framework every year or two to stay fresh and to keep abreast of what’s going on in the world. I can’t keep up with every single new front-end JavaScript framework any more (and I’m not sure I’d want to!). But in the same way as being multilingual helps unlock pathways to more-creative thought and expression even if you’re only working in your native tongue, learning new programming languages gives you a more-objective appreciation of the strengths and weaknesses of what you use day-to-day. tl;dr: if you haven’t written anything in a “new” (to you) programming language for over a year, you probably should.

3 What do job titles even mean, any more? 😂 A problem I increasingly find is that I don’t know how to describe what I do, because with 25+ years of building stuff for the Web, I can use (and have used!) most of the popular stacks, and could probably learn a new one without too much difficulty. Did I mention I’m thinking about my next role? If you think we might “click”, I’d love to hear from you…

4 Though if you’re doing routing only on the client-side, I already hate you. Consider for example the SlimJS documentation which becomes completely unusable if a third-party JavaScript CDN fails: that’s pretty fragile!


Geocities Live

I used Geocities.live to transform the DanQ.me homepage into “Geocities style” and I’ve got to say… I don’t hate what it came up with

90s-style-homepage version of DanQ.me, as generated by geocities.live. It features patterned backgrounds, Comic Sans, gaudy colours, and tables.
Sure, it’s gaudy, but it’s got a few things going for it, too.

Let’s put aside for the moment that you can already send my website back into “90s mode” and dive into this take on how I could present myself in a particularly old-school way. There’s a few things I particularly love:

  • It’s actually quite lightweight: ignore all the animated GIFs (which are small anyway) and you’ll see that, compared to my current homepage, there are very few images. I’ve been thinking about going in a direction of fewer images on the homepage anyway, so it’s interesting to see how it comes together in this unusual context.
  • The page sections are solidly distinct: they’re a mishmash of different widths, some of which exhibit a horrendous lack of responsivity, but it’s pretty clear where the “recent articles” ends and the “other recent stuff” begins.
  • The post kinds are very visible: putting the “kind” of a post in its own column makes it really clear whether you’re looking at an article, note, checkin, etc., much more-so than my current blocks do.
Further down the same page, showing the gap between the articles and the other posts, with a subscribe form (complete with marquee!).
Maybe there’s something we can learn from old-style web design? No, I’m serious. Stop laughing.

90s web design was very-much characterised by:

  1. performance – nobody’s going to wait for your digital photos to download on narrowband connections, so you hide them behind descriptive links or tiny thumbnails, and
  2. pushing the boundaries – the pre-CSS era of the Web had limited tools, but creators worked hard to experiment with the creativity that was possible within those limits.

Those actually… aren’t bad values to have today. Sure, we’ve probably learned that animated backgrounds, tables for layout, and mystery meat navigation were horrible for usability and accessibility, but that doesn’t mean that there isn’t still innovation to be done. What comes next for the usable Web, I wonder?

Geocities.live interpretation of threerings.org.uk. It's got some significant design similarities.
As soon as you run a second or third website through the tool, its mechanisms for action become somewhat clear and sites start to look “samey”, which is the opposite of what made 90s Geocities great.

The only thing I can fault it on is that it assumes that I’d favour Netscape Navigator: in fact, I was a die-hard Opera-head for most of the nineties and much of the early noughties, finally switching my daily driver to Firefox in 2005.

I certainly used plenty of Netscape and IE at various points, but I wasn’t a fan of the divisions resulting from the browser wars. Back in the day, I always backed the ideals of the “Viewable With Any Browser” movement.


88x31 animated GIF button in the Web 1.0 style, reading "DAN Q". The letter Q is spinning. Best Viewed With Any Browser button, in original (90s) style.

I guess I still do.


WebDX: Does More Mean Better?

Enumerating Web features

The W3C‘s WebDX Community Group this week announced that they’ve reached a milestone with their web-features project. The project is an effort to catalogue browser support for Web features, to establish an understanding of the baseline feature set that developers can rely on.

That’s great, and I’m in favour of the initiative. But I wonder about graphs like this one:

Stacked area chart illustrating the evolution of the web platform in terms of number of features in core browsers (2003-2025)

The graph shows the increase over time in the number of features available on the Web, broken down by how widely they’re implemented across the browser corpus.

The shape of that graph sort-of implies that… more features is better. And I’m not entirely convinced that’s true.

Does “more” imply “better”?

Don’t get me wrong, there are lots of Web features that are excellent. The kinds of things where it’s hard to remember how I did without them. CSS grids are for many purposes an improvement on flexboxes; flexboxes were massively better than floats; and floats were an enormous leap forwards compared to using tables for layout! The “new” HTML5 input types are wonderful, as are the revolutionary native elements for video, audio, etc. I’ll even sing the praises of some of the new JavaScript APIs (geolocation, web share, and push are particular highlights).

But it’s not some kind of universal truth that “more features means better developer experience”. It’s already the case, for example, that getting started as a Web developer is harder than it once was, and I’d argue harder than it ought to be. There exist complexities nowadays that are barriers to entry. Like the places where the promise of a progressively-enhanced Web has failed (they’re rare, but they exist). Or the sheer plethora of features that come with caveats to their use that simply must be learned (yes, you need a <meta name="viewport">; no, you can’t rely on JS to produce content).

Meanwhile, there are technologies that were standardised, and that we did need, but that never took off. The <keygen> element never got implemented into the then-dominant Internet Explorer (there were other implementation problems too, but this one’s the killer). This made it functionally useless, which meant that its standard never evolved and grew. As a result, its implementation in other browsers stagnated and it was eventually deprecated. Had it been implemented properly and iterated on, we could’ve had something like WebAuthn over a decade earlier.

Which I guess goes to show that “more features is better” is only true if they’re the right features. Perhaps there’s some way of tracking the changing landscape of developer experience on the Web that doesn’t simply enumerate a baseline of widely-available features? I don’t know what it is, though!

A simple web

Mostly, the Web worked fine when it was simpler. And while some of the enhancements we’ve seen over the decades are indisputably an advancement, there are also plenty of places where we’ve let new technologies lead us astray. Third-party cookies appeared as a naive consequence of first-party ones, but came to be used to undermine everybody’s privacy. Dynamic DOM manipulation started out as a clever idea to help with things like form validation and now a significant number of websites can’t even show their images – or sometimes their text – unless their JavaScript code gets downloaded and interpreted successfully.

Simulated screenshot of this article but in Medium, with three annoying popups covering much of the content.
Were you reading this article on Medium, you’d have downloaded ~5MB of data including 48 JS files and had 7 cookies set, just so you could… have most of the text covered with popovers? (for comparison, reading it here takes about half a megabyte and the cookies are optional delicious)

A blog post, news article, or even an eCommerce site or social networking platform doesn’t need the vast majority of the Web’s “new” features. Those features are important for some Web applications, but most of the time, we don’t need them. But somehow they end up being used anyway.

Whether or not the use of unnecessary new Web features is a net positive to developer experience is debatable. But it’s certainly not often to the benefit of user experience. And that’s what I care about.

This blog post, of course, can be accessed with minimal features: it’s even available over ultra-lightweight Gemini at gemini://danq.me/posts/webdx-does-more-mean-better/, and I’ve also written it as plain text on my plain text blog (did you know about that?).


Dan Q 88×31

My love of the yesterweb forced me to teach myself just-enough Blender to make an animation for a stupid thing: an 88×31 button representing “me” (and, I suppose, my blog, whenever I next end up redesigning its theme).

88x31 animated GIF button in the Web 1.0 style, reading "DAN Q". The letter Q is spinning.

The same image, but expanded so it's larger.


Reply to Vika, re: Content-Security-Policy

This is a reply to a post published elsewhere. Its content might be duplicated as a traditional comment at the original source.

Vika said:

Had a fight with the Content-Security-Policy header today. Turns out, I won, but not without sacrifices.
Apparently I can’t just insert <style> tags into my posts anymore, because otherwise I’d have to somehow either put nonces on them, or hash their content (which would be more preferrable, because that way it remains static).

I could probably do the latter by rewriting HTML at publish-time, but I’d need to hook into my Markdown parser and process HTML for that, and, well, that’s really complicated, isn’t it? (It probably is no harder than searching for Webmention links, and I’m overthinking it.)

I’ve had this exact same battle.

Obviously the intended way to use nonces in a Content-Security-Policy is to have the nonce generated, injected, and served in a single operation. So in PHP, perhaps, you might do something like this:

<?php
  $nonce = bin2hex(random_bytes(16));
  header("Content-Security-Policy: script-src 'nonce-$nonce'");
?>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>PHP CSP Nonce Test</title>
</head>
<body>
  <h1>PHP CSP Nonce Test</h1>
  <p>
    JavaScript did not run.
  </p>

  <!-- This JS has a valid nonce: -->
  <script nonce="<?php echo $nonce; ?>">
    document.querySelector('p').textContent = 'JavaScript ran successfully.';
  </script>

  <!-- This JS does not: -->
  <script nonce="wrong-nonce">
    alert('The bad guys won!');
  </script>
</body>
</html>
Viewing this page in a browser (with Javascript enabled) should show the text “JavaScript ran successfully.”, but should not show an alertbox containing the text “The bad guys won!”.

But for folks like me – and you too, Vika, from the sounds of things – who serve most of their pages, most of the time, from the cache or from static HTML files… and who add the CSP header using webserver configuration… this approach just doesn’t work.

I experimented with a few solutions:

  • A long-lived nonce that rotates.
    CSP allows you to specify multiple nonces, so I considered having a rotating nonce that was applied to pages (which were then cached for a period) and delivered by the header… and then a few hours later a new nonce would be generated and used for future page generations and appended to the header… and after the cache expiry time the oldest nonces were rotated-out of the header and became invalid.
  • Dynamic nonce injection.
    I experimented with having the webserver parse pages and add nonces: randomly generating a nonce, putting it in the header, and then basically doing a s/<script/<script nonce="..."/ to search-and-replace it in.

Both of these are terrible solutions. The first one leaves a window of, in my case, about 24 hours during which a successfully-injected script can be executed. The second one effectively allowlists all scripts, regardless of their provenance. I realised that what I was doing was security theatre: seeking to boost my A-rating to an A+-rating on SecurityHeaders.com without actually improving security at all.

But the second approach gave me an idea. I could have a server-side secret that gets search-replaced out. E.g. if I “signed” all of my legitimate scripts with something like <script nonce="dans-secret-key-goes-here" ...> then I could replace s/dans-secret-key-goes-here/actual-nonce-goes-here/ and thus have the best of both worlds: static, cacheable pages, and actual untamperable nonces. So long as I took care to ensure that the pages were never delivered to anybody with the secret key still intact, I’d be sorted!
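A rough sketch of that idea in PHP – the placeholder string and cache path are purely illustrative; the important bit is that the placeholder never escapes to a visitor:

<?php
  // Swap the server-side placeholder for a fresh per-request nonce before
  // serving the cached/static page.
  $nonce = bin2hex(random_bytes(16));
  header("Content-Security-Policy: script-src 'nonce-$nonce'");

  $page = file_get_contents('/path/to/cache/page.html');
  echo str_replace('dans-secret-key-goes-here', $nonce, $page);
?>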

Alternatively, I was looking into whether Caddy can do something like mod_asis does for Apache: that is, serve a file “as is”, with headers included in the file. That way, I could have the CSP header generated with the page and then saved into the cache, so it’s delivered with the same nonce every time… until the page changes. I’d love more webservers to have an “as is” mode, but I appreciate that might be a big ask (Apache’s mechanism, I suspect, exploits the fact that HTTP/1.0 and HTTP/1.1 literally send headers, followed by two CRLFs, then content… but that’s not what happens in HTTP/2+).

So yeah, I’ll probably do a server-side-secret approach, down the line. Maybe that’ll work for you, too.

WordPress to ClassicPress

As I mentioned in my recent Blog Questions Challenge, I recently switched my blog from WordPress, which it had been running on for over 20 years of its 26 year history, to ClassicPress.1 I’m aware that I’m not the only person for whom ClassicPress might be a better fit than WordPress2, so I figured I should share the process by which I undertook the change.

Switching from WordPress to ClassicPress

Switching from WordPress to ClassicPress should be a non-destructive, 100% reversible process, but (even though I’ve got solid backups) I wasn’t ready to trust that, so I decided to operate on a copy of my site. I’m glad I did, because there were a couple of teething issues I needed to tackle before I could launch.

1. Duplicating the site

I took a simple approach to duplicating the site: (1) I copied the site directory, and (2) I copied the database, and (3) I set up a new subdomain to use for testing. Here’s how I did each step:

1.1. Copying the site directory

This should’ve been simple, but a du -sh revealed that my /wp-content/uploads directory is massive (I should look into that) and I didn’t want to clone it. And I didn’t want or need to clone my /wp-content/cache directory either. So I ran:

  1. rsync -av --exclude=wp-content ./old-site-directory/ ./new-site-directory/ to copy everything except wp-content, and then
  2. rsync -av --exclude=uploads --exclude=cache ./old-site-directory/wp-content/ ./new-site-directory/wp-content/ to copy wp-content except the uploads and cache subdirectories, and then finally
  3. ln -s ./old-site-directory/wp-content/uploads ./new-site-directory/wp-content/uploads to symlink the uploads directory, sharing it between the two sites

1.2. Copying the database

I just piped mysqldump into mysql to clone from one database to the other:

mysqldump -uUSERNAME -p --lock-tables=false old-site-database | mysql -uUSERNAME -p new-site-database

I edited DB_NAME in wp-config.php in the new site’s directory to point it at the new database.

Screenshot from nano, editing wp-config.php. The constant definitions for DB_NAME, DB_USER, and DB_PASSWORD are highlighted with the text 'change these!'.
If you’re going to clone your WordPress site before converting to ClassicPress, you’ll want to be comfortable editing your wp-config.php.

1.3. Setting up a new subdomain

My DNS is already configured with a wildcard to point (almost) all *.danq.me subdomains to this server. I decided to use the name classicpress-testing.danq.me as my temporary/test domain name. To keep any “changes” to my cloned site to a minimum, I overrode the domain name in my wp-config.php rather than in my database, by adding the following lines:

define('WP_HOME','https://classicpress-testing.danq.me');
define('WP_SITEURL','https://classicpress-testing.danq.me');

Because I use Caddy/FrankenPHP as my webserver3, configuration was really easy: I just copied the relevant part of my Caddyfile (actually an include), changed the domain name and the root, and it just worked, even provisioning a Let’s Encrypt SSL certificate for me. Magical4.
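For illustration, the shape of such a Caddyfile fragment needn’t be much more than this (the root path here is hypothetical, and FrankenPHP’s php_server directive does the heavy lifting of try_files + PHP + static file serving):

classicpress-testing.danq.me {
	root * /var/www/classicpress-testing
	php_server
}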

2. Switching the duplicate to ClassicPress

Now that I had a duplicate copy of my blog running at https://classicpress-testing.danq.me/, it was time to switch it to ClassicPress. I started by switching my wp-admin colour scheme to a different one in my cloned site, so it’d be immediately visually-obvious to me if I’d accidentally switched and was editing the “wrong” site (I also made sure I was logged-out of my primary, live site, so I was confident I wouldn’t break anything while I was experimenting!).

ClassicPress provides a migration plugin which checks for common problems and then switches your site from WordPress to ClassicPress, so I installed it and ran it. It said that everything was okay except for my (custom) theme and my self-built plugins, which it understandably couldn’t check compatibility of. It recommended that I install Twenty Seventeen – the last WordPress default theme to not require the block editor – but I didn’t do so: I was confident that my theme would work anyway… and if it didn’t, I’d want to fix it rather than switch theme!

ClassicPress migration plugin showing a series of green checks: everything's good to go.
I failed to take a screenshot of the actual process, but it looked broadly like this.

And then… it all broke.

3. Fixing what broke

After swiftly doing a safety-check that my live site was still intact, I started trying to work out why my cloned site was broken. Debugging a ClassicPress PHP issue is functionally identical to debugging a similar WordPress issue, for obvious reasons: check the logs, work out what’s broken, realise it’s a plugin, disable that plugin while you investigate further, etc.

ClassicPress reporting 'There has been a critical error on this website.'
Yeah, I should have expected this. And I did.

In my case, the “blocking” issues were:

  • Jetpack: this plugin does not play nicely with ClassicPress, presumably because it fails if it’s unable to register a block. Fortunately, I wasn’t actually using Jetpack for anything other than for VaultPress (which has saved my butt on at least one occasion and whose praises I sing), so I uninstalled Jetpack and installed the standalone plugin version of VaultPress instead, which worked fine.
  • EWWW Image Optimizer: I use this plugin to pregenerate WebP variants of my images, which I then serve using webserver rules. It’s not a complex job, and I should probably integrate the feature into my theme at some point, but for now I use this plugin. Version 8.0.0 of the plugin doesn’t work on ClassicPress 2.3.1, so I used WP-CLI to downgrade to the last version that does (7.7.0) – see the one-liner sketch below this list – and then it worked fine.
  • Dan’s Geocaching Log Reposter: a self-made plugin that copies my logs from geocaching websites stopped working properly, which I think is because ClassicPress is doing a more-aggressive job than WordPress at nonce validation on admin REST endpoints? I put a quick hack into my plugin to work around it, but I’ll need to look into this properly at some point.
  • Some other bits of my stack, e.g. CapsulePress (my Gemini/Spartan/Nex server), have their own copies of my database credentials, because I’ve been too lazy to centralise them into environment variables, and needed updating (but not until live switchover time).
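For reference, that WP-CLI downgrade was a single command; something like this (run from the site’s directory, with the version pinned to the one that worked for me):

wp plugin install ewww-image-optimizer --version=7.7.0 --force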

Everything else worked fine, as far as I’ve determined in the weeks that have followed. My other plugins, which include Advanced Editor Tools (I should probably look into Enriched Editor), Antispam Bee, Google Authenticator, IndieAuth, Post Kinds, Post Snippets, Regenerate Thumbnails, Syndication Links, Webmention, WebSub, and WP-SCSS all “just worked”.

4. Completing the switchover

I ran the two sites in-parallel for a couple of weeks, with the ClassicPress one as a “read only” version (so I didn’t pollute my uploads directory!), but it was pretty unnecessary because it all worked pretty seamlessly, despite my complex stack of custom code. When I wanted to switch for-real, all I needed to do was swap the domain names over in my Caddyfile and edit the wp-config.php of my ClassicPress installation: step 1.3, but in reverse!

If you hadn’t been told5, you probably wouldn’t have even known I’d made a change: I suppress basically all infrastructure-identifying headers from my server output as a matter of course, and ClassicPress and WordPress are functionally-interchangeable from a front-end perspective6.

So what’s the difference?

From my experience, here are the differences I’ve discovered since switching from WordPress to ClassicPress:

The good stuff

  • 😅 ClassicPress has no Gutenberg/block editor. This would absolutely be a showstopper for many people, and that’s fine: I have nothing against the block editor (I use it basically every day elsewhere!), but I’ve never really used it on danq.me and don’t feel the need to change that! My theme, my workflow, and my custom plugins are all geared around the perfectly-good “classic” editor, and so getting a more-lightweight CMS by removing a feature I wasn’t using anyway falls somewhere between neutral and a blessing.
  • The backend is fast again! One of the changes the ClassicPress team have been working on applying to WordPress is to strip out jQuery and other redundancies from the backend, and I love how much faster and lighter my editor interface is as a result. (With caveat; see below!)
  • 🔌 Virtually everything “just works”. With the few exceptions described above, everything works exactly as it does under WordPress. Which is what you’d hope for a fork that’s mostly “WordPress, but without the block editor”, right? But it’s still reassuring (and, for me, an essential feature). There are a few “new” features to do with paging through posts and the media library and they’re fine, I suppose, but not by themselves worth switching for (though it might be nice to backport them into WordPress!).

The bad stuff

  • 🏷️ Adding tags to posts takes a step backwards. A side-effect of dropping jQuery is the partial loss of the autocomplete feature when selecting tags to add to a post. You still get a partial autocomplete, but not after typing a comma: you need to press enter to submit the tag you were writing and then start typing the next one, which frankly sucks. This is because they’re relying on a <datalist>, which isn’t as full-featured as the JavaScript solution WordPress employs. This bugs me almost enough to be a showstopper, but I gather it’s getting fixed in a near-future version.
  • 🗺️ You’re in uncharted territory when things go wrong. One great benefit of WordPress is the side-effects of its ubiquity. If you have a query or a problem you can throw a stone at your favourite search engine and get a million answers… and some of them will even be right! If you have a problem in ClassicPress and it’s not shared with (or you’re not sure if it’s shared with) WordPress… you’re mostly on your own. The forums are good and friendly, but if you want a quick answer to something, you’re likely to have to roll your sleeves up and open some source code. I don’t mind this at all – when I first started using WordPress, this was the case, too! – but it might be a showstopper for some folks.

In summary: I’m enjoying using ClassicPress, even where there are rough edges. For me, 99% of my experience with it is identical to how I used WordPress anyway, it’s relatively lightweight and fast, and it’s easy enough to switch back if I change my mind.

Footnotes

1 It saddens me that I have to keep clarifying this, but I feel like I do: my switch from WordPress to ClassicPress is absolutely nothing to do with any drama in the WordPress space that’s going on right now: in fact, I’d been planning to try it out since before any of the drama appeared. I appreciate that some people making a similar switch, including folks who use this blog post as a guide, might have different motivations to me, and that’s fine too. Personally, I think that ditching an installation of open-source WordPress based on your interpretation of what’s going on in the ecosystem is… short-sighted? But hey: the joy of open source is you can – and should! – do what you want. Anyway: the short of it is – the desire to change from WordPress to ClassicPress was, for me, 100% a technical decision and 0% a political one. And I’ll thank you for leaving any of your drama at the door if you slide into my comments, ta!

2 Matt recently described ClassicPress as “the last decent fork attempt for WordPress”, and I absolutely agree. There’s been a spate of forks and reimplementations recently. I’ve looked into many of them and been… very much underwhelmed. Want my hot take? Sure, here you go: AspirePress is all lofty ideas and no deliverables. FreeWP seems to be the same, but somehow without the lofty ideas. ForkPress is a ghost. Speaking of ghosts, Ghost isn’t a WordPress fork; they have got some cool ideas though. b2evolution is even less a WordPress fork but it’s pretty cool in its own right. I’m not sure what clamPress is trying to achieve but I’ve not given it a serious look. So yeah: ClassicPress is, in my mind, the only WordPress fork even worth consideration at this point, and as I describe in this blog post: it’s not for everybody.

3 I switched from Nginx over the winter and it’s been just magical: I really love Caddy’s minimal approach to production configuration. The only thing I’ve been able to fault it on is that it’s not capable of setting up client-side SSL certificate authentication on a path, only on an entire domain, which meant I needed to reimplement the authentication mechanism I use on a small part of my (non-blog) internal infrastructure.

4 To be fair, it wouldn’t have been hard if I’d still been using Nginx, because I’d set up Certbot to use DNS-based verification to issue me wildcard SSL certificates. But doing this in Caddy still felt magical.

5 And assuming you don’t religiously check my colophon page.

6 Indeed, I wouldn’t have considered a switch to ClassicPress in the first place if it wasn’t a closely-aligned-enough fork that I retained the ability to flip-flop between the two to my heart’s content! I’ve loved WordPress for over two decades; that’s not going to change any time soon… and if e.g. ClassicPress ceased tracking WordPress releases and the fork diverged too far for my comfort, I’d probably switch back to regular old WordPress!


Frustrating At Scale

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.

This. This is what I needed to be reminded, today.

When somebody complains that the Web is hard to scale, they’re already working against the grain of the Web.

At its simplest – and the way we used to use it – a website is a collection of .html files, one of which might have a special name so the webserver knows to put it first.

Writing HTML is punk rock. A “platform” is the tool of the establishment.

Thanks, Mia.

Bloomberg’s Terms

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

While perfectly legal, it is remarkable that to read a Bloomberg article, you must first agree to binding arbitration and waive your class action rights.

A pop-up notification indicating that the terms have been updated. The message states that by accepting, users agree to the updated Terms of Service, which includes an arbitration provision and class action waiver. It also mentions the processing of user information as described in the Privacy Policy, including potential sharing with third parties about the use of Bloomberg.com. A button labeled "Accept" is provided for users to acknowledge the terms.

I don’t often see dialog boxes like this one. In fact, if I go to the URL of a Bloomberg.com article, I don’t see any popups: nothing about privacy, nothing about cookies, nothing about terms of service, nothing about only being allowed to read a limited number of articles without signing up for an account. I just… get… the article.

The reason for this is, most-likely, because my web browser is configured, among other things, to:

  • Block all third-party Javascript (thanks, uBlock Origin‘s “advanced mode”), except on domains where they’re explicitly allowed (and even then with a few exceptions: thanks, Ghostery),
  • Delete all cookies 30 seconds after I navigate away from a domain, except for domains that are explicitly greylisted/allowlisted (thanks, Cookie-AutoDelete), and
  • Resist other fingerprinting methods as best I can (thanks, Enhanced Tracking Protection).

But here’s the thing I’ve always wondered: if I don’t get to see a “do you accept our terms and conditions?” popup, is it still enforceable?

Obviously, one could argue that by using my browser in a non-standard configuration that explicitly results in the non-appearance of “consent” popups that I’m deliberately turning a blind eye to the popups and accepting them by my continued use of their services1. Like: if I pour a McDonalds coffee on my lap having deliberately worn blinkers that prevent me reading the warning that it’s hot, it’s not McDonalds’ fault that I chose to ignore their helpful legally-recommended printed warning on the cup, right?2

But I’d counter that if a site chooses to rely on Javascript hosted by a third party in order to ask for consent, but doesn’t rely on that same third-party in order to provide the service upon which consent is predicated, then they’re setting themselves up to fail!

The very nature of the way the Internet works means that you simply can’t rely on the user successfully receiving content from a CDN. There are all kinds of reasons my browser might not get the Javascript required to show the consent dialog, and many of them are completely outside of the visitor’s control: maybe there was a network fault, or CDN downtime, or my browser’s JS engine was buggy, or I have a disability and the technologies I use to mitigate its impact on my Web browsing experience means that the dialog isn’t read out to me. In any of these cases, a site visitor using an unmodified, vanilla, stock web browser might visit a Bloomberg article and read it without ever being asked to agree to their terms and conditions.

Would that be enforceable? I hope you’ll agree that the answer is: no, obviously not!

It’s reasonably easy for a site to ensure that consent is obtained before providing services based on that consent. Simply do the processing server-side, ask for whatever agreement you need, and only then provide services. Bloomberg, like many others, choose not to do this because… well, it’s probably a combination of developer laziness and search engine optimisation. But my gut feeling says that if it came to court, any sensible judge would ask them to prove that the consent dialog was definitely viewed by and clicked on by the user, and from the looks of things: that’s simply not something they’d be able to do!

tl;dr: if you want to fight with Bloomberg and don’t want to go through their arbitration, simply say you never saw or never agreed to their terms and conditions – they can’t prove that you did, so they’re probably unenforceable (assuming you didn’t register for an account with them or anything, of course). This same recommendation applies to many, many other websites.

Footnotes

1 I’m confident that if it came down to it, Bloomberg’s lawyers would argue exactly this.

2 I see the plaintiff’s argument that the cups were flimsy and obviously her injuries were tragic, of course. But man, the legal fallout and those “contents are hot” warnings remain funny to this day.


Dark Patterns Detective

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This was fun. A simple interactive demonstration of ten different dark patterns you’ve probably experienced online. I might use it as a vehicle for talking about such deceptive tactics with our eldest child, who’s now coming to an age where she starts to see these kinds of things.

Screenshot showing a basket containing a 'premium package' including several optional add-ons, with no obvious way to remove those add-ons.

After I finished exploring the dark patterns shown, I decided to find out more about the author and clicked the link in the footer, expecting to be taken to their personal web site. But instead, ironically, I came to a web page on a highly-recognisable site that’s infamous for its dark patterns: 🤣

LinkedIn screenshot showing not one but two popups to try to encourage me to log in to see more content.


Caddy

I’m pretty impressed with running WordPress on Caddy so far.

It took a little jiggerypokery to configure it with an equivalent of the Nginx configuration I use for DanQ.me. But off the back of it I get the capability for HTTP/3, 103 Early Hints, and built-in “batteries included” infrastructure for things like certificate renewal and log rotation.

Browser network debugger showing danq.me being served over protocol 'h3' (HTTP/3) and an 'Early Hints Headers' section loading a WOFF2 font and a JavaScript file.

(why yes, I am celebrating my birthday by doing selfhosting server configuration, why do you ask? 😅)


Maybe Later

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Maybe Later

“Install update” Maybe later.

“Sign in to access this content” No.

“It’s better in the app!” Whose fault is that?

“We completely redesigned this thing you need to do your job for no good reason” Got it.

“Disable any adblocker.” Absolutely not.

I don’t know if I’m supposed to read this as a poem, but I did, and I love it. It speaks to me. It speaks of my experience of using (way too much of) the Web nowadays, enshittified as it is.

(This toot about the evolution of videogaming seems almost like a sequel. Less like a poem, though.)

But yeah, I run a fine-tuned setup on most of my computers that works for me… by working against most of the way the Web seems to expect me to use it, these days. I block all third-party JavaScript and cookies by default (and drop first-party cookies extremely quickly). I use plugins to quietly reject consent banners, suppress soft paywalls, and so on. And when I come across sites that don’t work that way, I make a case-by-case decision on whether to use them at all (if you hide some features in your “app” only, I just don’t use those features).

Sure, there are probably half a dozen websites that you might use that I can’t. But in exchange I use a Web that’s fast, clean, and easy-to-read.

And just sometimes: when I’m on somebody else’s computer and I see an ad, or a cookie consent banner, or a “log in to keep reading” message, or a website weighed down and crawling because of the dozens of tracking scripts, or similar… I’m surprised to remember that these things actually exist, and wonder for a moment how people who do see them all the time cope with them!

Sigh.

Anyway: this was an excellent poem, assuming it was supposed to be interpreted as being a poem. Otherwise, it was an excellent whatever-it-is.

Cable Car Marquee

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Was playing around with some HTML and made a cable car for my page. Hmh.

Beautiful. It feels like it ought to have been wrapped in a HTML Web Component, maybe called <cable-car>, with progressive enhancement bonus features (maybe it’ll only run during daylight hours? or when the wind isn’t too fast?).

But really: I can’t fault this. Beautiful.

Enumerating Domains

I’ve just enumerated my personal domain names. There’s a lot fewer of them than there used to be!1

Anyway: here’s the list –

I think that’s all of them, but it’s hard to be sure…

Footnotes

1 Maybe I’ve finally shaken off my habit of buying a domain name for everything. Or maybe it’s just that I’ve embraced subdomains for more stuff. Probably the latter.

Digital Dustbusting

tl;dr: I’m tidying up and consolidating my personal hosting; I’ve made a little progress, but I’ve got a way to go – fortunately I’ve got a sabbatical coming up at work!

At the weekend, I kicked-off what will doubtless be a multi-week process of gradually tidying and consolidating some of the disparate digital things I run, around the Internet.

I’ve a long-standing habit of having an idea (e.g. gamebook-making tool Twinebook, lockpicking puzzle game Break Into Us, my Cheating Hangman game, and even FreeDeedPoll.org.uk!), deploying it to one of several servers I run, and then finding it a huge headache when I inevitably need to upgrade or move said server because there’s such an insane diversity of different things that need testing!

Screenshot from Cheating Hangman: I guessed an 'E', but when I guessed an 'O' I was told that there was one (the computer was thinking of 'CLOSE'), but now there isn't because it's switched to a different word that ends with 'E'.
My “cheating hangman” game spun out from my analysis of the hardest words for an optimal player to guess, which was in turn inspired by the late Nick Berry’s examination of optimal strategy.

I can simplify, I figured. So I did.

And in doing so, I rediscovered several old projects I’d neglected or forgotten about. I wonder if anybody’s still using any of them?

Hosting I’ve tidied so far…

  • Cheating Hangman is now hosted by GitHub Pages.
  • DNDle, my Wordle-clone where you have to guess the Dungeons & Dragons 5e monster’s stat block, is now hosted by GitHub Pages. Also, I fixed an issue reported a month ago that meant that I was reporting Giant Scorpions as having a WIS of 19 instead of 9.
  • Abnib, which mostly reminds people of upcoming birthdays and serves as a dumping ground for any Abnib-related shit I produce, is now hosted by GitHub Pages.
  • RockMonkey.org.uk, which doesn’t really do much any more, is now hosted by GitHub Pages.
  • EGXchange, my implementation of a digital wallet for environmentally-friendly cryptocurrency EmmaGoldCoin, which I’ve written about before, is now hosted by GitHub Pages.
  • Sour Grapes, the single-page promo for a (remote) murder mystery party I hosted during a COVID lockdown, is now hosted by GitHub Pages.
  • A convenience-page for giving lost people directions to my house is now hosted by GitHub Pages.
  • Dan Q’s Things is now automatically built on a schedule and hosted by GitHub Pages.
  • Robin’s Improbable Blog, which spun out from 52 Reflect, wasn’t getting enough traffic to justify “proper” hosting so now it sits in a Docker container on my NAS.
  • My μlogger server, which records my location based on pings from my phone, has also moved to my NAS. This has broken Find Dan Q, but I’m not sure if I’ll continue with that in its current form anyway.
  • All of my various domain/subdomain redirects have been consolidated on, or are in the process of moving to, a tiny Linode/Akamai instance. It’s a super simple plain Nginx server that does virtually nothing except redirect people – this is where I’ll park the domains I register but haven’t found a use for yet, in future.
Screenshot showing EGXchange, saying "everybody has an EGX wallet, log in to yours now".
I was pretty proud of EGXchange.org, but I’ll be first to admit that it’s among the stupider of my throwaway domains.

It turns out GitHub Pages is a fine place to host simple, static websites that were open-source already. I’ve been working on improving my understanding of GitHub Actions anyway as part of what I’ve been doing while wearing my work, volunteering, and personal hats, so switching some static build processes like DNDle’s to GitHub Actions was a useful exercise.
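As an illustration of how simple that can be – this is a generic sketch rather than any of my actual workflows, and the build step and output directory are hypothetical – a Pages-deploying workflow looks something like this:

# .github/workflows/deploy.yml
name: Deploy to GitHub Pages
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh                       # hypothetical build step
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public                      # hypothetical output directory
      - uses: actions/deploy-pages@v4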

Stuff I’m still to tidy…

There’s still a few things I need to tidy up to bring my personal hosting situation under control:

DanQ.me

Screenshot showing this blog post.
You’re looking at it. But later this year, you might be looking at it… elsewhere?

This is the big one, because it’s not just a WordPress blog: it’s also a Gemini, Spartan, and Gopher server (thanks CapsulePress!), a Finger server, a general-purpose host to a stack of complex stuff only some of which is powered by Bloq (my WordPress/PHP integrations): e.g. code to generate the maps that appear on my geopositioned posts, code to integrate with the Fediverse, a whole stack of configuration to make my caching work the way I want, etc.

FreeDeedPoll.org.uk

Right now this is a Ruby/Sinatra application, but I’ve got a (long-running) development branch that will make it run completely in the browser, which will further improve privacy, allow it to run entirely-offline (with a service worker), and provide a basis for new features I’d like to provide down the line. I’m hoping to get to finishing this during my Automattic sabbatical this winter.

Screenshot showing freedeedpoll.org.uk
The website’s basically unchanged for most of a decade and a half, and… umm… it looks it!

A secondary benefit of it becoming browser-based, of course, is that it can be hosted as a static site, which will allow me to move it to GitHub Pages too.

Geohashing.site

When I took over running the world’s geohashing hub from xkcd‘s Randall Munroe (and davean), I flung the site together on whatever hosting I had sitting around at the time, but that’s given me some headaches. The outbound email transfer agent is a pain, for example, and it’s a hard host on which to apply upgrades. So I want to get that moved somewhere better this winter too. It’s actually the last site left running on its current host, so it’ll save me a little money to get it moved, too!

Screenshot from Geohashing.site's homepage.
Geohashing’s one of the strangest communities I’m honoured to be a part of. So it’d be nice to treat their primary website to a little more respect and attention.

My FreshRSS instance

Right now I run this on my NAS, but that turns out to be a pain sometimes because it means that if my home Internet goes down (e.g. thanks to a power cut, which we have from time to time), I lose access to the first and last place I go on the Internet! So I’d quite like to move that to somewhere on the open Internet. Haven’t worked out where yet.

Next steps

It’s felt good so far to consolidate and tidy-up my personal web hosting (and to rediscover some old projects I’d forgotten about). There’s work still to do, but I’m expecting to spend a few months not-doing-my-day-job very soon, so I’m hoping to find the opportunity to finish it then!
