It opens almost apologetically, like an explanation for the gap in new releases for most of the twenty-teens. But it quickly becomes a poetic exploration of the detached depression of a man trapped under the weight of the world. It’s sad, and beautiful, and relatable.
This adventure began, in theory at least, on my birthday in January. I’ve long expressed an interest
in taking a dance class together, and so when Ruth pitched me a few options for a birthday gift, I jumped on the opportunity to learn tango. My knowledge of the dance was
basically limited to what I’d seen in films and television, but it had always looked like such an amazing dance: careful, controlled… synchronised, sexy.
After shopping around for a bit, Ruth decided that the best approach was for us to do a “beginners” video course in the comfort of our living room, and then take a weekend
getaway to do an “improvers” class.
After all, we’d definitely have time to complete the beginners’ course and get a lot of practice in before we had to take to the dance floor with a group of other “improvers”,
right?2
Okay, let me try again to enumerate everything I actually know about tango3:
Essentials. A leader and follower4
hold one another’s upper torso closely enough that, with practice, each can intuit from body position where the other’s feet are without looking. While learning, you will not manage
to do this, and you will tread on one another’s toes.
The embrace. In the embrace, one side – usually the leader’s left – is “open”, with the dancers’ hands held; the other side is “closed”, with the dancers holding one another’s bodies. Generally, you should be looking at one another or towards the open side. But stop looking at your feet: you should know where your own feet are by proprioception, and you know where your partner’s feet are by guesswork and prayer.
The walk. You walk together, (usually) with opposite feet moving in-sync so that you can be close and not tread on one another’s toes, typically forward
(from the leader’s perspective) but sometimes sideways or even backwards (though not usually for long, because it increases the already-inevitable chance that you’ll collide
embarrassingly with other couples).
Movement. Through magic and telepathy – sorry, a good connection with one another – the pair will, under the leader’s direction, open opportunities to perform more advanced (but still apparently beginner-level) steps and therefore entirely new ways to mess things up. These steps include:
Forward ochos. The follower stepping through a figure-eight (ocho) on the closed side, or possibly the open side, but they probably forget which way they were supposed to turn when they get there, come out on the wrong foot, and tread on the leader’s toes.
Backwards ochos. The follower moves from side to side or in reverse through a series of ochos, until the leader gets confused about which way they’re supposed to pivot to end the manoeuvre and both people become completely unstuck.
The cross. The leader walks alongside the follower, and when the leader steps back the follower chooses to assume that the leader intended for them to cross their
legs, which opens the gateway to many other steps. If the follower guesses incorrectly, they probably fall over during that step. If the follower guesses correctly but forgets
which way around their feet ought to be, they probably fall over on the very next step. Either way, the leader gets confused and does the wrong thing next.
Giros. One or both partners perform a forwards step, then a sideways step, then a backwards step, then another sideways step, starting on the inside leg
and pivoting up to 270° with each step such that the entire move rotates them some portion of a complete circle. In-sync with one another, of course.
Sacadas. Because none of the above are hard enough to get right together, you should start putting your leg out between your partner’s legs and trying to trip them up as they go. They ought to know you’re going to do this, because they’ve got perfect predictive capabilities about where your feet are going to end up, remember? Also remember to use the correct leg, which might not be the one you expect, or you’ll make a mess of the step you’ll be doing in three beats’ time. Good luck!
Barridas and mordidas. What, you finished the beginners’ course? Too smart to get tripped up by your partner’s sacada any more? Well
now it’s time to start kicking your partner’s feet out from directly underneath them. That’ll show ’em.
Style. All of the above should be done gracefully, elegantly, with perfect synchronicity and in time with the music… oh, and did I mention you should be able to improvise the whole thing on the fly, without pre-communication with your partner? 😅
Ultimately, it was entirely our own fault we felt out-of-our-depth up in Edinburgh at the weekend. We tried to run before we could walk, or – to put it another way – to milonga
before we could caminar.
A somewhat-rushed video course and a little practice on carpet in your living room is no substitute for a more-thorough práctica on a proper-sized dance floor, no matter how often you and your partner use any excuse for coming together (in the kitchen, in an elevator, etc.) to embrace and walk a couple of steps! Getting the hang of the fluid connection and movement of tango requires time, and practice, and discipline.
But, not least because of our inexperience, we did learn a lot during our weekend’s deep-dive. We got to watch (and, briefly, partner with) some much better dancers
and learned some advanced lessons that we’ll doubtless reflect back upon when we’re at the point of being ready for them. Because yes: we are continuing! Our next step is a
Zoom-based lesson, and then we’re going to try to find a more-local group.
Also, we enjoyed the benefits of some one-on-one time with Jenny and Ricardo, the amazingly friendly and supportive teachers whose video course got us started and whose
in-person event made us feel out of our depth (again: entirely our own fault).
If you’ve any interest whatsoever in learning to dance tango, I can wholeheartedly recommend Ricardo and Jenny Oria as teachers. They run
courses in Edinburgh and occasionally elsewhere in the UK as well as providing online resources, and they’re the most amazingly
supportive, friendly, and approachable pair imaginable!
Just… learn from my mistake and start with a beginner course if you’re a beginner, okay? 😬
Footnotes
1 I’m exaggerating how little I know for effect. But it might not be as much of an
exaggeration as you’d hope.
4 Tango’s progressive enough that it’s come to reject describing the roles in binary
gendered terms, using “leader” and “follower” in place of what was once described as “man” and “woman”, respectively. This is great for improving access to pairs of dancers who don’t
consist of a man and a woman, as well as those who simply don’t want to take dance roles imposed by their gender.
For my examples below, assume a three-person family. I’m using unrealistic numbers for easy arithmetic.
Alice earns £2,000, Bob earns £1,000, and Chris earns £500, for a total household income of £3,500.
Alice spends £1,450, Bob £800, and Chris £250, for a total household expenditure of £2,500.
Model #1: Straight Split
We’ve never done things this way, but for completeness’ sake I’ll mention it: the simplest way that households can split their costs is by dividing them between the participants equally: if the family make a £60 shopping trip, £20 should be paid by each of Alice, Bob, and Chris.
My example above shows exactly why this might not be a smart choice: this model would have each participant contribute £833.33 over the course of the month, which is more than Chris
earned. If this month is representative, then Chris will gradually burn through their savings and go broke, while Alice will put over a grand into her savings account every month!
Model #2: Income-Assessed
We’re a bunch of leftie socialist types, and wanted to reflect our political outlook in our household finances, too. So rather than just splitting our costs equally between us, we
initially implemented a means-assessment system based on the relative differences between our incomes. The thinking was that somebody that earns twice as much should
contribute twice as much towards the costs of running the household.
Using our example family above, here’s how that might look:
Alice earned 57% of the household income, so she should have contributed 57% of the household costs: £1,425. She overpaid by £25.
Bob earned 29% of the household income, so he should have contributed 29% of the household costs: £725. He overpaid by £75.
Chris earned 14% of the household income, so they should have contributed 14% of the household costs: £350. They underpaid by £100.
Therefore, at the end of the month Chris should settle up by giving £25 to Alice and £75 to Bob.
By analogy: The “Income-Assessed” model is functionally equivalent to splitting each and every expense according to the participants’ incomes – e.g. if a £100 bill landed on their doormat, Alice would pay £57, Bob £29, and Chris £14 of it – but has the convenience that everybody just pays for things “as they go along” and then squares everything up when their paycheques come in.
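If it helps to see that as code, here’s a minimal sketch of the month-end maths in Python (using the example family’s numbers; note that it keeps exact income fractions where I rounded to whole percentages above, so the figures differ by a few pounds):

# Income-assessed split: each person's fair share of the household
# costs is proportional to their share of the household income.
incomes = {"Alice": 2000, "Bob": 1000, "Chris": 500}
spent   = {"Alice": 1450, "Bob":  800, "Chris": 250}

total_income = sum(incomes.values())  # £3,500
total_costs  = sum(spent.values())    # £2,500

for person, income in incomes.items():
    fair_share = total_costs * income / total_income
    balance = spent[person] - fair_share  # +ve: overpaid; -ve: underpaid
    print(f"{person}: fair share £{fair_share:.2f}, balance £{balance:+.2f}")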
Over time, our expenditures grew and changed and our incomes grew, but they didn’t do so in an entirely simple fashion, and we needed to make some tweaks to our income-assessed model of
household finance contributions. For example:
Gross vs Net Income: For a while, some of our incomes were split into a mixture of employed income (on which income tax was paid as-we-earned) and self-employed
income (for which income tax would be calculated later), making things challenging. We agreed that net income (i.e. take-home pay) was the correct measure for us to use for the
income-based part of the calculation, which also helped keep things fair as some of us began to cross into and out of the higher earner tax bracket.
Personal Threshold: At times, a subset of us earned a disproportionate portion of the household income (there were short periods where one of us earned over 50% of
the household income; at several other times two family members each earned thrice that of the third). Our costs increased too, but this imposed a regressive burden on the
lower-earner(s), for whom those costs represented a greater proportion of their total income. To attempt to mitigate this, we introduced a personal threshold somewhat analogous to the
income tax “personal allowance” (the policy that means that you don’t pay tax on your first £12,570 of income).
Eventually, we came to see that what we were doing was trying to patch a partially-broken system, and tried something new!
Model #3: Same-Residual
In 2022, we transitioned to a same-residual system that attempts to share out money in an even-more egalitarian way. Instead of each person contributing in accordance with their income, the model attempts to leave each person with the same average amount of disposable personal income at the end. The difference is most-profound where the relative incomes are most-diverse.
With the example family above, that would mean:
The household earned £3,500 and spent £2,500, leaving £1,000. Dividing by 3 tells us that each person should have £333.33 after settling up.
Alice earned £2,000 and spent £1,450, so she has £550 left. That’s £216.67 too much.
Bob earned £1,000 and spent £800, so he has £200 left. That’s £133.33 too little.
Chris earned £500 and spent £250, so they have £250 left. That’s £83.33 too little.
Therefore, at the end of the month Alice should settle up by giving £133.33 to Bob and £83.33 to Chris (note there’s a 1p rounding error).
That’s a very different result than the Income-Assessed calculation came up with for the same family! Instead of Chris giving money to Alice and Bob, because those two
contributed to household costs disproportionately highly for their relative incomes, Alice gives money to Bob and Chris, because their incomes (and expenditures) were much lower.
Ignoring any non-household costs, all three would expect to have the same bank balance at the start of the month as at the end, after settlement.
By analogy: The “Same-Residual” model is functionally equivalent to having everybody’s salary paid into a shared bank account, out of which all household expenditures
are paid, and at the end of the month everything that’s left in the bank account gets split equally between the participants.
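Here’s the equivalent sketch for this model (same caveats as before; a positive settlement figure means that person pays out at the end of the month, a negative one means they receive):

# Same-residual split: everyone ends the month with an equal share of
# whatever the household didn't spend.
incomes = {"Alice": 2000, "Bob": 1000, "Chris": 500}
spent   = {"Alice": 1450, "Bob":  800, "Chris": 250}

target = (sum(incomes.values()) - sum(spent.values())) / len(incomes)  # £333.33

for person in incomes:
    residual = incomes[person] - spent[person]  # what they're left holding
    print(f"{person}: settlement £{residual - target:+.2f}")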
We’ve made tweaks to this model, too, of course. For example: we’ve set a “target” residual and, where we spend little enough in a month that we would each be eligible for more
than that, we instead sweep the excess into our family savings account. It’s a nice approach to help build up a savings reserve without feeling a pinch.
I’m sure our model will continue to evolve, as it has for the last decade and a half, but for now it seems stable, fair, and reasonable. Maybe it’ll work for your household too (whether
or not you’re also a polyamorous family!): take a look at the spreadsheet in Google Drive and give it a go.
Semi-inspired by a similar project by Kev Quirk,
I’ve got a project I want to run on my blog in 2024.
I want you to be my pen pal for a month. Get in touch by emailing penpals@danq.me or any other way you like and let’s do this!
I don’t know much about the people who read my blog, whether they’re ad-hoc visitors or regular followers1.
So here’s the plan: what I’m looking to do is fill a “dance card” of interesting people, with each of whom I’ll “pen pal” for a month.
The following month, I’ll blog about the experience: who I met, what I learned about them, what I learned about myself. Have a look below and see if there’s a slot for you: I’d love to
chat to you about, well – anything!
My goals:
Get inspired to blog about new/different things (and hopefully help inspire others to do the same).
Connect with a dozen folks on a more-interpersonal level than I normally do via my blog.
Maybe even make, or deepen, some friendships!
The “rules”:
Aiming for at least 3 email exchanges over a month. Maybe more.2
There’s no specific agenda: I promise to bring what I’ve been thinking about and working on, and possibly a spicy conversation-starter from LetsLifeChat.com. You bring whatever you like. No topic is explicitly off the table unless somebody says it is (which anybody can do
at any time, for any or no reason).
I’ll blog a summary of my experience the month afterwards, but I won’t share anything without permission. I’ll happily share an unpublished draft with each penpal first so they
can veto any bits they don’t like. I’ll refer to you by whatever name, link etc. suits you best.
If you have a blog/digital garden/social presence of any kind, you’re welcome to blog about it too. Or not: entirely up to you!
You! If you’re reading this, you’re probably somebody I want to meet! But I’d be especially interested in penpalling with people who tick one or more of the following
boxes:
Personal bloggers at the edges of or just outside my usual social circles. Maybe you’re an IndieWeb, RSS Club, or Geminispace explorer?
Regular readers, whether you just skim the post titles and dive in once in a blue moon or read every post and comment on the things you care about.
Automatticians from parts of the company I don’t get to interact with. Let’s build some bridges!
People whose interests overlap with mine in any way, large or small. That overlap might be technology (web standards, accessibility, security, blogging, open
source…), hobbies (GPS sports, board games, magic, murder mysteries, science fiction, getting lost on Wikipedia…),
volunteering (third sector support, tech for good, diversity in tech…), social (queer issues, polyamory, socialism…), or something else entirely.
Missed connections. Did we meet briefly or in-passing (conferences, meetups, friends-of-friends, overlapping volunteering circles) but not develop anything further?
I’d love to pick up where we left off!
Distant- and nearly-friends. Did we drift apart long ago, or never quite move into one another’s orbit in the first place? This could be your excuse to touch base!
If you read this far and didn’t email penpals@danq.me yet, go do that. I’m looking forward to hearing from you!
Footnotes
1 Not-knowing who reads my blog might come at least in part from the fact that I actively sabotage any plugin that might give me any analytics! One might say I’ve shot myself in the foot, there.
2 If we stay in touch afterwards that’s fine too, but it’s not essential.
3 I’m looking for longer-form, but slower, communication than you get via e.g. instant
messengers and whatnot: a more “penpal” experience.
✅ To-Do: Obsidian, physical notepad [not happy with this; want something more productive]
📆 Calendar: Google Calendar (via Thunderbird on Desktop) [not happy with this; want something not-Google – still waiting on Proton Calendar getting good!]
A particular joy of the Gemini and Spartan protocols – and the Markdown-like syntax of Gemtext – is their simplicity.
Even without a browser, you can usually use everyday command-line tools that you might have installed already to access relatively human-readable content.
Here are a few different command-line options that should show you a copy of this blog post (made available via CapsulePress, of course):
Gemini
Gemini communicates over a TLS-encrypted channel (like HTTPS), so we need to use a tool that speaks the language. Luckily: unless you’re on Windows you’ve probably got one installed already1.
Using OpenSSL
This command takes the full gemini:// URL you’re looking for and the domain name it’s at. 1965 refers to the port number on which Gemini typically runs:
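# A sketch: substitute the full URL of the page you want and its domain.
# A Gemini request is just the URL followed by CRLF; -quiet suppresses the
# certificate chatter and stops OpenSSL hanging up when STDIN closes.
printf 'gemini://danq.me/\r\n' | openssl s_client -connect danq.me:1965 -servername danq.me -quiet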
Using GnuTLS
GnuTLS closes the connection when STDIN closes, so we use cat to keep it open. Note the inclusion of --no-ca-verification to allow self-signed certificates (optionally add --tofu for trust-on-first-use support, per the spec).
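# A sketch; again, substitute your target URL and domain:
{ printf 'gemini://danq.me/\r\n'; cat; } | gnutls-cli --no-ca-verification -p 1965 danq.me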
Spartan
Spartan is a little like “Gemini without TLS”, but it sports an even-more-lightweight request format which makes it especially easy to fudge requests2.
Using Telnet
Note the use of cat to keep the connection open long enough to get a response, as we did for Gemini over GnuTLS.
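# A sketch: a Spartan request is "host path content-length" plus CRLF,
# and Spartan typically listens on port 300.
{ printf 'danq.me / 0\r\n'; cat; } | telnet danq.me 300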
Using Netcat
Because TLS support isn’t needed, this also works perfectly well with Netcat – just substitute nc/netcat or whatever your platform calls it in place of ncat:
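{ printf 'danq.me / 0\r\n'; cat; } | ncat danq.me 300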
The Internet is full of guides on easily making your WordPress installation run fast. If you’re looking to speed up your WordPress site, you should go read those, not this.
Those guides often boil down to the same old tips:
uninstall unnecessary plugins,
optimise caching (both on the server and, via your headers, on clients/proxies),
resize your images properly and/or ensure WordPress is doing this for you,
tune your PHP installation so it’s got enough memory, keeps a process alive, etc.,
ensure your server is minifying2
and compressing files, and
run it on a faster server/behind a faster connection3.
The hard way
This article is for people who aren’t afraid to go tinkering in their WordPress codebase to squeeze a little extra (real world!) performance.
It’s for people whose neverending quest for perfection is already well beyond the point of diminishing returns.
But mostly, it’s for people who want to gawp at me, the freak who actually did this stuff just to make his personal blog a tiny bit nippier without spending an extra penny on
hosting.
Don’t start with the hard way. Exhaust all the easy solutions – or at least, make a conscious decision about which easy solutions to enact or reject – first. Only if you really want to get into the weeds should you actually try doing the things I propose here. They’re not for most sites, and they’re not for the faint of heart.
Performance is a tradeoff. Every performance improvement costs you something else: time, money, DX, UX, etc. What you choose to trade for performance gains depends on your priority of constituencies, which may differ from mine.4
This is not a recipe book. This won’t tell you what code to change or what commands to run. The right answers for your content will be different than the right answers
for mine. Also: you shouldn’t change what you don’t understand! But I hope these tips will help you think about what questions you need to ask to make your site blazing fast.
Okay, let’s get started…
1. Backstab the plugins you can’t live without
If there are plugins you can’t remove because you depend upon their functionality, and those plugins inject content (especially JavaScript) on the front-end… backstab them to
undermine that functionality.
For example, if you want Jetpack’s backup and downtime monitoring features, but you don’t want it injecting random <link rel='stylesheet' id='...-jetpack-css' href='...' media='all' />’s (an extra stylesheet to download and parse) into your pages: find the add_filter hook it uses and remove_filter it in your theme5.
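By way of illustration, here’s a hedged sketch – this isn’t Jetpack’s documented API: it dequeues the stylesheet rather than unhooking the filter, which achieves the same end, and the handle is a guess you’d need to verify against your own installation:

// In your theme's functions.php: drop a plugin's front-end stylesheet.
// Priority 100 ensures this runs after the plugin has enqueued it.
add_action( 'wp_enqueue_scripts', function () {
	wp_dequeue_style( 'jetpack_css' ); // example handle – check yours
}, 100 );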
Better yet, remove wp_head() from your theme entirely6.
Now, instead of blocking the hooks you don’t want polluting your <head>, you’re specifically allowing only those you want. You’ll want to take care to re-add some semi-essential ones like <link rel="canonical" href="...">7.
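For instance, a minimal sketch for your theme’s <head>, now that wp_head() no longer outputs one for you (wp_get_canonical_url() is a core WordPress function, available since 4.6):

<?php // Emit a canonical URL for single posts/pages only. ?>
<?php if ( is_singular() ) : ?>
	<link rel="canonical" href="<?php echo esc_url( wp_get_canonical_url() ); ?>">
<?php endif; ?>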
Now most of your plugins are broken, but in exchange, your theme has reclaimed complete control over what gets sent to the user. You can select what content you actually
want delivered, and deliver no more than that. It’s harder work for you, but your site becomes so much lighter.
2. Throw away 100% of your render-blocking JavaScript (and as much as you can of the rest)
The single biggest bottleneck to the user viewing a modern WordPress website is the JavaScript that needs to be downloaded, compiled, and executed before the page can be rendered. Most
of that’s plugins, but even on a nearly-vanilla installation you might find a copy of jQuery (eww!) and some other files.
In step 1 you threw it all away, which is great… but I’m betting you were depending on some of that to make your site work? Let’s put it back, carefully and selectively, while
minimising the impact on load time.
That means scripts should be loaded (a) low-down, and/or (b) marked defer (or, better yet, async), so they don’t block page rendering.
If you haven’t already, you might like to View Source on this page. Count my <script> tags. You’ll probably find just two of them: one external file marked
async, and a second block right at the bottom.
The inline <script> in my footer.php wraps a single line of PHP, which looks a little like this: <?php echo implode("\n\n", apply_filters( 'danq_footer_js', [] ) ); ?>. For each item in an initially-empty array, it appends to the script tag. When I render
anything that requires JavaScript, e.g. for 360° photography, I can just add to that
(keyed, to prevent duplicates when viewing an archive page) array. Thus, the relevant script gets added exclusively to the pages where it’s needed, not to the entire site.
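A hedged sketch of the other half of that arrangement (the key and the snippet are illustrative):

// Queue a snippet for the footer only while rendering content that needs
// it; the array key stops duplicates when an archive page renders several
// posts that each register the same script.
add_filter( 'danq_footer_js', function ( array $scripts ): array {
	$scripts['panorama'] = 'initPanoramaViewers(); // hypothetical bootstrap';
	return $scripts;
} );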
The only inline script added to every page loads my service worker, which itself aims to optimise caching as well as providing limited “offline” functionality.
While you’re tweaking your JavaScript anyway, you might like to check that any suitable addEventListeners are set to passive mode. Especially if you’re doing anything with touch or
mousewheel events, you can often increase the perceived performance of these interactions by not letting your custom code block the default browser behaviour.
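For example, a minimal sketch:

// passive: true promises the browser that this handler will never call
// preventDefault(), so scrolling can start without waiting for our code.
document.addEventListener('touchstart', (event) => {
	// custom touch handling here
}, { passive: true });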
3. Don’t use a CDN
Wait, what? That’s the opposite of what everybody else recommends. To understand why, you have to think about why people recommend a CDN in the first place. Their reasons are usually threefold:
Proximity
Claim: A CDN delivers content geographically-closer to the user.
Retort: Often true. But in step 4 we’re going to make sure that everything critical comes within the first TCP sliding window anyway, so there’s little benefit, and there’s a cost to that extra DNS lookup and fresh handshake. Edge caching your own content may have value, but for most sites it’ll have a much smaller impact than almost everything else on this list.
Precaching
Claim: A CDN improves the chance resources are precached in the user’s browser.
Retort: Possibly true, especially with fonts (although see step 6) but less than you’d think with JS libraries because
there are so many different versions/hosts of each. Yours may well be the only site in the user’s circuit that uses a particular one!
Power
Claim: A CDN has more resources than you and so can better-withstand spikes of traffic.
Retort: Maybe, but they also introduce an additional single-point-of-failure. CDNs aren’t magically immune
to downtime nor content-blocking, and if you depend on one you’ve just doubled the number of potential failure points that can make your site instantly useless. Furthermore:
in exchange for those resources you’re trading away your users’ privacy and security: if a CDN gets hacked, every site that
uses it gets hacked too.
Consider edge-caching your own content only if you think you need it, but ditch jsDelivr, cdnjs, Google Hosted Libraries etc.
Hell: if you can, ditch all JavaScript served from third-parties and slap a Content-Security-Policy: script-src 'self' header on your domain to dramatically reduce
the entire attack surface of your site!8
4. Reduce your HTML and CSS size to <12kb compressed
There’s a magic number you need to know: 12kb. Because of some complicated but fascinating maths (TCP slow start typically opens with an initial congestion window of around ten ~1,400-byte segments – roughly 14kb – and response headers eat into that budget; the exact threshold depends on how your hosting is configured), it can be significantly faster to initially load a web resource of up to 12kb than it is to load one of, say, 15kb. Also, for the same reason, loading a web resource of much less than 12kb might not be significantly faster than loading one only a little less than 12kb.
So: cut your page weight to under the magic number, then inline as much essential content as possible (CSS, SVGs, JavaScript etc.) to bring yourself back up to close-to that magic number again!
Again, this probably flies in the face of everything you were taught about performance. I’m sure you were told that you should <link> to your stylesheets so that they
can be cached across page loads. But it turns out that if you can make your HTML and CSS small enough, the opposite is true and you should inline the stylesheet again: caching styles becomes almost irrelevant if you get all the content in
a single round-trip anyway!
For extra credit, consider optimising your homepage’s CSS so it’s even smaller by excluding directives that only apply to
non-homepage pages, and vice-versa. Assuming you’re using a preprocessor, this shouldn’t be too hard: at simplest, you can have a homepage.css and main.css,
each derived from a set of source files some of which they share (reset/normalisation, typography, colours, whatever) and the rest which is specific only to that part of the site.
Can’t manage to get your HTML and CSS down below the magic
number? Then at least ensure that your HTML alone weighs in at <12kb compressed and you’ll still get some of the
benefits. If you’ve got the headroom, you can selectively include a <style> block containing only the most-crucial CSS, with a particular focus on any that results in layout shifts (e.g. anything that specifies the height: of otherwise dynamically-sized
block elements, or that declares an element position: absolute or position: fixed). These kinds of changes are relatively computationally-expensive because
they cause content to re-flow, so provide hints as soon as possible so that the browser can accommodate them.
5. Make the first load awesome
We don’t really talk about content being “above the fold” like we used to, because the modern Web has such a diverse array of screen sizes and resolutions that doing so doesn’t make
much sense.
But if loading your full page is still going to take multiple HTTP requests (scripts, images, fonts, whatever),
you should still try to deliver the maximum possible value in the first round-trip. That means:
Making sure all your textual content loads immediately! Unless you’re delivering a huge amount of text, there’s absolutely no excuse for lazy-loading text: it’s
usually tiny, compresses well, and it’s fast to parse. It’s also the most-important content of most pages. Get it delivered to the browser so it can be rendered right away.
Reserving space for blocks by sizing images appropriately, e.g. using <img width="..." height="..." ...> or having them load as a background with
background-size: cover or contain in a block sized with CSS delivered in the initial payload. This
reduces layout shift, which mitigates the need for computationally-expensive content reflows.
If possible (see point 4), move vector images that support basic site functionality, like logos, inline. This might also apply to icons, if they’re “as important” as text content.
Marking everything up with standard semantic HTML. There’s a trend for component-driven design to go much too
far, resulting in JavaScript components being used in place of standard elements like links, buttons, and images, resulting in highly-fragile websites: when those scripts fail (or are
very slow to load), the page becomes unusable.
6. Reduce your dependence on downloaded fonts
Fonts are lovely and can be an important part of your brand identity, but they can also add a lot of weight to your web pages.
If you’re not yet ready or able to drop your webfonts and appreciate the beauty and flexibility of a system font stack (I get it: I’m not there quite yet!), you can at least make smarter use of your fonts:
Every modern browser supports WOFF2, so you can ditch those chunky old formats you’re clinging onto.
If you’re only using the Latin alphabet, minify your fonts further by dropping the characters you don’t need: tools like Google Webfonts Helper can help with this, as well as making it easier to self-host fonts from the most-popular library (which is a smart idea for the reasons described under point 3, above!). There are tools available to further minify fonts if e.g. you only need the capital letters for your title font or something.
Browsers are pretty clever and will work around it if you make a mistake. Didn’t include an emoji or some obscure mathematical symbol, and then accidentally used them in a post? Browsers will switch to a system font that can fill in the gap for you.
Make the most-liberal use of the font-display: CSS directive that you can tolerate (there’s a sketch after this list)!
Don’t use font-display: block, which is functionally the default in most browsers, unless you absolutely have to.
font-display: fallback is good if you’re too cowardly/think your font is too important for you to try font-display: optional.
font-display: optional is an excellent choice for body text: if the browser thinks it’s worthwhile to download the font (it might choose not to if the operating
system indicates that it’s using a metered or low-bandwidth connection, for example), it’ll try to download it, but it won’t let doing so slow things down too much and it’ll
fall-back to whatever backup (system) font you specify.
font-display: swap is also worth considering: this will render any text immediately, even if the right font hasn’t downloaded yet, with no blocking time
whatsoever, and then swap it for the right font when it appears. It’s probably better for headings, because large paragraphs of text can be a little disorienting if they change
font while a user is looking at them!
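Pulling those recommendations together, a sketch (the family names and file paths are placeholders):

/* Body text: only use the webfont if it arrives in time. */
@font-face {
	font-family: "Body Face";
	src: url("/fonts/body.woff2") format("woff2");
	font-display: optional;
}
/* Headings: render immediately in a fallback font, swap when ready. */
@font-face {
	font-family: "Heading Face";
	src: url("/fonts/heading.woff2") format("woff2");
	font-display: swap;
}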
7. Cache pre-compressed static files
It’s possible that by this point you’re saying “if I had to do this much work, I might as well just use a static site generator”. Well good news: that’s what you’re about to
do!
Obviously you should make all your regular caching improvements (appropriate HTTP headers for caching, a service worker that further improves on that logic based on your content’s update schedule, etc.) first. Again: everything in this guide presupposes that you’ve already done the things that normal people do.
By aggressively caching pre-compressed copies of all your pages, you’re effectively getting the best of both worlds: a website that, for anonymous visitors, is served directly from
.html.gz files on a hard disk or even straight from RAM in memcached10,
but which still maintains all the necessary server-side interactivity to allow it to be used as a conventional Web-based CMS
(including accepting comments if that’s your jam).
WP Super Cache can do the heavy lifting for you for a filesystem-based solution, so long as you put it into “Expert” mode and amend your webserver configuration. I’m using Nginx, so I needed a try_files directive like this:
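# A sketch of the sort of thing I mean: serve the pre-rendered supercache
# file if one exists for this host/path, otherwise fall through to
# WordPress as normal. WP Super Cache's file layout and naming (e.g.
# index-https.html) vary with your configuration, so check yours.
location / {
	try_files /wp-content/cache/supercache/$http_host$request_uri/index-https.html
	          $uri $uri/ /index.php?$args;
}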
8. Serve images in the best formats you can
I’m sure your favourite performance testing tool has already complained at you about your failure to use the best formats possible when serving images to your users. But how can you fix it?
There are some great plugins for improving your images automatically and/or in bulk – I use EWWW Image Optimizer – but
to really make the most of them you’ll want to reconfigure your webserver to detect clients that Accept: image/webp and attempt to dynamically
serve them .webp variants, for example. Or if you’re ready to give up on legacy formats and replace all your .pngs with .webps, that’s probably
fine too!
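For Nginx, a hedged sketch of that content negotiation (the map lives in your http block and the location in your server block; it assumes your optimiser has written .webp siblings alongside the originals):

# If the client advertises WebP support, prefer a pre-generated .webp
# sibling of the requested image; tell caches the response varies.
map $http_accept $webp_suffix {
	default        "";
	"~*image/webp" ".webp";
}
location ~* \.(png|jpe?g)$ {
	add_header Vary Accept;
	try_files $uri$webp_suffix $uri =404;
}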
Assuming you’ve got curl and Imagemagick‘s identify, you can see this in action:
curl -s https://danq.me/_q23u/2023/11/dynamic.png -H "Accept: image/webp" | identify -
(Will give you a WebP image)
curl -s https://danq.me/_q23u/2023/11/dynamic.png -H "Accept: image/png" | identify -
(Will give you a PNG image, even though the URL is the same)
9. Simplify, simplify, simplify
The single biggest impact you can have upon the performance of your WordPress pages is to make them less complex.
Writing my templates and posts so that they’re compatible with CapsulePress helps keep my code necessarily-simple. You don’t have to do that, of course, but you should be asking yourself:
Does my DOM need to cascade so deeply? Could I achieve the same with less?
Am I pre-emptively creating content, e.g. adding a hidden <dialog> directly to the markup in anticipation that it might be triggered later using JavaScript, rather than having that JavaScript create the element with document.createElement when it’s actually needed?
Have I created unnecessarily-long chains of CSS selectors11
when what I really want is a simple class name, or perhaps even a semantic element name?
10. Add a Service Worker
A service worker isn’t magic. In particular, it can’t help you with those new visitors hitting your site for the first time12.
But a suitable service worker can do a few things that can help with performance. In particular, you might consider the following (sketched after this list):
Precaching assets that you anticipate visitors are likely to need (e.g. if you use different stylesheets for the homepage and other pages, you can preload both so no matter where a user lands they’ve already got the CSS they’ll need for the entire site).
Preloading popular pages like the homepage and recent articles, allowing them to load quickly.
Caching fallback pages – and other resources as-they’re-accessed – to support a full experience for users even if they (or your site!) disconnect from the Internet (or even embedding “save for offline” functionality!).
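To give a flavour, here’s a minimal sketch of such a worker (the URLs are placeholders, and a real worker wants cache versioning and smarter strategies, like those the book below describes):

// service-worker.js – a sketch, not a production worker.
const CACHE = 'precache-v1';
const ASSETS = ['/', '/css/homepage.css', '/css/main.css', '/offline/'];

// Precache likely-needed assets and a fallback page at install time.
self.addEventListener('install', (event) => {
	event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

// Serve from cache where possible; fall back to the network; and if both
// fail (e.g. the visitor is offline), show the cached fallback page.
self.addEventListener('fetch', (event) => {
	event.respondWith(
		caches.match(event.request).then((cached) =>
			cached || fetch(event.request).catch(() => caches.match('/offline/'))
		)
	);
});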
Chapters 7 and 8 of Going Offline by Jeremy Keith are
especially good for explaining how this can be achieved, and it’s all much easier than everything else I just described.
Anything else?
Did I miss anything? If you’ve got a tip about ramping up WordPress performance that isn’t one of the “typical seven” – probably because it’s too hard to be worthwhile for most people –
I’d love to hear it!
Footnotes
1 You’ll sometimes see guides that suggest that using a CDN is to be recommended specifically because it splits your assets among multiple domains/subdomains, which mitigates browsers’ limitation on the
number of files they can download simultaneously. This is terrible advice, because such limitations essentially don’t exist any more, but DNS lookups and TLS handshakes still have a bandwidth and computational cost. There are good
things about CDNs, sometimes, but this has not been one of them for some time now.
2 I’m not sure why guides keep stressing the importance of minifying code,
because by the time you’re compressing them too it’s almost pointless. I guess it’s helpful if your compression fails?
3 “Use a faster server” is a “just throw money/the environment at it” solution. I’d like
to think we can do better.
4 For my personal blog, I choose to prioritise user experience, privacy, accessibility,
resilience, and standards compliance above almost everything else.
5 If you prefer to keep your backstab code separate, you can put it in a custom plugin,
but you might find that you have to name it something late in the alphabet – I’ve previously used names like zzz-danq-anti-plugin-hacks – to ensure that they load
after the plugins whose functionality you intend to unhook: broadly-speaking, WordPress loads plugins in alphabetical order.
6 I’ve assumed you’re using a classic, not block, theme. If you’re using a block theme,
you get a whole different set of performance challenges to think about. Don’t get me wrong: I love block themes and think they’re a great way to put more people in control of their
site’s design! But if you’re at the point where you’re comfortable digging this deep into your site’s PHP code,
you probably don’t need that feature anyway, right?
7 WordPress is really good at serving functionally-duplicate content, so search
engines appreciate it if you declare a proper canonical URL.
8 Before you choose to block all third-party JavaScript, you might have to
whitelist Google Analytics if you’re the kind of person who doesn’t mind selling their visitor data to the world’s biggest harvester of personal information in exchange for some
pretty graphs. I’m not that kind of person.
10 I’ve experimented with mounting a ramdisk and storing the WP Super Cache directory
there, but it didn’t make a huge difference, probably because my files are so small that the parse/render time on the browser side dominates the total cascade, and they’re already
being served from an SSD. I imagine in my case memcached would provide similarly-small benefits.
11 I really love the power of CSS preprocessors like Sass, but they do make it deceptively easy to create many more – and longer – selectors
than you intended in your final compiled stylesheet.
12 Tools like Lighthouse usually simulate first-time visitors, which can be a little unfair to sites with great performance for established visitors. But everybody is a first-time visitor at least once (and probably more times, as caches expire or are cleared), so it’s still a metric you should consider.
Set in the early-to-mid-1990s world in which the BBS is still alive and kicking, and the Internet’s gaining traction but still
lacks the “killer app” that will someday be the Web (which is still new and not widely-available), the story follows a handful of teenagers trying to find their place in the world.
Meeting one another in the 90s explosion of cyberspace, they find online communities that provide connections that they’re unable to make out in meatspace.
So yeah: the whole thing feels like a trip back into the naivety of the online world of the last millennium, where small, disparate (and often local) communities flourished and
early netiquette found its feet. Reading Incredible Doom provides the same kind of nostalgia as, say, an afternoon spent on textfiles.com. But
it’s got more than that, too.
It touches on experiences of 90s cyberspace that, for many of us, were very definitely real. And while my online “scene” at around the time that the story is set might have been
different from that of the protagonists, there’s enough of an overlap that it felt startlingly real and believable. The online world in which I – like the characters in the story – hung out occupied a strange limbo-space: both anonymous and separate from the real world but also interpersonal and authentic; a frontier in which we were still working out the rules but within which we still found common bonds and ideals.
Anyway, this is all a long-winded way of saying that Incredible Doom is a lot of fun and if it sounds like your cup of tea, you should read it.
Also: shortly after putting the second volume down, I ended up updating my Geek Code for the first time in… ooh, well over a decade. The standards have moved on a little (not entirely
in a good way, I feel; also they’ve diverged somewhat), but here’s my attempt:
----- BEGIN GEEK CODE VERSION 6.0 -----
GCS^$/SS^/FS^>AT A++ B+:+:_:+:_ C-(--) D:+ CM+++ MW+++>++
ULD++ MC+ LRu+>++/js+/php+/sql+/bash/go/j/P/py-/!vb PGP++
G:Dan-Q E H+ PS++ PE++ TBG/FF+/RM+ RPG++ BK+>++ K!D/X+ R@ he/him!
----- END GEEK CODE VERSION 6.0 -----
Footnotes
1 I was amazed to discover that I could still remember most of my Geek Code
syntax and only had to look up a few components to refresh my memory.
One of my favourite parts of my former role at
the Bodleian Libraries was getting to work on exhibitions. Not just because it was varied and interesting work, but because it let me get
up-close to remarkable artifacts that
most people never even get the chance to see.
A personal favourite of mine is the Herculaneum Papyri. These charred scrolls were part of a private library near Pompeii that was buried by the eruption
of Mount Vesuvius in 79 CE. Rediscovered from 1752 onwards, these ~1,800 scrolls were distributed to academic institutions around the world, with
the majority residing in Naples’ Biblioteca Nazionale Vittorio Emanuele III.
As you might expect of ancient scrolls that got buried, baked, and then left to rot, they’re pretty fragile. That didn’t stop Victorian-era researchers trying a variety of techniques to
gently unroll them and read what was inside.
Like many others, what I love about the Herculaneum Papyri is the air of mystery. Each could be anything from a lost religious text to, I don’t know, somebody’s to-do list (“buy milk, arrange for annual service of chariot, don’t forget to renew
volcano insurance…”).1
In recent years, we’ve tried “virtually unrolling” the scrolls using a variety of related technologies. And – slowly – we’re getting there.
So imagine my delight when this week, for the first time ever, a
complete word was extracted from one of the carbonised, still-rolled-up scrolls from Herculaneum. Something that would have seemed inconceivable to the historians who first
discovered and catalogued the scrolls is now possible, thanks to their careful conservation over the years along with the steady advance of technology.
Anyway, I thought that was exciting news so I wanted to share.
I’m probably not going to get you a Christmas present. You probably shouldn’t get me one either.
If you’re one of my kids and you’ve decided that maybe my blog isn’t just “boring grown-up stuff” and have come by, then you’re one of the exceptions. Lucky you.
Children get Christmas gifts from me. But if you’re an adult, all you’re likely to get from me is a hug, a glass of wine, and more food than you can possibly eat in a single
sitting.
I’ve come to the conclusion – much later than my mother and my sisters, who were clearly ahead of the curve – that Christmas presents are for kids.
Maybe, once, Christmas presents were for adults too, but by now the Internet has broken gift-giving to the extent that it’s almost certainly preferable for me and the adults in my life if they just, y’know, order the thing they want, rather than hoping that I’ll pick it out for them. Especially as so many of us are at a point where we already have a plethora of “stuff”, and don’t want to add to it unnecessarily at a time of year when, frankly, we’ve got better things to spend our time and money on.
Birthdays are still open season, because they aren’t hampered by the immediate expectation of reciprocity that Christmas carries. And I reserve the right to buy groups of (or
containing) adults gifts at Christmas. But individual adults aren’t getting one this year, and they certainly shouldn’t feel like they need to get me anything either.1
I don’t know to what extent, if at all, Ruth and JTA will be following me in this idea, so
if you’re somebody who might have expected a gift from or wanted to give a gift to one of them… you’re on your own; you work it out!
Here’s to a Merry Christmas full of presents for children, only!
Footnotes
1 If you’ve already bought me a gift for Christmas this year… firstly, that’s way
too organised: you know it’s only October, right? And secondly: my birthday’s only a couple of weeks later…
In the parallel universe of last year’s Weird: The Al Yankovic Story, Dr. Demento encourages a young Al Yankovic (Daniel Radcliffe) to move away from song parodies and start writing
original songs of his own. During an LSD trip, Al writes “Eat It,” a 100% original song that’s definitely not based on any other song, which quickly becomes “the biggest hit by
anybody, ever.”
Later, Weird Al’s enraged to learn from his manager that former Jackson 5 frontman Michael Jackson turned the tables on him, changing the words of “Eat It” to make his own parody,
“Beat It.”
This got me thinking: what if every Weird Al song was the original, and every other artist was covering his songs instead? With recent advances in A.I. voice cloning, I realized
that I could bring this monstrous alternate reality to life.
This was a terrible idea and I regret everything.
…
Everything that is wrong with, and everything that is right with, AI voice cloning, brought together in one place. Hearing
simulations of artists like Michael Jackson, Madonna, and Kurt Cobain singing Weird Al’s versions of their songs is… strange and unsettling.
Some of them are pretty convincing, which is a useful and accessible reminder about how powerful these tools are becoming. An under-reported story from a few years back identified what might be
the first recorded case of criminals using AI-based voice spoofing as part of a telephone scam, and since then the technology
needed to enact such fraud has only become more widely-available. While this weirder-than-Weird-Al project is first and foremost funny, for many it foreshadows darker things.