Somewhat-Effective Spam Filters

I’ve tried a variety of unusual strategies to combat email spam over the years.

Here are some of them (each rated in terms of the geekiness of its implementation and its efficacy), in case you’d like to try any yourself. They’re all still in use in some form or another:

Spam filters

Geekiness: 1/10
Efficacy: 5/10

A colander filters spam email out of a stream of emails.

Your email provider or your email software probably provides some spam filters, and they’re probably pretty good. I use Proton’s and, when I’m at my desk, Thunderbird’s. Double-bagging your spam filter only slightly reduces the amount of spam that gets through, but it increases your false-positive rate, so some non-spam gets mis-filed.

A particular problem is people who email me for help after changing their name on FreeDeedPoll.org.uk, probably because they’re not only “new”, unsolicited contacts to me but also because, by definition, many of them have strange and unusual names (which is why they’re emailing me for help in the first place).

Frankly, spam filters are probably enough for many people. Spam filtering is in general much better today than it was a decade or two ago. But skim the other suggestions in case they’re of interest to you.

Unique email addresses

Geekiness: 3/10
Efficacy: 8/10

If you give a different email address to every service you deal with, then if one of them misuses it (starts spamming you, sells your data, gets hacked, whatever), you can just block that one address. All the addresses come to the same inbox, for your convenience. Using a catch-all means that you can come up with addresses on-the-fly: you can even fill a paper form with a unique email address associated with the company whose form it is.

On many email providers, including the ever-popular GMail, you can do this using plus-sign notation. But if you want to take your unique addresses to the next level and you have your own domain name (which you should), then you can simply redirect all email addresses on that domain to the same inbox. If Bob’s Building Supplies wants your email address, give them bobs@yourname.com, which works even if Bob’s website erroneously doesn’t accept email addresses with plus signs in them.

This method actually works for catching people misusing your details. On one occasion, I helped a band identify that their mailing list had been hacked. On another, I caught a dodgy entrepreneur who used the email address I gave to one of his businesses, without my consent, to send marketing information about a different one of his businesses. As a bonus, you can set up your filtering/tagging/whatever based on the incoming address, rather than the sender, for the most accurate finding, prioritisation, and blocking.

Emails to multiple email addresses reach the same inbox. Spam emails are blocked based on the addresses they're sent to.

Also, it makes it easy to have multiple accounts with any of those services that try to use the uniqueness of email addresses to prevent you from doing so. That’s great if, like me, you want to be in each of three different Facebook groups but don’t want to give Facebook any information (not even that you exist at the intersection of those groups).

Signed unique email addresses

Geekiness: 10/10
Efficacy: 2/10

Unique email addresses introduce two new issues: (1) if an attacker discovers that your Dreamwidth account has the email address dreamwidth@yourname.com, they can probably guess your LinkedIn email, and (2) attackers will shotgun “likely” addresses at your domain anyway, e.g. admin@yourname.com, management@yourname.com, etc., which can mean that when something gets through you get a dozen copies of it before your spam filter sits up and takes notice.

What if you could assign unique email addresses to companies but append a signature to each that verified that it was legitimate? I came up with a way to do this and implemented it as a spam filter, and made a mobile-friendly webapp to help generate the necessary signatures. Here’s what it looked like:

  1. The domain directs all emails at that domain to the same inbox.
  2. If the email address is on a pre-established list of valid addresses, that’s fine.
  3. Otherwise, the email address must match the form of:
    • A string (the company name), followed by
    • A hyphen, followed by
    • A hash generated using the mechanism described below, then
    • The @-sign and domain name as usual

The hashing algorithm is as follows: concatenate a secret password that only you know, a colon, and the “company name” string; run it through SHA1; and truncate the result to the first eight characters. So if my password were swordfish1 and I were generating an address for Facebook, I’d go:

  1. SHA1 ( swordfish1 : facebook) [ 0 ... 8 ] = 977046ce
  2. Therefore, the email address is facebook-977046ce@myname.com
  3. If any character of that email address is modified, it becomes invalid, preventing an attacker from deriving your other email addresses from a single point (and making it hard to derive them given multiple points)
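
That’s simple enough to express in a few lines of PHP. Here’s a sketch of both halves (generating an address and validating an incoming one), written from the description above rather than lifted from my original implementation; the allowed “company name” characters are an assumption:

<?php
// Sketch only: generate and validate signed unique addresses as described above.
function signed_address(string $company, string $secret, string $domain): string {
    $hash = substr(sha1("$secret:$company"), 0, 8);    // secret + ":" + company name, first 8 hex chars
    return "$company-$hash@$domain";
}

function is_valid_address(string $address, string $secret, string $domain): bool {
    $pattern = '/^([a-z0-9]+)-([0-9a-f]{8})@' . preg_quote($domain, '/') . '$/';
    if (!preg_match($pattern, $address, $matches)) {
        return false;                                   // not even the right shape
    }
    return hash_equals(substr(sha1("$secret:{$matches[1]}"), 0, 8), $matches[2]);
}

echo signed_address('facebook', 'swordfish1', 'myname.com'), "\n";                       // the address from the worked example
var_dump(is_valid_address('facebook-00000000@myname.com', 'swordfish1', 'myname.com'));  // a tampered address fails: bool(false)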

I implemented the code, but it soon became apparent that this was overkill and I was targeting the wrong behaviours. It was a fun exercise, but ultimately pointless. This is the one method on this page that I don’t still use.

Honeypots

Geekiness: 8/10
Efficacy: ?/10

Emails to multiple email addresses reach an inbox, but senders who reach a "honeypot" inbox are blocked from reaching the real inbox.

A honeypot is a “trap” email address. Anybody who emails it gets aggressively marked as a spammer to help ensure that any other messages they send – even to valid email addresses – also get marked as spam.

I litter honeypots all over the place (you might find hidden email addresses on my web pages, along with text telling humans not to use them), but my biggest source of honeypots is formerly-valid unique addresses, or “guessed” catch-all addresses, which already attract spam or are otherwise compromised!
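
The filtering logic needn’t be complicated. Here’s a sketch of the general idea (made-up addresses, and not the code my filters actually run):

<?php
// Sketch of the honeypot idea: emailing a honeypot address condemns the sender,
// and condemned senders are treated as spammers everywhere else, too.
$honeypots = ['definitely-do-not-email-this@yourname.com', 'management@yourname.com'];
$blocklist = is_file('blocklist.txt') ? file('blocklist.txt', FILE_IGNORE_NEW_LINES) : [];

function is_spam(string $sender, array $recipients, array $honeypots, array &$blocklist): bool {
    if (array_intersect($recipients, $honeypots)) {
        $blocklist[] = $sender;                         // remember them forever...
        file_put_contents('blocklist.txt', implode("\n", array_unique($blocklist)));
        return true;
    }
    return in_array($sender, $blocklist, true);         // ...even when they email valid addresses
}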

I couldn’t tell you how effective it is without looking at my spam filter’s logs, and since the most-effective of my filters is now outsourced to Proton, I don’t have easy access to that. But it certainly feels very satisfying on the occasions that I get to add a new address to the honeypot list.

Instant throwaways

Geekiness: 5/10
Efficacy: 6/10

OpenTrashmail is an excellent throwaway email server that you can deploy in seconds with Docker, point some MX records at, and be all set! A throwaway email server gives you an infinite number of unique email addresses, like other solutions described above, but with the benefit that you never have to see what gets sent to them.

Emails are delivered to an inbox and to a trash can, depending on the address they're sent to. The inbox subscribes to the trash can using RSS.

If you offer me a coupon in exchange for my email address, it’s a throwaway email address I’ll give you. I’ll make one up on the spot with one of my (several) trashmail domains at the end of it, like justgivemethedamncoupon@danstrashmailserver.com. I can just type that email address into OpenTrashmail to see what you sent me, but then I’ll never check it again so you can spam it to your heart’s content.

As a bonus, OpenTrashmail provides RSS feeds of inboxes, so I can subscribe to any email-based service using my feed reader, and then unsubscribe just as easily (without even having to tell the owner).

Summary

With the exception of whatever filters your provider or software comes with, most of these options aren’t suitable for regular folks. But you’re only a domain name (assuming you don’t have one already) away from being able to give unique email addresses to everybody you deal with, and that’s genuinely a game-changer all by itself and well worth considering, in my opinion.


A completely plaintext WordPress Theme

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This is a silly idea. But it works. I saw Dan Q wondering about plaintext WordPress themes – so I made one.

This is what this blog looks like using it:

Screenshot showing my blog rendered just as text.

I clearly nerdsniped Terence at least a little when I asked whether a blog necessarily had to be HTML, because he went on to implement a WordPress theme that delivers content entirely in plain text.

Naturally, I’ve also shared his accomplishment on my own text/plain blog (which uses a much simpler CMS based on static files).


Does a blog have to be HTML?

Terence Eden wrote about his recent experience of IndieWebCamp Brighton, in which he mentioned that somebody – probably Jeremy Keith – had said, presumably to provoke discussion:

A blog post doesn’t need a title.

Terence disagrees, saying:

In a literal sense, he was wrong. The HTML specification makes it clear that the <title> element is mandatory. All documents have title.

But I think that’s an overreach. After all, where is it written that a blog must be presented in HTML?

Non-HTML blogs

There are plenty of counter-examples already in existence, of course:

But perhaps we can do better…

A totally text/plain blog

We’ve looked at plain text, which as a format clearly does not have to have a title. Let’s go one step further and implement it. What we’d need is:

  1. A webserver configured to deliver plain text files by preference, e.g. by adding directives like index index.txt; (for Nginx).5
  2. An index page listing posts by date and URL. Most browsers won’t render these as “links” so users will have to copy-paste or re-type them, so let’s keep them short,
  3. Pages for each post at those URLs, presumably without any kind of “title” (just to prove a point), and
  4. An RSS feed: usually I use RSS as shorthand for all feed types, but this time I really do mean RSS and not e.g. Atom because RSS, strangely, doesn’t require that an <item> has a <title>!

I’ve implemented it! It’s at textplain.blog.

textplain.blog in Lynx
Unlike other sites, I didn’t need to test textplain.blog in Lynx to know it’d work well. But I did anyway.

In the end I decided it’d benefit from being automated as sort-of a basic flat-file CMS, so I wrote it in PHP. All requests are routed by the webserver to the program, which determines whether they’re a request for the homepage, the RSS feed, or a valid individual post, and responds accordingly.
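
A stripped-down sketch of that kind of routing looks something like this (the real, open-sourced code linked below differs; the file layout here is illustrative):

<?php
// Sketch of a flat-file text/plain router. Assumes posts live in a posts/
// directory as .txt files named by their (deliberately short) slug.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if ($path === '/' || $path === '/index.txt') {
    header('Content-Type: text/plain; charset=utf-8');
    readfile('index.txt');                              // the list of posts by date and URL
} elseif ($path === '/feed.rss') {
    header('Content-Type: application/rss+xml');
    readfile('feed.rss');                               // pre-generated RSS, for brevity
} elseif (preg_match('#^/[0-9a-z-]+$#', $path) && is_file('posts' . $path . '.txt')) {
    header('Content-Type: text/plain; charset=utf-8');
    readfile('posts' . $path . '.txt');                 // the post itself: no title required
} else {
    http_response_code(404);
    header('Content-Type: text/plain; charset=utf-8');
    echo "Not found.\n";
}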

It annoys me that feed discovery doesn’t work nicely when using a Link: header, at least not in any reader I tried. But apart from that, it seems pretty solid, despite its limitations. Is this, perhaps, an argument for my .well-known/feeds proposal?

Anyway, I’ve open-sourced the entire thing in case it’s of any use to anybody at all, which is admittedly unlikely! Here’s the code.

Footnotes

1 no-ht.ml technically does use HTML, but the same content could easily be delivered with an appropriate non-HTML MIME type if he’d wanted.

2 Again, I suppose this technically required HTML, even if what was delivered was an empty file!

3 Gemtext is basically Markdown, and doesn’t require a title.

4 Plain text obviously doesn’t require a title.

5 There’s no requirement that default files served by webservers are HTML, although it’s highly-unusual for that not to be the case.

Nerd Sniped Traveller

I think I might be more-prone to nerd sniping when I’m travelling.

Last week, a coworker pointed out an unusually-large chimney on the back of a bus depot and I lost sleep poring over 50s photos of Dutch building sites to try to work out if it was original.

1950s black and white photo showing a newly-opened "GVB" bus depot in West Amsterdam.

When a boat tour guide told me that the Netherlands used to have a window tax, I fell down a rabbit hole of how it influenced local architecture and why the influence was different in the UK.

Why does travelling make me more-prone to nerd sniping? Maybe I should see if there’s any likely psychological effect that might cause that…


5 Cool Apps for your Unraid NAS

I’ve got a (now four-year-old) Unraid NAS called Fox and I’m a huge fan. I particularly love the fact that Unraid can work not only as a NAS, but also as a fully-fledged Docker appliance, enabling me to easily install and maintain all manner of applications.

A cube-shaped black computer sits next to a battery pack on a laminated floor. A sign has been left atop it, reading "Caution: Generator connected to this installation."
There isn’t really a generator attached to Fox, just a UPS battery backup. The sign was liberated from our shonky home electrical system.

I was chatting this week to a colleague who was considering getting a similar setup, and he seemed to be taking notes of things he might like to install, once he’s got one. So I figured I’d round up five of my favourite things to install on an Unraid NAS that:

  1. Don’t require any third-party accounts (low dependencies),
  2. Don’t need any kind of high-powered hardware (low specs), and
  3. Provide value with very little set up (low learning curve).
Dan, his finger to his lips and his laptop on his knees, makes a "shush" action. A coworker can be seen working behind him.
It’d have been cooler if I’d secretly written this blog post while sitting alongside said colleague (shh!). But sadly it had to wait until I was home.

Here we go:

Syncthing

I’ve been raving about Syncthing for years. If I had an “everyday carry” list of applications, it’d be high on that list.

Syncthing screenshot for computer Rebel, sharing with Fox, Idiophone, Lemmy and Maxine.
Syncthing’s just an awesome piece of set-and-forget software that facilitates file synchronisation between all of your devices and can also form part of a backup strategy.

Here’s the skinny: you install Syncthing on several devices, then give each the identification key of another to pair them. Now you can add folders on each and “share” them with the others, and the two are kept in-sync. There’s lots of options for power users, but just as a starting point you can use this to:

  • Manage the photos on your phone and push copies to your desktop whenever you’re home (like your favourite cloud photo sync service, but selfhosted).
  • Keep your Obsidian notes in-sync between all your devices (normally costs $4/month).1
  • Get a copy of the documents from all your devices onto your NAS, for backup purposes (note that sync’ing alone, even with versioning enabled, is not a good backup: the idea is that you run an actual backup from your NAS!).

Huginn

You know IFTTT? Zapier? Services that help you to “automate” things based on inputs and outputs. Huginn’s like that, but selfhosted. Also: more-powerful.

Screenshot showing Huginn workflows.
When we first started looking for a dog to adopt (y’know, before we got this derper), I set up Huginn watchers to monitor the websites of several rescue centres, filter them by some of our criteria, and push the results to us in real-time on Slack, giving us an edge over other prospective puppy-parents.

The learning curve is steeper than anything else on this list, and I almost didn’t include it for that reason alone. But once you’ve learned your way around its idiosyncrasies and dipped your toe into the more-advanced Javascript-powered magic it can do, you really begin to unlock its potential.

It couples well with Home Assistant, if that’s your jam. But even without it, you can find yourself automating things you never expected to.

FreshRSS

I’ve written a lot about how and why FreshRSS continues to be my favourite RSS reader. But you know what’s even better than an awesome RSS reader? An awesome selfhosted RSS reader!

FreshRSS screenshot.
Yes, I know I have a lot of “unread” items. That’s fine, and I can tell you why.

Many of these suggested apps benefit from you exposing them to the open Web rather than just running them on your LAN, and an RSS reader is probably the best example (you want to read your news feeds when you’re out and about, right?). What you need for that is a reverse proxy, and there are lots of guides to doing it super-easily, even if you’re not on a static IP address.2 Alternatively you can just VPN in to your home: your router might be able to arrange this, or else Unraid can do it for you!

Open Trashmail

You know how sometimes you need to give somebody your email address but you don’t actually want to? Like: sure, I’d like you to email me a verification code for this download, but I don’t trust you not to spam me later! What you need is a disposable email address.3

Open Trashmail screenshot showing a subscription to Thanks for subscribing to Dan Q's Spam-Of-The-Hour List!
How do you feel about having infinite email addresses that you can make up on-demand (without even having access to a computer), subscribe to by RSS, and never have to see unless you specifically want to?

You just need to install Open Trashmail, point the MX records of a few domain names or subdomains (you’ve got some spare domain names lying around, right? If not, they’re pretty cheap…) at it, and it will now accept email to any address on those domains. You can make up addresses off the top of your head, even away from an Internet connection when using a paper-based form, and they work. You can check them later if you want to… or ignore them forever.

Couple it with an RSS reader, or Huginn, or Slack, and you can get a notification or take some action when an email arrives!

  • Need to give that escape room your email address to get a copy of your “team photo”? Give them a throwaway, pick up the picture when you get home, and then forget you ever gave it to them.
  • Company give you a freebie on your birthday if you sign up their mailing list? Sign up 366 times with them and write a Huginn workflow that puts “today’s” promo code into your Obsidian notetaking app (Sync’d over Syncthing) but filters out everything else.
  • Suspect some organisation is selling your email address on to third parties? Give them a unique email address that you only give to them and catch them in a honeypot.

YOURLS

Finally: a URL shortener. The Internet’s got lots of them, but they’re all at the mercy of somebody else (potentially somebody in a country that might not be very-friendly with yours…).

YOURLS screenshot (Your Own URL Shortener).
It isn’t pretty, but… it doesn’t need to be! Nobody actually sees the admin interface except you anyway.

Plus, it’s just kinda cool to be able to brand your shortlinks with your own name, right? If you follow only one link from this post, let it be to watch this video that helps explain why this is important: danq.link/url-shortener-highlights.

I run many, many other Docker containers and virtual machines on my NAS. These five aren’t even the “top five” that I use… they’re just five that are great starters because they’re easy and pack a lot of joy into their learning curve.

And if your NAS can’t do all the above… consider Unraid for your next NAS!

Footnotes

1 I wrote the beginnings of this post on my phone while in the Channel Tunnel and then carried on using my desktop computer once I was home. Sync is magic.

2 I can’t share or recommend one reverse proxy guide in particular: I set my own up from scratch because I can configure Nginx in my sleep. But I did a quick search and found several that all look good, so I imagine you can do the same. You don’t have to do it on day one, though!

3 Obviously there are lots of approaches to on-demand disposable email addresses, including the venerable “plus sign in a GMail address” trick, but Open Trashmail is just… better for many cases.


Oxford’s Area Code at OGN 57

Ever wondered why Oxford’s area code is 01865? The story is more-complicated than you’d think.

Modern desk telephone with an 01865 number (masked) and a keypad with letters on the buttons, 2=ABC through 9=WXYZ, with superimposed question marks between the two.
As a child, I was told that city STD codes were usually associated with the letters that appear on some telephones… but that wouldn’t make any sense for Oxford’s code!

I’ll share the story on my blog, of course. But before then, I’ll be telling it from the stage of the Jericho Tavern at 21:15 on Wednesday 17 April as my third(?) appearance at Oxford Geek Nights! So if you’re interested in learning about some of the quirks of UK telephone numbering history, I can guarantee that this party’s the only one to be at that Wednesday night!

Not your jam? That’s okay: there’s plenty of more-talented people than I who’ll be speaking, about subjects as diverse as quantum computing with QATboxen, bringing your D&D experience to stakeholder management (!), video games without screens, learnings from the Horizon scandal, and whatever Freyja Domville means by The Unreasonable Effectiveness of the Scientific Method (but I’m seriously excited by that title).

Oxford Geek Night logo

Anyway: I hope you’ll be coming along to Oxford Geek Nights 57 next month, if not to hear me witter on about the fossils in our telecommunications networks then to enjoy a beer and hear from the amazing speakers I’ll be sharing the stage with. The event’s always a blast, and I’m looking forward to seeing you there!


Length Extension Attack Demonstration

Prefer to watch/listen than read? There’s a vloggy/video version of this post in which I explain all the key concepts and demonstrate an SHA-1 length extension attack against an imaginary site.

I understood the concept of a length extension attack, and when/how I needed to mitigate one, for a long time before I truly understood why they worked. It took until work provided me an opportunity to play with one in practice (plus reading Ron Bowes’ excellent article on the subject) before I really grokked it.

Would you like to learn? I’ve put together a practical demo that you can try for yourself!

Screenshot of vulnerable site with legitimate "download" link hovered.
For the demonstration, I’ve built a skeletal stock photography site whose download links are protected by a hash of the link parameters, salted using a secret string stored securely on the server. Maybe they let authorised people hotlink the images or something.

You can check out the code and run it using the instructions in the repository if you’d like to play along.

Using hashes as message signatures

The site “Images R Us” will let you download images you’ve purchased, but not ones you haven’t. Links to the images are protected by a SHA-1 hash1, generated as follows:

Diagram showing SHA1 being fed an unknown secret key and the URL params "download=free" and outputting a hash as a "download key".
The nature of hashing algorithms like SHA-1 means that even a small modification to the inputs, e.g. changing one character in the word “free”, results in a completely different output hash, which can be detected as invalid.

When a “download” link is generated for a legitimate user, the algorithm produces a hash which is appended to the link. When the download link is clicked, the same process is followed and the calculated hash compared to the provided hash. If they differ, the input must have been tampered with and the request is rejected.
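
In simplified form (a sketch; the real demo code is in the repository linked above, and the secret below is made up), the scheme looks something like this:

<?php
// Sketch of the vulnerable signing scheme: a bare hash over secret + parameters.
define('SECRET_KEY', 'changeme-16bytes');               // known only to the server

function download_key(string $params): string {
    return sha1(SECRET_KEY . urldecode($params));
}

// Generating a legitimate link:
$link = '/?download=free&key=' . download_key('download=free');

// Validating a request: strip the &key=... suffix and re-hash everything before it.
$params    = preg_replace('/&key=[0-9a-f]*$/', '', $_SERVER['QUERY_STRING']);
$authentic = (($_GET['key'] ?? '') === download_key($params));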

Without knowing the secret key – stored only on the server – it’s not possible for an attacker to generate a valid hash for URL parameters of the attacker’s choice. Or is it?

Changing download=free to download=valuable invalidates the hash, and the request is denied.

Actually, it is possible for an attacker to manipulate the parameters. To understand how, you must first understand a little about how SHA-1 and its siblings actually work:

SHA-1’s inner workings

  1. The message to be hashed (SECRET_KEY + URL_PARAMS) is cut into blocks of a fixed size.2
  2. The final block is padded to bring it up to the full size.3
  3. A series of operations are applied to the first block: the inputs to those operations are (a) the contents of the block itself, including any padding, and (b) an initialisation vector defined by the algorithm.4
  4. The same series of operations are applied to each subsequent block, but the inputs are (a) the contents of the block itself, as before, and (b) the output of the previous block. Each block is hashed, and the hash forms part of the input for the next.
  5. The output of running the operations on the final block is the output of the algorithm, i.e. the hash.
Diagram showing message cut into blocks, the last block padded, and then each block being fed into a function along with the output of the function for the previous block. The first function, not having a previous block, receives the IV as its secondary input. The final function outputs the hash.
SHA-1 operates on a single block at a time, but the output of processing each block acts as part of the input of the one that comes after it. Like a daisy chain, but with cryptography.

In SHA-1, blocks are 512 bits long and the padding is a 1, followed by as many 0s as are necessary, leaving 64 bits at the end in which to specify how many bits of the message were actually data.
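
That padding rule is easy enough to express in code. Here’s a sketch of it in PHP; it’s where the run of %80%00…%e8 characters that you’ll see in the crafted URL later on comes from:

<?php
// Sketch of SHA-1's padding rule: a 0x80 byte, then zero bytes until the length
// is 8 bytes short of a multiple of 64, then the message length in bits as a
// 64-bit big-endian integer.
function sha1_padding(int $messageLengthInBytes): string {
    $zeroes = (56 - ($messageLengthInBytes + 1) % 64 + 64) % 64;
    return "\x80"
         . str_repeat("\x00", $zeroes)
         . pack('J', $messageLengthInBytes * 8);         // 'J' = unsigned 64-bit big-endian
}

// A 16-byte secret plus "download=free" (13 bytes) is a 29-byte message,
// and 29 × 8 = 232 bits = 0xE8: hence the trailing %e8 in the attack URL.
echo bin2hex(sha1_padding(29)), "\n";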

Padding the final block

Looking at the final block in a given message, it’s apparent that there are two pieces of data that could produce exactly the same output for a given function:

  1. The original data (which gets padded by the algorithm to bring it up to 64 bytes), and
  2. A modified version of the data, which has been padded in advance with the same bytes the algorithm would add; this must then be followed by an additional block
Illustration showing two blocks: one short and padded, one pre-padded with the same characters, receiving the same IV and producing the same output.
A “short” block with automatically-added padding produces the same output as a full-size block which has been pre-populated with the same data as the padding would add.5
In the case where we insert our own “fake” padding data, we can provide more message data after the padding and predict the overall hash. We can do this because the output of the first block will be the same as the final, valid hash we already saw. That known value becomes one of the two inputs into the function for the block that follows it (the contents of that block will be the other input). Without knowing exactly what’s contained in the message – we don’t know the “secret key” used to salt it – we’re still able to add some padding to the end of the message, followed by any data we like, and generate a valid hash.

Therefore, if we can manipulate the input message, and we know its length, we can append to it. Bear that in mind as we move on to the other half of what makes this attack possible.

Parameter overrides

“Images R Us” is implemented in PHP. In common with most server-side scripting languages, when PHP sees a HTTP query string full of key/value pairs, if a key is repeated then it overrides any earlier iterations of the same key.

Illustration showing variables in a query string: "?one=foo&two=bar&one=baz". When parsed by PHP, the second value of "one" ("baz") only is retained.
Many online sources say that this “last variable matters” behaviour is a fundamental part of HTTP, but it’s not: you can disprove it by examining $_SERVER['QUERY_STRING'] in PHP, where you’ll find the entire query string. You could even implement your own query string handler that instead makes the first instance of each key the canonical one, if you really wanted.6
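
You can see both behaviours side-by-side with a couple of throwaway lines of PHP (the filename is hypothetical; any PHP-served page will do):

<?php
// Save as query.php and request /query.php?one=foo&two=bar&one=baz
var_dump($_GET['one']);                  // string(3) "baz": the later value wins
var_dump($_SERVER['QUERY_STRING']);      // string(23) "one=foo&two=bar&one=baz": both values are still there
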
It’d be tempting to simply override the download=free parameter in the query string at “Images R Us”, e.g. making it download=free&download=valuable! But we can’t: not without breaking the hash, which is calculated based on the entire query string (minus the &key=... bit).

But with our new knowledge – that we can append to the input of SHA-1 first a padding string, then an extra block containing our payload (the variable we want to override and its new value), and then calculate a hash for this new block using the known output of the old final block as the IV – we’ve got everything we need to put the attack together.

Putting it all together

We have a legitimate link with the query string download=free&key=ee1cce71179386ecd1f3784144c55bc5d763afcc. This tells us that somewhere on the server, this is what’s happening:

Generation of the legitimate hash for the (unknown) secret key and the string download=free, with algorithmic padding shown.
I’ve drawn the secret key actual-size (and reflected this in the length at the bottom). In reality, you might not know this, and some trial-and-error might be necessary.7
If we pre-pad the string download=free with some special characters to replicate the padding that would otherwise be added to this final8 block, we can add a second block containing an overriding value of download, specifically &download=valuable. The first value of download=, which will be the word free followed by a stack of garbage padding characters, will be discarded.

And we can calculate the hash for this new block, and therefore the entire string, by using the known output from the previous block, like this:

The previous diagram, but with the padding character manually-added and a second block containing "&download=valuable". The hash is calculated using the known output from the first block as the IV to the function run over the new block, producing a new hash value.
The URL will, of course, be pretty hideous with all of those special characters – which will require percent-encoding – on the end of the word ‘free’.

Doing it for real

Of course, you’re not going to want to do all this by hand! But an understanding of why it works is important to being able to execute it properly. In the wild, exploitable implementations are rarely as tidy as this, and a solid comprehension of exactly what’s happening behind the scenes is far more-valuable than simply knowing which tool to run and what options to pass.

That said: you’ll want to find a tool you can run and know what options to pass to it! There are plenty of choices, but I’ve bundled one called hash_extender into my example, which will do the job pretty nicely:

$ docker exec hash_extender hash_extender \
    --format=sha1 \
    --data="download=free" \
    --secret=16 \
    --signature=ee1cce71179386ecd1f3784144c55bc5d763afcc \
    --append="&download=valuable" \
    --out-data-format=html
Type: sha1
Secret length: 16
New signature: 7b315dfdbebc98ebe696a5f62430070a1651631b
New string: download%3dfree%80%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%e8%26download%3dvaluable

I’m telling hash_extender:

  1. which algorithm to use (sha1), which can usually be derived from the hash length,
  2. the existing data (download=free), so it can determine the length,
  3. the length of the secret (16 bytes), which I’ve guessed but could brute-force,
  4. the existing, valid signature (ee1cce71179386ecd1f3784144c55bc5d763afcc),
  5. the data I’d like to append to the string (&download=valuable), and
  6. the format I’d like the output in: I find html the most-useful generally, but it’s got some encoding quirks that you need to be aware of!

hash_extender outputs the new signature, which we can put into the key=... parameter, and the new string that replaces download=free, including the necessary padding to push into the next block and your new payload that follows.

Unfortunately it does over-encode a little: it’s encoded all the& and = (as %26 and %3d respectively), which isn’t what we wanted, so you need to convert them back. But eventually you end up with the URL: http://localhost:8818/?download=free%80%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%00%e8&download=valuable&key=7b315dfdbebc98ebe696a5f62430070a1651631b.
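
If you’re scripting the whole thing, a quick str_replace is enough to undo the over-encoding (at least for this example; the string below is truncated for brevity):

<?php
// Convert hash_extender's over-encoded "&" and "=" back, leaving the padding
// bytes percent-encoded. Check first that no padding byte is itself 0x26 or 0x3d.
$newString = 'download%3dfree%80%00...%e8%26download%3dvaluable';   // truncated
echo str_replace(['%26', '%3d'], ['&', '='], $newString), "\n";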

Browser at the resulting URL, showing the "valuable" image (a pile of money).
Disclaimer: the image you get when you successfully exploit the test site might not actually be valuable.

And that’s how you can manipulate a hash-protected string without access to its salt (in some circumstances).

Mitigating the attack

The correct way to fix the problem is by using a HMAC in place of a simple hash signature. Instead of calling sha1( SECRET_KEY . urldecode( $params ) ), the code should call hash_hmac( 'sha1', urldecode( $params ), SECRET_KEY ). HMACs are theoretically-immune to length extension attacks, so long as the output of the hash function used is functionally-random9.

Ideally, it should also use hash_equals( $validDownloadKey, $_GET['key'] ) rather than ===, to mitigate the possibility of a timing attack. But that’s another story.
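
Put together, the fixed verification looks something like this (a sketch, reusing the illustrative names from earlier):

<?php
// Sketch of the mitigated check: an HMAC instead of a bare prefixed hash, and a
// constant-time comparison. SECRET_KEY is defined as in the earlier sketch.
function download_key(string $params): string {
    return hash_hmac('sha1', urldecode($params), SECRET_KEY);
}

$params    = preg_replace('/&key=[0-9a-f]*$/', '', $_SERVER['QUERY_STRING']);
$authentic = hash_equals(download_key($params), $_GET['key'] ?? '');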

Footnotes

1 This attack isn’t SHA1-specific: it works just as well on many other popular hashing algorithms too.

2 SHA-1’s blocks are 64 bytes long; other algorithms vary.

3 For SHA-1, the padding bits consist of a 1 followed by 0s, except that the final 8 bytes are a big-endian number representing the length of the message (in bits).

4 SHA-1’s IV is 67452301 EFCDAB89 98BADCFE 10325476 C3D2E1F0, which you’ll observe is little-endian counting from 0 to F, then back from F to 0, then alternating between counting from 3 to 0 and C to F. It’s considered good practice when developing a new cryptographic system to ensure that the hard-coded cryptographic primitives are simple, logical, independently-discoverable numbers like simple sequences and well-known mathematical constants. This helps to prove that the inventor isn’t “hiding” something in there, e.g. a mathematical weakness that depends on a specific primitive for which they alone (they hope!) have pre-calculated an exploit. If that sounds paranoid, it’s worth knowing that there’s plenty of evidence that various spy agencies have deliberately done this, at various points: consider the widespread exposure of the BULLRUN programme and its likely influence on Dual EC DRBG.

5 The padding characters I’ve used aren’t accurate, just representative. But there’s the right number of them!

6 You shouldn’t do this: you’ll cause yourself many headaches in the long run. But you could.

7 It’s also not always obvious which inputs are included in hash generation and how they’re manipulated: if you’re actually using this technique adversarially, be prepared to do a little experimentation.

8 In this example, the hash operates over a single block, but the exact same principle applies regardless of the number of blocks.

9 Imagining the implementation of a nontrivial hashing algorithm, the predictability of whose output makes its HMAC vulnerable to a length extension attack, is left as an exercise for the reader.


Gemini Squared

How did I never think of accessing Gemini (the protocol) on my Gemini (portable computer) before today?

Of course, I recently rehomed my Gemini so instead I had to access Gemini on my Cosmo (Gemini’s successor), which isn’t nearly as cool.1

Dan's recent article, "Gemini and Spartan without a browser", displayed over Gemini on the screen of a Planet Computers Cosmo palmtop.

Footnotes

1 Still pretty cool though. Reminds me of using Lynx on my Psion 5mx last millennium…


Gemini and Spartan without a browser

A particular joy of the Gemini and Spartan protocols – and the Markdown-like syntax of Gemtext – is their simplicity.

Screenshot showing this blog post as viewed over the Gemini protocol in the Lagrange browser
The best way to explore Geminispace is with a browser like Lagrange, of course.

Even without a browser, you can usually use everyday command-line tools that you might have installed already to access relatively human-readable content.

Here are a few different command-line options that should show you a copy of this blog post (made available via CapsulePress, of course):

Gemini

Gemini communicates over a TLS-encrypted channel (like HTTPS), so we need to use a tool that speaks the language. Luckily: unless you’re on Windows you’ve probably got one installed already1.

Using OpenSSL

This command takes the full gemini:// URL you’re looking for and the domain name it’s at. 1965 refers to the port number on which Gemini typically runs:

printf "gemini://danq.me/posts/gemini-without-a-browser\r\n" | \
  openssl s_client -ign_eof -connect danq.me:1965

Using GnuTLS

GnuTLS closes the connection when STDIN closes, so we use cat to keep it open. Note inclusion of --no-ca-verification to allow self-signed certificates (optionally add --tofu for trust-on-first-use support, per the spec).

{ printf "gemini://danq.me/posts/gemini-without-a-browser\r\n"; cat -; } | \
  gnutls-cli --no-ca-verification danq.me:1965

Using Ncat

Netcat reimplementation Ncat makes Gemini requests easy:

printf "gemini://danq.me/posts/gemini-without-a-browser\r\n" | \
  ncat --ssl danq.me 1965
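
Using PHP

Not strictly an everyday command-line tool, but if you’d rather script the request, it’s only a few lines of PHP (a rough sketch; peer verification is disabled because Gemini capsules typically use self-signed certificates):

<?php
// Open a TLS socket to the Gemini port (1965), send the URL followed by CRLF,
// then read the whole response back.
$context = stream_context_create(['ssl' => [
    'verify_peer'      => false,
    'verify_peer_name' => false,
]]);
$socket = stream_socket_client('tls://danq.me:1965', $errno, $errstr, 10,
                               STREAM_CLIENT_CONNECT, $context);
fwrite($socket, "gemini://danq.me/posts/gemini-without-a-browser\r\n");
echo stream_get_contents($socket);
fclose($socket);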

Spartan

Spartan is a little like “Gemini without TLS”, but it sports an even-more-lightweight request format which makes it especially easy to fudge requests2.

Using Telnet

Note the use of cat to keep the connection open long enough to get a response, as we did for Gemini over GnuTLS.

{ printf "danq.me /posts/gemini-without-a-browser 0\r\n"; cat -; } | \
  telnet danq.me 300

Using cURL

cURL supports the telnet protocol too, which means that it can be easily coerced into talking Spartan:

printf "danq.me /posts/gemini-without-a-browser 0\r\n" | \
  curl telnet://danq.me:300

Using Ncat/Netcat

Because TLS support isn’t needed, this also works perfectly well with Netcat – just substitute nc/netcat or whatever your platform calls it in place of ncat:

printf "danq.me /posts/gemini-without-a-browser 0\r\n" | \
  ncat danq.me 300

I hope these examples are useful to somebody debugging their capsule, someday.

Footnotes

1 You can still install one on Windows, of course; it’s just less-likely that your operating system came with such a command-line tool built-in

2 Note that the domain and path are separated in a Spartan request and followed by the size of the request payload body: zero in all of my examples


Incredible Doom

I just finished reading Incredible Doom volumes 1 and 2, by Matthew Bogart and Jesse Holden, and man… that was a heartwarming and nostalgic tale!

Softcover bound copies of volumes 1 and 2 of Incredible Doom, on a wooden surface.
Conveniently just-over-A5 sized, each of the two volumes is light enough to read in bed without uncomfortably clonking yourself in the face.

Set in the early-to-mid-1990s world in which the BBS is still alive and kicking, and the Internet’s gaining traction but still lacks the “killer app” that will someday be the Web (which is still new and not widely-available), the story follows a handful of teenagers trying to find their place in the world. Meeting one another in the 90s explosion of cyberspace, they find online communities that provide connections that they’re unable to make out in meatspace.

A "Geek Code Block", printed in a dot-matrix style font, light-blue on black, reads: GU D-- -P+ C+L? U E M+ S-/+ N--- H-- F--(+) !G W++ T R? X?
I loved some of the contemporary nerdy references, like the fact that each chapter page sports the “Geek Code” of the character upon which that chapter focusses.1
So yeah: the whole thing feels like a trip back into the naivety of the online world of the last millennium, where small, disparate (and often local) communities flourished and early netiquette found its feet. Reading Incredible Doom provides the same kind of nostalgia as, say, an afternoon spent on textfiles.com. But it’s got more than that, too.
Partial scan from a page of Incredible Doom, showing a character typing about "needing a solution", with fragments of an IRC chat room visible in background panels.
The user interfaces of IRC, Pine, ASCII-art-laden BBS menus etc. are all produced with a good eye for accuracy, but don’t be fooled: this is a story about humans, not computers. My 9-year-old loved it too, and she’s never even heard of IRC (I hope!).

It touches on experiences of 90s cyberspace that, for many of us, were very definitely real. And while my online “scene” at around the time that the story is set might have been different from that of the protagonists, there’s enough of an overlap that it felt startlingly real and believable. The online world in which I – like the characters in the story – hung out… but which occupied a strange limbo-space: both anonymous and separate from the real world but also interpersonal and authentic; a frontier in which we were still working out the rules but within which we still found common bonds and ideals.

A humorous comic scene from Incredible Doom in which a male character wearing glasses walks with a female character he's recently met and is somewhat intimidated by, playing-out in his mind the possibility that she might be about to stab him. Or kiss him. Or kiss him THEN stab him.
Having had times in the 90s that I met up offline with relative strangers whom I first met online, I can confirm that… yeah, the fear is real!

Anyway, this is all a long-winded way of saying that Incredible Doom is a lot of fun and if it sounds like your cup of tea, you should read it.

Also: shortly after putting the second volume down, I ended up updating my Geek Code for the first time in… ooh, well over a decade. The standards have moved on a little (not entirely in a good way, I feel; also they’ve diverged somewhat), but here’s my attempt:

----- BEGIN GEEK CODE VERSION 6.0 -----
GCS^$/SS^/FS^>AT A++ B+:+:_:+:_ C-(--) D:+ CM+++ MW+++>++
ULD++ MC+ LRu+>++/js+/php+/sql+/bash/go/j/P/py-/!vb PGP++
G:Dan-Q E H+ PS++ PE++ TBG/FF+/RM+ RPG++ BK+>++ K!D/X+ R@ he/him!
----- END GEEK CODE VERSION 6.0 -----

Footnotes

1 I was amazed to discover that I could still remember most of my Geek Code syntax and only had to look up a few components to refresh my memory.


WCEU23 – Day 2

My second day of the main conference part of WordCamp Europe 2023 was hampered slightly by a late start on my part.

Dan, sweating, with an actively-used dancefloor in the background.
I can’t say for certain why I woke up mildly hungover and with sore knees, but I make an educated guess that it might be related to the Pride party I found myself at last night.

Still, I managed to get to all the things I’d earmarked for my attention, including:


Gutenberg collaborative editing experience

I’m sure I can’t be the only person who’s been asked “why can’t the (or ‘shouldn’t the’) WordPress post editor let multiple people edit a post at the same time”. Often, people will compare it to e.g. Google Docs.

A man in a blue shirt stands on a large stage.
I can’t begin to speculate how often people must ask this supposedly-trivial question of Dawid Urbański, possibly the world’s expert on this very question.

Dawid summarised the challenging issues in any effort to implement this much-desired feature. Some of them are examples of those unsolved problems that keep rearing their heads in computer science, like the two generals’ problem, but even the solvable problems are difficult: How does one handle asynchronous (non-idempotent) commutative operations? How is the order of disparate actions determined? Which node is the source of truth? If a server is used, where is that server (with a nod to quite how awful the experience of implementing a Websockets server in PHP can be…)? And so on…

Slide showing a timeline in which two participants A and B send an update to one another, but neither can be sure whose update was made first.
Slides showing simplified timelines of parties communicating with one another in ambiguous ways

I really appreciated Dawid’s reference to the various bits of academic literature that have appeared over the last four decades (!) about how these problems might be solved. It’s a strong reminder that these things we take for granted in live-updating multi-user web applications are not trivial, and every question you can answer raises more questions.

There’s some great early proof-of-concepts, so we’re “getting there”, and it’s an exciting time. Personally, I love the idea of the benefits this could provide for offline editing (perhaps just because I’m still a huge fan of a well-made PWA!).


The future of work is open

James Giroux’s goal: that we all become more curious about and more invested in our team’s experiences, from a humanistic standpoint. His experience of the organic growth of software companies is very, very familiar: you make a thing and give it away, then you need more people, then you’ve somehow got a company and it’s all because you just had an idea once. Sounds like Three Rings!

A man in a white t-shirt and dark jacket stands on a stage in front of a screen; the bottom line of the words on the screen can be seen to read "Work is Open".
Financial success is not team success, as Twitter shows, with their current unsustainable and unhappy developer culture, James reminds us.

James was particularly keen to share with us the results of his Team Experience Index research, and I agree that some of the results are especially exciting, in particular the willingness of underrepresented groups, especially women, to engage with the survey: this provides hugely valuable data about the health of teams working in the WordPress space.

A slide showing demographic details: 28% say that they represent a historically underrepresented group, 55% are in North America, 67% provided a gender that was not "male".
The statistician in me immediately wanted to know how the non-response rate to these (optional) questions varied relative to one another (if they’re very different, putting these pie charts alongside one another could be disingenuous!), but I’m tentatively excited by the diversity represented anyway.

“We have this project that we work with and contribute to, that we love,” says James, in an attempt to explain the highly-positive feedback that his survey respondents gave when asked questions about the authenticity of their purpose and satisfaction in their role.

A man on a stage stands in front of a slide listing strengths and opportunities resulting from the survey.
Again, my inner statistician wants to chirp up about the lack of a control group. The data from the survey may well help companies working within the WordPress ecosystem to identify things we’re doing well and opportunities for growth, but it’d also be cool to compare these metrics to those in companies outside of the WordPress world!

So, what do we do with these findings? How do WordPress-ey companies improve? James recommends that we:

  • Get better at showing what recognition, celebration, and career growth look like,
  • Improve support and training for team leaders to provide them with the tools to succeed and inspire, and
  • Bridge the gap between leadership and team members with transparent, open dialogue.

Good tips, there.


The Big Photo

A WordCamp tradition is to try to squeeze every willing participant into a photo. Clearly with the size that these events are, nowadays, this requires some wrangling (and, in this case, the photographers standing atop the roof of a nearby building to get everybody into frame).

An enormous crowd shuffles tightly into a courtyard. A trio of blue-shirted photographers stands atop a building opposite them.
Like herding cats, trying to get several hundred people to line up where you want them for a photograph is an exercise in patience.

I’ll have to keep an eye out for the final picture and see if I can find myself in it.


What is new in CSS?

I always find that learning about bleeding edge CSS techniques makes me feel excited and optimistic, perhaps because CSS lends itself so well towards a progressive enhancement approach to development: often, you can start using a new technique today and it’ll only benefit, say, people using a beta version of a particular browser (and perhaps only if they opt-in to the applicable feature flag). But if you’ve designed your site right then the lack of this feature won’t impact anybody else, and eventually the feature will (hopefully) trickle-down into almost everybody’s Web experience.

Anyway, that’s what Fellyph Cintra says too, but he adds that possibly we’ve still not grown out of thinking that browsers take a long time between versions. 5 years passed between the release of Internet Explorer 6 and Internet Explorer 7, for example! But nowadays most browsers are evergreen with releases each month! (Assuming we quietly ignore that Apple don’t send new versions of Safari to old versions of MacOS, continuing to exacerbate a problem that we used to see with Internet Explorer on Windows, ahem.)

A man on a stage with his arm out in greeting to the crowd in front of him.
Fellyph told us about how he introduced <dialog> to his team and they responded with skepticism that they’d be able to use it within the next 5 years. But in fact it’s already stable in every major browser.

An important new development may come from Baseline, a project to establish a metric of what you can reliably use on the Web today. So a bit like Can I Use, I guess, but taken from the opposite direction: starting from the browsers and listing the features, rather than the other way around.

Anyway, Fellyph went on to share some exciting new ideas that we should be using, like:

  • object-fit and object-position, which can make the contents of any container “act like” a background
  • aspect-ratio, which I’m already using and I love, but I enjoyed how Fellyph suggested combining the two to crop images to a fluid container on the client side
  • scroll-behavior: smooth, which I’ve used before; it’s pretty good
  • clamp, which I use… but I’m still not sure I fully grok it: I always have to load some documentation with examples when I use it
  • @container queries, which can apply e.g. (max-width: ...) rules to things other than the viewport, which I’ve not found a need for yet but I can see the value of it
  • @layer, which grants an additional level of importance in the cascade: for example, you might load a framework into a layer (with @import url(...) layer(framework)) which is defined as lower-priority than your override layer, meaning you won’t have to start slapping !important all over the shop
  • @media (400px <= width <= 600px)-style media queries, which are much easier to understand than min-width: if you’re used to thinking in a more-procedural programming language (I assume they work in container queries too!)
Fellyph Cintra stands in front of a large screen showing a slide that introduces himself to his audience: "Front-end Lead at Digitale Methode & Google Developer Expert @fellyph"

It’s also worth remembering:

  • @supports, which is badass and I love and use it already (it was especially useful as display: grid began to roll out and I wanted to start using it but needed to use a fallback method for browsers that didn’t support it yet)
  • :has(), which I’ve long thought is game-changing: styling something based on what it contains is magical; not really suitable for mainstream use yet without Firefox support, though (it’s still behind a feature flag)! Fellyph sold me on the benefit of :not(:has(...)), though!
  • Nesting, which again doesn’t have Firefox support yet but provides SCSS-like nesting in CSS, which is awesome
  • Scroll-driven animations, which can e.g. do parallax effects without JavaScript (right now it’s Canary only, mind…), using e.g. animation-timeline: and animation-range: to specify that it’s the scroll position within the document that provides the timeline for the animation

And keeping an eye on upcoming things like text-balanced (which I’m already excited by), popover, selectmenu, view transitions (which I’ve been experimenting with because they’re cool), and scoped style.

Fellyph was at least as inspiring as I’d hoped.


Stop blocking my thread

For my second workshop, I joined Google’s Adam Silverstein to watch him dissect a few participants’ websites’ performance using Core Web Vitals as a metric. I think I already know the basics of Core Web Vitals, but there’s plenty still to learn when it comes to improving my score (especially on work-related sites with an unpleasant reliance on heavyweight frameworks like React, in my experience).

A man stands at a podium.
In an early joke, Adam pointed out that you can reduce JavaScript thread blocking by removing JavaScript from your site. A lot of people laughed, but frankly I think it’s a great idea.

We talked a lot about render blocking (thanks to JS and CSS in the <head>), thread blocking (by scripts, especially those reacting to user input), TTFB (relating to actual network and server performance, or at least server-side processing), TBT (the time between FCP and TTI), and the upcoming change to measure INP rather than FID. That’s a lot of acronyms.

The short of it is that there are three pillars to Core Web Vitals: loading (how long until the page renders), interactivity (how long until the page responds to user interaction), and stability (how long it takes for the page to cease layout shifts as a result of post-load scripts and stylesheets). I was pleased that Adam acknowledged the major limitation of lab testing resulting from developers often using superior hardware and Internet connections to typical users, and how if you’re serious about performance metrics you’ll want to collect RUM data.

Adam explaining Render-Blocking CSS.
The fastest way to improve rendering performance is to put fewer obstacles in the way of rendering.

I came away with a few personalised tips, but they’re not much use for your site: I paid attention to the things that’ll be helpful for the sites I look after. But I’ll be taking note of his test pages so I can play with some of the tools he demonstrated later on.


Variations on a theme: 20 years of WordPress

I couldn’t liveblog this because I spent too much of the session applauding. A few highlights from memory:

  • Phase 2 (of 4) of Gutenberg is basically complete, which is cool. Some back-and-forth about the importance of phase 4 (bringing better multilingual support to WordPress) and how it feels like it’s a long way away.
  • Lots of plugging for Five for the Future, which I can get behind.
  • In the same vein as his 2016 statement that WordPress developers should “learn JavaScript deeply”, Matt leant somewhat into the idea that from today they should “watch AI carefully”; I’m not 100% convinced, but it’s not been stopping me from getting involved with a diversity of AI experiments (including some WordPress-related ones) anyway.
  • Musings about our community being a major part of why WordPress succeeded (and continues to thrive) unlike some other open source projects of its era. I agree that’s a factor, but I suspect that being in the right place at the right time was also important. Perhaps more on that another time.
  • Announcement of the next WordCamp Europe location.

Here’s looking forward to WordCamp Europe 2024 in Turin!


WCEU23 – Day 1

The first “full” day of WordCamp Europe 2023 (which kicked-off at Contributor Day) was busy and intense, but I loved it.

This post is basically a live-blog of everything I got up to, and it’s mostly for my own benefit/notetaking. If you don’t read it, nobody will blame you.

Seen from behind, a very long queue runs through a conference centre.
Six minutes after workshop registration opened its queue snaked throughout an entire floor of the conference centre.

Here’s what I got up to:


10 things that all WordPress plugin developers should avoid

David Artiss took the courageous step of installing 36 popular plugins onto a fresh WordPress site and was, unsurprisingly, immediately bombarded by a billion banners on his dashboard. Some were merely unhelpful (“don’t forget to add your API key”), others were annoying (“thanks for installing our plugin”), and plenty more were commercial advertisements (“get the premium version”) despite the fact that WordPress.org guidelines recommend against this. It’s no surprise that this kind of “aggressive promotion” is the single biggest annoyance that people reported when David asked around on social media.

Similarly, plugins which attempt to break the standard WordPress look-and-feel by e.g. hoisting themselves to the top of the menu, showing admin popovers, putting settings sections in places other than the settings submenu, and so on are a huge annoyance to everybody. I get sufficiently frustrated by these common antifeatures of plugins I use that I actually maintain a plugin for my own use that “fixes” the ones that aggravate me the most!

A man wearing glasses and a t-shirt with a WordPress logo stands on a stage.
David raised lots of other common gripes with WordPress plugins, too: data validation failures, leaving content behind after uninstallation (and “deactivation surveys”, ugh!), and a failure to account for accessibility.

David’s promised to put his slides online, plus to write articles about everything that came up in his Q&A.

I’m unconvinced that we can rely on plugin developers to independently fix the kinds of problems that come high on David’s list. I wonder whether there’s mileage in WordPress Core reimplementing the way that the main navigation menu works, such that all items in it can be (easily) re-arranged by users to their own preference. This would undermine the perceived value to plugin developers of “hoisting” their own entries to the top, by allowing users to counteract it, and would provide a valuable feature to let site admins streamline their workflow: use WooCommerce, but only in a way that’s secondary to your blog? Move “Products” below “Posts”! Etc.
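(As a rough illustration of feasibility rather than a concrete proposal: WordPress already exposes filters that let code reorder the top-level admin menu, so a user-facing preference built on something like this hypothetical snippet doesn’t seem far-fetched.)

```php
<?php
// Hypothetical sketch: demote WooCommerce's "Products" below "Posts" in wp-admin.
// Uses the existing custom_menu_order/menu_order filters; a user-facing version
// would presumably read the preferred order from a per-user setting instead.
add_filter( 'custom_menu_order', '__return_true' );

add_filter( 'menu_order', function ( array $menu_order ): array {
	$products = 'edit.php?post_type=product'; // WooCommerce's "Products" menu slug
	$posts    = 'edit.php';                   // Core's "Posts" menu slug

	$menu_order = array_values( array_diff( $menu_order, array( $products ) ) );
	$position   = array_search( $posts, $menu_order, true );

	if ( false !== $position ) {
		array_splice( $menu_order, $position + 1, 0, array( $products ) );
	}

	return $menu_order;
} );
```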

Screenshot showing a WordPress admin interface writing this blog post, with the stage in the background.
Why yes, I’m liveblogging this. And yes, I’m not using Gutenberg yet (that’s a whole other story…)

Where did we come from?

Aaron Reimann from ClockworkWP gave us a tour of how WordPress has changed over the course of its 20-year history, starting even slightly before I started using WordPress; my blog (previously powered by some hacky PHP, previouslier powered by some hackier Perl, previousliest written in static HTML) switched to WordPress in 2004, when it hit version 1.2, so it was fun to get the opportunity to see some even older versions illustrated.

A WordPress site, circa 2004, simulated in a virtual machine.
A WordPress site from 2004 would, of course, still be perfectly usable today. How many JS-heavy/API-driven websites of today do you reckon will still function in 20 years’ time?

It was great to be reminded how far the Core code has come over that time. Early versions of WordPress – as was common among PHP applications at the time! – had very few files, and each could reliably be expected to be a stack of SQL, wrapped in a stack of code, wrapped in what’s otherwise an HTML file: no modularity!

A man wearing a flat cap strides across a stage.
Aaron’s passion for this kind of digital archaeology really shows. I dig it.

There were very few surprises for me in this talk, as you might expect for such an “old hand”, but I really enjoyed the nostalgia of exploring WordPress history through his eyes.

I enjoyed putting him on the spot with a “spicy” question at the end of his talk, asking whether, alongside everything we’ve gained over the years, there’s anything we lost along the way. He answered well, pointing to the somewhat bloated stacks of plugins that are commonplace on big sites nowadays and the ease with which admins can just “click and install” more of them. I agree with him, although personally I miss built-in XFN support…

Dan, smiling, wearing a purple t-shirt with a WordPress logo and a Pride flag, hugs a cut-out of a Wapuu (itself hugging a "WP 20" balloon and wearing a party hat).
If you’d have told me in advance that hugging a Wapuu would have been a highlight of the day… yeah, that wouldn’t have been a surprise!

Networking And All That

There are a lot of exhibitors with stands, but I tried to do a circuit or so and pay attention at least to those whose owners I’ve come into contact with in a professional capacity. Many developers who make extensions for WooCommerce, of course, sell those extensions through WooCommerce.com, which means they come into routine direct contact with my code (and it can mean that when their extension’s been initially rejected by our security scanners or linters, it’s me their developers first want to curse!).

A WordCamp Europe Athens 2023 lanyard and name badge for Dan Q, Attendee, onto which a "Woo" sticker has been affixed.
After a while, to spare some of that awkward exchange where somebody tries to sell me their product before I explain that I already sell their product for them, I slapped a “Woo” sticker on my lanyard.

It’s been great to connect with people using WordPress to power the Web in a whole variety of different contexts, but it somehow still feels strange to me that WordPress has such a commercial following! Even speaking as somebody who’s made their living at least partially out of WordPress for the last decade plus, it still feels to me like its greatest value comes from its use for personal publishing.

The feel of a WordCamp with its big shiny sponsors is enormously different from, say, the intimacy and individuality of a Homebrew Website Club meeting, and I think that’s something I still need to come to terms with. WordPress’s success story comes from many different causes, but perhaps chief among them is the fact that it’s versatile enough to power the website of a government, multinational, or household-name brand… but also to run the smallest personal indie blog. I struggle to comprehend that, even with my background.

(Side note, Sophie Koonin says that building a personal website is a radical act in 2023, and I absolutely agree.)

A "Woo" booth, staffed with a variety of people, with Dan at the centre.
My division of Automattic had a presence, of course.

I was proud of my colleagues for the “gimmick” they were using to attract people to the Woo stand: you could pick up a “credit card” and use it to make a purchase (of Greek olive oil) using a website, see your order appear on the app at the backend in real-time, and then receive your purchase as a giveaway. The “credit card” doubles as a business card from the stand, the olive oil is a real product from a real, local producer (who really uses WooCommerce to sell online!), and when you provide an email address at the checkout you can opt-in to being contacted by the team afterwards. That’s some good joined-up thinking by my buddies in marketing!


WordPress extended: build unique websites on top of WP

Petya Petkova observed that it’s commonplace to take the easy approach and make a website look like… well, every other website.  “Web deja-vu” is a real thing, and it’s fed not only by the ebbs and flows of trends in web design but by the proliferation of indistinct themes that people just install-and-use.

A woman with long hair, wearing a green t-shirt, stands before a screen on a stage.
How can we break free from web deja-vu, asks Petya. It almost makes me sad that her slides had been coalesced into the conference’s slidedeck design rather than being her own… although on second thoughts, maybe that just helps enhance the point!

Choice of colours and typography can be used to tell a story, to instil a feeling, to encourage engagement. Scrolling can be used as a metaphor for storytelling (“scrolly-telling”, Petya calls it). Animation flow can be used to direct a user’s attention and drive focus and encourage interaction.

A lot of the technical concepts she demonstrated – parts of a page that scroll at different speeds, typography that shifts or changes, videos used in a subtle way to accentuate other content, etc. – can be implemented in the frontend with WebGL, Three.js and the like. Petya observes that moving this kind of content interactivity into the frontend can produce an illusion of a performance improvement, which is an argument I’ve heard before, but personally I think it’s only valuable if it’s built as a progressive enhancement: otherwise, you’re always at risk that your site won’t look like you’d hope.

I note, for example, that Petya’s agency’s site shows only an “endless spinner” when viewed in my browser (which blocks the code.jquery.com CDN by default, unless allowlisted for specific sites). All of the content is there, on the page, if you View Source, but it’s completely invisible if an external JavaScript fails to load. That doesn’t just happen when weirdos like me disable JavaScript in their browsers: it can happen if the browser interacts badly with the script, or if the user’s Internet connection is ropey, or a malware scanner misfires, or if government censorship blocks the CDN, or in any number of other conditions.
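The remedy is plain old progressive enhancement: ship the content working in HTML, and let the script add the cleverness only once it has demonstrably loaded. A hypothetical skeleton of the pattern:

```html
<!-- Hypothetical sketch of the progressive-enhancement pattern: the content is
     readable even if no JavaScript ever runs; scripts only ever *add* polish. -->
<style>
  /* Fancy presentation applies only once a script has signalled it's ready... */
  html.enhanced .scrolly-telling { /* parallax, animation, etc. */ }
  /* ...so nothing is hidden if that script never arrives. */
</style>

<article class="scrolly-telling">
  All of the real content lives here, in plain HTML, visible by default.
</article>

<script>
  // This line only runs if the JavaScript actually loaded and executed:
  document.documentElement.classList.add('enhanced');
  // ...then initialise WebGL / Three.js / animation libraries here.
</script>
```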

Screenshot from acceler8design.com, showing an "endless spinner" and no content.
While I agree with Petya about the value of animation and interactivity in making sites awesome, I don’t think they should come at the expense of ensuring the most-widespread access and accessibility for your audience. Otherwise we’d still be making Flash sites, right?

So yeah: uniqueness and creativity are great, and I like what she’s proposing, but not the way she goes about it. The first person to ask a question wisely brought up accessibility, and Petya answered well that accessibility technologies can bridge the gap, but I’d counter that it’s preferable to build accessible in the first instance: if you have to use an aria- attribute it’s a good sign that you probably already did something wrong (not always, but it’s certainly a pointer that you ought to take a step back and check!).

Several other good questions and great answers followed: about how to showcase a preliminary design when the design is dependent upon animation and interactivity (which I’ve witnessed before!), on the value of server-side rendering of components, and about how to optimise for smaller screens. Petya clearly knows her stuff in all of these areas and had confident responses.


State of WordPress security – insights from 2022

Oliver Sild is the kind of self-taught hacker, security nerd, and community builder that I love, so I wasn’t going to miss his talk.

A man in a literal black hat stands in the centre of a large theatre stage.
The number of security vulnerability reports in the WordPress ecosystem is up +328%, Oliver opened. But the bugs being reported are increasingly old, so we’re not talking about new issues being created. And only 0.3% of bugs were in WordPress Core (and were patched before they were exploitable).

It’s good news in general in WordPress Security-land… but CSRF is on the up-and-up (overtaking XSS) in the plugin space. That, and all the broken access control we see in the admin area, are things I’ll be keeping in mind next time I’m arguing with a vendor about the importance of using nonces and security checks in their extension (I have this battle from time to time!).
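(For anybody wondering what I’m actually asking vendors for, it usually boils down to a handful of lines like these; the action and capability names are hypothetical.)

```php
<?php
// Minimal sketch of the checks in question (hypothetical form/action names).

// When rendering the settings form, include a nonce field:
wp_nonce_field( 'acme_save_settings' );

// When handling the submission, verify both the nonce (CSRF) and the user's
// capability (broken access control) before touching anything:
function acme_handle_settings_submission() {
	check_admin_referer( 'acme_save_settings' );    // dies if the nonce is missing or invalid

	if ( ! current_user_can( 'manage_options' ) ) { // dies if the user lacks permission
		wp_die( esc_html__( 'You are not allowed to do that.', 'acme' ) );
	}

	// ...now it's safe to sanitise and save the submitted settings.
}
```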

But an interesting development is the growth of the supply chains in the WordPress plugin ecosystem. Nowadays a plugin might depend upon another plugin which might depend upon a library… and a patch applied to the latter of those might take time to be propagated through the chain, providing attackers with a growing window of opportunity.

Sankey chart showing 1160 submitted bugs being separated into pending, accepted, invalid, and (eventually) patched. 26% of critical bugs in 2022 received no timely patch.
I love a good Sankey chart. Even when it says scary things.

A worrying thought: plugin directory administrators will pull and remove plugins that have longstanding unactioned security issues, but that doesn’t help the sites that already have that plugin installed and are still using it! There’s a proposal to allow WordPress to notify admins if a plugin used on a site has been dropped for security reasons, but it was opened 9 years ago and hasn’t seen any real movement, soo…

I like that Oliver plugged for security researchers being acknowledged as contributors to your software, on an equal footing with its developers. But then, I would say that, as somebody who breaks into things once in a while and then tells the affected parties how to fix the problem that allowed me to do so! He also provided a whole wealth of tips for site owners and agencies to try to keep their sites safe, but little that I wasn’t aware of already.

A large audience of a few hundred people, seen from above, facing left.
Still, good to see this talk get as good an audience as it did, given the importance of the topic!

It was about this point in the day, glancing at my schedule and realising that at any given time there were up to four other sessions running simultaneously, that I really got a feel for the scale of this conference. Awesome. Meanwhile, Oliver was fielding the question that I’m sure everybody was thinking: with Gutenberg blocks powered by JavaScript that are often backed by a supply-chain of the usual billion-or-so files you find in your node_modules directory, isn’t the risk of supply chain attacks increasing?

Spoiler: yes. Did you notice earlier in this post I mentioned that I don’t use Gutenberg on this site yet?

Animation showing Dan, wearing a pilot's hat, surrounded by cotton wool clouds, as the camera pans back and forth.
When the Jetpack team told me that they’ve been improving their cloud offering, this wasn’t what I expected.

Typographic readability in theme design & development

My first “workshop” was run by Giulia Laco, on the topic of readable content and design.

A title slide encourages designers to sit on the left (to the right of the speaker), developers to the right (on her left), and "no-coders" in the centre.
Designers to the left of me, coders to the right: here I am, stuck in the middle with you.

Giulia began by reminding us how short the attention span of Web readers is, and how important the right typographic choices are in ensuring that people actually read your content. I fully get this – I think that very few people will have the attention span to read this part of this very blog post, for example! – but I loved that she hammered the point home by presenting every slide of her presentation twice (or more), “improving” the typographic choices as she went along: an excellent and memorable quirk.

Our capacity to read and comprehend a text is affected by a combination of common (distance, lighting, environment, concentration, mood, etc.), personal (age, proficiency, motivation, accessibility requirements, etc.), and typographic (face, style, size, line length and spacing, contrast, width, rhythm, etc.) factors. To explore the impact of the typographic factors, the group dived into a pre-prepared Codepen and a shared Figma diagram. (I immediately had a TIL moment over the font-synthesis CSS property!)

A presentation of the typography playground, in which the font is being changed.
I appreciated that Giulia stressed the importance of a fallback font. Just like the CDN issues I described above while talking about JavaScript dependencies, not specifying a fallback font puts your design at the mercy of the browser’s defaults. We don’t like to think about what happens when websites partially fail, but they do, and we should.
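By way of illustration (my example, not Giulia’s): a font stack that degrades gracefully, plus the font-synthesis property that was my TIL moment:

```css
/* Hypothetical example: if "Fancy Serif" never loads, the design degrades to
   Georgia (and then to any serif) rather than to the browser's default. */
body {
  font-family: "Fancy Serif", Georgia, serif;

  /* Stop the browser from faking bold/italic faces that the font doesn't
     actually provide, which can wreck carefully-chosen type. */
  font-synthesis: none;
}
```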

Things get interesting at the intersection of readability and accessibility. For example, WCAG accessibility requirements demand that you don’t use images of text (we used to do this a lot back before we could reliably use fonts on the web, and before we could easily have background images on e.g. buttons for navigation). But this accessibility requirement also aids screen readability when accounting for e.g. “retina” screens with virtual pixel ratios.

Slide showing a physical pixel and a "virtual pixel" representing a real pixel of a different size.
Do you remember when a pixel was the size of a pixel? Those days are long gone. True story.

Giulia provided a great explanation of why we may well think in pixels (as developers or digital designers) but we’re unlikely to use them everywhere: I’d internalised this lesson long ago but I appreciated a well-explained justification. The short of it is: screen zoom (that fancy zoom feature you use in your browser all the time, especially on mobile) and text zoom (the one you probably don’t use, or don’t use so much) are different things, and setting a pixel-based font size in the root node wrecks the latter, forcing some people with accessibility needs to use the former, which is likely to result in vertical scrolling. Boo!
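In practice (again my example rather than Giulia’s) that means leaving the root font size relative and sizing everything else in rems:

```css
/* Respecting text zoom: don't pin the root font size in pixels. */
html {
  font-size: 100%;          /* i.e. whatever the reader chose; typically 16px */
}

body { font-size: 1rem; }   /* scales with the reader's text-size preference */
h1   { font-size: 2rem; }

/* The anti-pattern: html { font-size: 16px; } quietly overrides the reader's
   choice, so rem-based sizes no longer respond to their text zoom. */
```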

I also enjoyed seeing this demo of how the different hyphenation-points in different languages (because of syllable stress) can impact on your wrapping points/line lengths when content is translated. This can affect any website, of course, because any website can be the target of automatic translation.
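If you want to experiment with this yourself, the relevant controls are the element’s lang attribute (which selects the hyphenation dictionary) and the hyphens CSS property; a hypothetical example:

```css
/* Opt in to automatic hyphenation; where the breaks fall depends on the
   language declared on the element, e.g. <p lang="en"> vs <p lang="de">. */
p {
  hyphens: auto;
  -webkit-hyphens: auto; /* still needed by older Safari versions */
}
```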

Plus, I appreciated Giulia’s thoughts on the value of serifed fonts (even on digital displays) for improving the typographic readability of the letters d, b, p and q, which are often mirror- or rotationally-symmetric to one another in sans-serif fonts. It’s amazing to have something – in this case, a psychological letter transposition – pointed out that I’ve experienced but never pinned down the reason for, before. Neat!

It was a shame that this workshop took place late in the day, because many of the participants (including me) seemed to have flagging energy levels!


Altogether a great (but intense) day. Boggles my mind that there’s another one like it tomorrow.


Breakups as HTTP Response Codes

103: Early Hints (“I’m not sure this can last forever.”)
300: Multiple Choices (“There are so many ways I can do better than you.”)
303: See Other (“You should date other people.”)
304: Not Modified (“With you, I feel like I’m stagnating.”)
402: Payment Required (“I am a prostitute.”)
403: Forbidden (“You don’t get this any more.”)
406: Not Acceptable (“I could never introduce you to my parents.”)
408: Request Timeout (“You keep saying you’ll propose but you never do.”)
409: Conflict (“We hate each other.”)
410: Gone (ghosted)
411: Length Required (“Your penis is too small.”)
413: Payload Too Large (“Your penis is too big.”)
416: Range Not Satisfiable (“Our sex life is boring and repetitive.”)
425: Too Early (“Your premature ejaculation is a problem.”)
428: Precondition Failed (“You’re still sleeping with your ex-!?”)
429: Too Many Requests (“You’re so demanding!”)
451: Unavailable for Legal Reasons (“I’m married to somebody else.”)
502: Bad Gateway (“Your pussy is awful.”)
508: Loop Detected (“We just keep fighting.”)

With thanks to Ruth for the conversation that inspired these pictures, and apologies to the rest of the Internet for creating them.


Oxford Geek Nights #52

On Wednesday this week, three years and two months after Oxford Geek Nights #51, came Oxford Geek Night #52. Originally scheduled for 15 April 2020 and then… postponed slightly because of the pandemic, its reappearance was an epic moment that I’m glad to have been a part of.

Matt Westcott stands to the side of a stage, drinking beer, while centrestage a cross-shaped "pharmacy sign" projects an animation of an ambulance rocketing into a starfield.
A particular highlight of the night was witnessing “Gasman” Matt Westcott show off his epic demoscene contribution Pharmageddon, which is presented via a “pharmacy sign”. Here’s a video, if you’re interested.

Ben Foxall also put in a sterling performance; hearing him talk – as usual – made me say “wow, I didn’t know you could do that with a web browser”. And there was more to learn, too: Jake Howard showed us how robots see, Steve Buckley inspired us to think about how technology can make our homes more energy-smart (this is really cool and sent me down a rabbithole of reading!), and Joe Wass showed adorable pictures of his kid exploring the user interface of his lockdown electronics project.

Digital scoreboard showing Dan Q in the lead with 5,561, Nick in second place with 5,442, and RaidIndigo in third with 5,398.
Oh, and there was a quiz competition too, and guess who came out on top after an incredibly tight race.

But mostly I just loved the chance to hang out with geeks again; chat to folks, make connections, and enjoy that special Oxford Geek Nights atmosphere. It was also great to meet somebody from Perspectum, who look like they’d be great to work for; after hearing about them, I had somebody in mind to suggest for a job with them… but it looks like the company isn’t looking for anybody with their particular skills on this side of the pond. Still, one to watch.

Dan, outdoors on a grassy path, wearing a grey hoodie. On his head is a "trucker cap" emblazoned with the word "GEEK" and, in smaller writing "#OGN52".
My prize for winning the competition was an extremely-limited-edition cap which I love so much I’ve barely taken it off since.

Huge thanks are due to Torchbox, Perspectum and everybody in attendance for making this magical night possible!

Oh, and for anybody who’s interested, I’ve proposed to be a speaker at the next Oxford Geek Nights, which sounds like it’ll be towards Spring 2023. My title is “Yesterday’s Internet, Today!” which – spoilers! – might have something to do with the kind of technology I’ve been playing with recently, among other things. Hope to see you there!
