Earlier this year, for reasons of privacy/love of selfhosting, I moved the DanQ.me mailing list from Mailchimp to Listmonk (there’s a blog post about how I set it up), relaying
outbound messages via an SMTP server provided by my domain registrar, Gandi.
And because I learned a few things while doing so, I wrote this blog post so that next time I have to configure Postfix + DKIM, I’ll know where to find a guide. If it helps you in the meantime, that’s just a bonus.
Postfix
Running your own mailserver is a pain. I used to do it for all of my email, but – like many other nerds – when spam reached its peak and deliverability became an issue, I gave
up and outsourced it1.
Luckily, I don’t need it to do much. I just need a mail transfer agent with an (unauthenticated, but local-only) SMTP endpoint: something that Listmonk can dump emails into, which will then reach out to the mailservers representing each of the recipients and
relay them on. A default install of Postfix does all that out-of-the-box, so I ran sudo apt install postfix, accepted all the default
options, and put the details into Listmonk.
Next, I tweaked my DNS configuration to add an SPF record, and tested it.
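For reference, an SPF record is just a TXT record on your domain: a minimal one looks something like this, with 203.0.113.5 standing in for your sending server’s IP address:
example.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.5 ~all"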
This ought to have been enough to achieve approximate parity with what Gandi had been providing me. Not bad.
I sent a test email to a Gmail account, where I noticed two problems:
The first problem was that Postfix on Debian isn’t configured by-default to use opportunistic TLS when talking to other
mailservers. That’s a bit weird, but I’m sure there’s a good reason for it. The solution was to add smtp_tls_security_level = may to my
/etc/postfix/main.cf.
The second problem was that without a valid DKIM signature on them, about half of my test emails were going straight to the
spam folder. Again, it seems that in the 20 years since I last seriously ran a mailserver, this has become something that – while not strictly required – you’d better do anyway… because your emails aren’t going to get through reliably if you don’t.
I’ve put it off this long, but I think it’s finally time for me to learn some practical DKIM.
Understanding DKIM
What’s DKIM, then?
A server that wants to send email from a domain generates a cryptographic keypair.
The public part of the key is published using DNS. The private part is kept securely on the server.
When the server relays mail on behalf of a user, it uses the private key to sign the message body and a stated subset of the headers3,
and attaches the signature as an email header.
When a receiving server (or, I suppose, a client) receives mail, it can verify the signature by acquiring the public key via DNS and validating the message against it.
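The signature header itself ends up looking something like this (hashes truncated for brevity; bh is the body hash, b is the signature):
DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=default;
        c=relaxed/simple; h=from:to:subject:date;
        bh=4ml0…; b=XtZm…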
In this way, a recipient can be sure that an email received from a domain was sent with the authorisation of the owner of that domain. Properly-implemented, this is a strong mitigation
against email spoofing.
OpenDKIM
To set up my new server to sign outgoing mail, I installed OpenDKIM and its keypair generator using sudo apt install opendkim
opendkim-tools. Its configuration file at /etc/opendkim.conf needed the following lines added to it:
# set up a socket for Postfix to connect to:
Socket inet:12301@localhost
# set up a file to specify which IPs/hosts can send through us without authentication and get their messages signed:
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
InternalHosts refile:/etc/opendkim/TrustedHosts
# set up a file to specify which selector/domain pair is used for each sender email address:
SigningTable refile:/etc/opendkim/SigningTable
# set up a file to specify which signing key to use for each selector/domain:
KeyTable refile:/etc/opendkim/KeyTable
Into /etc/opendkim/TrustedHosts I put a list of local IPs/domains that would have their emails signed by this server. Mine looks like this (in this example I’m using
example.com as my domain name, and default as the selector for it: the selector can be anything you like, it only matters if you’ve got multiple
mailservers signing mail for the same domain). Note that 192.168.0.0/16 is the internal subnet on which my sending VM will
run.
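A minimal version of that file looks like this:
# hosts/networks whose outgoing mail we'll sign:
127.0.0.1
::1
localhost
192.168.0.0/16
example.com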
/etc/opendkim/SigningTable maps email addresses (I’m using a wildcard) to the subdomain whose TXT record will hold the public key for the signature. This also goes on to
inform the KeyTable which private key to use:
*@example.com default._domainkey.example.com
And then /etc/opendkim/KeyTable says where to find the private key for that:
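In the standard KeyTable format that’s a single line, mapping the selector to its domain, selector name, and private key file (which you can generate, if you haven’t already, with sudo opendkim-genkey -D /etc/opendkim/keys/example.com/ -d example.com -s default):
default._domainkey.example.com example.com:default:/etc/opendkim/keys/example.com/default.private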
The public key needs publishing via DNS. Conveniently, when you create a keypair using its tools, OpenDKIM provides a sample (in
BIND-style) for you to copy-paste from or adapt: look in /etc/opendkim/keys/example.com/default.txt!
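One more piece of glue that’s easy to miss: Postfix needs telling to pass outgoing mail through that OpenDKIM socket we configured earlier. The usual way is a few extra lines in /etc/postfix/main.cf, along these lines:
# sign outgoing mail via OpenDKIM's milter (and accept mail even if it's down):
smtpd_milters = inet:localhost:12301
non_smtpd_milters = $smtpd_milters
milter_default_action = accept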
Once we’ve restarted both services (sudo service postfix restart; sudo service opendkim restart), we can test it!
So I learned something new today.
If you, too, love to spend your Saturday mornings learning something new, have a look at those subscription
options to decide how you’d like to hear about whatever I get up to next.
Footnotes
1 I still outsource my personal email, and I sing the praises of the excellent folks
behind ProtonMail.
2 My desktop email client also doubles as my newsreader, because, yes, of course
you can still find me on USENET. Which, by the way, is undergoing a mini-revival…
3 Why doesn’t DKIM sign
all the headers in an email? Because intermediary servers and email clients will probably add their own headers, thereby invalidating the signature! DKIM gets used to sign the From: header, for obvious reasons, and ought to be used for other headers whose tampering could be
significant such as the Date: and Subject:, but it’s really up to the signing server to choose a subset.
This weekend, I threw a Virtual Free Fringe
party for some friends. The party was under-attended, but it’s fine because I got to experiment
with some tech that I’d been meaning to try.
If you ever want to run something like this yourself1, here’s how I did it.
My goals were:
A web page at which any attendee could “watch together” a streaming video2,
A “chat” overlay, powered by a WhatsApp group3 (the friend group I
was inviting were all using WhatsApp anyway, so this was an obvious choice), and
To do all the above cheaply or for free.
There were two parts to this project:
Setting up a streaming server that everybody can connect to, and
Decorating the stream with a WhatsApp channel
Setting up a streaming server
Linode offers a free trial with $100 of hosting credit over 60 days and has a ready-to-go recipe for installing Owncast, an open-source streaming server I’ve
used before, so I used their recipe, opting for a 4GB dedicated server in their London datacentre: at $36/mo, there’d be no risk of running out of my free trial credit even if I failed
to shut down and delete the virtual machine in good time. If you prefer the command-line, here’s the API call for
that:
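# Illustrative reconstruction: substitute the current StackScript ID for the
# Owncast Marketplace app, plus your own label, root password, and image:
curl -X POST https://api.linode.com/v4/linode/instances \
  -H "Authorization: Bearer $LINODE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "label": "owncast",
    "region": "eu-west",
    "type": "g6-dedicated-2",
    "image": "linode/ubuntu22.04",
    "root_pass": "SomethingLongAndRandom",
    "stackscript_id": 0
  }'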
The IP address got assigned before the machine finished booting, so I had time to copy that into my DNS configuration so the domain was already pointing to the machine before it was fully running. This enabled it to get its SSL certificate set up right away (if not, I’d have had to finish waiting for the DNS change to propagate and then reboot it).
Out of the box, Owncast is insecure, so I wanted to jump in and change some passwords. For some reason you’re initially only able to correct this over unencrypted
HTTP! I opted to take the risk on this server (which would only be alive for a few hours) and just configure it with this
limitation, logging in at http://mydomain:8080/admin with the default username and password (admin / abc123), changing the credentials to
something more-secure. I also tweaked the configuration in general: setting the service name, URL, disabling chat features,
and so on, and generating a new stream key to replace the default one.
Now I was ready to configure OBS Studio to stream video to my new Owncast server, which would distribute it to anybody who tuned-in.
Decorating the stream
I configured OBS Studio with a “Custom…” stream service with server rtmp://mydomain:1935/live and the stream key I chose when configuring Owncast and kicked off a test
stream to ensure that I could access it via https://mydomain. I added a VLC source4
to OBS and fed it a playlist of videos, and added some branding.
With that all working, I now needed a way to display the WhatsApp chat superimposed over the video. For this, I added a Window Capture source and pointed it at a Firefox window that was
showing a WhatsApp Web view of the relevant channel. I added a Crop/Pad filter to trim off the unnecessary chrome.
Next, I used the Firefox developer tools’ “Style Editor” to inject some extra CSS into WhatsApp Web. The class names vary frequently, so there’s no point in my documenting all of them here, but the essence of the changes was:
Changing the chat background to a solid bright colour (I used red) that can then be removed/made transparent using OBS’s Chroma Key filter. Because you have a good solid colour you can turn the Similarity and Smoothness way down.
Making all messages appear the same (rather than making my messages appear different from everybody else’s). To do this, I added:
.message-in, .message-out { align-items: flex-start !important; } to align them all to the left
[aria-label="You:"]::after { content: "Dan Q"; height: 15px !important; display: block; color: #00f !important; padding: 8px 0 0 8px; } to force my name to appear
even on my own messages
[aria-label^="Open chat details for "] { display: none; } to remove people’s avatars
[data-testid="msg-meta"] { display: none !important; } to remove message metadata
A hacky bit of CSS to make the backgrounds all white and to remove the speech bubble “tails” (sketched just after this list)
Removing all the sending/received/read etc. icons with [data-icon] { display: none; }
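That hacky CSS looked something like this – with the caveat, once again, that WhatsApp Web’s generated markup will certainly have changed by the time you read this, so the selectors below are illustrative only:
/* flatten all bubbles to plain white and lose the rounded corners: */
.message-in > div, .message-out > div {
  background: #fff !important;
  border-radius: 0 !important;
  box-shadow: none !important;
}
/* hide the speech-bubble "tails": */
span[data-icon="tail-in"], span[data-icon="tail-out"] {
  display: none !important;
}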
I aimed where possible to exploit selectors that probably won’t change frequently, like [aria-label]s; this improves the chance that I can use the same code next time. I
also manually removed “old” messages from the channel that didn’t need to be displayed on the big screen. I wasn’t able to consistently remove “X new messages” notifications, but I’ll
probably try again another time, perhaps with the help of an injected userscript.
A little bit of a shame that more people didn’t get to see the results of this experiment, but I’m sure I’ll use the techniques I’ve learned on another occasion.
Footnotes
1 Or, let’s be honest, if you’re Future Dan and you’re trying to remember how you did it
last time.
3 This could probably be adapted for any other chat system that has a web interface, so if
you prefer Telegram or Slack or whatever, that’s fine.
4 OBS’s VLC source is just amazing: not only can you give it files, but you can give it
URLs, meaning that you can set up a playlist of YouTube videos, or RTSP security camera feeds, or pretty much anything else you feel like (and have the codecs for).
When geocachers find a geocache, they typically “log” their find both in the cache’s paper logbook and on one of the online listing sites on which the cache’s coordinates can be
found.1
I’ve been finding and hiding geocaches for… a long while, so I’ve
seen lots of log entries from people who’ve found my caches (and those of others). And it feels to me like the average length of a
geocaching log entry is getting shorter.
“It feels to me like…” isn’t very scientific, though. Let’s see if we can do better.
Getting the data
To test my hypothesis, I needed a decade or so of logs. I didn’t want to compare old caches to new caches (in case people are biased by the logs before them) so I used Geocaching.com’s
own search to open the pages for the 500 caches closest to me that are each at least 10 years old.
I hacked together a quick
userscript to save all of the logs in a way that was easier than copy-pasting each of them but still didn’t involve hitting Geocaching.com’s API or automating bulk-scraping (which would violate their terms of service). Clicking each of several hundred tabs once every few minutes in
the background while I got on with other things wasn’t as much of an ordeal as you might think… but it did take a while.
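The userscript amounted to little more than this sketch – the selectors here are hypothetical stand-ins rather than Geocaching.com’s actual markup, but the approach is the real one – stashing each opened cache page’s logs for later export:
// ==UserScript==
// @name   Stash geocache logs
// @match  https://www.geocaching.com/geocache/*
// ==/UserScript==
(function () {
  // NB: ".log-row" etc. are illustrative selectors, not the site's real ones.
  const logs = Array.from(document.querySelectorAll('.log-row')).map(row => ({
    type: row.querySelector('.log-type')?.textContent.trim(),
    date: row.querySelector('.log-date')?.textContent.trim(),
    text: row.querySelector('.log-text')?.textContent.trim(),
  }));
  // One localStorage entry per cache page; mash them all into a CSV afterwards:
  localStorage.setItem('logs:' + location.pathname, JSON.stringify(logs));
})();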
I mashed that together into a CSV file and for the first time looked at the size of my sample data: ~134,000 log entries,
spanning 20 years. I filtered out everything over 10 years old (because some of the caches might have no logs that old) and stripped out everything that wasn’t a “found it” or “didn’t
find it” log.
That gave me a far more-reasonable ~80,000 records with which I could make Excel cry.2
Results
It looks like my hunch is right. The wordcount of “found” logs on traditional and multi-stage caches has generally decreased over time:
“Did not find” logs, which can be really helpful for cache owners to diagnose problems with their caches, have an even more-pronounced dip:
When I first saw that deep dip on the average length of “did not find” logs, my first thought was to wonder whether the sample might not be representative because the did-not-find rate
itself might have fallen over time. But no: the opposite is true:
Strangely, the only place that the trend is reversed is in “found” logs of virtual caches, which have seen a slight increase in verbosity.
Conclusion
Within the limitations of my research (80,000 logs from 500 caches each 10+ years old, near me), there are a handful of clear trends over the last decade:
Geocachers are leaving increasingly concise logs when they find geocaches.
That phenomenon is even more-pronounced when they don’t find them.
And they’re failing-to-find caches and giving up with significantly greater frequency.
Are these trends a sign of shortening attention spans? Increased use of mobile phones for logging? Use of emoji and acronyms to pack more detail into shorter messages? I don’t know.
I’d love to see some wider research, perhaps by somebody at Geocaching.com HQ (who has database access and is thus able to easily extract
enough data for a wider analysis!). I’m also very interested in whether the identity of the cache finder has an impact on log length: is it impacted by how long ago they
started ‘caching? Whether or not they have hidden caches of their own? How many caches they’ve found?
But personally, I’m just pleased to have been able to have a question in the back of my mind and – through a little bit of code and a little bit of data-mashing – have a pretty good go
at answering it.
Footnotes
1 I have a dream that someday cache logging could be powered by Webmentions or ActivityPub or some similar decentralised-Web technology, so that cachers can log their finds on any site on which a cache is listed or even
on their own site and have all the dots joined-up… but that’s pretty far-fetched I’m afraid. It’s not stopping some of us from experimenting
with possible future standards, though…
2 Just for fun, try asking Excel to extrapolate a second-order polynomial trendline across
80,000 pairs of datapoints. Just don’t do it if you’re hoping to use your computer for anything in the next quarter hour.
3 With stricter guidelines on how “virtual rewards” virtual caches should work than existed for the original pre-2005 virtuals, these new virtuals are more-likely than their predecessors to encourage or require longer logs.
The week before last, Katie shared with me that article from last month, Who killed Google Reader? I’d read it before so I
didn’t bother clicking through again, but we did end up chatting about RSS a bit1.
Katie “abandoned feeds a few years ago” because they were “regularly ending up with 200+ unread items that felt overwhelming”.
Conversely: I think that dropping your feed reader because there’s too much to read is… solving the wrong problem.
I think that David, like Katie, might be looking at his reader in a different way than I do mine.
RSS is not email!
I’ve been in the position that Katie and David describe: of feeling overwhelmed by the sheer volume of unread items. And I know others have, too. So let me share something I wish I’d learned sooner:
There’s nothing special about reaching Inbox Zero in your feed reader.
It’s not noble nor enlightened to get to the bottom of your “unread” list.
Your 👏 feed 👏 reader 👏 is 👏 not 👏 an 👏 email 👏 client. 👏
The idea of Inbox Zero as applied to your email inbox is about productivity. Any message in your email might be something that requires urgent action, and you
won’t know until you filter through and categorise them.
But your RSS reader isn’t (or shouldn’t be) there to add to your to-do list. Your RSS reader is a list of things you might like to read. In an ideal world, reaching “RSS Zero” would mean that you’ve seen everything on the Internet that you might
enjoy. That’s not enlightened; that’s sad!
Use RSS for joy
My RSS reader is a place of joy, never of stress. I’ve tried to boil down the principles that make it so, and here they are:
Zero is not the target.
The numbers are there to inspire you about how much there is “out there” for you, not to enumerate how much work you have to do.
Group your feeds by importance.
Your feed reader probably lets you group (folder, tag…) your feeds, so you can easily check-in on what you care about and leave other feeds for a rainy day.2 This is good.
Don’t read every article.
Your feed reader gives you the convenience of keeping content in one place, but you’re not obligated to read every single one. If something doesn’t interest you, mark it
as read and move on. No judgement.
Keep things for later.
Something you want to read, but not now? Find a way to “save for later” to get it out of your main feed so you don’t have to scroll past it every day! Star it or tag
it3 or push it to your link-saving or note-taking app. I use a
link shortener which then feeds back into my feed reader into a “for later” group!
Let topical content expire.
Have topical/time-dependent feeds (general news media, some social media etc.)? Have your reader “purge” unread articles after a while. I have my subscription to BBC News headlines expire after 5 days: if I’ve taken that long to
read a headline, it might as well disappear.4
Use your feed reader deliberately.
You don’t need popup notifications (a new article’s probably already up to an hour stale by the time it hits your reader). We’re all already slaves to
notifications! Visit your reader when it suits you. I start and end every day in mine; most days I hit it again a couple of other times. I don’t need a notification: there’s always new
content. The reader keeps track of what I’ve not looked at.
It’s not just about text.
Don’t limit your feed reader to just text. Podcasts are nothing more than RSS feeds with attached audio files;
you can keep track in your reader if you like. Most video platforms let you subscribe to a feed of new videos on a channel or playlist basis, so you can e.g. get notified about YouTube channel updates without having to fight with The
Algorithm. Features like XPath Scraping in FreshRSS let you subscribe to services that
don’t even have feeds: to watch the listings of dogs on local shelter websites when you’re looking to adopt, for example.
Do your reading in your reader.
Your reader respects your preferences: colour scheme, font size, article ordering, etc. It doesn’t nag you with newsletter signup popups, cookie notices, or ads. Make the
most of that. Some RSS feeds try to disincentivise this by providing only summary content, but a good feed reader can work
around this for you, fetching actual content in the background.5
Use offline time to catch up on your reading.
Some of the best readers support offline mode. I find this fantastic when I’m on an aeroplane, because I can catch up on all of the interesting articles I’d not
had time to yet while grounded, and my reading will get synchronised when I touch down and disable flight mode.
Make your reader work for you.
A feed reader is a tool that works for you. If it’s causing you pain, switch to a different tool6,
or reconfigure the one you’ve got. And if the way you find joy from RSS is different from mine, that’s fine: this is
a personal tool, and we don’t have to have the same answer.
2 If your feed reader doesn’t support any kind of grouping, get a better reader.
3 If your feed reader doesn’t support any kind of marking/favouriting/tagging of articles,
get a better reader.
4 If your feed reader doesn’t support customisable expiry times… well that’s not too
unusual, but you might want to consider getting a better reader.
5 FreshRSS calls the feature that fetches actual post content from the resulting page
“Article CSS selector on original website”, which is a bit of a mouthful, but you can see what it’s doing. If your feed reader doesn’t support fetching full content… well, it’s
probably not that big a deal, but it’s a good nice-to-have if you’re shopping around for a reader, in my opinion.
6 There’s so much choice in feed readers, and migrating between them is (usually)
very easy, so everybody can find the best choice for them. Feedly, Inoreader, and The Old Reader are popular, free, and easy-to-use if you’re looking to get started. I prefer a selfhosted tool so I use the amazing FreshRSS (having migrated from Tiny Tiny RSS). Here’s some more tips on getting started. You might prefer a desktop or
mobile tool, or even something exotic: part of the beauty of RSS feeds is they’re open and interoperable, so if for example
you love using Slack, you can use Slack to push feed updates to you and get almost all the features you need to do everything in my list, including grouping (using
channels) and saving for later (using Slackbot/”remind me about this”). Slack’s a perfectly acceptable feed reader for some people!
I’ve made a handful of tweaks to my RSS feed which I feel improves upon
WordPress’s default implementation, at least in my use-case.1 In case any of these improvements help
you, too, here’s a list of them:
Post Kinds in Titles
Since 2020, I’ve decorated post titles by prefixing them with the “kind” of post they are (courtesy of the Post Kinds
plugin). I’ve already written about how I do it, if you’re
interested.
RSS Only posts
A minority of my posts are – initially, at least – publicised only via my RSS feed (and places that are directly fed
by it, like email subscribers). I use a tag to identify posts to be hidden in this way. I’ve
written about my implementation before, but I’ve since made a couple of additional improvements:
Suppressing the tag from tag clouds, to make it harder to accidentally discover these posts by tag-surfing,
Tweaking the title of such posts when they appear in feeds (using the same technique as above), so that readers know when they’re seeing “exclusive” content, and
Setting an X-Robots-Tag: noindex, nofollow HTTP header when viewing such a tag or post, to discourage
search engines (code for this not shown below because it’s so very specific to my theme that it’s probably no use to anybody else!).
// 1. Suppress the "rss club" tag from tag clouds/the full tag list
function rss_club_suppress_tags_from_display( string $tag_list, string $before, string $sep, string $after, int $post_id ): string {
	foreach ( [ 'rss-club' ] as $tag_to_suppress ) {
		$regex = sprintf( '/<li>[^<]*?<a [^>]*?href="[^"]*?\/%s\/"[^>]*?>.*?<\/a>[^<]*?<\/li>/', $tag_to_suppress );
		$tag_list = preg_replace( $regex, '', $tag_list );
	}
	return $tag_list;
}
add_filter( 'the_tags', 'rss_club_suppress_tags_from_display', 10, 5 );
// 2. In feeds, tweak title if it's an RSS exclusive
function rss_club_add_rss_only_to_rss_post_title( $title ) {
	$post_tag_slugs = array_map( function( $tag ) { return $tag->slug; }, wp_get_post_tags( get_the_ID() ) );
	if ( ! in_array( 'rss-club', $post_tag_slugs ) ) return $title; // if we don't have an rss-club tag, drop out here
	return trim( "{$title} [RSS Exclusive!]" );
}
add_filter( 'the_title_rss', 'rss_club_add_rss_only_to_rss_post_title', 6 );
Adding a stylesheet
Adding a stylesheet to your feeds can make them much friendlier to beginner users (which helps drive adoption) without making them much less-convenient for people who know how
to use feeds already. Darek Kay and Terence Eden both wrote great articles about this just
earlier this year, but I think my implementation goes a step further.
In addition to adding some “Q” branding, I made tweaks to make it work seamlessly with both my RSS and Atom feeds by using
two <xsl:for-each> blocks and exploiting the fact that the two standards don’t overlap in their root namespaces. Here’s my full XSLT; you need to
override your feed template as Terence describes to use it, but mine can be applied to both RSS and Atom.2
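Boiled right down, the namespace trick looks like this (my real stylesheet does considerably more, so treat this as a sketch):
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:template match="/">
    <html><body>
      <!-- RSS 2.0 items live in the null namespace... -->
      <xsl:for-each select="/rss/channel/item">
        <h2><xsl:value-of select="title"/></h2>
      </xsl:for-each>
      <!-- ...Atom entries live in the Atom namespace, so only one of these
           two loops ever selects anything, whichever feed it's applied to: -->
      <xsl:for-each select="/atom:feed/atom:entry">
        <h2><xsl:value-of select="atom:title"/></h2>
      </xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>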
I’ve still got more I’d like to do with this, for example to take advantage of the thumbnail images I attach to posts. On which note…
Thumbnail images
When I first started offering email subscription options I used Mailchimp’s RSS-to-email service, which was… okay,
but not great, and I didn’t like the privacy implications that came along with it. Mailchimp support adding thumbnails to your email template from your feed, but WordPress themes don’t
by-default provide the appropriate metadata to allow them to do that. So I installed Jordy Meow‘s RSS Featured Image plugin which did it for me.
During my little redesign earlier this year I decided to go two steps further: (1) ditching the
plugin and implementing the functionality directly into my theme (it’s really not very much code!), and (2) adding not only a <media:content medium="image" url="..."
/> element but also a <media:description> providing the default alt-text for that image. I don’t know if any feed readers (correctly) handle this
accessibility-improving feature, but my stylesheet above will, some day!
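If you fancy doing likewise, the shape of the code is roughly this – a sketch rather than my theme’s exact implementation, and it assumes your posts have featured images set:
// declare the Media RSS namespace on the feed's root element:
add_action( 'rss2_ns', function() {
	echo 'xmlns:media="http://search.yahoo.com/mrss/" ';
} );
// emit <media:content> (and an alt-text <media:description>) for each item:
add_action( 'rss2_item', function() {
	if ( ! has_post_thumbnail() ) return;
	$thumbnail_id = get_post_thumbnail_id();
	$url = wp_get_attachment_image_url( $thumbnail_id, 'large' );
	$alt = get_post_meta( $thumbnail_id, '_wp_attachment_image_alt', true );
	printf( '<media:content medium="image" url="%s" />', esc_url( $url ) );
	if ( $alt ) printf( '<media:description>%s</media:description>', esc_html( $alt ) );
} );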
So there we have it: a little digital gardening, and four improvements to WordPress’s default feeds.
RSS may not be as hip as it once was, but little improvements can help new users find their way into this (enlightened?) way
to consume the Web.
If you’re using RSS to follow my blog, great! If it’s not for you, perhaps pick your favourite alternative way to get updates, from options including email, Telegram, the Fediverse (e.g. Mastodon), and more…
1 The changes apply to the Atom
feed too, for anybody of such an inclination. Just assume that if I say RSS I’m including Atom, okay?
2 The experience of writing this transformation/stylesheet also gave me yet another opportunity to remember how much I hate working
with XSLTs. This time around, in addition to the normal namespace issues and headscratching syntax, I
had to deal with the fact that I initially tried to use a feature from XSLT version 2.0 (a 22-year-old
version) only to discover that all major web browsers still only support version 1.0 (specified last millennium)!
It’s (approximately) our 0x10th anniversary1,
and, struggling to find a mutually-convenient window in our complex work schedules, we’d opted to spend a few days exploring the Isle of Man. Everything was fine, until we were aboard
the ‘plane.
Once everybody was seated and ready to take off, the captain stood up at the front of the ‘plane and announced that the flight had been cancelled2.
The Isle of Man closes, he told us (we assume he just meant the airport), and while they’d be able to get us there before it did, there wouldn’t be sufficient air traffic
control crew to allow them to get back (to, presumably, the cabin crews’ homes in London).
Back at the terminal we made our way through border control (showing my passport despite having not left the airport, never mind the country) and tried to arrange a rebooking,
only to be told that they could only manage to get us onto a flight that’d be leaving 48 hours later, most of the way through our mini-break, so instead we opted for a refund and gave
up.3
We resolved to try to do the same kinds of things that we’d hoped to do on the Isle of Man, but closer to home: some sightseeing, some walks, some spending-time-together. You know the
drill.
A particular highlight of our trip to the North Leigh Roman Villa – one of those “on your doorstep so you never go” places – was when the audio tour advised us to beware of the snails
when crossing what was once the villa’s central courtyard.
At first we thought this was an attempt at humour, but it turns out that the Romans brought with them to parts of Britain a variety of large edible snail – Helix pomatia –
which can still be found in concentration in parts of the country where they were widely farmed.4
There’s a nice little geocache near the ruin, too, which we were able to find on our way back.
Before you think that I didn’t get anything out of my pointless hours at the airport, though, it turns out I’d brought home a souvenir… a stinking cold! How about that for efficiency:
I got all the airport-germs, but none of the actual air travel. By mid-afternoon on Tuesday I was feeling pretty rotten, and it only got worse from then on.
I’m confident that Ruth didn’t mind too much that I spent Wednesday mostly curled up in a sad little ball, because it let her get on with applying to a couple of jobs she’s interested
in. Because it turns out there was a third level of disaster to this week: in addition to our ‘plane being cancelled and me getting sick, this week saw Ruth made redundant as her
employer sought to dig itself out of a financial hole. A hat trick of bad luck!
As Ruth began to show symptoms (less-awful than mine, thankfully) of whatever plague had befallen me, we bundled up in bed and made not one but two abortive attempts at watching a film
together:
Spin Me Round, which looked likely to be a simple comedy that wouldn’t require much effort
by my mucus-filled brain, but turned out to be… I’ve no idea what it was supposed to be. It’s not funny. It’s not dramatic. The characters are, for the most part, profoundly
uncompelling. There’s the beginnings of what looks like it was supposed to be a romantic angle but it mostly comes across as a creepy abuse of power. We watched about half and gave
up.
Ant-Man and the Wasp:
Quantumania, because we figured “how bad can a trashy MCU sequel be anyway; we know what to expect!” But we
couldn’t connect to it at all. Characters behave in completely unrealistic ways and the whole thing feels like it was produced by somebody who wanted to be making one of the
new Star Wars films, but with more CGI. We watched about half and gave up.
As Thursday drew on and the pain in my head and throat was replaced with an unrelenting cough, I decided I needed some fresh air.
I find myself wondering if (despite three jabs and a previous infection) I’ve managed to contract covid again, but I haven’t found the inclination to take a test. What would I do differently if I do have it, now, anyway? I feel like we
might be past that point in our lives.
All in all, probably the worst anniversary celebration we’ve ever had, and hopefully the worst we’ll ever have. But a fringe benefit of a willingness to change bases is that we can
celebrate our 10th5 anniversary next year, too.
Here’s to that.
Footnotes
1 Because we’re that kind of nerds, we count our anniversaries in base 16
(0x10 is 16), or – sometimes – in whatever base is mathematically-pleasing and gives us a nice round number. It could be our 20th anniversary, if you prefer octal.
2 I’ve been on some disastrous aeroplane journeys before, including one just earlier this
year which was supposed to take me from Athens to Heathrow, got re-arranged to go to Gatwick, got
delayed, ran low on fuel, then instead had to fly to Stansted, wait on the tarmac for a couple of hours, then return to Gatwick (from which I travelled – via Heathrow –
home). But this attempt to get to the Isle of Man was somehow, perhaps, even worse.
3 Those who’ve noticed that we were flying EasyJet might rightly give a knowing nod at
this point.
4 The warning to take care not to tread on them is sound legal advice: this particular
variety of snail is protected under the Wildlife and Countryside Act 1981!
5 Next year will be our 10th anniversary… in base 17. Eww, what the hell is base 17 for
and why does it both offend and intrigue me so?
There’s a perception that a blog is a long-lived, ongoing thing. That it lives with and alongside its author.1
But that doesn’t have to be true, and I think a lot of people could benefit from “short-term” blogging. Consider:
Photoblogging your holiday, rather than posting snaps to social media
You gain the ability to add context, crosslinking, and have permanent addresses (rather than losing everything to the depths of a feed). You can crosspost/syndicate to your favourite
socials if that’s your poison.
Blogging your studies, rather than keeping your notes to yourself
Writing what you learn helps you remember it; writing what you learn in a public space helps others learn too and makes it easy to search for your discoveries later.2
Recording your roleplaying, rather than just summarising each session to your fellow players
My D&D group does this at levellers.blog! That site won’t continue to be updated forever – the party will someday retire or, more-likely, come to a glorious but horrific end – but
it’ll always live on as a reminder of what we achieved.
One of my favourite examples of such a blog was 52 Reflect3 (now integrated into its successor The Improbable Blog). For 52 consecutive weeks my partner‘s brother Robin
blogged about adventures that took him out of his home in London and it was amazing. The project’s finished, but a blog was absolutely the right medium for it because now it’s got a
“forever home” on the Web (imagine if he’d posted instead to Twitter, only for that platform to turn into a flaming turd).
I don’t often shill for my employer, but I genuinely believe that the free tier on WordPress.com is an excellent
way to give a forever home to your short-term blog4.
Did you know that you can type new.blog (or blog.new; both work!) into your browser to start one?
What are you going to write about?
Footnotes
1 This blog is, of course, an example of a long-term blog. It’s been going in
some form or another for over half my life, and I don’t see that changing. But it’s not the only kind of blog.
2 Personally, I really love the serendipity of asking a web search engine for the solution
to a problem and finding a result that turns out to be something that I myself wrote, long ago!
4 One of my favourite features of WordPress.com is the fact that it’s built atop the
world’s most-popular blogging software and you can export all your data at any time, so there’s absolutely no lock-in: if you want to migrate to a competitor or even host your own
blog, it’s really easy to do so!
Of all of the videogames I’ve ever played, perhaps the one that’s had the biggest impact on my life1
was: Werewolves and (the) Wanderer.2
This simple text-based adventure was originally written by Tim Hartnell for use in his 1983 book Creating Adventure
Games on your Computer. At the time, it was common for computing books and magazines to come with printed copies of program source code which you’d need to re-type on your own
computer, printing being many orders of magnitude cheaper than computer media.3
When I first came across the source code to Werewolves, I’d already begun my journey into computer programming. This started alongside my mother and continued later – when her quantity of free time was unable to keep up with my level of enthusiasm – by myself.
I’d been working my way through the operating manual for our microcomputer, trying to understand it all.5
And even though I’d typed-in dozens of programs before, both larger and smaller, it was Werewolves that finally helped so many key concepts “click” for me.
In particular, I found myself comparing Werewolves to my first attempt at a text-based adventure. Using what little I’d grokked of programming so far, I’d put together
a series of passages (blocks of PRINT statements6)
with choices (INPUT statements) that sent the player elsewhere in the story (using, of course, the long-considered-harmful GOTO statement), Choose-Your-Own-Adventure style.
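In other words, something like this (a reconstruction, not my actual childhood code):
10 PRINT "YOU ARE IN A DARK CAVE."
20 PRINT "DO YOU GO NORTH OR SOUTH?"
30 INPUT A$
40 IF A$="NORTH" THEN GOTO 100
50 IF A$="SOUTH" THEN GOTO 200
60 GOTO 30
100 PRINT "YOU FALL DOWN A HOLE. THE END.": END
200 REM ...and another block like this for every single passage!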
Werewolves was… better.
Werewolves and Wanderer was my first lesson in how to structure a program.
Let’s take a look at a couple of segments of code that help illustrate what I mean (here’s the full code, if you’re interested):
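(Abridged and lightly paraphrased from the original listing, rather than quoted verbatim:)
10 GOSUB 2600: REM subroutine 1: initialise screen, RNG, player, and build the map
30 GOSUB 160:  REM subroutine 2: run one turn of the "main game" logic
40 IF RO<>11 THEN 30: REM RO holds the player's room; room 11 is outside the castle
50 PRINT "YOU'VE MADE IT OUT ALIVE!": REM the "winning" code lives up here, near the top
...
160 REM the "main game" subroutine begins here
...
3520 FOR T=1 TO 900: NEXT T: RETURN: REM the "delay loop": it just counts to 900!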
What’s interesting about the code above? Well…
The code for “what to do when you win the game” is very near the top. “Winning” is the default state. The rest of the adventure exists to obstruct that. In a
language with enforced line numbering and no screen editor7,
it makes sense to put fixed-length code at the top… saving space for the adventure to grow below.
Two subroutines are called (the GOSUB statements):
The first sets up the game state: initialising the screen (2610), the RNG (2620), and player
characteristics (2630 – 2660). This also makes it easy to call it again (e.g. if the player is given the option to “start over”). This subroutine
goes on to set up the adventure map (more on that later).
The second starts on line 160: this is the “main game” logic. After it runs, each time, line 40 checks IF RO<>11 THEN 30. This tests
whether the player’s location (RO) is room 11: if so, they’ve exited the castle and won the adventure. Otherwise, flow returns to line 30 and the “main
game” subroutine happens again. This broken-out loop improves the readability and maintainability of the code.8
A common subroutine is the “delay loop” (line 3520). It just counts to 900! On a known (slow) processor of fixed speed, this is a simpler way to put a delay in than
relying on a real-time clock.
The game setup gets more interesting still when it comes to setting up the adventure map. Here’s how it looks:
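(Again, abridged and paraphrased:)
2690 DIM A(19,7): REM 19 rooms, 7 values each
2700 FOR B=1 TO 19
2710 FOR C=1 TO 7
2720 READ D: A(B,C)=D: CS=CS+D: REM CS accumulates the "checksum"
2730 NEXT C: NEXT B
2740 IF CS<>355 THEN PRINT "ERROR IN DATA": STOP
2750 DATA 0,2,0,0,0,0,0: REM one row per room: N,S,E,W,Up,Down,contents (illustrative values)
...
2850 REM loop: scatter treasure into seven randomly-selected eligible rooms
2920 REM loop: place monsters -1 to -6 into randomly-selected eligible rooms
2970 REM rooms 4 and 16 (2970-2980) always get treasure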
What’s this code doing?
Line 2690 defines an array (DIM) with two dimensions9
(19 by 7). This will store room data, an approach that allows code to be shared between all rooms: much cleaner than my first attempt at an adventure with each room
having its own INPUT handler.
The two-level loop on lines 2700 through 2730 populates the room data from the DATA blocks. Nowadays you’d probably put that data in a
separate file (probably JSON!). Each “row” represents a room, 1 to 19. Each “column” represents the room you end up
at if you travel in a given direction: North, South, East, West, Up, or Down. The seventh column – always zero – represents whether a monster (negative number) or treasure
(positive number) is found in that room. This column perhaps needn’t have been included: I imagine it’s a holdover from some previous version in which the locations of some or all of
the treasures or monsters were hard-coded.
The loop beginning on line 2850 selects seven rooms and adds a random amount of treasure to each. The loop beginning on line 2920 places each of six
monsters (numbered -1 through -6) in randomly-selected rooms. In both cases, the start and finish rooms, and any room that already contains a treasure or monster, are ineligible. When my 8-year-old self finally deciphered what was going on, I was awestruck at this simple approach to making the game dynamic.
Rooms 4 and 16 always receive treasure (lines 2970 – 2980), replacing any treasure or monster already there: the Private Meeting Room (always
worth a diversion!) and the Treasury, respectively.
Curiously, room 9 (the lift) defines three exits, even though it’s impossible to take an action in this location: the player teleports to room 10 on arrival! Again, I assume this is
vestigial code from an earlier implementation.
The “checksum” that’s tested on line 2740 is cute, and a younger me appreciated deciphering it. I’m not convinced it’s necessary (it sums all of the values in the DATA statements and expects the total to be 355, presumably to limit tampering) or even useful, though: it certainly makes it harder to modify the rooms, which may undermine the code’s value as a teaching aid!
Something you might notice is missing is the room descriptions. Arrays in this language are strictly typed: this array can only contain integers and not strings. But there are
other reasons: line length limitations would have required trimming some of the longer descriptions. Also, many rooms have dynamic content, usually based on random numbers, which would
be challenging to implement in this way.
As a child, I did once try to refactor the code so that an eighth column of data specified the line number to which control should pass to display the room description. That’s
a bit of a no-no from a “mixing data and logic” perspective, but a cool example of metaprogramming before I even knew it! This didn’t work, though: it turns out you can’t pass a
variable to a Locomotive BASIC GOTO or GOSUB. Boo!10
Werewolves and Wanderer has many faults11.
But I’m clearly not the only developer whose early skills were honed and improved by this game, or who holds a special place in their heart for it. Just while writing this post, I
discovered:
Many, many people commenting on the above or elsewhere about how instrumental the game was in their programming journey, too.
A decade or so later, I’d be taking my first steps as a professional software engineer. A couple more decades later, I’m still doing it.
And perhaps that adventure – the one that’s occupied my entire adult life – was facilitated by this text-based one from the 1980s.
Footnotes
1 The game that had the biggest impact on my life, it might surprise you to hear, is
not among the “top ten videogames that stole
my life” that I wrote about almost exactly 16 years ago nor the follow-up list I published in its incomplete form three years later. Turns out that time and
impact are not interchangeable. Who knew?
2 The game is variously known as Werewolves and Wanderer, Werewolves and
Wanderers, or Werewolves and the
Wanderer. Or, on any system I’ve been on, WERE.BAS, WEREWOLF.BAS, or WEREWOLV.BAS, thanks to the CPC’s eight-point-three filename limit.
3 Additionally, it was thought that having to undertake the (painstakingly tiresome)
process of manually re-entering the source code for a program might help teach you a little about the code and how it worked, although this depended very much on how readable the code
and its comments were. Tragically, the more comprehensible some code is, the more long-winded the re-entry process.
5 One of my favourite features of home microcomputers was that seconds after you turned them on, you could start programming. Your prompt
was an interface to a programming language. That magic had begun to fade by the time DOS came to dominate
(sure, you can program using batch files, but they’re neither as elegant nor sophisticated as any BASIC dialect) and was completely lost by the era of booting
directly into graphical operating systems. One of my favourite features about the Web is that it gives you some of that magic back again: thanks to the debugger in a modern browser,
you can “tinker” with other people’s code once more, right from the same tool you load up every time. (Unfortunately, mobile devices – which have fast become the dominant way for
people to use the Internet – have reversed this trend again. Try to View Source on your mobile – if you don’t already know how, it’s not an easy job!)
6 In particular, one frustration I remember from my first text-based adventure was that
I’d been unable to work around Locomotive BASIC’s lack of string escape sequences – not that I yet knew what such a thing would be called – in order to put quote marks inside a quoted
string!
7 “Screen editors” is what we initially called what you’d nowadays call a “text editor”:
an application that lets you see a page of text at the same time, move your cursor about the place, and insert text wherever you feel like. It may also provide features like
copy/paste and optional overtyping. Screen editors require more resources (and aren’t suitable for use on a teleprinter) compared to line editors, which preceded them. Line editors only let you view and edit a single line at a time, which is how most of my first 6
years of programming was done.
8 In a modern programming language, you might use while true or similar for a
main game loop, but this requires pushing the “outside” position to the stack… and early BASIC dialects often had strict (and small, by modern standards) limits on stack height that
would have made this a risk compared to simply calling a subroutine from one line and then jumping back to that line on the next.
9 A neat feature of Locomotive BASIC over many contemporary and older BASIC dialects was
its support for multidimensional arrays. A common feature in modern programming languages, this language feature used to be pretty rare, and programmers had to do bits of division and
modulus arithmetic to work around the limitation… which, I can promise you, becomes painful the first time you have to deal with an array of three or more dimensions!
10 In reality, this was rather unnecessary, because the ON x GOSUB command
can – and does, in this program – accept multiple jump points and selects the one
referenced by the variable x.
11 Aside from those mentioned already, other clear faults include: impenetrable
controls unless you’ve been given instructions (although that was the way at the time); the shopkeeper will penalise you for trying to spend money you don’t have, except on food,
presumably as a result of programmer laziness; you can lose your flaming torch, but you can’t buy spares in advance (you can pay for more, and you lose the money, but you don’t get a
spare); some of the line spacing is a little wonky; combat’s a bit of a drag; lack of feedback to acknowledge the command you entered and that it was successful; WHAT’S
WITH ALL THE CAPITALS; some rooms don’t adequately describe their exits; the map is a bit linear; etc.
The other night, Ruth and I were talking about collective nouns (y’know, like a herd of cows or a flock of sheep) and came
up with the somewhat batty idea of solitary nouns. Like collective nouns, but for a singular subject (one cow, sheep, or whatever).
Then, we tried to derive what the words could be. Some of the results write themselves.1
Some of them involve removing one or more letters from the collective noun to invent a shorter word to be the solitary noun.
For others, we really had to stretch the concept by mutating words in ways that “felt right”, using phonetic spellings, or even inventing collective nouns so that we
could singularise them:
Did I miss any obvious ones?
Footnotes
1 Also consider “parliament of owls” ➔ “politician of owl”, “troop of monkeys” ➔ “soldier
of monkey”, “band of gorillas” ➔ “musician of gorilla”. Hey… is that where that band‘s name comes from?
2 Is “cluster of stars” ➔ “luster of star” anything?
3 Ruth enjoyed the singularised “a low of old bollock”, too.
At school, our 9-year-old is currently studying the history of human civilization from the late stone age through to the bronze age. The other week, the class was split into three
groups, each of which was tasked with researching a different piece of megalithic architecture:
One group researched Stonehenge, because it’s a pretty obvious iconic choice
The final group took the least-famous monument, our very own local village henge The Devil’s Quoits
And so it was that one of our eldest’s classmates was searching on the Web for information about The Devil’s Quoits when they found… my vlog on the subject! One of them recognised me and said, “Hey, isn’t that your Uncle Dan?”1
On the school run later in the day, the teacher grabbed me and asked if I’d be willing to join their school trip to the henge, seeing as I was a “local expert”. Naturally, I said yes,
went along, and told a bunch of kids
what I knew!
I was slightly intimidated because the class teacher, Miss Hutchins,
is really good! Coupled with the fact that I don’t feel like a “local expert”2, this became a
kick-off topic for my most-recent coaching session (I’ve mentioned how awesome my coach is before).
I eventually talked to the class mostly about the human geography aspects of the site’s story. The area around the Devil’s Quoits has changed so much over the millennia, and it’s a
fascinating storied history in which it’s been:
A prehistoric henge and a circle of 28 to 36 stones (plus at least one wooden building, at some point).
Medieval farms, from which most of the stones were taken (or broken up) and repurposed.
A brief (and, it turns out, incomplete) archaeological survey of the remains of the henge and the handful of stones still-present.
Quarrying operations leaving a series of hollowed-out gravel pits.
More-thorough archaeological excavation, backed by an understanding of the cropmarks visible from aircraft that indicate that many prehistoric people lived around this area.
Landfill use, filling in the former gravel pits (except for one, which is now a large lake).
Reconstruction of the site to a henge and stone circle again.3
It turns out that to be good enough to pass as a “local expert”, you merely have to know enough. Enough to be able to uplift and inspire others, and the humility to know when
to say “I don’t know”.4
That’s a lesson I should take to heart. I (too) often step back from the opportunity to help others learn something new because I don’t feel like I’m that experienced at
whatever the subject is myself. But even if you’re still learning something, you can share what you’ve learned so far and help those behind you to follow the same path.
I’m forever learning new things, and I should try to be more-open to sharing “as I
learn”. And to admit where I’ve still got a long way to go.
Footnotes
1 Of course, I only made the vlog because I was doing a videography course at the time and
needed subject matter, and I’d recently been reading a lot about the Quoits because I was planning on “hiding” a virtual geocache at the site, and then I got carried away. Self-nerdsniped again!
2 What is a local expert? I don’t know, but what I feel like is just a guy who
read a couple of books because he got distracted while hiding a geocache!
3 I’ve no idea what future archaeologists will make of this place when they find a
reconstructed stone circle and then, when they dig nearby, an enormous quantity of non-biodegradable waste. What was this strange stone circle for, they’ll ask themselves? Was it a
shrine to their potato-based gods, to whom they left crisp packets as a sacrifice?
4 When we’re talking about people from the neolithic, saying “I don’t know” is pretty
easy, because what we don’t know is quite a lot, it turns out!
I’ve long been a fan of Hugh Howey‘s Wool series of books (especially the first and third;
the second’s a bit weaker); in fact I’ve been enjoying re-reading them as a bedtime story for our eldest!1
Naturally, when I heard that it would become a TV series I was really excited! I’m enjoying
the series so far, especially thanks to its epic casting. It diverges a lot from the books –
sometimes in ways I love, sometimes in ways that confuse me – but that’s not what I wanted to talk about. I wanted to share how cool the opening credits sequence is!
Spoiler warning: even if you’re following the TV series there are likely to be major spoilers below based on my recollection of the books!
We open on the sun shining above a thick layer of all-obscuring clouds, tinted sickly yellow like poison gas, then descend into the darkness below. This hints at the uninhabitability of
the world above, foreshadows Lukas stargazing through gaps in the clouds2,
and foreshadows revelations about the argon gas used to flush the airlocks. The descent feels representative of humanity’s migration from the sunlit surface to the underground silos.
Looking down, we see the silo from above in a desolate landscape, introducing the world and its setting. The area around it is shrouded and hostile, reflecting the residents’ view of
the outside world as unsurvivable, but also masking our view of the other nearby silos that we might otherwise be able to see.
Descending “into” this representation of the silo, we get a view for only a split second that looks distinctly like the platter and spindle of a magnetic hard disk drive, broken-up as
if to represent corruption. This reflects a number of major plot points in the first season relating to the destruction and recovery of secret information from ancient storage devices.
Truly within the silo now, we see the spokes of landings radiating out from the great stairwell. The shape is reminiscent of a cog: a motif we’ll return to later. Humanoid shapes made
of light, like you get in a long exposure, move around, giving both the idea of a surveillance state, and setting us up to think of all such “glowing spots” as people (relevant later in
the credits).
A representation of the stairwell itself appears, with a lit gaseous substance whipping up and down it. Given that we’ve just been shown that this kind of “light” represents people,
it’s easy to see this as showing us the traffic that grinds up and down the silo, but it also feels like looking at part of a great machine, pumping gas through a condenser: notice that
there are no landings any more: this is all about the never-ending traffic.
A landing appears, and the gaseous forms are now more-clearly humanoid, almost as if they’re ghosts (perhaps pointing to the number of generations who’ve lived before, in this place, or
else a reference to Juliette’s investigation into the lives of those who lived before her).
More swirling gas-people, this time below an empty balconette: perhaps a nod to the source of Juliette’s uncommon name (in the books, it’s taken from Romeo & Juliet, a
possibly-illicit copy of which is retained by the silo and performed prior to Juliette’s birth and for at least a short while afterwards: she writes mechanical notes on the back of a
playscript), or perhaps a reference to George’s death after “falling” from a balcony.
Seen from a different angle, the colour shifts, and the gas/ghosts become white like the argon spray of the airlock. The people are all part of a machine: a machine that sends people
outside to clean and die. But more than that, the blue comes to represent a clean/perfect view of what a silo can be: a blueprint representation of the goals of its creators to
shape the inhabitants into their vision of the future:
We refocus on the shape of the silo itself, but just for a split second the view looks more like an x-ray… of a human spine? As if to remind us that it’s people who uphold the system of the silo, just as its concrete upholds its physical structure. Also a reminder that the silo is treated (by those who manage it, both within and beyond it) as an
organic thing that can be nurtured, grown, or if necessary killed.
This becomes the structure of the silo, but it almost looks architectural: a “clean” look, devoid of people or signs of life, like a blueprint, perhaps foreshadowing Donald’s role in
designing the structures that will eventually become the silos. The “space” between the arms is emphasised, showing how the social system that this structure imposes serves to separate
and segregate people: classism is a recurring theme in both the books and the TV series, and it eventually becomes apparent that the silos are specifically organised to reduce
communication between interdependent groups.
Returning to the “populated” silo – swirls of gas spiralling away down (or up: it’s no longer clear!), we catch a glimpse of a nautilus shell at the centre. The nautilus is a “living
fossil”, a creature from a bygone era that continues to survive in our modern world, which is an excellent metaphor for the population of the dead world who go on living beneath its
surface. The nautilus shell is a recurring image within the TV series: Gloria’s visions of the world that came before see her clutching one and tracing its shape, for example.
We cut to what appears to be a seed, representing both the eventual conclusion of the story (Juliette, Charlotte and the Silo 18 survivors’ discovery of the cache of supplies that will
allow them to begin rebuilding the world) and also the nature of the silo3. The seed we see initially
appears to fail and degrade, becoming nothing at all, before eventually growing into the beginnings of a strong new plant. This could represent the eventual and inevitable collapse of
silo 18, among others, but the eventual flourishing of those that survive, or on a broader scale the collapse of modern civilization to be replaced by the silos, or even of the silo
system to be replaced with that which follows it after the conclusion of the story. Lots of options!
It’s also possibly a reflection of the harsh and opaque eugenics/population control mechanism imposed by the “lottery”, which becomes a major plot point in the TV series much earlier
than in the books.
We cut to trees, thriving despite a yellow fog. The sky can’t be seen, which is a reminder that all of humanity’s resources must now be produced underground (trees are especially rare
and prized, leading to a shortage of paper4).
It seems to be deliberately left unclear whether the trees we see are on the surface before the fall of humanity, on the surface after the fall, or grown underground.
A fruit falls from the tree, which links back to the seed we saw germinate earlier but also seems likely to be a representation of the concept of original sin. The grand idea of the silos was to create a better world on the other side of a man-made catastrophe, but this idea is inherently flawed because the systems are constructed by the same people who are complicit in the destruction of the world that came before. The structure that’s put in place through the Pact carries the weight of the sins of its creators: even though the inhabitants of silo 1 ultimately intend to destroy themselves, they’re unable to create a new world that is both better than the one that came before and free from their influence: it’s an impossibility.
It’s also possibly a representation of the religious beliefs of some inhabitants that the creators of the silo should be revered as gods. This was a recurring plot point in the books
but has been somewhat muted in the TV series so far.
The metaphor continues when we see that this falling fruit is already beginning to rot, degrading as it tumbles towards the earth. We don’t see it strike the ground: it almost seems to
hover in the air, uncertain and undecided, and reflective of the eventual end when the inhabitants of the silos break free from the shackles of the system that’s been constructed for
them and can choose their own destiny. Or perhaps we don’t see the collision simply because the camera continues to fall down into the earth and below the surface again?
This time, we’re very deep: all the way down in the depths of Mechanical, at the bottom of the silo: home to our heroine and source of many aspects of the story. In the centre, a shaft
descends, connecting us back to the “spine” of the silo – the great staircase – but it’s harder to see as a wealth of machinery appears to support it, occluding our view. From down here
in Mechanical it appears that the machines keep the silo running, whereas further up it looked like humans pumped through it like blood, which reflects Juliette’s disagreements with
many of those up-top about their priorities during her time as Sheriff and, later, as Mayor.
We see a cloud of steam, like that used to drive the generator that brings life to the silo, and for a moment it’s impossible to differentiate it from the cloud of people we saw
earlier, rushing up and down the stairs. Look closely at the steam, though, and you’ll see that it too contains the ghosts of people.
Deeper still, the cog motif returns and we’re buried in an impossible number of interconnected gears. The machine that they support is impossible to comprehend from within: How big is
it? What is it for? Who made it and why?
The final cog mutates into the staircase again, winding away from us and hammering the point home.
The staircase changes again, first becoming an outline of itself (a callback to the “blueprint” design we saw earlier, reminding us that this thing was designed to be like
this)…
…but this becomes a double-helix, representing the chaos of life. Again, the metaphor is of a perfect idea constructed to achieve a goal, but the unpredictability of humans
leads to a different outcome.
Seen from above, the staircase now looks like an enormous clock, a machine of cogs each turning slower than the one beneath, counting down until the end of the silo experiment in
accordance with the whim of its creators. Except, of course, if something were to break this machine.
Seen from the side, the silo is a hive of activity, but the shape the levels form in this depiction is exactly like the rotors of a steam turbine, and this is reflected by an image of steam – almost in the shape of a growing tree – passing behind it in the background. The generator and its rotor blades are a significant early plot point in both the books and the TV series, and the books in particular use engine metaphors to explain Juliette’s interpretation of different situations she finds herself in, even those which are distinctly interpersonal rather than mechanical.
Looking back up the silo, towards the light, we can now see its shape and structure for what it is: just another cog – a part of an even bigger machine that is the whole Operation Fifty
silo network. The people are the lifeblood of this machine, but they’re as replaceable and interchangeable as any other part.
Finally, we crossfade to the title, looking like a stencil. Each letter is more-degraded than the one before it, representing the impossibility of building a perfect system.
The credits sequence is less than 90 seconds long, but so much is packed into it. It’s just great.
Footnotes
1 We’re into the final act of Dust now and it’s been amazing to experience the characters – loveable and hateable – of the series.
2 Curiously, in the TV series Lukas is only ever seen stargazing on clear nights, which is
one of those confusing choices I mentioned. I suspect it’s for aesthetic reasons and to help add some romance to Juliette and Lukas’s courtship.
3 A silo is, of course, a place to store something valuable through the hard times. This
is exactly what the silos in this story are for.
4 The shortage of paper shows up many times in the books but is somewhat glossed-over in
the TV series. I’m not sure how they’ll reconcile that with the impact of the discovery of the Legacy, later.
I’m sure I can’t be the only person who’s been asked “why can’t the (or ‘shouldn’t the’) WordPress post editor let multiple people edit a post at the same time?”. Often, people will compare it to e.g. Google Docs.
Dawid summarised the challenging issues in any effort to implement this much-desired feature. Some of them are examples of those unsolved problems that keep rearing their heads in
computer science, like the two generals’ problem, but even the solvable problems are difficult: How does one
handle asynchronous (non-idempotent) commutative operations? How is the order of disparate actions determined? Which node is the source of truth? If a server is used, where is that
server (with a nod to quite how awful the experience of implementing a Websockets server in PHP can be…)? And so on…
I really appreciated Dawid’s reference to the various bits of academic literature that have appeared over the last four decades (!) about how these problems might be solved. It’s a strong reminder that these things we take for granted in live-updating multi-user web applications are not trivial, and every question you can answer raises more questions.
There are some great early proofs-of-concept, so we’re “getting there”, and it’s an exciting time. Personally, I love the idea of the benefits this could provide for offline editing
(perhaps just because I’m still a huge fan of a well-made PWA!).
James Giroux’s goal: that we all become more curious about and more invested in our team’s experiences, from a humanistic standpoint. His experience of the organic growth of software companies is very, very familiar: you make a thing and give it away, then you need more people, then you’ve somehow got a company and it’s all because you just had an idea once. Sounds like Three Rings!
James was particularly keen to share with us the results of his Team Experience Index research, and I agree that some of the results are especially exciting, in particular the willingness of underrepresented groups, especially women, to engage with the survey: this provides hugely valuable data about the health of
teams working in the WordPress space.
“We have this project that we work with and contribute to, that we love,” says James, in an attempt to explain the highly-positive feedback that his survey respondents gave when asked
questions about the authenticity of their purpose and satisfaction in their role.
So, what do we do with these findings? How do WordPress-ey companies improve? James recommends that we:
Get better at showing what recognition, celebration, and career growth looks like,
Improve support and training for team leaders to provide them with the tools to succeed and inspire, and
Bridge the gap between leadership and team members with transparent, open dialogue.
Good tips, there.
The Big Photo
A WordCamp tradition is to try to squeeze every willing participant into a photo. Clearly with the size that these events are, nowadays, this requires some wrangling (and, in this case,
the photographers standing atop the roof of a nearby building to get everybody into frame).
I’ll have to keep an eye out for the final picture and see if I can find myself in it.
I always find that learning about bleeding edge CSS techniques makes me feel excited and optimistic, perhaps because
CSS lends itself so well towards a progressive enhancement approach to development: often, you can start using a new technique
today and it’ll only benefit, say, people using a beta version of a particular browser (and perhaps only if they opt-in to the applicable feature flag). But if you’ve designed
your site right then the lack of this feature won’t impact anybody else, and eventually the feature will (hopefully) trickle-down into almost everybody’s Web experience.
Anyway, that’s what Fellyph Cintra says too, but he adds that possibly we’ve still not grown out of thinking that browsers take a long time between versions. 5 years passed between the release of Internet Explorer 6 and Internet Explorer 7, for example! But nowadays most browsers are evergreen with releases each month! (Assuming we quietly ignore that Apple don’t send new versions of Safari to old versions of macOS, continuing to exacerbate a problem that we used to see with Internet Explorer on Windows, ahem.)
An important new development may come from Baseline, a project to establish a metric of what you can reliably use on the Web today. So a
bit like Can I Use, I guess, but taken from the opposite direction: starting from the browsers and listing the features, rather than the other way
around.
Anyway, Fellyph went on to share some exciting new ideas that we should be using, like:
object-fit and object-position, which can make the contents of any container “act like” a background
aspect-ratio, which I’m already using and I love, but I enjoyed how Fellyph suggested combining the two to crop images to a fluid container on the client side (see the sketch after this list)
scroll-behavior: smooth, which I’ve used before; it’s pretty good
clamp, which I use… but I’m still not sure I fully grok it: I always have to load some documentation with examples when I use it
@container queries, which can apply e.g. (max-width: ...) rules to things other than the viewport, which I’ve not found a need for yet but I can see the
value of it
@layer, which grants an additional level of importance in the cascade: for example, you might load a framework into a layer (with @import url(...) layer(framework)) which is defined as lower-priority than your override layer, meaning you won’t have to start slapping !important all over the shop
@media (400px <= width <= 600px)-style media queries, which are much easier to understand than min-width: if you’re used to thinking in a
more-procedural programming language (I assume they work in container queries too!)
It’s also worth remembering:
@supports, which is badass and I love and use it already (it was especially useful as display: grid began to roll out and I wanted to start using it but needed to use a fallback method for browsers that didn’t support it yet; see the fallback sketch below)
:has(), which I’ve long thought is game-changing: styling something based on what it contains is magical; not really suitable for mainstream use yet without
Firefox support, though (it’s still behind a feature flag)! Fellyph sold me on the benefit of :not(:has(...)), though!
Nesting, which again doesn’t have Firefox support yet but provides SCSS-like nesting in CSS, which is awesome
Scroll-driven animations, which can e.g. do parallax effects without JavaScript (right now it’s Canary only, mind…), using e.g. animation-timeline: and
animation-range: to specify that it’s the scroll position within the document that provides the timeline for the animation
And keeping an eye on upcoming things like text-wrap: balance (which I’m already excited by), popover, selectmenu, view transitions (which I’ve been experimenting with because they’re cool), and scoped style.
For my second workshop, I joined Google’s Adam Silverstein to watch him dissect a few participants’ websites’ performance using Core Web Vitals as a metric. I think I already know the basics of Core Web Vitals, but I’ve still got plenty to learn when it comes to improving my score (especially on work-related sites with an unpleasant reliance on heavyweight frameworks like React, in my experience).
We talked a lot about render blocking (thanks to JS and CSS in the
<head>), thread blocking (by scripts, especially those reacting to user input), TTFB (relating to actual network
and server performance, or at least server-side processing), TBT (the time between FCP and TTI), and the upcoming change to measure INP rather than FID. That’s a lot of acronyms.
The short of it is that there are three pillars to Core Web Vitals: loading (how long until the page renders), interactivity (how long until the page
responds to user interaction), and stability (how long it takes for the page to cease layout shifts as a result of post-load scripts and stylesheets). I was pleased
that Adam acknowledged the major limitation of lab testing resulting from developers often using superior hardware and Internet connections to typical users, and how if you’re
serious about performance metrics you’ll want to collect RUM data.
I came away with a few personalised tips, but they’re not much use for your site: I paid attention to the things that’ll be helpful for the sites I look after. But
I’ll be taking note of his test pages so I can play with some of the tools he demonstrated later on.
I couldn’t liveblog this because I spent too much of the session applauding. A few highlights from memory:
Phase 2 (of 4) of Gutenberg is basically complete, which is cool. Some back-and-forth about the importance of phase 4 (bringing better multilingual support to WordPress) and how it
feels like it’s a long way away.
In the same vein as his 2016 statement that WordPress developers should “learn JavaScript deeply”, Matt leant somewhat into the idea that from today they
should “watch AI carefully”; I’m not 100% convinced, but it’s not been stopping me from getting involved with a diversity of AI experiments (including some WordPress-related ones)
anyway.
Musings about our community being a major part of why WordPress succeeded (and continues to thrive) unlike some other open source projects of its era. I agree that’s a
factor, but I suspect that being in the right place at the right time was also important. Perhaps more on that another time.
Announcement of the next WordCamp Europe location.
This post is basically a live-blog of everything I got up to, and it’s mostly for my own benefit/notetaking. If you don’t read it, nobody will blame you.
David Artiss took the courageous step of installing 36 popular plugins onto a fresh WordPress site and was, unsurprisingly, immediately bombarded by a
billion banners on his dashboard. Some were merely unhelpful (“don’t forget to add your API key”), others were annoying (“thanks for installing our plugin”), and plenty more were
commercial advertisements (“get the premium version”) despite the fact that WordPress.org guidelines recommend against this. It’s no surprise that this kind of “aggressive promotion” is
the single biggest annoyance that people reported when David asked around on social media.
Similarly, plugins which attempt to break the standard WordPress look-and-feel by e.g. hoisting themselves to the top of the menu, showing admin popovers, putting settings sections in places other than the settings submenu, and so on are a huge annoyance to everybody. I get sufficiently frustrated by these common antifeatures of plugins I use that I actually maintain a plugin for my own use that “fixes” the ones that aggravate me the most!
I’m unconvinced that we can rely on plugin developers to independently fix the kinds of problems that come high on David’s list. I wonder if there’s mileage in WordPress Core
reimplementing the way that the main navigation menu works such that all items in it can be (easily) re-arranged by users to their own preference? This would undermine the perceived
value to plugin developers of “hoisting” their own to the top by allowing users to counteract it, and would provide a valuable feature to allow site admins to streamline their workflow:
use WooCommerce but only in a way that’s secondary to your blog? Move “Products” below “Posts”! Etc.
Aaron Reimann from ClockworkWP gave us a tour of how WordPress has changed over the course of its 20-year history, starting even slightly
before I started using WordPress; my blog (previously powered by some hacky PHP, previouslier powered by some hackier Perl, previousliest written in static HTML) switched to WordPress
in 2004, when it hit version 1.2, so it was fun to get the opportunity to see some even older versions
illustrated.
It was great to be reminded how far the Core code has come over that time. Early versions of WordPress – as was common among PHP applications at the time! – had very few files and each could reliably be expected to be a stack of SQL, wrapped in a stack of code, wrapped in what’s otherwise an HTML file: no modularity!
There were very few surprises for me in this talk, as you might expect for such an “old hand”, but I really enjoyed the nostalgia of exploring WordPress history through his eyes.
I enjoyed putting him on the spot with a “spicy” question at the end of his talk, by asking him whether, alongside everything we’ve gained over the years, there’s anything we lost along the way. He answered well, pointing to the somewhat bloated stack of plugins that are commonplace on big sites nowadays and the ease with which admins can just “click and install” more of them. I agree with him, although personally I miss built-in XFN support…
Networking And All That
There are a lot of exhibitors with stands, but I tried to do a circuit or so and pay attention at least to those whose owners I’ve come into contact with in a professional
capacity. Many developers who make extensions for WooCommerce, of course, sell those extensions through WooCommerce.com, which means they come
into routine direct contact with my code (and it can mean that when their extension’s been initially rejected by our security scanners or linters, it’s me their developers first want to
curse!).
It’s been great to connect with people using WordPress to power the Web in a whole variety of different contexts, but it somehow still feels strange to me that WordPress has such a
commercial following! Even speaking as somebody who’s made their living at least partially out of WordPress for the last decade plus, it still feels to me like its greatest
value comes from its use for personal publishing.
The feel of a WordCamp with its big shiny sponsors is enormously different from, say, the intimacy and individuality of a Homebrew Website
Club meeting, and I think that’s something I still need to come to terms with. WordPress’s success story comes from many different causes, but perhaps chief among them is the fact
that it’s versatile enough to power the website of a government, multinational, or household-name brand… but also to run the smallest personal indie blog. I struggle to comprehend that,
even with my background.
I was proud of my colleagues for the “gimmick” they were using to attract people to the Woo stand: you could pick up a “credit card” and use it to make a purchase (of Greek olive oil)
using a website, see your order appear on the app at the backend in real-time, and then receive your purchase as a giveaway. The “credit
card” doubles as a business card from the stand, the olive oil is a real product from a real, local producer (who really uses WooCommerce to sell online!), and when you provide an email
address at the checkout you can opt-in to being contacted by the team afterwards. That’s some good joined-up thinking by my buddies in marketing!
Petya Petkova observed that it’s commonplace to take the easy approach and make a website look like… well, every other website. “Web
deja-vu” is a real thing, and it’s fed not only by the ebbs and flows of trends in web design but by the proliferation of indistinct themes that people just install-and-use.
Choice of colours and typography can be used to tell a story, to instil a feeling, to encourage engagement. Scrolling can be used as a metaphor for storytelling (“scrolly-telling”,
Petya calls it). Animation flow can be used to direct a user’s attention and drive focus and encourage interaction.
A lot of the technical concepts she demonstrated – parts of a page that scroll at different speeds, typography that shifts or changes, videos used in a subtle way to accentuate other
content, etc. – can be implemented in the frontend with WebGL, Three.js and the like. Petya observes that moving this kind of content interactivity into the frontend can produce an
illusion of a performance improvement, which is an argument I’ve heard before, but personally I think it’s only valuable if it’s built as a progressive enhancement: otherwise, you’re
always at risk that your site won’t look like you’d hope.
I note, for example, that Petya’s agency’s site shows only an “endless spinner” when viewed in my browser (which blocks the code.jQuery CDN by
default, unless allowlisted for specific sites). All of the content is there, on the page, if you View Source, but it’s completely invisible if an external JavaScript fails to
load. That doesn’t just happen when weirdos like me disable JavaScript in their browsers: it can happen if the browser interacts badly with the script, or if the user’s Internet
connection is ropey, or a malware scanner misfires, or if government censorship blocks the CDN, or in any number of other conditions.
So yeah: uniqueness and creativity are great, and I like what she’s proposing, but not the way she goes about it. The first person to ask a question wisely brought up accessibility, and Petya answered well that accessibility technologies can bridge the gap, but I’d counter that it’s preferable to build accessibly in the first instance: if you have to use an aria- attribute it’s a good sign that you probably already did something wrong (not always, but it’s certainly a pointer that you ought to take a step back and check!).
Several other good questions and great answers followed: about how to showcase a preliminary design when the design is dependent upon animation and interactivity (which I’ve witnessed before!), on the value of server-side rendering of components, and about how to optimise for smaller screens. Petya clearly knows her stuff in all of these areas and had confident responses.
Oliver Sild is the kind of self-taught hacker, security nerd, and community builder that I love, so I wasn’t going to miss his talk.
It’s good news in general in WordPress Security-land… but CSRF is on the up-and-up (overtaking XSS) in the plugin space. That, and all the broken access control we see in the admin area, are things I’ll be keeping in mind next time I’m arguing
with a vendor about the importance of using nonces and security checks in their extension (I have this battle from time to time!).
But an interesting development is the growth of the supply chains in the WordPress plugin ecosystem. Nowadays a plugin might depend upon another plugin which might depend upon a
library… and a patch applied to the latter of those might take time to be propagated through the chain, providing attackers with a growing window of opportunity.
A worrying thought is that while plugin directory administrators will pull and remove plugins that have longstanding unactioned security issues, that doesn’t help the sites that already have that plugin installed and are still using it! There’s a proposal to allow WordPress to notify admins if a plugin used on a site has been dropped for security reasons, but it was opened 9 years ago and hasn’t seen any real movement, soo…
I like that Oliver advocated for security researchers being acknowledged as contributors on an equal footing with the developers of your software. But then, I would say that, as somebody who breaks into things once in a while and then tells the affected parties how to fix the problem that allowed me to do so! He also provided a whole wealth of tips for site owners and agencies to try to keep their sites safe, but little that I wasn’t aware of already.
It was about this point in the day, glancing at my schedule and realising that at any given time there were up to four other sessions running simultaneously, that I really got
a feel for the scale of this conference. Awesome. Meanwhile, Oliver was fielding the question that I’m sure everybody was thinking: with Gutenberg blocks powered by JavaScript that are
often backed by a supply-chain of the usual billion-or-so files you find in your node_modules directory, isn’t the risk of supply chain attacks increasing?
Spoiler: yes. Did you notice earlier in this post I mentioned that I don’t use Gutenberg on this site yet?
My first “workshop” was run by Giulia Laco, on the topic of readable content and design.
Giulia began by reminding us how short the attention span of Web readers is, and how important the right typographic choices are in ensuring that people actually read your content. I
fully get this – I think that very few people will have the attention span to read this part of this very blog post, for example! – but I loved that she hammered the point home
by presenting every slide of her presentation twice (or more), “improving” the typographic choices as she went along: an excellent and memorable quirk.
Our capacity to read and comprehend a text is affected by a combination of common (distance, lighting, environment, concentration, mood, etc.), personal (age, proficiency, motivation, accessibility requirements, etc.), and typographic (face, style, size, line length and spacing, contrast, width, rhythm, etc.) factors. To explore the impact of the typographic factors,
the group dived into a pre-prepared Codepen and a shared Figma diagram. (I immediately had a TIL moment over the font-synthesis: CSS property!)
Things get interesting at the intersection of readability and accessibility. For example, WCAG accessibility requirements demand that you don’t use images of text (we used to
do this a lot back before we could reliably use fonts on the web, and before we could easily have background images on e.g. buttons for navigation). But this accessibility
requirement also aids screen readability when accounting for e.g. “retina” screens with virtual pixel ratios.
Giulia provided a great explanation of why we may well think in pixels (as developers or digital designers) but we’re unlikely to use them everywhere: I’d internalised
this lesson long ago but I appreciated a well-explained justification. The short of it is: screen zoom (that fancy zoom feature you use in your browser all the time, especially on
mobile) and text zoom (the one you probably don’t use, or don’t use so much) are different things, and setting a pixel-based font size in the root node wrecks the latter, forcing some
people with accessibility needs to use the former, which is likely to result in vertical scrolling. Boo!
I also enjoyed seeing this demo of how the different hyphenation-points in different languages (because of syllable stress) can impact on
your wrapping points/line lengths when content is translated. This can affect any website, of course, because any website can be the target of automatic translation.
Plus, I enjoyed Giulia’s thoughts on the value of serifed fonts (even on digital displays) for improving typographic readability of the letters d, b, p and q, which are often mirror- or rotationally-symmetric to one another in sans-serif fonts. It’s amazing to have something – in this case, a psychological letter transposition – pointed out that I’ve experienced but never pinned down the reason for, before. Neat!
It was a shame that this workshop took place late in the day, because many of the participants (including me) seemed to have flagging energy levels!
Altogether a great (but intense) day. Boggles my mind that there’s another one like it tomorrow.
Among the many perks of working for a company with a history so tightly-intertwined with that of the open-source WordPress project is that license to attend WordCamps – the biggest WordPress conferences – is basically a
given.
It’s frankly a wonder that this is, somehow, my first WordCamp. As well as using it1 and developing atop
it2,
of course, I’ve been contributing to WordPress since 2004 (albeit only in a tiny way, and not at all for most of the last decade!).
Today is Contributor Day, a pre-conference day in which folks new and old get together in person to hack on WordPress and WordPress-adjacent projects. So I met up with Cem, my Level 4 Dragonslayer friend, and we took an ultra-brief induction into WP-CLI3
before diving in to try to help write some code.
So today, as well as meeting some awesome folks, I got to write an overly-verbose justification for a
bug report being invalid and implement my first PR for WP-CLI: a bugfix for a strange quirk in output formatting.
I hope to be able to continue contributing to WP-CLI. I learned a lot about it today, and while I don’t use it as much as I used to in my multisite-management days, I still really
respect its power as a tool.
Footnotes
1 Even with the monumental stack of custom code woven into DanQ.me, a keen eye will
probably spot that it’s WordPress-powered.
3 WP-CLI is… it’s like Drush but for WordPress, if that makes sense to you? If not: it’s a
multifaceted command-line tool for installing, configuring, maintaining, and managing WordPress installations, and I’ve been in love with it for years.
I just spent a lightweight week in Rome with fellow members of Automattic‘s Team Fire.
Among our goals for the week was an attempt to strengthen the definition of who our team are, what we work on, and how and why we do so. That’s
basically a team-level identity, mission, vision, and values, right?
Normally when you play Dixit, you select a card from your hand – each shows a unique piece of artwork – and try to describe it in a way that’s precise enough that some
of the other players will later be able to pick it out of a line-up, but ambiguous enough that not all the other players will. It’s a delicate balancing act. Even when our old
Geek Night was in full swing we didn’t use to play it often because our well-established group’s cornucopia of in-jokes and references made it trivially easy to “target”
your descriptions at specific players1, but it’s still a solid icebreaker activity.
Perhaps it was the fantasy artwork that inspired us or maybe it just says something about how my team sees themselves, but what we came up with had a certain… swords-and-sorcery… even
Dungeons & Dragons… feel to it.
Our team’s new identity isn’t finalised, but I love the fact that we’ve been able to inject a bit of fun and whimsy into it. At our last draft, my team looks to be defined as comprising:
Gareth, Level 62 Pathfinder, leading the way through the wilds
Bero, Level 5 Battlesmith, currently lost in the void
Dan (me!), Level 5 Arcane Trickster, breaking locks and stealing treasure
Cem, Level 4 Dragonslayer, smashing doors and bugs alike
Lae, Level 7 Pirate, seabound rogue with eyes on the horizon
Kyle, Level 5 Apprentice Bard, master of words and magic
Simran, Level 6 Apprentice Code Witch, weaving spells from nature
I think that’s pretty awesome.
Footnotes
1 Also: I don’t own any of the expansion packs and playing with the same cards over and
over again gets a bit samey.
2 The “levels” are simply the number of years each teammate has been an Automattician,
plus one.