As I approach my first full year as an Automattician, I find myself looking back on everything I’ve learned… but also looking around at all the things I still don’t understand! I’m not
learning something new every day any more… but I’m still learning something new most weeks.
This summer I’ve been getting up-close and personal with Gutenberg components. I’d mostly managed to avoid learning the React (eww; JSX, bad documentation, and an elephantine payload…) necessary to hack Gutenberg, but in
helping to implement new tools for WooCommerce.com I’ve discovered that it’s… not quite as painful as I’d thought. There are even some bits I quite like. But I don’t expect to
fall in love with React any time soon. This autumn I’ve been mostly working on search and personalisation, integrating customer analytics data with our marketplace to help understand
what people look for on our sites and using that to guide their future experience (and that of others “like” them). There’s always something new.
My team continues to grow, with two newmatticians this month and a third starting in January. In fact, my team’s planning to fork into two closely-linked subteams; one with a focus on
customers and vendors, the other geared towards infrastructure. It’s exciting to see my role grow and change, but I worry about the risk of gradually pigeon-holing myself into an
increasingly narrow specialisation. Which wouldn’t suit me: I like to keep a finger in all the pies. Still: my manager’s been reassuring me that this isn’t likely to be the case and
that our plans are going in the “right” direction.
On the side of my various project work, I’ve occasionally found the opportunity for more-creative things. Last month, I did some data-mining over the company’s “kudos” history of the
last five years and ran it through vis.js to try to find a new angle on understanding how Automattic’s staff, teams, and divisions interact with one
another. It led to some interesting results: panning through time, for example, you can see the separate island of Tumblr staff who joined us
during the acquisition gradually become more-interconnected with the rest of the organisation over the course of
the last year.
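(If you’re curious about the mechanics: vis.js does most of the heavy lifting for this kind of graph. Here’s a minimal sketch of the sort of code involved, with entirely made-up example data rather than real kudos records:)

    // People become nodes; kudos given become (weighted) edges:
    const nodes = new vis.DataSet([
      { id: 1, label: 'Alice', group: 'Team A' },
      { id: 2, label: 'Bob',   group: 'Team A' },
      { id: 3, label: 'Carol', group: 'Tumblr' },
    ]);
    const edges = new vis.DataSet([
      { from: 1, to: 2, value: 5 }, // value = kudos count; fattens the edge
      { from: 2, to: 3, value: 1 },
    ]);
    new vis.Network(
      document.getElementById('kudos-graph'),
      { nodes, edges },
      { physics: { stabilization: true } } // let the force layout settle into clusters
    );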
The biggest disappointment of my time at Automattic so far was that I’ve not managed to go to a GM (Grand Meetup)! The 2019 one – which looked awesome – took place only a couple of weeks before my contract started (despite my best efforts to wrangle
my contract dates with the Bodleian and Automattic to try to work around that), but people reassured me that it was okay because I’d make it
to the next one. Well… 2020 makes fools of us all, I guess, because of course there’s no in-person GM this year. Maybe, hopefully, if and
when the world goes back to normal I’ll get to spend time in-person with my colleagues once in a while… but for now, we’re having to make do with Internet-based socialisation only, just
like the rest of the world.
I scratched an itch of mine this week and wanted to share the results with you, in case you happen to be one of the few dozen other people on Earth who will cry “finally!” to discover
that this is now a thing.
I’ve used ProtonMail as my primary personal email provider for about four years, and I love it. Seamless PGP/GPG for proper end-to-end encryption, privacy as standard, etc. At first, I used their web and mobile app interfaces but over time
I’ve come to rediscover my love affair with “proper” email clients, and I’ve been mostly using Thunderbird for my desktop mail. It’s been
great: lightning-fast search, offline capabilities, and thanks to IMAP (provided by ProtonMail Bridge) my mail’s still just as accessible when I fall back on the web or mobile clients because I’m out and about.
But the one thing this set-up lacked was the ability to easily see which emails had been delivered encrypted versus those which had merely been delivered “in the clear” (like
most emails) and then encrypted for storage on ProtonMail’s servers. So I fixed it.
I’ve just released my first ever Thunderbird plugin. If you’re using
ProtonMail Bridge, it adds a notification to the corner of every email to say whether it was encrypted in transit or not. That’s all.
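(For the curious, the general shape of such a MailExtension is pleasingly small. The sketch below is not the plugin’s actual source – the X-Pm-Origin header and the use of a system notification are illustrative assumptions – but it shows the gist: inspect the displayed message’s headers and report what you find:)

    // background.js - an illustrative sketch, not the real plugin
    browser.messageDisplay.onMessageDisplayed.addListener(async (tab, message) => {
      const full = await browser.messages.getFull(message.id);
      // Assumption for illustration: ProtonMail Bridge distinguishes mail that
      // arrived end-to-end encrypted using a header such as X-Pm-Origin:
      const origin = (full.headers['x-pm-origin'] || [])[0];
      browser.notifications.create({
        type: 'basic',
        title: 'ProtonMail encryption status',
        message: origin === 'internal'
          ? 'This email was encrypted in transit.'
          : 'This email was (probably) delivered in the clear.',
      });
    });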
And of course it’s open source with a permissive license (and a doddle to compile using your
standard operating system tools, if you want to build it yourself). If you’re using Thunderbird and ProtonMail Bridge you should give it a whirl. And if you’re not then… maybe you
should consider it?
Stupid thing of the day to try on your favourite Slack channel:
1. Make an image of yourself bordered by the edge of a speech bubble. Make the image an exact multiple of 32 pixels in each dimension (this one is 128 × 96):
2. Use ImageMagick to cut the image into 32 × 32 pixel tiles, e.g. like this: magick convert dan-qs-stupid-head.png -crop 32x32
"dan-q-says-%02d.png". Pick a sensible output filename to use as a Slack emoji shortcode.
3. Log into Slack and customise your emoji by adding each of the tiles you’ve created to it. This is where you’ll be glad you named the
file sensibly because it saves you typing the shortcode out each time.
4. Type a message using your custom emoji! Because it sits in-line with text, you can type alongside or around it (unlike normally embedded images or /giphy integration) along with
styling, mentioning, and hyperlink options. You can also copy-paste and edit on-the-fly, so you can keep a copy of the message in your self-channel and adjust whenever you need.
5. Profit!!!
Why not make a whole set of different faces showing your different emotions – perhaps from photos – so you can react appropriately to your colleagues! Slack don’t seem to impose any
limit on the number of custom emoji you can add, so the only limit is your imagination (and the tolerance of your Slack administrator for such high jinks).
Or why not cut up an animated GIF? Slack preloads emoji into the client so they play in-sync, allowing you to run
animations that span multiple emoji.
I’ve been watching the output that machines around the Internet have been producing using GPT-3 (and its cousins), an AI model that can produce long-form “human-like”
text. Here are some things I’ve enjoyed recently:
I played for a bit with AI Dungeon‘s (premium) Dragon engine, which came up with Dan and the
Spider’s Curse when used as a virtual DM/GM. I pitched an idea to Robin lately that one could run a vlog series based on AI Dungeon-generated adventures: coming up with a “scene”, performing it, publishing it, and taking
suggestions via the comments for the direction in which the adventure might go next (but leaving the AI to do the real
writing).
Today is Spaceship Day is a Plotagon-powered machinima based on a script written by Botnik‘s AI. So not technically GPT-3 if you’re being picky, but still amusing to see how and
what the AI‘s creative mind has come up with.
Language contains the map to a better world. Those that are most skilled at removing obstacles, misdirection, and lies from language, that reveal the maps that are hidden within, are
the guides that will lead us to happiness.
Yesterday, The Guardian published the op-ed piece A robot wrote this entire article.
Are you scared yet, human? It’s edited together from half a dozen or so essays produced by the AI from the same
starting prompt, but the editor insists that this took less time than the editing process on most human-authored op-eds. It’s good stuff. I found myself reminded of Nobody Knows You’re A Machine, a short story I wrote about eight years ago and was never entirely happy with but which I’ve put online in
order to allow you to see for yourself what I mean.
But my favourite so far must be GPT-3’s attempt to write its own version of Expert judgment on markers to
deter inadvertent human intrusion into the Waste Isolation Pilot Plant, which occasionally circulates the Internet retitled with its line This place is not a place of
honor…no highly esteemed deed is commemorated here… nothing valued is here. The original document was a report into how humans might mark a nuclear waste disposal site in order to
discourage deliberate or accidental tampering with the waste stored there: a massive challenge, given that the waste will remain dangerous for many thousands of years! The original
paper’s worth a read, of course, but mostly as a preface to reading a post by Janelle Shane (whose work I’ve mentioned before) about teaching GPT-3 to write nuclear waste site area denial strategies. It’s pretty special.
As effective conversational AI becomes increasingly accessible, I become increasingly convinced that we might eventually see
a “sandwichware” future, where it’s cheaper for an appliance developer to install an AI
into the device (to allow it to learn how to communicate with your other appliances, in a human language, just like you will) rather than rely on a static and universal underlying
computer protocol as an API. Time will tell.
Meanwhile: I promise that this post was written by a human!
Enter the latest iteration of the Android version, Firefox Daylight, which came
out last week.
First, the good: this latest version of Firefox for Android is fast. Blazingly fast. The privacy controls are clearer and easier to access. Having picture-in-picture mode
on mobile is a nice touch, as is the new generation of tracking prevention features.
But Firefox Daylight still makes me frown. And it’s a trio of smaller things that really niggle:
1. Top or bottom toolbar… but top is a second-class citizen.
In theory, I like the idea of having the address bar and its friends at the bottom of the screen where it’s more-accessible to your thumb. I’ve even tried it, independently, in years
past. But it’s too much of a mental leap for me nowadays, plus it doesn’t cleanly fit into the “scroll down and the address bar disappears” user experience that’s become commonplace.
Making the bottom toolbar the default was perhaps a little radical, then, but at least Mozilla provided an option to put it back at the top. But… it’s not quite right:
Even with the toolbar moved back to the top, some controls associated with it stay at the bottom. Want to open a new tab? You have to press the “tabs” button at the top of the
screen, then the “plus” button at the bottom of the screen, then – probably – the address bar back at the top of the screen again! You’ve just covered two complete
lengths of the screen to do something that used to require none. Not a satisfactory experience.
2. Tab previews were more space-efficient before
You’ve probably already spotted the other change to the “current tabs” view. Previously, open tabs were shown as mini previews with their titles above. Now they’re shown as tiny
(sometimes absent) icon-sized previews with their titles alongside. This allows the domain name to be shown, which is nice, but not nice enough to justify reducing the instant
visual recognition the previous interface provided.
It’s not even like you can fit more tabs onto a screen. The capacity is basically the same. You’re just making smaller hit targets with less recognisable graphics. Plus: previously the
most-recent tabs were at the bottom (close to where your thumb is, which was the justification for making the address bar default to the bottom); now they’re at the top, further adding
to the distance travelled.
3. Plugin support is terrible
I know first-hand that implementing backwards-compatibility is hard, but breaking most plugins and then providing a list of nine or so popular/recommended ones that
still work isn’t a great experience.
Feels a bit like this was released before it was ready.
For the time being, I’m using Fennec F-Droid as my primary mobile browser. It picks up exactly where Firefox for
Android left off, and it doesn’t break my workflow. I hope to switch back to regular Firefox for Android someday, but Daylight needs “finishing” first.
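Chris Ferdinandi’s daily tip (of which more below) showed off a now-popular trick for loading CSS asynchronously; it looks something like this (path illustrative):

    <link rel="stylesheet" href="/path/to/secondary.css" media="print" onload="this.media='all'">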
You see what that’s doing? It’s loading the stylesheet for the print medium, but then as soon as that stylesheet has loaded it’s switching the media type from “print” to “all”.
Because it didn’t apply to begin with, the stylesheet isn’t render-blocking. You can use this to delay secondary styles so the page essentials can load at full speed.
I don’t like this approach. I mean: I love the elegance… I just don’t like the implications.
Why I don’t like lazy-loading CSS using Javascript
Using Javascript to load CSS, in order to prevent that CSS
blocking rendering, feels to me like it conceptually breaks the Web. It certainly violates the expectations of progressive enhancement, because it introduces a level of
fault-intolerance that I consider (mostly) unacceptable.
CSS and Javascript are independent of one another. A well-designed progressively-enhanced page should function with
HTML only, HTML-and-CSS only, HTML-and-JS only, or all
three. CSS adds style, and JS adds behaviour to a page; and when
you insist that the user agent uses Javascript in order to load stylistic elements, you violate the separation of these technologies (I’m looking at you, the majority of heavyweight
front-end frameworks!).
If you’re thinking that the only people affected are nerds like me who browse with Javascript wholly or partially disabled, you’re wrong: gov.uk research shows that around 1% of your visitors have Javascript fail for some reason or another: because it’s disabled
(whether for preference, privacy, compatibility with accessibility technologies, or whatever), blocked, firewalled, or they’re using a browser that you didn’t expect.
Can we lazy-load CSS in a way that doesn’t depend on Javascript? (spoiler: yes)
Chris’s daily tip got me thinking: could there exist a way to load CSS in a non-render-blocking way but which degraded
gracefully in the event that Javascript was unavailable? I.e. if Javascript is working, lazy-load CSS, otherwise: load
conventionally as a fallback. It turns out, there is!
In principle, it’s this:
Link your stylesheet from within a <noscript> block, thereby only exposing it where Javascript is disabled. Give it a custom attribute to make it easy to find
later, e.g. <noscript lazyload> (if you’re a standards purist, you might prefer to use a data- attribute).
Have your Javascript extract the contents of these <noscript> blocks and reinject them. In modern browsers, this is as simple as e.g.
[...document.querySelectorAll('noscript[lazyload]')].forEach(ns=>ns.outerHTML=ns.innerHTML).
If you need support for Internet Explorer, you need a little more work, because Internet Explorer doesn’t expose <noscript> blocks to the DOM in a helpful way. There are a variety of possible workarounds; I’ve implemented one but not put too much thought into it because I rarely have to
think about Internet Explorer these days.
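Put together, the whole pattern looks something like this minimal sketch (filenames illustrative):

    <!-- Loads conventionally (render-blocking) only where Javascript is unavailable: -->
    <noscript lazyload>
      <link rel="stylesheet" href="/path/to/secondary.css">
    </noscript>
    <script>
      // Where Javascript is available, unwrap each marked <noscript> block so its
      // stylesheet loads after the initial render instead:
      window.addEventListener('load', () =>
        [...document.querySelectorAll('noscript[lazyload]')]
          .forEach(ns => ns.outerHTML = ns.innerHTML)
      );
    </script>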
In any case, I’ve implemented a proof of concept/demonstration if you’d like to see it in action: just take a look and view source (or read the page)
for details. Or view the source alone via this gist.
Lazy-loading CSS using my approach provides most of the benefits of other approaches… but works properly in environments without
Javascript too.
Update: Chris Ferdinandi’s refined this into an even cleaner approach that takes the best of both worlds.
This week, with help from Robin and JTA, I
built a TropicTemple Tall XXL climbing frame in the garden of our new house. Manufacturer Fatmoose provided us with a pallet-load of lumber and a sack of accessories,
delivered to our driveway, based on a design Ruth and I customised using their website, and we assembled it on-site over the course of around
three days. The video above is a timelapse taken from our kitchen window using a tablet I set up for that purpose, interspersed with close-up snippets of us assembling it and of the
children testing it out.
I’ve also built a Virtual Tour so you can explore the playframe using your computer, phone, or VR headset. Take a look!
This last month or so, my digital life has been dramatically improved by Syncthing. So much so that I want to tell you about it.
I started using it last month. Basically, what it does is keep directories on separate systems “in sync” with one another. So far, it’s like your favourite cloud
storage service, albeit self-hosted and much-more customisable. But it’s got a handful of killer features that make it nothing short of a dream to work with:
The unique identifier for a computer can be derived from its public key. Encryption comes free as part of the verification of a computer’s identity.
You can share any number of folders with any number of other computers, point-to-point or via an intermediate proxy, and it “just works”.
It’s super transparent: you can always see what it’s up to, you can tweak the configuration to match your priorities, and it’s open source so you can look at the engine if you like.
Here are some of the ways I’m using it:
Keeping my phone camera synced to my PC
I’ve tried a lot of different solutions for this over the years. Back in the way-back-when, like everybody else in those dark times, I used to plug my phone in using a cable to copy
pictures off and sort them. Since then, I’ve tried cloud solutions from Google, Amazon, and Flickr and never found any that really “worked” for me. Their web interfaces and apps tend to
be equally terrible for organising or downloading files, and I’m rarely able to simply drag-and-drop images from them into a blog post like I can from Explorer/Finder/etc.
At first, I set this up as a one-way sync, “pushing” photos and videos from my phone to my desktop PC whenever I was on an unmetered WiFi network. But then I switched it to a two-way
sync, enabling me to more-easily tidy up my phone of old photos too, by just dragging them from the folder that’s synced with my phone to my regular picture storage.
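(In Syncthing terms, that’s just a change to the folder’s type. You’d normally make it through the web GUI, but the underlying config.xml fragment looks something like this sketch – IDs and paths illustrative:)

    <!-- type="sendreceive" = two-way sync; "sendonly" would give the original
         one-way "push" arrangement from the phone's perspective: -->
    <folder id="camera" label="Phone Camera" path="/storage/emulated/0/DCIM/Camera" type="sendreceive">
      <device id="DESKTOP-DEVICE-ID"></device>
    </folder>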
Centralising my backups
Now I’ve got a fancy NAS device with tonnes of storage, it makes sense to use it as a central
point for backups to run from. Instead of having many separate backup processes running on different computers, I can just have each of them sync to the NAS, and the NAS can back everything up. Computers don’t need to be “on” at a particular
time because the NAS runs all the time, so backups can use the Internet connection when it’s quietest. And in the event of a
hardware failure, there’s an up-to-date on-site backup in the first instance: the cloud backup’s only needed in the event of accidental data deletion (which could be sync’ed already, of
course!). Plus, integrating the sync with ownCloud running on the NAS gives easy access to
my files wherever in the world I am without having to fire up a VPN or otherwise remote-in to my house.
Plus: because Syncthing can share a folder between any number of devices, the same sharing mechanism that puts my phone’s photos onto my main desktop can simultaneously be
pushing them to the NAS, providing redundant connections. And it was a doddle to set up.
Maintaining my media centre’s screensaver
Since the NAS, running Jellyfin, took on most of the media management jobs previously
shared between desktop computers and the media centre computer, the household media centre’s had less to do. But one thing that it does, and that gets neglected, is showing a
screensaver of family photos (when it’s not being used for anything else). Historically, we’ve maintained the photos in that collection via a shared network folder, but then you’ve got
credential management and firewall issues to deal with, not to mention different file naming conventions by different people (and their devices).
But simply sharing the screensaver’s photo folder with the computer of anybody who wants to contribute photos means that it’s as easy as copying the picture to a particular place. It
works on whatever device they care to use (computer, tablet, mobile) on any operating system, and it’s quick and seamless. I’m just using it myself, for now, but I’ll be offering it to the
rest of the family soon. It’s a trivial use-case, but once you’ve got it installed it just makes sense.
In short: this month, I’m in love with Syncthing. And maybe you should be, too.
For most of the last decade, one of my side projects has been FreeDeedPoll.org.uk, a website that helps British adults to change their name
for free and without a solicitor. Here’s a little known fact: as a British citizen, you have the right to be known by virtually any name you like, and for most people the
simplest way to change it is to write out a deed poll: basically a one-person contract on which you promise that you’re serious about adopting your new name and you’re not committing
fraud or anything.
Over that time, I’ve helped thousands of people to change their names. I don’t know exactly how many because I don’t keep any logs, but I’ve always gotten plenty of email from people
about the project. Contact spiked in 2013 after the Guardian ran an article about it, but I still correspond with two or three people in a typical week.
These people have lots of questions that come up time and time again, and if I had more free time I’d maintain an FAQ
of them or something. In any case, a common one is people asking for advice when their high street bank, almost invariably either Nationwide or Santander, disputes the legitimacy of a
“home made” deed poll and refuses to accept it.
When such people contact me, I advise them of a number of solutions and workarounds. Going to a different branch can work (training at these high street banks is internally
inconsistent, I guess?). Getting your government-issued identity documents sorted and then threatening to move your account elsewhere can sometimes work. For applicants willing to spend
a little money, paying a solicitor a couple of quid to be one of your witnesses can work. I often don’t hear back from people who email me about these banks: maybe they find
success by one of these routes, or maybe they give up and go down an unnecessarily-expensive avenue.
But one thing I always put on the table is the possibility of fighting. I provide a playbook of strategies to try to demonstrate to their troublemaking bank that the bank is in the
wrong, along with all of the appropriate legal citations. Recent years put a new tool in the box: the GDPR/DPA2018, which contains clauses prohibiting companies from knowingly
retaining incorrect personal data about an individual. I’ve been itching for a chance to use these new weapons… and over this last month, I finally had the opportunity.
I was recently contacted by a student (who, as you might expect, has more free time than they do spare money!) who was having trouble with Santander refusing to accept their deed poll.
They were willing to go all-out to prove their bank wrong. So I gave them the toolbox and they worked through it and… Santander caved!
Not only have Santander accepted that they were wrong in the case of this student, but they’ve also committed to retraining their staff. Oh, and they’ve paid compensation to
the student who emailed me.
Even from my position on the sidelines, I couldn’t help but cheer at this news, and not just because I’ll hopefully have fewer queries to deal with.
As always seems to happen when I move house, a piece of computer hardware broke for me during my recent house move. It’s always
exactly one piece of hardware, like it’s a symbolic recognition by the universe that being lugged around, rattling around and butting up against one another, is not the natural
state of desktop computers. Nor is it a comfortable journey for the hoarder-variety of geek nervously sitting in front of them, tentatively turning their overloaded vehicle around each
and every corner. UserFriendly said it right in this comic from 2003.
This time around, it was one of the hard drives in Renegade, my primary Windows-running desktop, that failed. (At least I didn’t break
myself, this time.)
Fortunately, it failed semi-gracefully: the S.M.A.R.T. alarm went off about a week before it actually started causing real
problems, giving me at least a little time to prepare, and – better yet – the drive was part of a four-drive RAID 10 hot-swappable array, which means that every
single byte of data on that drive was already duplicated to a second drive.
Incidentally, this configuration may have indirectly contributed to its death: before I built Fox, our new household NAS, I used Renegade for many of the same purposes, but WD Blues are not really a “server grade” hard
drive and this one and its siblings will have seen more and heavier use than they might have expected over the last few years. (Fox, you’ll be glad to hear, uses much better-rated
drives for her arrays.)
So no data was lost, but my array was degraded. I could have simply repaired it and carried on by adding a replacement similarly-sized hard drive, but my needs have changed now that Fox
is on the scene, so instead I decided to downgrade to a simpler two-disk RAID 1 array for important data and an
“at-risk” unmirrored drive for other data. This retains the performance of the previous array at the expense of a reduction in redundancy (compared to, say, a three-disk RAID 5 array which would have retained redundancy at the expense of performance). As I said: my needs have changed.
Fixing Things… Fast!
In any case, the change in needs (plus the fact that nobody wants to watch an array rebuild in a different configuration on a drive with system software installed!) justified a
reformat-and-reinstall, which leads to the point of this article: how I optimised my reformat-and-reinstall using Chocolatey.
Chocolatey is a package manager for Windows: think like apt for
Debian-like *nices (you know I do!) or Homebrew for macOS. For previous Windows system
rebuilds I’ve enjoyed the simplicity of Ninite, which will build you a one-click installer for your choice of many of your favourite tools, so you can
get up-and-running faster. But Chocolatey’s package database is much more expansive and includes bonus switches for specifying particular versions of applications, so it’s a clear
winner in my mind.
So I made up a Windows installation pendrive and added to it a “script” of things to do to get Renegade back into full working order. You can read the full script here, but the essence of it was:
Reconfigure the RAID array, reformat, reinstall Windows, and create an account.
Install Chocolatey and use it to reinstall all of my usual applications (sketched below).
Configuration (e.g. set up my unusual keyboard mappings, register software, set up remote connections and backups, etc.).
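The Chocolatey part of that boils down to a few lines of PowerShell; here’s an illustrative sketch (my real package list was longer, and see chocolatey.org for the current install one-liner):

    # Install Chocolatey itself (the standard one-liner from chocolatey.org):
    Set-ExecutionPolicy Bypass -Scope Process -Force
    iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

    # Reinstall everything in one go; -y answers prompts automatically:
    choco install -y firefox thunderbird git vlc 7zip

    # ...and pin a specific version where it matters:
    choco install -y nodejs --version=12.18.3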
By scripting virtually all of the above I was able to rearrange the hard drives in, and then completely reimage, a (complex) working Windows machine with well under an hour of downtime; I can
thoroughly recommend Chocolatey next time you have to set up a new Windows PC (or just to expand what’s installed on your existing one). There’s a GUI if you’re not a fan of the command line, of course.
About three months ago, my friend Claire, in a WhatsApp group we both frequent, shared a brainteaser:
The puzzle was to be interpreted as follows: you have a three-digit combination lock with numbers 0-9; so 1,000 possible combinations in total. Bulls and Cows-style, a series of clues indicate how “close” each of several pre-established “guesses” are. In “bulls and
cows” nomenclature, a “bull” is a correctly-guessed digit in the correct location and a “cow” is a correctly-guessed digit in the wrong location, so the puzzle’s clues are:
682 – one bull
614 – one cow
206 – two cows
738 – no bulls, no cows
380 – one cow
By the time I’d solved her puzzle the conventional way I was already interested in the possibility of implementing a general-case computerised solver for this kind of puzzle, so I did.
My solver uses a simple “brute force” technique, as follows:
Put all possible combinations into a search space.
For each clue, remove from the search space all invalid combinations.
Whatever combination is left is the correct answer.
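In Javascript, that needn’t be much more than a filter over the search space. Here’s a minimal sketch of the approach (a simplified take, not my solver’s exact code), using Claire’s clues:

    // Score a guess against a candidate combination, bulls-and-cows style:
    function score(guess, candidate) {
      let bulls = 0, common = 0;
      for (let i = 0; i < guess.length; i++) {
        if (guess[i] === candidate[i]) bulls++;
      }
      // Digits in common (respecting repeats), from which bulls are subtracted:
      for (let d = 0; d <= 9; d++) {
        const g = [...guess].filter(x => x === String(d)).length;
        const c = [...candidate].filter(x => x === String(d)).length;
        common += Math.min(g, c);
      }
      return { bulls, cows: common - bulls };
    }

    // The search space: every combination from 000 to 999...
    let space = [...Array(1000).keys()].map(n => String(n).padStart(3, '0'));

    // ...then each clue eliminates the combinations inconsistent with it
    // (reading e.g. "682 - one bull" as exactly one bull and no cows):
    const clues = [
      { guess: '682', bulls: 1, cows: 0 },
      { guess: '614', bulls: 0, cows: 1 },
      { guess: '206', bulls: 0, cows: 2 },
      { guess: '738', bulls: 0, cows: 0 },
      { guess: '380', bulls: 0, cows: 1 },
    ];
    for (const { guess, bulls, cows } of clues) {
      space = space.filter(candidate => {
        const s = score(guess, candidate);
        return s.bulls === bulls && s.cows === cows;
      });
    }
    console.log(space); // whatever's left is the answer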
Visualising the solver as a series of bisections of a search space got me thinking about something else: wouldn’t this be a perfectly reasonable way to programmatically generate
puzzles of this type, too? Something like this:
Put all possible combinations into a search space.
Randomly generate a clue such that the search space is bisected (within given parameters to ensure that neither too many nor too few clues are needed).
Repeat until only one combination is left.
Interestingly, this approach is almost the opposite of what a human would probably do. A human, tasked with creating a puzzle of this sort, would probably choose the answer
first and then come up with clues that describe it. Instead, though, my solution would come up with clues, apply them, and then see what’s left-over at the end.
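Here’s a sketch of that generator logic (bulls-and-cows clues only, with an illustrative acceptance threshold standing in for my “given parameters”); it reuses the score() function from the solver sketch above:

    // Generate clues until the search space narrows to a single combination:
    function generatePuzzle() {
      let space = [...Array(1000).keys()].map(n => String(n).padStart(3, '0'));
      const clues = [];
      while (space.length > 1) {
        // Propose a random clue: a random guess, scored against a random survivor
        // (which guarantees at least one combination remains consistent):
        const guess = String(Math.floor(Math.random() * 1000)).padStart(3, '0');
        const survivor = space[Math.floor(Math.random() * space.length)];
        const { bulls, cows } = score(guess, survivor);
        const remaining = space.filter(candidate => {
          const s = score(guess, candidate);
          return s.bulls === bulls && s.cows === cows;
        });
        // Accept only clues that cut the space down, but not too savagely -
        // which keeps the eventual number of clues in a sensible range:
        if (remaining.length < space.length && remaining.length <= Math.ceil(space.length * 0.5)) {
          clues.push({ guess, bulls, cows });
          space = remaining;
        }
      }
      return { clues, answer: space[0] };
    }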
I expanded my generator to go beyond simple bulls-or-cows clues: it’s also capable of generating clues that make reference to the balance of odd and even digits (in a numeric lock), the
number of different digits used in the combination, the sum of the digits of the combination, and whether or not the correct combination “ascends” or “descends”. I’ve ideas for
other possible clue types too, which could be valuable to make even tougher combination locks: e.g. specifying how many numbers in the combination are adjacent to a consecutive number,
specifying the types of number that the sum of the digits adds to (e.g. “the sum of the digits is a prime number”) and so on.
Next up, I wanted to make a web-based interface so that people could have a go at the puzzles in their web browser, track their progress through the levels, get a “score” based on the
number and difficulty of the locks that they’d cracked (so they can compare it to their friends), and save their progress to carry on next time.
I implemented it in pure vanilla HTML, CSS, SVG and JS, with no dependencies. Compressed, it delivers to your browser and is ready-to-play in a little
under 10kB, most of which is the puzzles themselves (which are pregenerated and stored in a JSON file). Naturally, it lends itself well to running offline, so it’s PWA-enhanced with a
service worker so it can be “installed” onto your device, too, and it’ll check for bonus puzzles and other updates periodically.
Honestly, the hardest bit of implementing the frontend was the “spinnable” digits: depending on your browser, these are an endless-scrolling <ul> implemented mostly in
CSS and with snap points set, and then some JS to work out “what you meant” based on
where you span to. Which feels like the right way to implement such a thing, but was a lot more work than putting together my own control, not least because of browser
inconsistencies in the implementation of snap points.
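If you’re curious, the CSS end of that is something like this simplified sketch (class names illustrative; .digit is the <ul>):

    /* Each digit is a scrollable list exactly one item tall... */
    .digit {
      height: 1.5em;
      overflow-y: scroll;
      scroll-snap-type: y mandatory; /* ...that always snaps to rest on a digit... */
    }
    .digit li {
      height: 1.5em;
      scroll-snap-align: start;      /* ...aligned to the top of the box. */
    }

The Javascript’s job is then to work out “what you meant” from the scroll position – and, to keep the scrolling endless, a sketch like this would also need to reposition the list as you near its ends.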
Anyway: you should go and play the game, now, and let me know what you think. Is it worth expanding and improving? Should I leave it as it is? I’m
open to ideas (and if you don’t like that I’m not implementing your suggestions, you can always fork a copy of the code and change
it yourself)!
Seven years ago, I wrote a six-part blog series (1, 2, 3, 4, 5, 6) about Ruth, JTA, and my experience of buying our first house. Now, though, we’re moving again, and it’s brought up all the same kinds of challenges and
stresses as last time, plus a whole lot of bonus ones to boot.
In particular, new challenges this time around have included:
As owners, rather than renters, we’ve had both directions on the ladder to deal with. Not only did we have to find somewhere to move to that we can afford but we needed to find
somebody who’d buy our current house (for enough money that we can afford the new place).
The first estate agents we appointed were pretty useless, somehow managing to get us no viewings whatsoever. Incidentally their local branch
got closed soon after we ditched them and the last time I checked, the building was still up for sale: it doesn’t bode well for them that they can’t even sell their own
building, does it?
The replacement estate agents (who sold us this house in the first place) were much better, but it still took a long time before we started
getting offers we could act on.
We finally selected some buyers, accepting a lower offer because they were cash buyers and it would allow us to act quickly on the property we wanted to buy, only for the
coronavirus lockdown to completely scupper our plans of a speedy move. And make any move a logistical nightmare.
Plus: we’re now doing this with lots more stuff (this won’t be a “rally some friends and rent a van” job like
last time!), with two kids (who’re under our feet a lot on account of the lockdown), and so on.
But it’s finally all coming together. We’ve got a house full of boxes, mind, and we can’t find anything, and somehow it still doesn’t feel like we’re prepared for when the
removals lorry comes later this week. But we’re getting there. After a half-hour period between handing over the keys to the old place and picking up the keys to the new place (during
which I guess we’ll technically be very-briefly homeless), this weekend we’ll be resident in our new home.
Our new house will:
Be out in the fabulous West Oxfordshire countryside.
Have sufficient rooms to retain an office and a “spare” bedroom while still giving the kids each their own bedroom.
Boast a fabulously-sized garden (we might have already promised the kids a climbing frame).
Have an incredible amount of storage space plus the potential for further expansion/conversion should the need ever arise. (On our second-to-last visit to the place we discovered
an entire room, albeit an unfinished one, that we hadn’t known about before!)
Get ludicrously fast Internet access.
We lose some convenient public transport links, but you can’t have everything. And with me working from home all the time, Ruth –
like many software geeks – likely working from home for the foreseeable future (except when she cycles into work), and JTA working from
home for now but probably returning to what was always a driving commute “down the line”, those links aren’t as essential to us as they once were.
Sure: we’re going to be paying for it for the rest of our lives. But right now, at least, it feels like what we’re buying is a house we could well live in for the rest of our
lives, too.
It had been a long while since our last murder mystery party: we’ve only done one or two “kit” ones since we moved in to our current
house in 2013, and we’re long-overdue a homegrown one (who can forget the joy of Murder at the Magic College?), but in
the meantime – and until I have the time and energy to write another one of my own – we thought we’d host another.
But how? Courtesy of the COVID-19 crisis and its lockdown, none of our friends could come to visit. Technology to the rescue!
I took a copy of Michael Akers‘ murder mystery party plan, Sour Grapes of Wrath, and used it as the basis for Sour Grapes, a digitally-enhanced (and generally-tweaked) version of the same story, and recruited Ruth, JTA, Jen, Matt R, Alec and Suz to perform the parts. Given that I’d had to adapt the materials
to make them suitable for our use I had to assign myself a non-suspect part, and so I created a police officer (investigating the murder) whose narration provided a framing device for the
scenes.
I threw together a quick Firebase backend to allow data to be synchronised across a web application, then wrote a couple of dozen lines of
Javascript to tie it together. The idea was that I’d “push” documents to each participant’s phone as they needed them, in a digital analogue of the “open envelope #3” or “turn to the
next page in your book” mechanism common in most murder mystery kits. I also reimplemented all of Akers’ artefacts, which were pretty-much text-only, as graphics, and set up a system
whereby I could give the “finder” of each clue a copy in-advance and then share it with the rest of the participants when it was appropriate, e.g. when they said, out loud “I’ve found
this newspaper clipping that seems to say…”
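For anybody tempted to try the same: the mechanics of that push/share can be a handful of lines against Firebase’s (then-current, namespaced) Realtime Database API. An illustrative sketch, not my actual code – the player and function names are made up:

    // Host's control panel: "push" a document to one participant...
    firebase.database()
      .ref('players/gertrude/documents')
      .push({ title: 'Envelope #3', image: '/artefacts/newspaper-clipping.png' });

    // ...and each participant's phone renders documents as they arrive:
    firebase.database()
      .ref(`players/${myCharacter}/documents`)
      .on('child_added', snapshot => showDocument(snapshot.val()));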
The party itself took place over Discord video chat, with which I’d recently had a good experience in an experimental/offshoot Abnib group (separate from our normal WhatsApp space) and
my semi-associated Dungeons & Dragons group. There were a few technical hiccups, but only what you’d expect.
The party itself rapidly descended into the usual level of chaos. Lots of blame thrown, lots of getting completely off-topic and getting distracted solving the wrong puzzles, lots of
discussion about the legitimacy of one of several red herrings, and so on. Michael Akers makes several choices in his writing that don’t appear in mine – such as not revealing the
identity of the murderer even to the murderer until the final statements – which I’m not a fan of but retained for the sake of honouring the original text, but if I were to run
a similar party again I’d adapt this, as I had a few other aspects of the setting and characters. I think it leads to a more fun game if, in the final act, the murderer knows that they
committed the crime, that all of the lies they’ve already told are part of their alibi-building, and they’re given carte blanche to lie as much as they like in an effort to
“get away with it” from then on.
Of course, Ruth felt the need to cater for the event – as she’s always done with spectacular effect at every previous murder mystery she’s hosted or we’ve collectively hosted – despite
the distributed partygoers. And so she’d arranged for a “care package” of wine and cheese to be sent to each household. The former was, as always, an excellent source of social
lubrication among people expected to start roleplaying a random character on short notice; the latter a delightful source of snacking as we all enjoyed the closest thing we’ll get to a
“night out” in many months.
This was highly experimental, and there are lessons-for-myself I’d take away from it:
If you’re expecting people to use their mobiles, remember to test thoroughly on mobiles. You’d think I’d know this, by now. It’s only, like, my job.
When delivering clues and things digitally, keep everything in one place. Switching back and forth between the timeline that supports your alibi and the new
information you’ve just learned is immersion-breaking. Better yet, look into ways to deliver physical “feelies” to people if it’s things that don’t need sharing, and consider ways to
put shared clues up on everybody’s “big screen”.
Find time to write more murder mysteries. They’re much better than kit-style ones; I’ve got a system and it works. I really should get around to writing up
how I make them, some day; I think there’s lessons there for other people who want to make their own, too.
Meanwhile: if you want to see some moments from Sour Grapes, there’s a mini YouTube
playlist I might get around to adding to at some point. Here’s a starter if you’re interested in what we got up to (with apologies for the audio echo, which was caused by a problem
with the recording software):
When the COVID-19 lockdown forced many offices to close and their staff to work remotely, some of us saw what was unfolding as
an… opportunity in disguise. Instead of the slow-but-steady
decentralisation of work that’s very slowly become possible (technically, administratively, and politically) over the last 50 years, suddenly a torrent of people were discovering that
remote working can work.
The Future is Now
As much as I hate to be part of the “where’s my flying car?” brigade, I wrote ten years ago about my dissatisfaction that remote
working wasn’t yet commonplace, let alone mainstream. I recalled a book I’d read as a child in the 1980s that promised a then-future 2020 of:
near-universal automation of manual labour as machines become capable of an increasing diversity of human endeavours (we’re getting there, but slowly),
a three- or four-day work week becoming typical as efficiency improvements are reinvested in the interests of humans rather than of corporations (we might have lost sight of that
goal along the way, although there’s been some fresh interest
in it lately), and
widespread “teleworking”/”telecommuting”, as white-collar sectors grow and improvements in computing and telecommunications facilitate the “anywhere office”
Of those three dreams, the third soon seemed like it would become the most-immediate. Revolutionary advances in mobile telephony, miniaturisation of computers, and broadband networking
ran way ahead of the developments in AI that might precipitate the first dream… or the sociological shift required for
the second. But still… progress was slow.
At eight years old, I genuinely believed that most of my working life would be spent… wherever I happened to be. So far, most of my working life has been spent in an office, despite
personally working quite hard for that not to be the case!
I started at Automattic six months ago, an entirely distributed company. And so when friends and colleagues found themselves required to work
remotely by the lockdown they came in droves to me for advice about how to do it! I was, of course, happy to help where I could: questions often
covered running meetings and projects, maintaining morale, measuring output, and facilitating communication… and usually I think I gave good answers. Sometimes, though, the
answer was “If you’re going to make that change, you’re going to need a cultural shift and some infrastructure investment first.” Y’know: “Don’t start from here.” If you received that advice from me: sorry!
(Incidentally, if you have a question I haven’t answered yet, try these clever people first for even better
answers!)
More-recently, I was excited to see that many companies have adopted this “new normal” not as a temporary measure,
but as a possible shape of things to come. Facebook, Twitter, Shopify, Square, and Spotify have all announced that they’re going to permit or encourage remote work as standard, even
after the crisis is over.
Obviously tech companies are leading the way, here: not only are they most-likely to have the infrastructure and culture already in place to support this kind of shift, they’re
also often competing for the same pool of talent and need to be seen as at-least as progressive as their direct rivals. Matt
Mullenweg observes that:
What’s going to be newsworthy by the end of the year is not technology companies saying they’re embracing distributed work, but those that aren’t.
…some employers trapped in the past will force people to go to offices, but the illusion that the office was about work will be shattered forever, and companies that
hold on to that legacy will be replaced by companies who embrace the antifragile nature of distributed organizations.
Tomorrow’s Challenges
We’re all acutely familiar with the challenges companies are faced with today as they adapt to a remote-first environment. I’m more interested in the challenges that
they might face in the future, as they attempt to continue to use a distributed workforce as the pandemic recedes. It’s easy to make the mistake of assuming that what many
people are doing today is a rehearsal for the future of work, but the future will look different.
Some people, of course, prefer to spend some or all of their work hours in an office environment. Of the companies that went remote-first during the lockdown and now plan to
stay that way indefinitely, some will lose employees who
preferred the “old way”. For this and other reasons, some companies will retain their offices and go remote-optional, allowing flexible teleworking, and this has its own
pitfalls:
Some remote-optional offices have an inherent bias towards in-person staff. In some companies with a mixture of in-person and remote staff, remote workers don’t get
included in ad-hoc discussions, or don’t become part of the in-person social circles. They get overlooked for projects or promotions, or treated as second-class citizens. It’s easy to
do this completely by accident and create a two-tiered system, which can lead to a cascade effect that eventually collapses the “optional” aspect of remote-optional; nowhere was this
more visible than in Yahoo!’s backslide from remote-optional working in 2013.
Some remote-optional offices retain an archaic view on presenteeism and “core hours”. Does the routine you keep really matter? Remote-first working demands that
productivity is measured by output, not by attendance, but management-by-attendance is (sadly) easier to implement, and
some high-profile organisations favour this lazy but less-effective approach. It’s easy, but ineffective, for a remote-optional company to simply extend hours-counting performance
metrics to their remote staff. Instead, allowing your staff (insofar as is possible) to work the hours that suit them as individuals opens up your hiring pool to a huge number of
groups whom you might not otherwise reach (like single parents, carers, digital nomads, and international applicants) and helps you to get the best out of every one of them, whether
they’re an early bird, a night owl, or somebody who’s most-productive after their siesta!
Pastoral care doesn’t stop being important after the crisis is over. Many companies that went remote-first for the coronavirus crisis have done an excellent job of
being supportive and caring towards their employees (who, of course, are also victims of the crisis: by now, is there anybody whose life hasn’t been impacted?). But when
these companies later go remote-optional, it’ll be easy for them to regress to their old patterns. They’ll start monitoring the wellbeing only of those right in front of
them. Remote working is already challenging, but it can be made much harder if your company culture makes it hard to take a sick day, seek support on a HR issue, or make small-talk with a colleague.
These are challenges specifically for companies that go permanently remote-optional following a period of remote-first during the coronavirus crisis.
Towards a Post-Lockdown Remote-Optional Workplace
How you face those challenges will vary for every company and industry, but it seems to me that there are five lessons a company can learn as it adapts to remote-optional work in a
post-lockdown world:
Measure impact, not input. You can’t effectively manage a remote team by headcount or closely tracking hours; you need to track outputs (what is produced), not inputs
(person-hours). If your outputs aren’t measurable, make them measurable, to paraphrase probably-not-Galileo. Find metrics you can work with and rely on, keep them transparent and open, and
re-evaluate often. Use the same metrics for in-office and remote workers.
Level the playing field. Learn to spot the biases you create. Do the in-person attendees do all the talking at your semi-remote meetings? Do your remote workers have
to “call in” to access information only stored on-site (including in individuals’ heads)? When
they’re small, these biases have a huge impact on productivity and morale. If they get big, they collapse your remote-optional environment.
Always think bigger. You’re already committing to a shakeup, dragging your company from the 2020 of the real world into the 2020 we once dreamed of. Can you go
further? Can you let your staff pick their own hours? Or workdays? Can your staff work in other countries? Can you switch some of your synchronous communications channels (e.g.
meetings) into asynchronous information streams (chat, blogs, etc.)? Which of your telecommunications tools
serve you, and which do you serve?
Remember the human. Your remote workers aren’t faceless (pantsless) interchangeable components in your corporate machine. Foster interpersonal relationships and don’t
let technology sever the interpersonal links between your staff. Encourage and facilitate (optional, but awesome) opportunities for networking and connection. Don’t forget to get together in-person sometimes: we’re a pack animal, and we form tribes more-easily when we
can see one another.
Support people through the change. Remote working requires a particular skillset; provide tools to help your staff adapt to it. Make training and development options
available to in-office staff too: encourage as flexible a working environment as your industry permits. Succeed, and your best staff will pay you back in productivity and loyalty.
Fail, and your best staff will leave you for your competitors.
I’m less-optimistic than Matt that effective distributed
working is the inexorable future of work. But out of the ashes of the coronavirus crisis will come its best chance yet, and I know that there’ll be companies who get left behind in
the dust. What are you doing to make sure your company isn’t one of them?
Since I accepted a job offer with Automattic last summer I’ve been writing about
my experience on a nice, round 128-day schedule. My first post described my application and recruitment process; my second post covered my induction, my initial
two weeks working alongside the Happiness team (tech support), and my first month in my role. This is the third post, running through to the end of six and a half months as an
Automattician.
Always Be Deploying
One of the things that’s quite striking about working on many of Automattic’s products, compared to places I’ve worked before, is the velocity. Their continuous integration
game is pretty spectacular. We’re not talking “move fast and break things” iteration speeds (thank heavens), but we’re still talking fast.
My team tackles a constant stream of improvements in two-week sprints, with every third sprint being a cool-down period to focus on refactoring, technical debt, quick wins, and the
like. Periodic HACK weeks – where HACK is (since
2018) a backronym for Helpful Acts in Customer Kindness – facilitate focussed efforts on improving our ecosystem and user experiences.
I’m working in a larger immediate team than I had for most of my pre-Automattic career. I’m working alongside nine other developers, typically in groups of two to four depending on the
needs of whatever project I’m on. There’s a great deal of individual autonomy: we’re all part of a greater whole and we’re all pushing in the same direction, but outside of the
requirements of the strategic goals of our division, the team’s tactical operations are very-much devolved and consensus-driven. We work out as a team how to solve the gnarly
(and fun!) problems, how to make best use of our skills, how to share our knowledge, and how to schedule our priorities.
This team-level experience echoes the experience of being an individual at Automattic, too. The level of individual responsibility and autonomy we enjoy is similar to that I’ve seen
only after accruing a couple of years of experience and authority at most other places I’ve worked. It’s amazing to see that you can give a large group of people so much self-controlled
direction… and somehow get order out of the chaos. More than elsewhere, management is about shepherding people into moving in the same direction rather than about dictating how
the ultimate strategic goals might be achieved.
Na na na na na na na na VAT MAN!
Somewhere along the way, I somehow became my team’s live-in expert on tax. You know how it is: you solve a bug with VAT calculation in
Europe… then you help roll out changes to support registration with the GST in Australia… and then one day you find yourself
reading Mexican digital services tax legislation and you can’t remember where the transition
was from being a general full-stack developer to having a specialisation in tax.
Tax isn’t a major part of my work. But it’s definitely reached a point at which I’m a go-to figure. A week or so ago when somebody had a question about the application of sales taxes to
purchases on the WooCommerce.com extensions store, their first thought was “I’ll ask Dan!” There’s
something I wouldn’t have anticipated, six months ago.
Automattic’s culture lends itself to this kind of selective micro-specialisation. The company actively encourages staff to keep learning new things but mostly without providing a specific direction, and this – along with their tendency to
attract folks who, like me, could foster an interest in almost any new topic so long as they’re learning something – means that my colleagues and I always seem to be
developing some new skill or other.
I know off the top of my head who I’d talk to if I had a question about headless browser automation, or database index performance, or email marketing impact assessment, or queer
representation, or getting the best airline fares, or whatever else. And if I didn’t, I could probably find them. None of their job descriptions mention that aspect of their work.
They’re just the kind of people who, when they see a problem, try to deepen their understanding of it as a whole rather than just solving it for today.
A lack of pigeonholing, coupled with the kind of information management that comes out of being an entirely-distributed company, means that the specialisation of individuals becomes a
Search-Don’t-Sort problem. You don’t necessarily find an internal specialist by their job title: you’re more-likely to find them by looking for previous work on particular
topics. That feels pretty dynamic and exciting… although it does necessarily lead to occasional moments of temporary panic when you discover that something important (but short of
mission-critical) doesn’t actually have anybody directly responsible for it.
Crisis response
No examination of somebody’s first 6+ months at a new company, covering Spring 2020, would be complete without mention of that company’s response to the coronavirus crisis. Because,
let’s face it, that’s what everybody’s talking about everywhere right now.
As the UK’s lockdown (eventually) took hold I found myself
treated within my social circle like some kind of expert on remote working. My inboxes filled up with queries from friends… How do I measure output? How do I run a productive
meeting? How do I maintain morale? I tried to help, but unfortunately some of my answers relied slightly on already having a distributed culture: having the information and
resource management and teleworking infrastructure in-place before the crisis. Still, I’m optimistic that companies will come out of the other side of this situation with a
better idea about how to plan for and execute remote working strategies.
I’ve been quite impressed that even though Automattic’s all sorted for how work carries on through this crisis, we’ve gone a step further and tried to organise (remote) events for
people who might be feeling more-isolated as a result of the various lockdowns around the world. I’ve seen mention of wine tasting events, toddler groups, guided meditation sessions,
yoga clubs, and even a virtual dog park (?), all of which try to leverage the company’s existing distributed infrastructure to support employees who’re affected by the pandemic. That’s
pretty cute.
In summary: Automattic’s still proving to be an adventure, I’m still loving their quirky and chaotic culture and the opportunity to learn something new every week, and
while their response to the coronavirus crisis has been as solid as you’d expect from a fully-distributed company I’ve also been impressed by the company’s efforts to support staff (in
a huge diversity of situations across many different countries) through it.