Taking a photo of our kids isn’t too hard: their fascination with screens means you just have to switch to “selfie mode” and they lock on to the camera like some kind of narcissist
homing pigeon. Failing that, it’s easy enough to distract them with something that keeps them still for a few seconds so they don’t just come out as a blur.
But compared to the generation that came before us, we have it really easy. When I was younger than our youngest is now, I was obsessed with pressing buttons. So pronounced was my
fascination that we have countless photos from my childhood of my face pressed so close to the lens that it was impossible for the camera to focus, because I’d rushed over at the last second to
try to be the one to push the shutter release button. I guess I just wanted to “help”?
In theory, exploiting this enthusiasm should have worked out well: my parents figured that if they just put me behind the camera, I could be persuaded to take a good picture
of others. Unfortunately, I’d already fixated on another aspect of the photography experience: the photographer’s stance.
When people were taking pictures of me, I’d clearly noticed that they’d usually crouch to bring themselves down to my height (which was especially important given that I’d imminently try to
be as close to the photographer as possible!). And I must have internalised this, because I started doing it too.
Unfortunately, because I was shorter than most of my subjects, this resulted in some terrible framing, for example slicing off the tops of their heads or worse. And because this was a
pre-digital age, there was no way to be sure exactly how badly I’d mucked-up the shot until days or weeks later when the film would be developed.
In an effort to counteract this framing issue, my dad (who was always keen for his young assistant to snap pictures of him alongside whatever article of public transport history he was
most-interested in that day) at some point started crouching himself in photos. Presumably it proved easier to just duck when I did rather than to try to persuade me not to crouch in
the first place.
As you look forward in time through these old family photos, though, you can spot the moment at which I learned to use a viewfinder, because people’s heads start to feature close to the
middle of pictures.
Unfortunately, because I was still shorter than my subjects (especially if I was also crouching!), framing photos such that the subject’s face was in the middle of the frame resulted in
a lot of sky in the pictures. Also, as you’ve doubtless seen above, I was completely incapable of levelling the horizon.
I’d like to think I’ve gotten better since, but based on the photo above… maybe the problem has been me, all along!
For lunch today I taste-tested five different plant-based vegan “cheeses” from Honestly Tasty. Let’s see if they’re any good.
Prefer video?
This blog post is available as a video (here or on YouTube), for those who like that sort of thing. The content’s slightly different, but you
do get to see my face when I eat the one that doesn’t agree with me.
Background
I’ve been vegetarian or mostly-vegetarian to some degree or another for a little over ten years (for
those who have trouble keeping up: I currently eat meat only on weekends, and not including beef or lamb), principally for the environmental benefits of a reduced-meat lifestyle. But if
I’m really committed to reducing the environmental impact of my diet, the next “big” thing I still consume is dairy products.
My milk consumption is very low nowadays, but – like many people who might aspire towards dropping dairy – it’s quitting cheese that poses the biggest challenge. I’m not even the
biggest fan of cheese, and yet I don’t know how I’d do without it: there’s just, it seems, no satisfactory substitute.
It’s possible, though, that my thinking on this is outdated. Especially in recent years, we’re getting better and better at making convincing (or, at least, tasty!) plant-based
substitutes for animal-based foods. And so, inspired by a conversation with some friends, I thought I’d try a handful of new-generation plant-based cheeses and see how I got on. I
ordered a variety pack from Honestly Tasty (who’ll give you 20% off your first order if you subscribe your favourite throwaway email address to
their newsletter) and gave it a go.
It’s supposed to taste like Brie, I guess, but it’s not convincing. The texture of the rind is surprisingly good, but the inside is somewhat homogeneous and flat. They’ve tried to use
mustard powder to provide Brie’s pepperiness and acetic acid for its subtle sourness, but it feels like there might be too much of the former (or perhaps I’m just a little oversensitive
to mustard) and too little of the latter. It’s okay, but I wouldn’t buy it again.
This was surprisingly flavoursome and really quite enjoyable. It spreads with about the consistency of pâté and has a sharp tang that really stands out. You wouldn’t mistake it for
cheese, but you might mistake it for a cheese spread: there’s a real “cheddary” flavour buried in there.
This is supposed to be modelled after Gorgonzola, and it might as well be because I don’t like either it or the cheese it’s based on. I might loathe Blue slightly less than
most blue cheeses, but that doesn’t mean I’d willingly subject myself to this again in a hurry. It’s matured with real Penicillium roqueforti, apparently, along with seaweed, and it
tastes like both of these things are true. So yeah: I hated this one, but you shouldn’t take that as a condemnation of its quality as a cheese substitute because I’d still rather eat it
than the cheese that it’s based on. Try for yourself, I guess.
This was probably my favourite of the bunch. It’s reminiscent of garlic & herb Boursin, and feels like somebody in the
kitchen where they cooked it up said to themselves, “how about we do the Ched Spread, but with less onion and a whole load of herbs mixed-through”. It seems that it must be easier to
make convincingly-cheesy soft cheeses than hard cheeses, but I’m not complaining: this would be great on toast.
If you’d served me this and told me it was a baked Camembert… I wouldn’t be fooled. But I wouldn’t be disappointed either. It moves a lot like Camembert and it tastes… somewhat like it.
But whether or not that’s “enough” for you, it’s perfectly delicious and I’d be more than happy to eat it or serve it to others.
In summary…
Honestly Tasty’s Ched Spread, Herbi, and Shamembert are perfectly acceptable (vegan!) substitutes for cheese. Even where they don’t accurately reflect the cheese they attempt to model,
they’re still pretty good if you take them on their own merits: instead of comparing them to their counterparts, consider each as if it were a cheese spread or soft cheese in its own
right and enjoy accordingly. I’d buy them again.
Their Bree failed to capture the essence of a good ripe Brie, and its flavour profile wasn’t something I could enjoy even on its own terms. And their “Veganzola” Blue
cheese… was pretty grim, but then that’s what I think of Gorgonzola too, so maybe it’s perfect and I just haven’t the palate for it.
Suppose you’re running an application on a Passenger + Nginx powered server and you want to add caching.
Perhaps your application has a dynamic, public endpoint but the contents don’t change super-frequently or it isn’t critically-important that the user always gets up-to-the-second
accuracy, and you’d like to improve performance with microcaching. How would you do that?
Where you’re at
Your configuration might look something like this:
server {
    # listen, server_name, ssl, logging etc. directives go here
    # ...
    root /your/application;
    passenger_enabled on;
}
What you’re looking for is proxy_cache and its sister directives, but you can’t just insert them here: while Passenger does act like an upstream proxy (with parallels like e.g.
passenger_pass_header, which mirrors the behaviour of proxy_pass_header), it doesn’t provide any of the functions you need to implement proxy caching of non-static files.
Where you need to be
Instead, what you need to do is define a second server, mount Passenger in that, and then proxy to that second server. E.g.:
# Set up a cache
proxy_cache_path /tmp/cache/my-app-cache keys_zone=MyAppCache:10m levels=1:2 inactive=600s max_size=100m;

# Define the actual webserver that listens for Internet traffic:
server {
    # listen, server_name, ssl, logging etc. directives go here
    # ...

    # You can configure different rules by location etc., but here's a simple microcache:
    location / {
        proxy_pass http://127.0.0.1:4863;   # Proxy all traffic to the application server defined below
        proxy_cache MyAppCache;             # Use the cache defined above
        proxy_cache_valid 200 3s;           # Treat HTTP 200 responses as valid; cache them for 3 seconds
        proxy_cache_use_stale updating;     # (Optional) send outdated response while background-updating cache
        proxy_cache_lock on;                # (Optional) only allow one process to update cache at once
    }
}

# (Local-only) application server on an arbitrary port number to act as the upstream proxy:
server {
    listen 127.0.0.1:4863;
    root /your/application;
    passenger_enabled on;
}
The two key changes are:
Passenger moves to a second server block, localhost-only, on an arbitrary port number (doesn’t need HTTPS, of course, but if your application detects/”expects” HTTPS you
might need to tweak your headers).
Your main server block proxies to the second as its upstream, and you can add whatever caching directives you like.
Obviously you’ll need to be smarter if you host a mixture of public and private content (e.g. send Vary: headers from your application) and if you want different cache
durations on different addresses or types of content, but there are already great guides to help with that. I only wrote this post because I spent some time searching for (nonexistent!)
passenger_cache_ etc. rules and wanted to save the next person from the same trouble!
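If you do want a flavour of what per-location rules might look like, here’s a minimal sketch (the paths and durations are purely illustrative, and these location blocks would sit inside the public-facing server block defined above):

location /api/ {
    proxy_pass        http://127.0.0.1:4863;
    proxy_cache       MyAppCache;
    proxy_cache_valid 200 1s;       # fast-moving data: cache only very briefly
}

location /about/ {
    proxy_pass        http://127.0.0.1:4863;
    proxy_cache       MyAppCache;
    proxy_cache_valid 200 301 10m;  # rarely-changing pages: cache for much longer
}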
I really love Dungeondraft, an RPG battle map generator. It’s got great compatibility with
online platforms like Foundry VTT and Roll20, but if you’re looking to make maps for tabletop play,
there’s a few tips I can share:
Planning and designing
Dungeondraft has (or can be extended with) features to support light levels and shadow-casting obstructions, openable doors and windows, line-of-sight etc… great to have when you’re
building for Internet-enabled tabletops, but pointless when you’re planning to print out your map! Instead:
Think about scale: I’m printing to A4 sheets and using inch-size squares, so every 11 x 8 squares equates to one sheet of paper. Knowing this, I can multiply-up to a
whole number of sheets of paper and this informs my decisions about how to best make use of the maps (and what will and won’t fit on my dining table!).
Focus on legibility: Your printer probably won’t have the same kind of resolution as your screen, and your players can’t “zoom in” to get details. Play with the grid
styles (under Map Settings) to find what works best for you, and try not to clash with your floor patterns. If you’re printing in monochrome, use the “Printer-Friendly” camera filter
(also under Map Settings, or in the Export Options dialog) to convert to gorgeous line-art. Make sure critical elements have sufficient contrast that they’ll stand out when printed or
your players might walk right over that chest, campfire, or bookshelf.
Think about exposure: You don’t get digital “fog of war” on the tabletop! Think about how you’re going to reveal the map to your players: plan to print in multiple
sections to put together, jigsaw style, or have card to “cover” bits of the map. Think about how the tool can help you here: e.g. if you’ve got multiple buildings the players can
explore, use a higher “level” or roof layer to put roofs on your buildings, then print the relevant parts of that level separately: now you’ve got a thematic cover-up that you can
remove to show the insides of the building. Go the other way around for secret doors: print the empty wall on your main map (so players can’t infer the location of the secret door by
the inclusion of a cover-up) and the secret door/passage on the overlay, so you can stick it onto the map when they find it.
Printing it out
There’s no “print” option in Dungeondraft, so – especially if your map spans multiple “pages” – you’ll need a multi-step process to print it out. With a little practice, it’s not too
hard or time-consuming, though:
Export your map (level by level) from Dungeondraft as PNG files. The default settings are fine, but pay attention to the
“Overlay level” setting if you’re using smart or complex cover-ups as described above.
To easily spread your map across multiple pages, you’ll need to convert it to a PDF. I’m using Gimp to do this. Simply open the PNG in Gimp, make any post-processing/last minute changes that you couldn’t manage in Dungeondraft, then click File >
Export As… and change the filename to have a .pdf extension. You could print directly from Gimp, but in my experience PDF reader software does a much better job at multi-page printing.
Open your PDF in an appropriate reader application with good print management. I’m using Foxit, which is… okay? Print it, selecting “tile large pages” to tell it to print across multiple sheets. Assuming you’ve produced a map an appropriate size
for your printer’s margins, your preview should be perfect. If not, you can get away with reducing the zoom level by up to a percent or two without causing trouble for your miniatures.
If you’d like the page breaks to occur at specific places (for exposure/reveal reasons), go back to Gimp and pad one side of the image by increasing the canvas size.
Check the level of “overlap” specified: I like to keep mine low and use the print margins as the overlapping part of my maps when I tape them together, but you’ll want to see how your
printer behaves and adapt accordingly.
If you’re sticking together multiple pages to make a single large map, trim off the bottom and right margins of each page: if you printed with cut marks, this is easy enough even
without a guillotine. Then tape them together on the underside, taking care to line-up the features on the map (it’s not just your players who’ll appreciate a good, visible grid: it’s
useful when lining-up your printouts to stick, too!).
I keep my maps rolled-up in a box. If you do this too, just be ready with some paperweights to keep the edges from curling when you unfurl them across your gaming table. Or cut into
separate rooms and mount to stiff card for that “jigsaw” effect! Whatever works best for you!
Different games in the same style (absurdle plays adversarially like my cheating hangman
game, crosswordle involves reverse-engineering a wordle colour grid into a crossword, heardle
is like Wordle but sounding out words using the IPA…)
But you know what hasn’t been seen before today? A Wordle clone where you have to guess a creature from the Dungeons & Dragons (5e) Monster Manual by putting numeric values into a
character sheet (STR, DEX, CON, INT, WIS, CHA):
What are you waiting for: go give DNDle a try (I pronounce it “dindle”, but you can pronounce it however you like). A new monster
appears at 10:00 UTC each day.
And because it’s me, of course it’s open source and works offline.
The boring techy bit
Like Wordle, everything happens in your browser: this is a “backendless” web application.
I’ve used ReefJS for state management, because I wanted something I could throw together quickly but I didn’t want to drown myself (or my players)
in a heavyweight monster library. If you’ve not used Reef before, you should give it a go: it’s basically like React but a tenth of the footprint.
A cache-first/background-updating service worker means that it can run completely offline: you can install it to your homescreen in the
same way as Wordle, but once you’ve visited it once it can work indefinitely even if you never go online again.
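If you’ve not written one of these before, the core of a cache-first/background-updating service worker looks something like this minimal sketch (the cache name and file list here are illustrative, not DNDle’s actual code):

// sw.js – serve from the cache immediately, refresh the cache in the background
const CACHE = 'dndle-v1';

self.addEventListener('install', (event) => {
  // Pre-cache the application shell on first visit:
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/index.html', '/bundle.js']))
  );
});

self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // only GET responses can be cached
  event.respondWith(
    caches.open(CACHE).then(async (cache) => {
      const cached = await cache.match(event.request);
      // Kick off a network fetch regardless, to keep the cache fresh for next time:
      const refreshed = fetch(event.request)
        .then((response) => {
          cache.put(event.request, response.clone());
          return response;
        })
        .catch(() => cached); // offline? fall back to whatever we already had
      // Respond from the cache if we can; otherwise wait for the network.
      return cached || refreshed;
    })
  );
});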
I don’t like to use a buildchain that’s any more-complicated than is absolutely necessary, so the only development dependency is rollup. It
resolves my import statements and bundles a single JS file for the browser.
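For reference, a build configuration for that kind of setup can be as small as the sketch below (the entry point and output path are placeholders rather than DNDle’s real file names):

// rollup.config.js – bundle the ES modules into one browser-ready file
export default {
  input: 'src/index.js',
  output: {
    file: 'dist/bundle.js',
    format: 'iife',     // a single self-contained script the browser can load directly
    sourcemap: true
  }
};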
Off the back of my recent post about privileges I enjoy as a result of my location and first language, even at my highly-multinational employer, and inspired by my colleague Atanas‘ data-mining into where Automatticians are
located, I decided to do another treemap, this time about which countries Automatticians call home:
Where are the Automatticians?
To get a better picture of that, let’s plot a couple of cartograms. This animation cycles between showing countries at (a) their
actual (landmass) size and (b) approximately proportional to the number of Automatticians based in each country:
Another way to consider the data would be comparing (a) the population of each country to (b) the number of Automatticians there. Let’s try that:
There’s definitely something to learn from these maps about the cultural impact of our employee diversity, but I can’t say more about that right now… primarily because I’m not smart
enough, but also at least in part because I’ve watched the map animations for too long and made myself seasick.
A note on methodology
A few quick notes on methodology, for the nerds out there who’ll want to argue with me:
Country data was extracted directly from Automattic’s internal staff directory today and is based on self-declaration by employees (this is relevant because we employ a relatively
high number of “digital nomads”, some of whom might not consider any one country their home).
Countries were mapped to continents using this dataset.
Maps are scaled using Robinson projection. Take your arguments about this over here.
The treemaps were made using Excel. The cartograms were produced based on work by Gastner MT, Seguy V, More P. [Fast flow-based algorithm for creating density-equalizing map
projections. Proc Natl Acad Sci USA 115(10):E2156–E2164 (2018)].
Some countries have multiple names or varied name spellings and I tried to detect these and line-up the data right but apologies if I made a mess of it and missed yours.
There’s a bird feeder in my garden. I’ve had it for about a decade now – Ruth got it for me, I think, as a thirtieth birthday present – and it’s still going strong and mostly-intact, despite having been uprooted on several
occasions to move house.
I like that I can see it from my desk.
This month, though, it lost a piece, when one of its seed cages was stolen in a daring daylight heist by a duo of squirrels who climbed up the (“climb-proof”) pole, hung upside-down
from the hooks, and unscrewed the mechanism that held the feeder in place.
Not content to merely pour out and devour the contents, the miscreants made off with the entire feeder cage. It hasn’t been seen since. I’ve scoured the lawn, checked behind
the bushes, peered around bins and fence posts… it’s nowhere to be found. It’s driving me a little crazy that it’s vanished so-thoroughly.
I can only assume that the squirrels, having observed that the feeder would routinely be refilled once empty, decided that it’d be much more-convenient for them if the feeder were
closer to their home:
“Hey, Coco!”
“Yeah, Peanut?”
“Every time we steal the nuts in this cage, more nuts appear…”
“Yeah, it’s a magic cage. Everysquirrel knows that, Peanut!”
“…but we have to come all the way down here to eat them…”
“It’s a bit of a drag, isn’t it?”
“…so I’ve been thinking, Coco: wouldn’t it be easier if the cage was… in our tree?”
I like to imagine that the squirrels who live in whatever-tree the feeder’s now hidden in are in the process of developing some kind of cargo cult around it. Once a week, squirrels sit
and pray at the foot of the cage, hoping to appease the magical god who refills it. Over time, only the elders will remember seeing the feeder ever being full, and admonish their
increasingly-sceptical younger ones to maintain their disciplined worship. In decades to come, squirrel archaeologists will rediscover the relics of this ancient (in squirrel-years)
religion and wonder what inspired it.
Or maybe they dumped the feeder behind the shed. I’d better go check.
Take a look at the map below. I’m the pink pin here in Oxfordshire. The green pins are my immediate team – the people I work with on a
day-to-day basis – and the blue pins are people outside of my immediate team but in its parent team (Automattic’s org chart is a bit like a fractal).
Thinking about timezones, there are two big benefits to being where I am:
I’m in the median timezone, which makes times that are suitable-for-everybody pretty convenient for me (I have a lot of lunchtime/early-afternoon meetings where I get to
watch the sun rise and set, simultaneously, through my teammates’ windows).
I’m West of the mean timezone, which means that most of my immediate coworkers start their day before me so I’m unlikely to start my day blocked by anything I’m waiting on.
(Of course, this privilege is in itself a side-effect of living close to the meridian, whose arbitrary location owes a lot to British naval and political clout in the 19th century: had
France and Latin American countries gotten their way the prime meridian would have probably cut through the Atlantic or Pacific oceans.)
2. Language Privilege
English is Automattic’s first language (followed perhaps by PHP and JavaScript!), not one of the 120 other languages spoken
by Automatticians. That’s partly a consequence of the first language of its founders and of the language in which the keywords of most programming languages occur.
It’s also a side-effect of how widely English is spoken, which in turn comes from (a) British colonialism and (b) the USA using
Hollywood etc. to try to score a cultural victory.
I’ve long been a fan of the concept of an international auxiliary language but I appreciate that’s an idealistic dream whose war
has probably already been lost.
For now, then, I benefit from being able to think, speak, and write in my first language all day, every day, and not have the experience of e.g. my two Indonesian colleagues who
routinely speak English to one another rather than their shared tongue, just for the benefit of the rest of us in the room!
3. Passport Privilege
Despite the efforts of my government these last few years to isolate us from the world stage, a British passport holds an incredible amount of power, ranking fifth or sixth in the world depending on whose passport index you
follow. Compared to many of my colleagues, I can enjoy visa-free and/or low-effort travel to a wider diversity of destinations.
Normally I might show you a map here, but everything’s a bit screwed by COVID-19, which still bars me from travelling to many
places around the globe. As restrictions start to lift, though, my team have begun talking about our next in-person meetup, something we haven’t done since I first started, when I met up with my colleagues in Cape Town and got
assaulted by a penguin.
But even looking back to that trip, I recall the difficulties faced by colleagues who e.g. had to travel to a different country in order to find an embassy just to apply for the visa
they’d eventually need to travel to the meetup destination. If you’re not a holder of a privileged passport, international travel can be a lot harder, and I’ve definitely taken that for
granted in the past.
I’m going to try to be more conscious of these privileges in my industry.
You can install it as an offline-first progressive web application, which means that this could be the first ever digital currency to have an app that works without an Internet
connection. That’s probably something no other digital currency can claim to have, right?
Here’s what it looks like if I send 0.1 EGX to my friend Chris using the app:
Naturally, I wouldn’t be backing Emma Goldcoin if it didn’t represent such a brilliant step up from better-known digital currencies like Bitcoin, Ripple, and Ethereum. Specific features
unique to Emma Goldcoin include:
Using it doesn’t massively contribute to energy wastage and environmental damage.
It doesn’t increase the digital divide by helping early adopters at the expense of late adopters.
It’s entirely secure: it’s mathematically impossible to “steal” EGX.
Emma Goldcoin is so simple that you don’t even need a computer to use it: it “just works”.
Sure, it’s got its downsides, and I’d encourage you to read the specification if you’d like to learn more about what
those are. Or if you already know what EGX is all about and just want to try a new way to manage your portfolio, give my new site EGXchange.org a go!
I love a good Jackbox Game. There’s nothing quite like sitting around the living room playing Drawful, Champ’d Up, Job
Job, Trivia Murder Party, or Patently Stupid. But nowadays getting together in the same place isn’t as easy as it used to be, and as often as not I find
my Jackbox gaming with friends or coworkers takes place over Zoom, Around, Google Meet or Discord.
There are lots of guides to doing this – even an official one! – but they all miss a
few pro tips that I think can turn a good party into a great party. Get all of this set up before your guests are due to arrive to make yourself look like a super-prepared
digital party master.
1. Use two computers!
Using one computer for your video call and a second one to host the game (in addition to the device you’re using to play the games, which could be your phone) is really helpful
for several reasons:
You can keep your video chat full-screen without the game window getting in the way, letting you spend more time focussed on your friends.
Your view of the main screen can be through the same screen-share that everybody else sees, helping you diagnose problems. It also means you experience similar video lag to
everybody else, keeping things fair!
You can shunt the second computer into a breakout room, giving your guests the freedom to hop in and out of a “social” space and a “gaming” space at will. (You can even set up
further computers and have multiple different “game rooms” running at the same time!)
2. Check the volume
Connect some headphones to the computer that’s running the game (or set up a virtual audio output device if you’re feeling more technical). This means you can still have the game
play sounds and transmit them over Zoom, but you’ll only hear the sounds that come through the screen share, not the sounds that come through the second computer too.
That’s helpful, because (a) it means you don’t get feedback or have to put up with an echo at your end, and (b) it means you’ll be hearing the game exactly the same as your guests hear
it, allowing you to easily tweak the volume to a level that allows for conversation over it.
3. Optimise the game settings
Jackbox games were designed first and foremost for sofa gaming, and playing with friends over the Internet benefits from a couple of changes to the default settings.
Sometimes the settings can be found in the main menu of a party pack, and sometimes they’re buried in the game itself, so do your research and know your way around before your party
starts.
Turn the volume down, especially the volume of the music, so you can have a conversation over the game. I’d also recommend disabling Full-screen Mode: this reduces the resolution of the
game, meaning there’s less data for your video-conferencing software to stream, and makes it easier to set up screen sharing without switching back and forth between your applications
(see below).
Turning on the Motion Sensitivity or Reduce Background Animations option if your game has it means there’ll be less movement in the background of the game. This can really help with the
video compression used in videoconferencing software, meaning players on lower-speed connections are less-likely to experience lag or “blockiness” in busy scenes.
It’s worth considering turning Subtitles on so that guests can work out what word they missed (which for the trivia games can be a big deal). Depending on your group, Extended Timers is
worth considering too: the lag introduced by videoconferencing can frustrate players who submit answers at the last second only to discover that – after transmission delays – they
missed the window! Extended Timers don’t solve that, but they do mean that such players are less-likely to end up waiting to the last second in the first place.
Finally: unless the vast majority or all of your guests are in the USA, you might like to flip the Filter US-Centric Content
switch so that you don’t get a bunch of people scratching their heads over a cultural reference that they just don’t get.
By the way, you can use your cursor keys and enter to operate Jackbox games’ menus, which is usually easier than fiddling with a mouse.
4. Optimise Zoom’s settings
Whatever videoconferencing platform you’re using, the settings for screen sharing are usually broadly similar. I suggest:
Make sure you’ve ticked “Share sound” or a similar setting that broadcasts the game’s audio: in some games, this is crucial; in others, it’s nice-to-have. Use your other computer to
test how it sounds and tweak the volume accordingly.
Check “Optimize for video clip”; this hints to your videoconferencing software that all parts of the content could be moving at once so it can use the same kind of codec it would
for sending video of your face. The alternative assumes that most of the screen will stay static (because it’s the desktop, the background of your slides, or whatever), which works
better with a different kind of codec.
Use “Portion of Screen” sharing rather than selecting the application. This ensures that you can select just the parts of the application that have content in, and not “black bars”,
window chrome and the like, which looks more-professional as well as sending less data over the connection.
If your platform allows it, consider making the mouse cursor invisible in the shared content: this means that you won’t end up with an annoying cursor sitting in the middle of the
screen and getting in the way of text, and makes menu operation look slicker if you end up using the mouse instead of the keyboard for some reason.
Don’t forget to shut down any software that might “pop up” notifications: chat applications, your email client, etc.: the last thing you want is somebody to send you a naughty picture
over WhatsApp and the desktop client to show it to everybody else in your party!
Looking for something with an “escape room” vibe for our date night this week, Ruth and I tried Tick Tock: A Tale for Two, a simultaneous-play cooperative game for two people, produced by Other Tales Interactive. It was amazing and I’d highly recommend it.
The game’s available on a variety of platforms: Windows, Mac, Android, iOS, and Nintendo Switch. We opted for the Android version because, thanks to Google Play Family Library, this meant we only had to buy one
copy (you need it installed on both devices you’re playing it on, although both devices don’t have to be of the same type: you could use an iPhone and a Nintendo Switch for
example).
The really clever bit from a technical perspective is that the two devices don’t communicate with one another. You could put your devices in flight mode and this game would
still work just fine! Instead, the gameplay functions by, at any given time, giving you either (a) a puzzle for which the other person’s device will provide the solution, or (b) a
puzzle that you both share, but for which each device only gives you half of the clues you need. By working as a team and communicating effectively (think Keep Talking and Nobody Explodes but without the time pressure), you and your partner will solve the puzzles and progress the plot.
(We’re purists for this kind of puzzle game so we didn’t look at one another’s screens, but I can see how it’d be tempting to “cheat” in this way, especially given that even the guys in
the trailer do so!)
The puzzles start easy enough, to the extent that we were worried that the entire experience might not be challenging for us. But the second of the three acts proved us wrong and we had
to step up our communication and coordination, and the final act had one puzzle that had us scratching our heads for some time! Quite an enjoyable difficulty curve, but still balanced
to make sure that we got to a solution, together, in the end. That’s a hard thing to achieve in a game, and deserves praise.
The plot is a little abstract at times and it’s hard to work out exactly what role we, the protagonists, play until right at the end. That’s a bit of a shame, but not in itself a reason
to reject this wonderful gem of a game. We spent 72 minutes playing it, although that includes a break in the middle to eat a delivery curry.
If you’re looking for something a bit different for a quiet night in with somebody special, it’s well worth a look.
Prior to 2018, Three Rings had a relatively simple approach to how it would use pronouns when referring to volunteers.
If the volunteer’s gender was specified as a “masculine” gender (which particular options are available depends on the volunteer’s organisation, but might include “male”, “man”, “cis
man”, and “trans man”), the system would use traditional masculine pronouns like “he”, “his”, “him” etc.
If the gender was specified as a “feminine” gender (e.g. “female”, “woman”, “cis woman”, “trans woman”) the system would use traditional feminine pronouns like “she”, “hers”, “her” etc.
For any other answer, no specified answer, or an organisation that doesn’t track gender, we’d use singular “they” pronouns. Simple!
This selection was reflected
throughout the system. Three Rings might say:
They have done 7 shifts by themselves.
She verified her email address was hers.
Would you like to sign him up to this shift?
Unfortunately, this approach didn’t reflect the diversity of personal pronouns nor how they’re applied. It didn’t support volunteers whose gender and pronouns are not
conventionally-connected (“I am a woman and I use ‘them/they’ pronouns”), nor did it respect volunteers whose pronouns are not in one of these three sets (“I use ze/zir pronouns”)… a
position it took me an embarrassingly long time to fully comprehend.
So we took a new approach:
The New Way
From 2018 we allowed organisations to add a “Pronouns” property, allowing volunteers to select from 13 different pronoun sets. If they did so, we’d use it; failing that we’d continue to
assume based on gender if it was available, or else use the singular “they”.
Let’s take a quick linguistics break
Three Rings‘ pronoun field always shows five personal pronouns, separated by slashes, because you can’t necessarily derive one from another. That’s one for each of
five types:
the subject, used when the person you’re talking about is the primary argument of a verb (“he called”),
object, for when the person you’re talking about is the secondary argument to a transitive verb (“he called her“),
dependent possessive, for talking about a noun that belongs to a person (“this is their shift”),
independent possessive, for talking about something that belongs to a person, potentially without an explicit noun (“this is theirs“), and the
reflexive (and intensive), two types which are generally the same in English, used mostly in Three Rings when a person is both the subject
and the object of a verb (“she signed herself up to a shift”).
Let’s see what those look like – here are the 13 pronoun sets supported by Three Rings at the time of writing:
(Each set is shown here with its five pronouns separated by slashes, in the order subject/object/dependent possessive/independent possessive/reflexive; where a set re-uses the same word for more than one role, it simply appears more than once.)

he/him/his/his/himself
she/her/her/hers/herself
they/them/their/theirs/themselves
e/em/eir/eirs/emself
ey/em/eir/eirs/eirself
hou/hee/hy/hine/hyself
hu/hum/hus/hus/humself
ne/nem/nir/nirs/nemself
per/per/pers/pers/perself
thon/thon/thons/thons/thonself
ve/ver/vis/vis/verself
xe/xem/xyr/xyrs/xemself
ze/zir/zir/zirs/zemself
That’s all data-driven rather than hard-coded, by the way, so adding additional pronoun sets is very easy for our developers. In fact, it’s even possible for us to apply an additional
“override” on an individual, case-by-case basis: all we need to do is specify the five requisite personal pronouns, separated by slashes, and Three Rings understands how to use
them.
Writing code that respects pronouns
Behind the scenes, the developers use a (binary-gendered, for simplicity) convenience function to produce output, and the system corrects for the pronouns appropriate to the volunteer
in question:
<%= @volunteer.his_her.capitalize %> account has been created for <%= @volunteer.him_her %> so <%= @volunteer.he_she %> can now log in.
The code above will, dependent on the pronouns specified for the volunteer @volunteer, output something like:
His account has been created for him so he can now log in.
Her account has been created for her so she can now log in.
Their account has been created for them so they can now log in.
Eir account has been created for em so ey can now log in.
Etc.
We’ve got extended functions to automatically detect cases where the use of second person pronouns might be required (“Your account has been created for
you so you can now log in.”) as well as to help us handle the fact that we say “they are” but
“he/she/ey/ze/etc. is“.
It’s all pretty magical and “just works” from a developer’s perspective. I’m sure most of our volunteer developers don’t think about the impact of pronouns at all when they code; they
just get on with it.
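To illustrate the general idea (and only the general idea: this is a simplified sketch rather than Three Rings’ actual implementation), a data-driven helper along these lines might look something like this in Ruby:

# Hypothetical sketch: derive the legacy "his_her"-style helpers from a
# slash-separated pronoun string (subject/object/dependent/independent/reflexive).
class Volunteer
  PRONOUN_KEYS = %i[subject object dependent_possessive independent_possessive reflexive].freeze

  def initialize(pronouns = 'they/them/their/theirs/themselves')
    @pronouns = PRONOUN_KEYS.zip(pronouns.split('/')).to_h
  end

  def he_she
    @pronouns[:subject]
  end

  def him_her
    @pronouns[:object]
  end

  def his_her
    @pronouns[:dependent_possessive]
  end

  def his_hers
    @pronouns[:independent_possessive]
  end

  def himself_herself
    @pronouns[:reflexive]
  end

  # "they are", but "he/she/ey/ze/etc. is":
  def is_are
    he_she == 'they' ? 'are' : 'is'
  end
end

volunteer = Volunteer.new('ey/em/eir/eirs/eirself')
puts "#{volunteer.his_her.capitalize} account has been created for #{volunteer.him_her} " \
     "so #{volunteer.he_she} can now log in."
# => "Eir account has been created for em so ey can now log in."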
Is that a complete solution?
Does this go far enough? Possibly not. This week, one of our customers contacted us to ask:
Is there any way to give the option to input your own pronouns? I ask as some people go by she/them or he/them and this option is not included…
You can probably see what’s happened here: some organisations have taken our pronouns property – which exists primarily to teach the system itself how to talk about volunteers – and are
using it to facilitate their volunteers telling one another what their pronouns are.
What’s the difference? Well:
When a human discloses that their pronouns are “she/they” to another human, they’re saying “You can refer to me using either traditional feminine pronouns (she/her/hers etc.)
or the epicene singular ‘they’ (they/their/theirs etc.)”.
But if you told Three Rings your pronouns were “she/her/their/theirs/themselves”, it would end up using a mixture of the two, even in the same sentence! Consider:
She has done 7 shifts by themselves.
She verified her email address was theirs.
That’s some pretty clunky English right there! Mixing pronoun sets for the same person within a sentence is especially ugly, but even mixing them within the same page can cause
confusion. We can’t trivially meet this customer’s request simply by adding new pronoun sets which mix things up a bit! We need to get smarter.
A Newer Way?
Ultimately, we’re probably going to need to differentiate between a more-rigid “what pronouns should Three Rings use when talking about you” and a more-flexible, perhaps
optional “what pronouns should other humans use for you”? Alternatively, maybe we could allow people to select multiple pronoun sets to display but Three Rings would
only use one of them (at least, one of them at a time!): “which of the following sets of pronouns do you use: select as many as apply”?
Even after this, there’ll always be more work to do.
For instance: I’ve met at least one person who uses no pronouns! By this, they actually
mean they use no third-person personal pronouns (if they actually used no pronouns they wouldn’t say “I”, “me”, “my”, “mine” or “myself” and wouldn’t
want others to say “you”, “your”, “yours” and “yourself” to them)! Semantics aside… for these people Three Rings should use the person’s name rather than a
pronoun.
Maybe we can get there one day.
But so long as Three Rings continues to remain ahead of the curve in its respect for and understanding of pronoun use then I’ll be happy.
Our mission is to focus on volunteers and make volunteering easier. At the heart of that mission is treating volunteers with
respect. Making sure our system embraces the diversity of the 65,000+ volunteers who use it by using pronouns correctly might be a small part of that, but it’s a part of it, and I for
one am glad we make the effort.
A few weeks ago, my credit card provider wrote to me to tell me that they were switching me back from paperless to postal billing because I’d “not been receiving their emails”.
This came as a surprise to me because I have been receiving their emails. Why would they think that I hadn’t?
Turns out they have a tracking pixel in their email to track that it’s been opened, as well as potentially additional data such as when it was opened (or re-opened), what email client
or clients the recipient uses, what IP address or addresses they read their mail from, and so on.
Do you have numbers on how many people opened a particular newsletter? Do you have numbers on how many people clicked a particular link?
You can call it data, or stats, or analytics, but make no mistake, that’s tracking.
Follow-on question: do you honestly think that everyone who opens a newsletter or clicks on a link in a newsletter has given their informed consent to be tracked by you?
Needless to say, I had words with my credit card provider. Paperless billing is useful to almost everybody but it’s incredibly useful for blind and partially-sighted users (who are also
the ones least-likely to have images loading in the first place, for obvious reasons) because your computer can read your communication to you which is much more-convenient
than a letter. Imagine how annoyed you’d be if your bank wrote you a letter (which you couldn’t read but had to get somebody else to read to you) to tell you that because you don’t
look at the images in their emails they’re not going to send them to you any more?
Even if you can somehow justify using tracking technologies (which don’t work reliably) to make general, statistical decisions (“fewer people open our emails when the subject
contains the word ‘overdraft’!”), you can’t make individual decisions based on them. That’s just wrong.
I’ve a long history of blogging about dreams I’ve had, and though I’ve not done so recently I don’t want you to think it’s because my dreams have gotten any
less trippy-as-fuck. Take last night for example…
I plough every penny and spare minute I can into a side-project that in my head at least qualifies as “art”. The result will be a fake opening credits animation for the (non-existent)
pilot episode of an imagined 80s-style children’s television show. But it gets weirder.
Do you remember Hot Shots!? There’s this scene near the end where Topper Harley, played by Charlie Sheen, returns to the Native
American tribe he’s been living with since before the film (in sort of a clash between the “proud warrior
race” trope and a parody of Dances With Wolves, which came out the previous year). Returning to his teepee, Topper meets tribal
elder Owatonna (Rino Thunder), who asks him about the battle Topper had gone to fight in and, in a callback to an earlier joke, receives the four AA-cell batteries he’d asked Topper
to pick up for him “while he was out”.
I take the dialogue from this scene (which in reality is nonsense, only the subtitles give it any meaning),
mangle it slightly, and translate it into Japanese using an automated translation service. I find some Japanese-speaking colleagues to help verify that each line broadly makes sense,
at least in isolation.
I commission the soundtrack for my credits sequence. A bit of synth-pop about a minute long. I recruit some voice actors to read each of my Japanese lines, as if they’re characters in
an animated kids TV show. I mix it together, putting bits of Japanese dialogue in the right places so that if anybody were to sync-up my soundtrack with the correct scene in Hot
Shots!, the Japanese dialogue would closely mirror the conversation that the characters in that film were having. The scene, though, is slow-paced enough that, re-recorded, the voices
in my new soundtrack don’t sound like they’re part of the same conversation as one another. This is deliberate.
Meanwhile, I’ve had some artists put together some concept character art for me, based on some descriptions. There’s the usual eclectic mix of characters that you’d expect from 80s
cartoons: one character’s a friendly bear-like thing, another’s a cowardly robot, there’s a talking flying unicorn… you know the kind of shit. I give them descriptions, they give me
art.
Next, I send the concept art and the soundtrack to an animation team and ask them to produce a credits sequence for it, and I indicate which of the characters depicted should be
saying which lines.
Finally, I dump the credits sequence around the Internet, wait a bit, and then start asking on forums “hey, what show is this?” to see what kind of response it gets.
The thing goes viral. It scratches the itch of people who love to try to find the provenance of old TV clips, but of course there’s no payoff because the show doesn’t exist. It
doesn’t take too long before somebody translates the dialogue and notices some of the unusual phrasing and suggests a connection to Hot Shots! That seems to help date the show as
post-1991, but it’s still a mystery. By the time somebody gets around to posting a video where the soundtrack overlays the scene from Hot Shots!, conspiracy theories are already all
over: the dominant hypothesis is that the clips are from a series of different shows (still to be identified) but only the soundtrack is new… but that still doesn’t answer what the
different shows are!
As the phenomenon begins to expand into mainstream media I become aware that even the most meme-averse folks I know are going to hear about it, at some point. And as I’m likely to be
“found out” as the creator of this weird thing, sooner or later, I decide to come clean about it to people I know sooner, rather than later. I’m hanging out with Ruth and her brothers Robin and Owen and I bring it up:
“Do you remember Hot Shots!? There’s this scene near the end where Topper Harley, played by Charlie Sheen…”, I begin, hoping that the explanation of my process might somehow justify
the weird shit I’ve brought to the world. Or at least, that one of this group has already come across this latest Internet trend and will interject and give me an “in”.
Ruth interrupts: “I don’t think I’ve seen Hot Shots!”
“Really?” Realising that this’ll take some background explanation, I begin by referring to Top Gun and the tropes Hot Shots! plays into and work from there.
Some time later, I’m involved with a team who are making a documentary about the whole phenomenon and my part in it. They’re proposing to release a special edition disc with a chapter
that uses DVD video’s “multi angle” and “audio format switch” features to allow you to watch your choice of either the scene from
Hot Shots! or from my trailer with your choice of either the original audio, my soundtrack, or a commentary by me, but they’re having difficulty negotiating the relevant rights.
After I woke, I tried to tell Ruth about this most-bizarre dream, but soon got stuck in an “am I still dreaming” moment after the following exchange:
It just passed two years since I started working at Automattic, and I just made a startling
discovery: I’ve now been with the company for longer than 50% of the staff.
When you hear that from a 2-year employee at a tech company, it’s easy to assume that they have a high staff turnover, but Automattic’s churn rate is relatively low, especially for our
sector: 86% of developers stay longer than 5 years. So what’s happening? Let’s visualise it:
All that “red” at the bottom of the graph? That’s recent growth. Automattic’s expanding really rapidly right now, taking on new talent at a never-before-seen speed.
Since before I joined it’s been the case that our goals have demanded an influx of new engineers at a faster rate than we’ve been able to recruit, but it looks like things are
improving. Recent refinements to our recruitment process (of which I’ve written about my experience) have helped, but I wonder how much we’ve
also been aided by pandemic-related changes to working patterns? Many people, and especially in tech fields, have now discovered that working-from-home works for them, and a company
like Automattic that’s been built for the last decade and a half on a “distributed” model is an ideal place to see that approach work at its best.
We’re rolling out new induction programmes to support this growth. Because I care about our corporate culture, I’ve volunteered
myself as a Culture Buddy, so I’m going to spend some of this winter helping Newmatticians integrate into our (sometimes quirky, often chaotic) ways of working. I’m quite excited to be
at a point where I’m in the “older 50%” of the organisation and so have a responsibility for supporting the “younger 50%”, even though I’m surprised that it came around so quickly.
I wonder how that graph will look in another two years.