Normally this kind of thing would go into the ballooning dump of “things I’ve enjoyed on the Internet” that is my reposts archive. But sometimes something is
so perfect that you have to try to help it see the widest audience it can, right? And today, that thing is: Mackerelmedia
Fish.
What is Mackerelmedia Fish? I’ve had a thorough and pretty complete experience of it, now, and I’m still not sure. It’s one or more (or none)
of these, for sure, maybe:
A point-and-click, text-based, or hypertext adventure?
A statement about the fragility of proprietary technologies on the Internet?
An ARG set in a parallel universe in which the 1990s never ended?
A series of surrealist art pieces connected by a loose narrative?
Rock Paper Shotgun’s article about it opens with “I
don’t know where to begin with this—literally, figuratively, existentially?” That sounds about right.
What I can tell you with confidence is what playing feels like. And what it feels like is the moment when you’ve gotten bored waiting for page 20 of Argon Zark to finish appearing, so you decide to reread your already-downloaded copy of the 1997 a.r.k best-of book, and for a moment you think to yourself: “Whoah; this must be what living in the future
feels like!”
Because back then you didn’t yet have any concept that “living in the future” would involve scavenging for toilet paper while complaining that you can’t stream your favourite shows in 4K on your pocket-sized
supercomputer until the weekend.
Mackerelmedia Fish is a mess of half-baked puns, retro graphics, outdated browsing paradigms and broken links. And that’s just part of what makes it great.
It’s also “a short story that’s about the loss of digital history”, its creator Nathalie Lawhead
says. If that was her goal, I think she managed it admirably.
If I wasn’t already in love with the game, I would have been when I got to the bit where you navigate through the directory indexes of a series of deepening folders,
choose-your-own-adventure style. Nathalie writes, of it:
One thing that I think is also unique about it is using an open directory as a choose your own adventure. The directories are branching. You explore them, and there’s text at the
bottom (an htaccess header) that describes the folder you’re in, treating each directory as a landscape. You interact with the files that are in each of these folders, and uncover the
story that way.
Back in the noughties I experimented with making choose-your-own-adventure games in exactly this way, exploring the different media by which this kind of
branching-choice game could be presented. I envisaged a project in which I’d showcase the same (or a set of related) stories through different approaches. One was “print” (or at least
“printable”): I came up with a Twee1-to-PDF
converter to make “printable” gamebooks. A second was Web hypertext. A third – and this is the one which was most-similar to what Nathalie has now so expertly made real – was
FTP! My thinking was that this would be an adventure game that could be played in a browser or even from the command line on any
(then-contemporary: FTP clients aren’t so commonplace nowadays) computer. My solution involved abusing the FTP protocol terribly, but it
worked. And then, like so many of my projects, the half-made version got put aside “for later” and forgotten about.
(I also looked into ways to make Gopher-powered hypertext fiction and toyed with the idea of using YouTube
annotations to make an interactive story web [subsequently done amazingly by Wheezy Waiter, though the death of YouTube
annotations in 2017 killed it]. And I’ve still got a prototype I’d like to get back to, someday, of a text-based adventure played entirely through your web browser’s debug
console…! But time is not my friend… Maybe I ought to collaborate with somebody else to keep me on-course.)
In any case: Mackerelmedia Fish is fun, weird, nostalgic, inspiring, and surreal, and you should give it a go. You’ll need to be on a Windows
or OS X computer to get everything you can out of it, but there’s nothing to stop you starting out on your mobile, I imagine.
So long as you’re capable of at least 800 × 600 at 256 colours and have 4MB of RAM,
if you know what I mean.
Going to that page results in about 14 MB of data being transmitted from their server to your device (which you’ll pay for
if you’re on a metered connection). For comparison, reading my recent post about pronouns results in about 356 kB of data. In other words, their page is forty times more bandwidth-consuming, despite the fact that my page has about four times the word count. The page
you’re reading right now, thanks to its images, weighs in at about 650 kB: you could still download it more than twenty times while
you were waiting for theirs.
Worse still, the most-heavyweight of the content they deliver is stuff that’s arguably strictly optional and doesn’t add to the message:
Eight different font files are served from three different domains (the fonts alone consume about 140 kB) – seven more are
queued but not used.
Among the biggest JavaScript files they serve is that of Hotjar analytics: I understand the importance of measuring your impact, but making
your visitors – and the planet – pay for it is a little ironic.
The biggest JavaScript file seems to be for Mapbox, which as far as I can see is never actually used: that map on the page is a static
image which, incidentally, I was able to reduce from 0.5 MB to 0.2 MB just by running it through
a free online image compressor.
And because the site sets virtually no caching headers, even if you’ve visited the website before you’re likely to have to download the whole thing again. Every single
time.
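Since I’ve mentioned caching headers: for illustration only, here’s a minimal sketch in PHP (chosen just because that’s what this site happens to run on; in practice you’d more likely configure this in your web server or CDN, and the values and variable names below are made up) of the kind of thing I mean:

// Illustrative only: long-lived caching for versioned static assets, short-lived
// caching plus a validator for everything else. Either beats sending no headers at all.
if ( $is_fingerprinted_asset ) {                                   // hypothetical flag: e.g. style.abc123.css
  header( 'Cache-Control: public, max-age=31536000, immutable' );  // safe to reuse for a year
} else {
  header( 'Cache-Control: public, max-age=300' );                  // even five minutes helps repeat visitors
  header( 'ETag: "' . md5( $response_body ) . '"' );               // $response_body is hypothetical too
}

Even the laziest version of this would stop repeat visitors from re-downloading all of those fonts and scripts on every single page view.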
It’s not just about bandwidth: all of those fonts, that JavaScript, and their 60 kB of CSS (this page sent you 13 kB) have to be parsed and interpreted by your device. If you’re on a mobile device or a laptop, that means you’re burning through lithium (a non-renewable resource whose extraction and disposal is highly polluting), and
regardless of the device you’re using, you’re using more electricity to visit their site than you need to. Coding antipatterns like document.write() and active event
listeners that execute every time you scroll the page keep your processor working hard, turning electricity into waste heat. It took me over 12 seconds on a high-end smartphone and a
good 4G connection to load this page to the point of usability. That’s 12 seconds of a bright screen, a processor running full tilt, a data connection working its hardest, and a
battery ticking away. And I assume I’m not the only person visiting the website today.
This isn’t really about this particular website, of course (and I certainly don’t want to discourage anybody from the important cause of saving the planet!). It’s about the
bigger picture: there’s a widespread and long-standing trend in web development towards bigger, heavier, more power-hungry websites, built on top of heavyweight frameworks that push the
hard work onto the user’s device and which favour developer happiness over user experience. This is pretty terrible: it makes the Web slow, and brittle, and it increases the digital
divide as people on slower connections and older devices get left behind.
I had a bit of a realisation, this week. I’ve sometimes found it especially challenging to maintain a mental map of the preferred personal pronouns of people who don’t use “he”,
“she”, or “they”. Furthermore, it seemed to me that personal pronouns beyond these three ought to be mostly redundant in English. “Them” has been well-established for over six centuries as not just a plural but a singular pronoun, I thought: we don’t need
to invent more words.
Over time – even within my lifetime – it’s become noticeably more-commonplace to hear the singular “they”/”them” in place of
“he or she”/”him or her”, or single binary pronouns (e.g. when talking about professions which have long been dominated by a particular gender). So you might hear somebody say:
“I will make an appointment to see a doctor and ask them about my persistent cough.”
It seemed to me that “they” was a perfect general-purpose stand-in for everybody who was well-served by neither “he” nor “she”.
I’ll stress, of course, that I’ve always been fully supportive of people’s preferred pronouns, tried to use them consistently, ensured they can be
represented in software I’ve implemented (and pressured others over their implementations, although that’s as-often related to my individual identity), etc. I’ve just struggled to see the need for new singular third-person pronouns like ze,
ey, sie, ve, or – heaven forbid – the linguistically-cumbersome thon, co, or peh.
I’d put it down to one of those things that I just don’t “get”, but about which I can still respect and support anyway. I don’t have to totally grok something in order to understand
that it’s important to others.
But very recently, I was suddenly struck by a comprehension of one of the reported problems with the use of the singular “they” to refer to people for whom the traditional binary
pronouns are not suitable. I’ve tried to capture in the illustration above the moment of understanding when I made the leap.
The essence of this particular problem is: the singular “they” already has a meaning that is necessarily incompatible with the singular “they” used of a nonbinary
subject! By way of example, let’s revisit my earlier example sentence:
“I will make an appointment to see a doctor and ask them about my persistent cough.”
Here, I’m saying one of two things, and it’s fundamentally unclear which of the two I mean:
I do not know which doctor I will see, so I do not know the pronoun of the doctor.
I will see the same doctor I always see, and they prefer a nonbinary pronoun.
The more widespread the adoption of “they” as the third person singular for nonbinary people becomes, the more long-winded it is to clarify specifically which of the above
interpretations is correct! The tendency to assume the former leads to nonbinary invisibility, and the (less-likely in most social circles) tendency to assume the latter leads to
misgendering.
The difference is one of specificity. Because the singular “they” is routinely used non-specifically, where the subject’s preferred pronouns are unknown (as with the doctor, above),
unknowable (“somebody wrote this anonymous message; they said…”), or a placeholder (“when I meet somebody, I shake their hand”), it quickly produces
semantic ambiguities when it’s used to refer to specific nonbinary individuals. And that makes me think: we can do better.
That said: I don’t feel able to suggest which pronoun(s) ought to replace the question mark in the diagram above. But for the first time, I’m not convinced that it ought to be “they”.
Ultimately, this changes nothing. I regularly use a diversity of different singular pronouns (“he”, “she”, and “they”, mostly) based on the individual subject and I’ll continue to
acknowledge and respect their preferences. If you’ve told me that you like to be referred to by the singular “they”, I’ll continue to do so, and you’re welcome and
encouraged to correct me if I get it wrong!
But perhaps this new appreciation of the limitations of the singular “they” when referring to specific individuals will help me to empathise with those for whom it doesn’t feel
right, and who might benefit from more-widespread understanding of other, newer personal pronouns.
(and on the off chance anybody’s found their way to this page looking for my pronouns: I’m not particularly fussy, so long as you’re consistent and don’t confuse your
audience, but most people refer to me with traditional masculine pronouns he/him/his)
StackOverflow‘s one of the most-popular and widely-used resources for software developers. It dominates the search results when you’re looking
for answers to techy questions. If you know how to read it, it can be invaluable.
But… I’m not sure what it is about the platform or the culture surrounding it that creates a certain… pattern to the answers that you can expect to receive on StackOverflow. To
illustrate, let’s suppose we have a question:
Here are the answers you might see:
The Golden Hammer
The top answer is often somebody answering not the question you asked, but the question they’d like to think you asked.
Never mind that you specifically said that you were using a campfire, the answer suggests that you use a toaster. Look back a few years and you’ll see countless examples of people
asking for solutions using “vanilla” JavaScript and being told to use some heavyweight, everything-but-the-kitchen-sink jQuery plugin. Now that we’re in a more enlightened time, those same
people are being told to use some heavyweight, everything-but-the-kitchen-sink npm module. How far we’ve come.
The Belligerent
Far more often than you might expect, a perfectly reasonable “how do I do this?” question is met with an aggressive response of “why would you want to do that?”
These are particularly infuriating to read when you come to a closed thread and you know that you do want to be doing the “forbidden” thing. You’ve considered the
other options, you’ve assessed the situation… and now some arrogant bugger’s telling you that you’re wrong!
This kind of response is among the most annoying, second only to…
The Kindred Spirit
You’re getting a strange and inexplicable error message. You search for it and get exactly one result. Reading the thread, after hours of tearing your hair out, you suddenly
feel a sense of relief: you’ve found another soul in this crazy world that’s suffering in precisely the same way as you are. Every word you read reconfirms for you that you and they
have the same issue. At last, a solution is in reach!
Nope.
Not only have you not got a solution, but the saviour you thought you’d found? They do have a solution, but they were thinking only about themselves when they got it, so they
didn’t share it.
I get it: when you’re deep in focus on a problem you forget that the forum you’re on will receive search traffic indefinitely. But “NM, I’ve worked it out” is the most infuriating
sentence on the Internet. When you solve a tough problem that you’d talked about online, for the love of God put the solution online too.
The Expert
There’s always somebody who answers the question but in a way you’d need a PhD to comprehend.
StackOverflow is often used by beginners. Make your answer beginner-friendly if possible.
The Hero We Don’t Need
Like the Golden Hammer, the Hero We Don’t Need answers the question that they know the answer to rather than the question you actually asked. Unlike the Golden Hammer,
the question they answer isn’t even remotely related to the question you asked.
Perhaps some future site visitor who chose their search terms badly might benefit from this out-of-the-box look at a completely different problem. But I wouldn’t count on it.
The Correct Answer
Eventually, if you’re lucky, somebody will provide the actual answer to the question. You’ll often have to scroll about this far down the page to find it.
Still, at least there’s an answer. And it only took four hours between posting the question and it appearing. Sometimes that’s what it takes, and at least
the answer will be there for the next person, assuming that they, too, scroll down far enough.
Unfortunately hundreds of novice developers will have no way to tell that this alone is the correct answer amongst the endless stream of bullshit in which it resides.
The Echo
And finally, there’s always some idiot who repeats one of the same (useless) answers from before. Just to keep the noise-to-signal ratio up, I guess.
StackOverflow’s given me so many useful answers to so many questions, over the years. But it’s also been a great source of frustration for me at the hands of six of these seven
archetypes. Did I miss any?
In these challenging times, and especially because my work and social circles have me communicate regularly with people in
many different countries and with many different backgrounds, I’m especially grateful for the following:
My partner, her husband, and I
each have jobs that we can do remotely and so we’re not out-of-work during the crisis.
Our employers are understanding of our need to reduce and adjust our hours to fit around our new lifestyle now that schools and nurseries are (broadly) closed.
Our kids are healthy and not at significant risk of serious illness.
We’ve got the means, time, and experience to provide an adequate homeschooling environment for them in the immediate term.
(Even though we’d hoped to have moved house by now and haven’t, perhaps at least in part because of COVID-19,) we
have a place to live that mostly meets our needs.
We have easy access to a number of supermarkets with different demographics, and even where we’ve been impacted by panic-buying-induced shortages we’ve always been able to
work around them reasonably quickly.
We’re well-off enough that we were able to buy or order everything we’d need to prepare for lockdown without financial risk.
Having three adults gives us more hands on deck than most people get for childcare, self-care, etc.
(we’re “parenting on easy mode”).
We live in a country in which the government (eventually) imposed the requisite amount of lockdown necessary to limit the spread of the
virus.
We’ve “only” got the catastrophes of COVID-19 and Brexit to deal with, which is a bearable amount of
crisis, unlike my colleague in Zagreb for example.
Whenever you find the current crisis getting you down, stop and think about the things that aren’t-so-bad or are even good. Stopping and expressing your gratitude for them in
whatever form works for you is good for your happiness and mental health.
But today my heart was filled with joy when I received a postcard – a hashcard, no less – from fellow hasher
Fippe, whose expedition to Lüneburg last week brought him
past the famous town hall shown in the postcard, as evidenced by his photo from the site.
I use the Post Kinds plugin to streamline the management of the different types of posts I make on my blog, based on the
IndieWeb post types list: articles, like this one, are “conventional” blog posts, but I also publish
notes (which are analogous to “tweets”), reposts (“shares” of things I’ve found online, sometimes with commentary), checkins (mostly chronicling my geocaching/geohashing), and others: I’ve extended Post Kinds to facilitate comics and
reviews, for example.
But for people who subscribe (either directly or indirectly) to everything I post, I imagine it must be a little frustrating to sometimes be
unable to identify the type of a post before clicking-through. So I’ve added the following code, which I’m sharing here and on GitHub in case it’s of any use to anybody else, to my theme’s functions.php:
// Make titles in RSS feed be prefixed by the Kind of the post.
function add_kind_to_rss_post_title() {
  $kinds = wp_get_post_terms( get_the_ID(), 'kind' );
  if ( ! isset( $kinds ) || empty( $kinds ) ) return get_the_title(); // sanity-check
  $kind  = $kinds[0]->name;
  $title = get_the_title();
  return trim( "[{$kind}] {$title}" );
}
add_filter( 'the_title_rss', 'add_kind_to_rss_post_title', 4 ); // priority 4 to ensure it happens BEFORE default escaping filters.
This decorates the titles of my posts, but only in my feeds, so it’s easier for people to tell at-a-glance what’s going on:
Down the line I might expand this so that it doesn’t show if the subscriber is, for example, asking only for articles (e.g. via this
feed); I’m coming up with a huge list of things I’d like to do at IndieWebCamp London! But for now, this feels like a nice simple
improvement to a plugin I love that helps it to fit my specific needs.
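If I do get around to that, one possible shape for it – just a sketch, not something I’ve written yet, and it assumes Post Kinds’ “kind” taxonomy plus standard WordPress conditional tags – would be a variant of the function above that skips the prefix when the subscriber is already reading a single-kind feed:

// Sketch: leave titles alone in kind-specific feeds (e.g. /kind/article/feed/),
// where the "[kind]" prefix would be redundant, but keep it everywhere else.
function add_kind_to_rss_post_title( $title ) {
  if ( is_tax( 'kind' ) ) return $title;                    // already a single-kind feed
  $kinds = wp_get_post_terms( get_the_ID(), 'kind' );
  if ( is_wp_error( $kinds ) || empty( $kinds ) ) return $title;
  return trim( "[{$kinds[0]->name}] {$title}" );
}
add_filter( 'the_title_rss', 'add_kind_to_rss_post_title', 4 );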
This weekend, my sister Sarah challenged me to define the difference between Virtual Reality and Augmented Reality. And the more I talked about the differences between them, the more I
realised that I don’t have a concrete definition, and I don’t think that anybody else does either.
After all: from a technical perspective, any fully-immersive AR system – for example a hypothetical future version of the Microsoft Hololens that solves the current edition’s FOV problems – is a theoretical
superset of any current-generation VR system. That AR augments
the reality you can genuinely see, rather than replacing it entirely, becomes irrelevant if that AR system could superimpose
a virtual environment covering your entire view. So the argument that compared to VR, AR only covers part of your vision is not a reliable definition of the difference.
This isn’t a new conundrum. Way back in 1994, when the Sega VR-1 was our idea of cutting edge, Milgram et al. developed a series of metaphorical spectra to describe the relationship between
different kinds of “mixed reality” systems. The core difference, they argue, is whether the computer-generated content represents a “world” in itself (VR) or is just an “overlay” (AR).
But that’s unsatisfying for the same reason as above. The HTC Vive headset can be configured to use its front-facing camera(s) to fade seamlessly from the game world to the real world
as the player gets close to the boundaries of their play space. This is a safety feature, but it doesn’t have to be: there’s no reason that a HTC Vive couldn’t be
adapted to function as what Milgram would describe as a “class 4” device, which is functionally the same as a headset-mounted AR
device. So what’s the difference?
You might argue that the difference between AR and VR is content-based:
that is, it’s the thing that you’re expected to focus on that dictates which is which. If you’re expected to look at the “real world”, it’s an augmentation, and if not then
it’s a virtualisation. But that approach fails to describe Google’s tech demo of putting artefacts in your living room via
augmented reality (which I’ve written about before), because your focus is expected to be on the artefact rather than the “real world” around it.
The real world only exists to help with the interpretation of scale: it’s not what the experience is about and your countertop is as valid a real world target as the Louvre:
Google doesn’t care.
But even if we accept this explanation, the definition gets muddied by the wider field of “extended reality” (XR). Originally an
umbrella term to cover both AR and VR (and “MR“, if you believe that’s a separate and independent thing), XR gets used to describe interactive experiences
that cover other senses, too. If I play a VR game with real-world “props” that I can pick up and move around, but that appear
differently in my vision, am I not “augmenting” reality? Is my experience, therefore, more or less “VR” than if the
interactive objects exist only on my screen? What about if – as in a recent VR escape room I attended – the experience is enhanced by
fans to simulate the movement of air around you? What about smell? (You know already that somebody’s working on bridging virtual reality with Smell-O-Vision.)
Increasingly, then, I’m beginning to feel that XR itself is a spectrum, and a pretty woolly one. Just as it’s hard to specify
in a concrete way where the boundary exists between being asleep and being awake, it’s hard to mark where “our” reality gives way to the virtual and vice-versa.
It’s based upon the addition of information to our senses, by a computer, and there can be more (as in fully-immersive VR) or
less (as in the subtle application of AR) of it… but the edges are very fuzzy. I guess that the spectrum of the visual
experience of XR might look a little like this:
Honestly, I don’t know any more. But I don’t think my sister does either.
I was visited this morning by a pair of Jehovah’s Witnesses, doing the door-to-door ministry for which they’re most-famous, and I was reminded of an interesting quirk in the practices
of the WTS. If you know anything at all about their beliefs, you’re probably aware that
Jehovah’s Witnesses generally refuse blood transfusions.
I first became aware of their policy of rejecting potentially life-saving blood when I was just a child. A school friend of mine (this one!), following
a problematic tonsillectomy, found his life at risk because of his family’s commitment to this religious principle. Because I’ve always been interested in religion and the diversity of
theological difference I ended up looking into the background of their practice… and I came to a very different scriptural interpretation.
Everything that lives and moves about will be food for you. Just as I gave you the green plants,
I now give you everything. But you must not eat meat that has its lifeblood still in it.
This is God speaking to Noah, by the way. Sexacentenarian Noah has just taken a six-week cruise on a floating zoo, and God’s just said “boat number 1, your time is up… and by Me you’d better be
horny ‘cos it’s time to go forth and multiply.” God invents the rainbow as a promise not to reformat-and-reinstall again, and then follows it up with a handful of rules because He’s a
big fan of rules. And even though blood transfusions wouldn’t be invented for thousands of years, the Jehovah’s Witnesses almost-uniquely feel that this prohibition on consuming blood
covers transfusions too.
That all sounds fair enough. I mean, it requires a pretty heavy-handed interpretation of what was meant but that’s par for the course for the Bible and especially the Old
Testament.
But let’s take a step back. Here’s those verses again, this time translated more directly from the Hebrew:
Every moving living thing is your food, like the plants you were already given. But you may not eat any creature that is still alive.
“Still alive?” That’s a very different way of reading it, right? Suddenly this strange verse about abstaining from, I don’t know, black pudding (and possibly blood
transfusions) becomes a requirement to kill your dinner before you chow down.
This is like Deuteronomy 14:21, where it says “Do not cook a
young goat in its mother’s milk.” The same directive appears in Exodus 34:26 but I
prefer Deuteronomy’s because it also has this really surreal bit about how it’s not okay to eat roadkill but you can serve it to your immigrant friends. It turns out that kid-boiled-in-mother’s-milk was an old Canaanite recipe and pagan tribes used to eat it ritualistically, so
a prohibition on the practice by Noah and his descendants was not only an opposition to animal cruelty but a statement against polytheism.
Could “eating things alive”, which is specifically forbidden in Judaism, be – like goat-in-goat-juice – another pagan ritual, formerly widespread, that the early Israelites
were trying to outlaw? Quite possibly.
But there’s a further possible interpretation that I feel is worth looking at. Let’s paint a picture. Again, let’s assume – despite the mountains of evidence to the contrary – that the Bible is literally true, which meets people who use the
covenant of Genesis as a basis for medical decisions much more than half-way:
God’s just declared bankruptcy on his first “Earth” project and wiped the slate clean. He’s had the RNG – I’m assuming that God
plays dice – roll up a new landmass, and he’s populated it with one family of humans, plus two of every kind of land animal. Possibly more of the fast breeders like the insects and some
of the small mammals, I suppose, depending on how closely they were housed in the ark. Don’t make me explain this to you.
Let’s assume that God doesn’t want the disembarking humans to immediately eat all of the animals with no concern for sustainability. This is, of course, absolutely what we humans do: if
we take a biblical-literalism viewpoint, it’s a miracle that the delicious dodo would last until the 17th century CE rather than being eaten
on the first post-flood day. God’s sort-of promised that the humans will be allowed to eat almost anything they like and that He’ll stop meddling, but He doesn’t want a mass-extinction,
so what does He do? He says:
You can eat all the plants you want. But don’t eat any of the animals that are alive right now: let them breed a bit first.
This has always seemed to me to be the obvious way to interpret the commandment not to eat living animals: don’t eat the ones that are living at the moment. Certainly
more-rational than “don’t have blood transfusions.” And if what God (allegedly) said to Noah is to be treated as a rule that still stands today, rather than just at the time, then
perhaps it’s vegetarianism for which Jehovah’s Witnesses should best be known. That way, they’d get to argue with the hosts of barbecues about what goes into their bodies rather
than with judges about
what goes into their children’s.
But try telling them that. (Seriously: give it a go! They’re usually more than happy to talk about scripture, even if you’re a little bit sarcastic!)
I last handed in a dissertation almost 16 years ago; that one marked the culmination of my academic work at Aberystwyth University, then the “University of Wales, Aberystwyth”. Since then I’ve studied programming, pentesting and psychology (the P-subject
Triathlon?)… before returning to university to undertake a masters degree in information security and forensics.
Today, I handed in that dissertation. Thanks to digital hand-ins, I’m able to “hand it in” and then change my mind, make changes, and hand-in a replacement version right up
until the deadline on Wednesday (I’m already on my second version!), so I’ve still got a few evenings left for last-minute proofreads and tweaks. That said, I’m mostly
happy with where it is right now.
Writing a dissertation was harder this time around. Things that made it harder included:
Writing a masters-level dissertation rather than a bachelors-level one, naturally.
Opting for a research dissertation rather than an engineering one: I had the choice, and I knew that I’d do better in engineering, but I did research anyway because I
thought that the challenge would be good for me.
Being older! It’s harder to cram information into a late-thirty-something brain than into a young-twenty-something one.
Work: going through the recruitment process for and starting at Automattic ate a lot of my time,
especially as I was used to working part-time at the Bodleian and I’d been turning a little of what would otherwise have been my “freelance work time” into “study time” (last time
around I was working part-time for SmartData, of course).
Life: the kids, our (hopefully) upcoming house move and other commitments are pretty good at getting in the way. Ruth and JTA have been amazing at carving out blocks of time for me to study, especially these last few weekends, which may have made all the
difference.
It feels like less of a bang than last time around, but still sufficient that I’ll breathe a big sigh of relief. I’ve a huge
backlog of things to get on with that I’ve been putting-off until this monster gets finished, but I’m not thinking about them quite yet.
I need a moment to get my bearings again and get used to the fact that once again – and for the first time in several years – I’ll soon be not-a-student. Fun fact: I’ve spent
very-slightly-more than half of my adult life as a registered student: apparently I’m a sucker for it, for all that I complain… in fact, I’m already wondering what I can study
next (suggestions welcome!), although I’ve promised myself that I’ll take a couple of years off before I get into anything serious.
(This is, of course, assuming I pass my masters degree, otherwise I might still be a student for a little longer while I “fix” my dissertation!)
If anybody’s curious (and I shan’t blame you if you’re not), here’s my abstract… assuming I don’t go back and change it yet again in the next couple of days (it’s still a little clunky
especially in the final sentence):
Multifactor authentication (MFA), such as the use of a mobile phone in addition to a username and password when logging in to a website, is one of the strongest security enhancements
an individual can add to their online accounts. Compared to alternative enhancements like refraining from the reuse of passwords, it’s been shown to be easy and effective. However: MFA
is optional for most consumer-facing Web services supporting MFA, and elective user adoption is well under 10%.
How can user adoption be increased? Delivering security awareness training to users has been shown to help, but the gold standard would be a mechanism to encourage uptake that can be
delivered at the point at which the user first creates an account on a system. This would provide strong protection to an account for its entire life.
Using realistic account signup scenarios delivered to participants’ own computers, an experiment was performed into the use of language surrounding the invitation to adopt MFA. During
the scenarios, participants were exposed to statements designed to either instil fear of hackers or to praise them for setting up an account and considering MFA. The effect on uptake
rates is compared. A follow-up questionnaire asks questions to understand user security behaviours including password and MFA choices and explain their thought processes when
considering each.
No significant difference is found between the use of “fear” and “praise” statements. However, secondary information revealed during the experiment and survey provides recommendations
for service providers to offer MFA after, rather than at, the point of account signup, and for security educators to focus their energies on dispelling user preconceptions about the
convenience, privacy implications, and necessity of MFA.
I keep my life pretty busy and don’t get as much “outside” as I’d like, but when I do I like to get out on an occasional geohashing expedition (like these
ones). I (somewhat badly) explained geohashing in the vlog attached to my expedition 2018-08-07 51 -1, but the short
version is this: an xkcd comic proposed a formula to use a stock market index to generate a pair of random coordinates, impossible to predict in
advance, for each date. Those coordinates are (broadly) repeated for each degree of latitude and longitude throughout the planet, and your challenge is to get to them and discover
what’s there. So it’s like geocaching, except you don’t get to find anything at the end and there’s no guarantee that the destination is even remotely accessible. I love it.
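If you’re curious about the mechanics, here’s a rough sketch of the algorithm in PHP (it glosses over the “30W rule”, which governs which day’s Dow Jones opening price you’re supposed to use, so treat it as illustrative rather than authoritative):

// Sketch of the xkcd geohashing algorithm: hash the date and the Dow opening price,
// then use the two halves of the hash as fractional offsets within a 1°×1° graticule.
function geohash_fractions( string $date, string $dow_opening ): array {
  $hash = md5( "{$date}-{$dow_opening}" );                   // e.g. md5( "2005-05-26-10458.68" )
  $lat  = hexdec( substr( $hash,  0, 16 ) ) / pow( 2, 64 );  // first 64 bits  -> [0, 1)
  $lon  = hexdec( substr( $hash, 16, 16 ) ) / pow( 2, 64 );  // second 64 bits -> [0, 1)
  return [ $lat, $lon ];
}

// Append the fractions to your graticule's integer coordinates, keeping their sign;
// the date and price here are the comic's own worked example, in graticule 68, -30.
[ $lat_f, $lon_f ] = geohash_fractions( '2005-05-26', '10458.68' );
printf( "%.6f, %.6f\n", 68 + $lat_f, -( 30 + $lon_f ) );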
Most geohashers used to use a MediaWiki-powered website to coordinate their efforts and share their stories, until a different application on the server where it resided got hacked and the wiki got taken down as a precaution.
That was last September, and the community became somewhat “lost” this winter as a result. It didn’t stop us ‘hashing, of course: the algorithm’s open-source and so are many of its
implementations, so I was able to sink into a disgusting hole in November, for example. But we’d lost the digital
“village square” of our community.
So I emailed Davean, who does techy things for xkcd, and said that I’d like to take over the Geohashing wiki but that I’d first like (a) his or Randall’s blessing to do so, and ideally
(b) a backup of the pages of the site as it last-stood. Apparently I thought that my new job plus finishing my dissertation plus trying to move house plus all of the usual
things I fill my time with wasn’t enough and I needed a mini side-project, because when I finally got the go-ahead at the end of last month I (re)launched geohashing.site. Take a look, if you like. If you’ve never been Geohashing before, there’s never been a more-obscure time to start!
Luckily, it’s not been a significant time-sink for me: members of the geohashing community quickly stepped up to help me modernise content, fix bots, update hyperlinks and the like. I
took the opportunity to fix a few things that had always bugged me about the old site, like the mobile-unfriendly interface and the inability to upload GPX files, and laid the groundwork to make bigger changes down the road (like changing the way that inline maps are displayed, a popular community request).
So yeah: Geohashing’s back, not that it ever went away, and I got to be part of the mission to make it so. I feel like I am, as geohashers say… out standing in my field.
Last week I built Fox, the newest addition to our home network. Fox, whose specification
called for not one, not two, not three but four 12 terabyte hard disk drives was built principally as a souped-up NAS
device – a central place for us all to safely hold and control access to important files rather than having them spread across our various devices – but she’s got a lot more going on
than that, too.
Fox has:
Enough hard drive space to give us 36TB of storage capacity plus 12TB of parity, allowing any
one of the drives to fail without losing any data.
“Headroom” sufficient to double its capacity in the future without significant effort.
A mediumweight graphics card to assist with real-time transcoding, helping her to convert and stream audio and videos to our devices in whatever format they prefer.
A beefy processor and sufficient RAM to run a dozen virtual machines supporting a variety of functions like software
development, media ripping and cataloguing, photo rescaling, reverse-proxying, and document scanning (a planned future purpose for Fox is to have a network-enabled scanner near our
“in-trays” so that we can digitise and OCR all of our post and paperwork into a searchable, accessible, space-saving
collection).
The last time I filmed myself building a PC was when I built Cosmo, a couple of desktops ago. He
turned out to be a bit of a nightmare: he was my first fully-watercooled computer and he leaked everywhere: by the time I’d done all the fixing and re-fixing to make him behave nicely,
I wasn’t happy with the video footage and I never uploaded it. I’d been wary, almost-superstitious, about filming a build since then, but I shot a timelapse of Fox’s construction and it
turned out pretty well: you can watch it below or on YouTube or QTube.
The timelapse slows to real-time, about a minute in, to illustrate a point about the component test I did with only a CPU (and
cooler), PSU, and RAM attached. It shows something I routinely do when building
computers but which I only recently discovered isn’t commonly practised: the easiest way to power on a computer without attaching a
power switch is just to bridge the power-switch pins using your screwdriver!
Fox is running Unraid, an operating system basically designed for exactly these kinds of purposes. I’ve been super-impressed by the
ease-of-use and versatility of Unraid and I’d recommend it if you’ve got a similar NAS project in your future! I’d also like
to sing the praises of the Fractal Design Node 804 case: it’s not got quite as many bells-and-whistles
as some cases, but its dual-chamber design is spot-on for a multipurpose NAS, giving ample room for both full-sized expansion
cards and heatsinks and lots of hard drives in a relatively compact space.
While being driven around England it struck me that humans are currently like the filling in a sandwich between one slice of machine — the satnav — and another — the car. Before the
invention of sandwiches the vehicle was simply a slice of machine with a human topping. But now it’s a sandwich, and the two machine slices are slowly squeezing out the human filling
and will eventually be stuck directly together with nothing but a thin layer of API butter. Then the human will be a superfluous thing, perhaps a little gherkin on the side of the
plate.
While we were driving I was reading the directions from a mapping app on my phone, with the sound off, checking the upcoming turns, and giving verbal directions to Mary, the driver. I
was an extra layer of human garnish — perhaps some chutney or a sliced tomato — between the satnav slice and the driver filling.
What Phil’s describing is probably familiar to you: the experience of one or more humans acting as the go-between to allow two machines to communicate. If you’ve ever re-typed a
document that was visible on another screen, read somebody a password over the phone, given directions from a digital map, used a pendrive to carry files between computers that weren’t
talking to one another properly, then you’ve done it: you’ve been the soft wet meaty middleware that bridged two already semi-automated (but not quite automated enough)
systems.
This generally happens because of the lack of a common API (a communications protocol) between two systems. If your
phone and your car could just talk it out then the car would know where to go all by itself! Or, until we get self-driving cars, it could at least provide the directions in a
way that was appropriately-accessible to the driver: heads-up display, context-relative directions, or whatever.
It also sometimes happens when the computer-to-human interface isn’t good enough; for example I’ve often offered to navigate for a driver (and used my phone for the purpose) because I
can add a layer of common sense. There’s no need for me to tell my buddy to take the second exit from every roundabout in Milton Keynes (did you know that the town has 930 of them?) – I can just tell them that I’ll let them know when
they have to change road and trust that they’ll just keep going straight ahead until then.
Finally, we also sometimes find ourselves acting as a go-between to filter and improve information flow when the computers don’t have enough information to do better by themselves. I’ll
use the fact that I can see the road conditions and the lane markings and the proposed route ahead to tell a driver to get into the right lane with an appropriate amount of
warning. Or if the driver says “I can see signs to our destination now, I’ll just keep following them,” I can shut up unless something goes awry. Your in-car SatNav can’t do that
because it can’t see and interpret the road ahead of you… at least not yet!
But here’s my thought: claims of an upcoming AI
winter aside, it feels to me like we’re making faster progress in technologies related to human-computer interaction – voice and natural languages interfaces, popularised
by virtual assistants like Siri and Alexa and by chatbots – than we are in technologies related to universal computer interoperability. Voice-controlled computers are hip and exciting
and attract a lot of investment, but interoperable systems are hampered by two major things. The first thing holding back interoperability is business interests: for the longest while,
for example, you couldn’t use Amazon Prime Video on a Google Chromecast because the two companies couldn’t play nice. The second thing is a lack of interest by
manufacturers in developing open standards: every smart home appliance manufacturer wants you to use their app, and so your smart speaker manufacturer needs to implement code
to talk to each and every one of them, and when they stop supporting one… well, suddenly your thermostat jumps permanently from smart mode to dumb mode.
A thing that annoys me is that from a technical perspective making an open standard should be a much easier task than making an AI that can understand what a human is asking for or drive a car safely or whatever we’re using them for this week. That’s not to say that technical
standards aren’t difficult to get right – they absolutely are! – but we’ve been practising doing it for many, many decades! The very existence of the Internet over which you’ve been
delivered this article is proof that computer interoperability is a solvable problem. For anybody who thinks that the interoperability brought about by the Internet was inevitable or
didn’t take lots of hard work, I direct you to Darius Kazemi’s re-reading of the early standards discussions, which I first plugged a year ago; but the important thing is that people were working on it. That’s something we’re not really seeing in the Internet of
Things space.
On our current trajectory, it’s absolutely possible that our virtual assistants will reach a point of becoming perfectly “human” communicators long before we can reach agreements about
how they should communicate with one another. If that’s the case, those virtual assistants will probably fall back on using English-language voice communication as their lingua
franca. In that case, it’s not unbelievable that ten to twenty years from now, the following series of events might occur:
You want to go to your friends’ house, so you say out loud “Alexa, drive me to Bob’s house in five minutes.” Alexa responds “I’m on it; I’ll let you know more in a few
minutes.”
Alexa doesn’t know where Bob’s house is, but it knows it can get it from your netbook. It opens a voice channel over your wireless network (so you don’t have to “hear” it) and says
“Hey Google, it’s Alexa [and here’s my credentials]; can you give me the address that [your name] means when they say ‘Bob’s house’?” And your netbook responds by reading
out the address details, which Alexa then understands.
Alexa doesn’t know where your self-driving car is right now and whether anybody’s using it, but it has a voice control system and a cellular network connection, so Alexa phones
up your car and says: “Hey SmartCar, it’s Alexa [and here’s my credentials]; where are you and when were you last used?”. The car replies “I’m on the driveway, I’m
fully-charged, and I was last used three hours ago by [your name].” So Alexa says “Okay, boot up, turn on climate control, and prepare to make a journey to [Bob’s
address].” In this future world, most voice communication over telephones is done by robots: your virtual assistant calls your doctor’s virtual assistant to make you an
appointment, and you and your doctor just get events in your calendars, for example, because nobody manages to come up with a universal API for medical appointments.
Alexa responds “Okay, your SmartCar is ready to take you to Bob’s house.” And you have no idea about the conversations that your robots have been having behind your back.
I’m not saying that this is a desirable state of affairs. I’m not even convinced that it’s likely. But it’s certainly possible if IoT development keeps focussing on shiny friendly conversational interfaces at the expense of practical, powerful technical standards. Our already
topsy-turvy technologies might get weirder before they get saner.
But if English does become the “universal API” for robot-to-robot communication, despite all engineering common
sense, I suggest that we call it “sandwichware”.
Update 23 November 2022: This isn’t how I consume The Far Side in my RSS reader any more. There’s an updated guide.
Prior to his retirement in 1995, I managed to amass a collection of almost all of Gary Larson’s The Far Side books as well as a couple of calendars and other thingamabobs. After
24 years of silence I didn’t expect to hear anything more from him, and so I was as surprised as most of the Internet was when he re-emerged last year with a brand new website – his
first ever. Woah.
Larson’s hinted that there might be new and original content there someday, but for the time being I’m just loving that I can read The Far Side comics (legitimately) via the
Web for the first time! The site’s currently publishing a “Daily Dose” of classic strips, which is awesome. But… I don’t want to have to go to a website to get comics every
day. Nor do I want to have to remember which days I’ve caught-up with, yet. That’s a job for computers, right? And it’s a solved problem: RSS (which has been around for almost as long as Larson hasn’t) and similar technologies allow a website to publicise that it’s got updates
available in a way that people can “subscribe” to, so I should just use that, right?
Except… the new The Far Side website doesn’t have an RSS feed. Boo! Luckily, I’m not above automating the creation of feeds for websites that I wish had them, even (or
perhaps especially) where that involves a little reverse-engineering of online comics. So with a
little thanks to my RSS middleware RSSey… I can now read daily The Far
Side comics in the way that’s most-convenient to me: right alongside my other subscriptions in my feed reader.
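(For anybody wondering what the general shape of such a thing looks like, here’s a tiny, generic sketch in PHP of the final step – turning a list of already-scraped items into an RSS 2.0 document. It’s not RSSey’s code, it knows nothing about Larson’s actual markup, and the field names are invented for illustration.)

// Generic sketch: given scraped items (title, URL, timestamp), emit a minimal
// personal-use RSS 2.0 feed. The hard part - politely scraping the source site -
// is deliberately left out.
function build_rss_feed( string $feed_title, string $feed_url, array $items ): string {
  $xml  = '<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel>';
  $xml .= '<title>' . htmlspecialchars( $feed_title ) . '</title>';
  $xml .= '<link>' . htmlspecialchars( $feed_url ) . '</link>';
  $xml .= '<description>Personal-use feed</description>';
  foreach ( $items as $item ) {
    $xml .= '<item><title>' . htmlspecialchars( $item['title'] ) . '</title>';
    $xml .= '<link>' . htmlspecialchars( $item['url'] ) . '</link>';
    $xml .= '<pubDate>' . date( DATE_RSS, $item['timestamp'] ) . '</pubDate></item>';
  }
  return $xml . '</channel></rss>';
}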
I’m afraid I’m not going to publicly share1 a ready-to-go feed URL for this one, unlike my BBC News Without The Sport
feed, because a necessary side-effect of the way it works is that the ads are removed. And if I were to republish a feed containing The Far Side website cartoons but with
the ads stripped I’d be guilty of, like, all the ethical and legal faults that Larson was trying to mitigate by putting his new website up in the first place! I love The
Far Side and I certainly don’t want to violate its copyright!
But – at least until Larson’s web developer puts up a proper feed (with or without ads) – for those of us who like our comics delivered fresh to us every morning, here’s the source code (as an RSSey feed definition) you
could use to run your own personal-use-only “give me The Far Side Daily Dose as an RSS feed” middleware.
Thanks for deciding to join us on the Internet, Gary. I hear it’s going to be a big thing, someday!
Footnotes
1 Friends are welcome to contact me off-blog for an address if they like, if they promise
to be nice and ethical about it.
Since I reported last summer that I’d accepted a job offer with Automattic I’ve
been writing about my experience of joining and working with Automattic on a nice, round 128-day schedule.
My first post covered the first 128 days: starting from the day I decided (after 15 years of
watching-from-afar) that I should apply to work there through to 51 days before my start date. It described my recruitment process, which is famously comprehensive and intensive. For me
this alone was hugely broadening! My first post spanned the period up until I started getting access to Automattic’s internal systems, a month and a half before my start date. If you’re
interested in my experience of recruitment at Automattic, you should go and read that post. This post, though, focusses on my induction,
onboarding, and work during my first two months.
With a month to go before I started, I thought it time to start setting up my new “office” for my teleworking. Automattic offered to buy me a new desk and chair, but I’m not ready to
take them up on that yet: I’m waiting until after my (hopefully-)upcoming house move so I know how much space I’ve got to work with/what I need! There’s still plenty for a new
developer to do, though: plugging in and testing my new laptop, monitor, and accessories, and doing all of the opinionated tweaks that make one’s digital environment one’s own –
preferred text editor, browser, plugins, shell, tab width, mouse sensitivity, cursor blink rate… important stuff like that.
For me, this was the cause of the first of many learning experiences, because nowadays I’m working on a MacBook! Automattic doesn’t
require you to use a Mac, but a large proportion of the company does and I figured that learning to use a Mac effectively would be easier than learning my new codebase on a
different architecture than most of my colleagues.
I’ve owned a couple of Intel Macs (and a couple of Hackintoshes) but I’ve never gotten on with them well enough to warrant
becoming an advanced user, until now. I’ll probably write in the future about my experience of making serious use of a Mac after a history of mostly *nix and Windows machines.
Automattic also encouraged me to kit myself up with a stack of freebies to show off my affiliation, so I’ve got a wardrobe-load of new t-shirts and stickers too. It’s hard to argue that
we’re a company and not a cult when we’re all dressed alike, and that’s not even mentioning a colleague of mine with two WordPress-related tattoos, but there we have it.
Role and Company
I should take a moment to say what I do. The very simple version, which I came up with to very briefly describe my new job to JTA‘s mother, is: I write software that powers an online shop that sells software that powers online shops.
You want the long version? I’m a Code Magician (you may say it’s a silly job title,
I say it’s beautiful… but I don’t necessarily disagree that it’s silly too) with Team Alpha at Automattic. We’re the engineering team behind WooCommerce.com, which provides downloads of the Web’s most-popular eCommerce platform… plus hundreds of
free and premium extensions.
There’s a lot of stuff I’d love to tell you about my role and my new employer, but there’s enough to say here about my induction, so I’ll be saving the following topics for a future
post:
Chaos: how Automattic produces order out of entropy, seemingly against all odds,
Transparency and communication: what it’s like to work in an environment of radical communication and a focus on transparency,
People and culture: my co-workers, our distributed team, and what is lost by not being able to “meet around the coffee machine” (and how we work to artificially
recreate that kind of experience),
Distributed working: this is my second foray into a nearly-100% remote-working environment; how’s it different to before?
To be continued, then.
Onboarding (days 1 through 12)
I wasn’t sure how my onboarding at Automattic could compare to that which I got when I started at the Bodleian. There, my then-line manager Alison‘s obsession with preparation had me arrive to a
thoroughly-planned breakdown of everything I needed to know and everybody I needed to meet over the course of my first few weeks. That’s not necessarily a bad thing, but it
leaves little breathing room in an already intense period!
By comparison, my induction at Automattic was far more self-guided: each day in my first fortnight saw me tackling an agenda of things to work on and – in a pleasing touch I’ve seen
nowhere else – a list of expectations resulting from that day. Defined expectations, day-by-day, are especially good as a tool for gauging one’s progress, and it’s a nice touch that
I’ll be adapting should I ever have to write another induction plan for somebody else.
Skipping the usual induction topics of where the fire escapes and toilets are (it’s your house; you tell us!), how to dial an outside line (yeah, we don’t really do that here),
what to do to get a key to the bike shed and so on saves time, of course! But it also removes an avenue for more-casual interpersonal contact (“So how long’ve you been working here?”)
and ad-hoc learning (“So I use that login on this system, right?”). Automattic’s aware of this and has an entire culture about making information accessible, but it
takes additional work on the part of a new hire to proactively seek out the answers they need, when they need them: searching the relevant resources, or else finding out who to ask… and
being sure to check their timezone before expecting an immediate response.
Onboarding at Automattic is necessarily at least somewhat self-driven, and it’s clear in hindsight that the recruitment process is geared towards selection of individuals who can work
in this way because it’s an essential part of how we work in general. I appreciated the freedom to carve my own path as I learned the ropes, but it took me a little while to
get over my initial intimidation about pinging a stranger to ask for a video/voice chat to talk through something!
Meetup (days 14 through 21)
I’d tried to arrange my migration to Automattic to occur just before their 2019 Grand Meetup, when virtually the entire company gets together in one place for an infrequent but important gathering, but I couldn’t make it work and just barely missed it. Luckily, though, my team had
planned a smaller get-together in South Africa which coincided with my second/third week, so I jetted off to get some
facetime with my colleagues.
My colleague and fellow newbie Berislav‘s contract started a few hours after he landed in Cape Town, and it was helpful to my
journey to see how far I’d come over the last fortnight through his eyes! He was, after all, on the same adventure as me, only a couple of weeks behind, and it was reassuring
to see that I’d already learned so much as well as to be able to join in with helping him get up-to-speed, too.
By the time I left the meetup I’d learned as much again as I had in the two weeks prior about my new role and my place in the team. I’d also
learned that I’m pretty terrible at surfing, but luckily that’s not among the skills I have to master in order to become a valuable
developer to Automattic.
Happiness Rotation (days 23 through 35)
A quirk of Automattic – and indeed something that attracted me to them, philosophically – is that everybody spends two weeks early in their first year and a week in every subsequent
year working on the Happiness Team. Happiness at Automattic is what almost any other company would call “tech support”, because Automattic’s full of job titles and team names that are,
frankly, a bit silly (sorry: flipping awesome). I like this “Happiness Rotation” as a concept because it keeps the entire company focussed on customer issues and the things that
really matter at the coal face. It also fosters a broader understanding of our products and how they’re used in the real world, which is particularly valuable to us developers who can
otherwise sometimes forget that the things we produce have to be usable by real people with real needs!
One of the things that made my Happiness Rotation the hardest was also one of the things that made it the most-rewarding: that I didn’t really know most of the products I was
supporting! This was a valuable experience because I was able to learn as-I-went-along, working alongside my (amazingly supportive and understanding) Happiness Team co-workers: the
people who do this stuff all the time. But simultaneously, it was immensely challenging! My background in WordPress in general, plenty of tech support practice at Three Rings, and even my experience of email support at Samaritans put me in a strong position in-general…
but I found that I could very-quickly find myself out of my depth when helping somebody with the nitty-gritty of a problem with a specific WooCommerce extension.
Portering and getting DRI (days 60 through 67)
I’ve also had the opportunity during my brief time so far with Automattic to take on a few extra responsibilities within my team. My team rotates weekly responsibility for what they
call the Porter role. The Porter is responsible for triaging pull requests and monitoring blocking issues and acting as a first port-of-call to stakeholders: you know, the
stuff that’s important for developer velocity but that few developers want to do all the time. Starting to find my feet in my team by now, I made it my mission during my first
shift as Porter to get my team to experiment with an approach for keeping momentum on long-running issues, with moderate success (as a proper continuous-integration shop, velocity is
important and measurable). It’s pleased me so far to feel like I’m part of a team where my opinion matters, even though I’m “the new guy”.
I also took on my first project as a Directly Responsible Individual, which is our fancy term for the person who makes sure the project runs to schedule, reports on progress etc.
Because Automattic more strongly than any other place I’ve ever worked subscribes to a dogfooding strategy, the
woocommerce.com online store for which I share responsibility runs on – you guessed it – WooCommerce! And so the first project for which I’m directly responsible is the upgrade of woocommerce.com to the latest version of
WooCommerce, which went into beta last month. Fingers crossed for a smooth deployment.
There’s so much I’d love to say about Automattic’s culture, approach to development, people, products, philosophy, and creed, but that’ll
have to wait for another time. For now, suffice to say that I’m enjoying this exciting and challenging new environment and I’m looking forward to reporting on them in another 128 days
or so.