Last week I built Fox, the newest addition to our home network. Fox, whose specification called for not one, not two, not three but four 12-terabyte hard disk drives, was built principally as a souped-up NAS device – a central place for us all to safely hold and control access to important files rather than having them spread across our various devices – but she’s got a lot more going on than that, too.
Fox has:
Enough hard drive space to give us 36TB of storage capacity plus 12TB of parity, allowing any one of the drives to fail without losing any data (there’s a sketch of how single-drive parity works just after this list).
“Headroom” sufficient to double its capacity in the future without significant effort.
A mediumweight graphics card to assist with real-time transcoding, helping her to convert and stream audio and videos to our devices in whatever format they prefer.
A beefy processor and sufficient RAM to run a dozen virtual machines supporting a variety of functions like software
development, media ripping and cataloguing, photo rescaling, reverse-proxying, and document scanning (a planned future purpose for Fox is to have a network-enabled scanner near our
“in-trays” so that we can digitise and OCR all of our post and paperwork into a searchable, accessible, space-saving
collection).
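For anybody wondering how a single parity drive can cover three data drives: in this style of array, parity is (conceptually) just the bitwise XOR of the data drives, which is enough to rebuild any one missing drive from the survivors. Here’s a toy illustration in Python – my own sketch, not Unraid’s actual implementation:

```python
# Toy model: each "drive" is one byte; a real array XORs whole blocks.
data_drives = [0b10110010, 0b01101100, 0b11100001]

# The parity drive stores the XOR of all the data drives.
parity = 0
for d in data_drives:
    parity ^= d

# Simulate losing drive 1, then rebuild it from parity + the survivors:
rebuilt = parity
for i, d in enumerate(data_drives):
    if i != 1:
        rebuilt ^= d

assert rebuilt == data_drives[1]
print(f"rebuilt drive 1: {rebuilt:#010b}")
```

Lose two drives at once, though, and the XOR no longer carries enough information to reconstruct either of them – hence “any one of the drives”.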
The last time I filmed myself building a PC was when I built Cosmo, a couple of desktops ago. He turned out to be a bit of a nightmare: he was my first fully-watercooled computer and he leaked everywhere. By the time I’d done all the fixing and re-fixing to make him behave nicely, I wasn’t happy with the video footage and I never uploaded it. I’d been wary, almost-superstitious, about filming a build since then, but I shot a timelapse of Fox’s construction and it
turned out pretty well: you can watch it below or on YouTube or QTube.
The timelapse slows to real-time about a minute in, to illustrate a point about the component test I did with only a CPU (and cooler), PSU, and RAM attached. It also shows something I routinely do when building computers but which I only recently discovered isn’t commonly practised: the easiest way to power on a computer without attaching a power switch is simply to bridge the power-switch pins with your screwdriver!
Fox is running Unraid, an operating system basically designed for exactly these kinds of purposes. I’ve been super-impressed by the
ease-of-use and versatility of Unraid and I’d recommend it if you’ve got a similar NAS project in your future! I’d also like
to sing the praises of the Fractal Design Node 804 case: it’s not got quite as many bells-and-whistles
as some cases, but its dual-chamber design is spot-on for a multipurpose NAS, giving ample room for both full-sized expansion
cards and heatsinks and lots of hard drives in a relatively compact space.
I think that CSS would be greatly helped if we solemnly state that “CSS4 is here!” In this post I’ll try to convince you of my viewpoint.
I am proposing that we web developers, supported by the W3C CSS WG, start saying “CSS4 is here!” and excitedly chatter about how it will hit the market any moment now and transform
the practice of CSS.
Of course “CSS4” has no technical meaning whatsoever. All current CSS specifications have their own specific
versions ranging from 1 to 4, but CSS as a whole does not have a version, and it doesn’t need one, either.
Regardless of what we say or do, CSS4 will not hit the market and will not transform anything. It also does not describe any technical reality.
But if you’ve got more than a little web savvy you might still be surprised to hear me say that CSS4 is here, or even that it’s a “thing” at all. Well… that’s because it isn’t. Not officially. Just like JavaScript’s versioning has gone all evergreen these last few years,
CSS has gone the same way, with different “modules” each making their way through the standards and implementation processes
independently. Which is great, in general, by the way – we’re seeing faster development of long-overdue features now than we have through most of the Web’s history – but it
does make it hard to keep track of what’s “current” unless you follow along watching very closely. Who’s got time for that?
When CSS2 gained prominence at around the turn of the millennium it was revolutionary, and part of the reason for that
– aside from the fact that it gave us all some features we’d wanted for a long time – was that it gave us a term to rally behind. This browser already supports it, that browser’s
getting there, this other browser supports it but has a f**ked-up box model (you all know the one I’m talking about)… we at last had an overarching term to discuss what was supported,
what was new, what was ready for people to write articles and books about. Nobody’s going to buy a book that promises to teach them “Selectors Level 3, Fonts Level 3, Writing Modes Level 3, and Containment Level 1”: that title’s not even going to fit on the cover. But if we wrapped up a snapshot of what’s current and called it CSS4… now that’s going to sell.
Can we show the CSS WG that there’s mileage in this idea and make it happen? Oh, I hope so. Because while the
modular approach to CSS is beautiful and elegant and progressive… I’m afraid that we can’t use it to inspire junior developers.
Also: I don’t want this joke to forever remain among the top results
when searching for CSS4…
I also ask you not to discard all this nonsense right away, but at least to give it a fair round of thought. My recommendations, if applied in their entirety, can radically change the Git experience. They can move Git from the advanced, hacky league to the mainstream and enable adoption of VCS by much wider groups of people. While this is still a concept and obviously requires a hell of a job to become reality, my hope is that somebody working on a Git interface can borrow from these ideas, and we’ll all get a better Git client someday.
I love Git. But I love it more conceptually than I do practically. Everything about its technical design is elegant and clever. But most of how you have to act when
you’re using it just screams “I was developed by lots of engineers and by exactly zero UX developers.” I can’t imagine why.
Nikita proposes ways in which it can be “fixed” while retaining 100% backwards-compatibility, and that’s bloody awesome. I hope the Git core team are paying attention, because these
ideas are gold.
I lied. According to US Army Technical Manual 0, The Soldier as a System, “attack helicopter” is a gender identity, not a biological sex. My dog tags and Form 3349 say my body is an XX-karyotype somatic female.
But, really, I didn’t lie. My body is a component in my mission, subordinate to what I truly am. If I say I am an attack helicopter, then my body, my sex, is too. I’ll prove it to
you.
When I joined the Army I consented to tactical-role gender reassignment. It was mandatory for the MOS I’d tested into. I was nervous. I’d never been anything but a woman before.
But I decided that I was done with womanhood, over what womanhood could do for me; I wanted to be something furiously new.
To the people who say a woman would’ve refused to do what I do, I say—
Isn’t that the point?
…
This short story almost-certainly isn’t what you’d expect, based on the title. What it is sits at the intersection of science fiction and gender identity, and it’s pretty damn
good.
While being driven around England it struck me that humans are currently like the filling in a sandwich between one slice of machine — the satnav — and another — the car. Before the
invention of sandwiches the vehicle was simply a slice of machine with a human topping. But now it’s a sandwich, and the two machine slices are slowly squeezing out the human filling
and will eventually be stuck directly together with nothing but a thin layer of API butter. Then the human will be a superfluous thing, perhaps a little gherkin on the side of the
plate.
While we were driving I was reading the directions from a mapping app on my phone, with the sound off, checking the upcoming turns, and giving verbal directions to Mary, the driver. I
was an extra layer of human garnish — perhaps some chutney or a sliced tomato — between the satnav slice and the driver filling.
What Phil’s describing is probably familiar to you: the experience of one or more humans acting as the go-between to allow two machines to communicate. If you’ve ever re-typed a document that was visible on another screen, read somebody a password over the phone, given directions from a digital map, or used a pendrive to carry files between computers that weren’t talking to one another properly, then you’ve done it: you’ve been the soft wet meaty middleware that bridged two already semi-automated (but not quite automated enough) systems.
This generally happens because of the lack of a common API (a communications protocol) between two systems. If your
phone and your car could just talk it out then the car would know where to go all by itself! Or, until we get self-driving cars, it could at least provide the directions in a
way that was appropriately-accessible to the driver: heads-up display, context-relative directions, or whatever.
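To make the idea concrete, here’s a minimal sketch of what a phone-to-car handoff message might look like. Every name in it – the protocol identifier, the fields, the transport – is hypothetical, because the point is precisely that no such common standard exists:

```python
# Hypothetical phone-to-car navigation handoff; none of these field names
# belong to any real standard.
import json

handoff = {
    "protocol": "nav-handoff/1",                      # made-up identifier
    "destination": {"lat": 52.0406, "lon": -0.7594},  # where we're headed
    "preferences": {"avoid_motorways": False},
}

# The phone would serialise this and send it over whatever transport the
# two devices share (Bluetooth, Wi-Fi, USB); the car would deserialise it
# and take over the navigation, with no human middleware required.
print(json.dumps(handoff))
```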
It also sometimes happens when the computer-to-human interface isn’t good enough; for example, I’ve often offered to navigate for a driver (and used my phone for the purpose) because I can add a layer of common sense. There’s no need for me to tell my buddy to take the second exit from every roundabout in Milton Keynes (did you know that the town has 930 of them?) – I can just tell them that I’ll let them know when they have to change roads, and trust that they’ll just keep going straight ahead until then.
Finally, we also sometimes find ourselves acting as a go-between to filter and improve information flow when the computers don’t have enough information to do better by themselves. I’ll
use the fact that I can see the road conditions and the lane markings and the proposed route ahead to tell a driver to get into the right lane with an appropriate amount of
warning. Or if the driver says “I can see signs to our destination now, I’ll just keep following them,” I can shut up unless something goes awry. Your in-car SatNav can’t do that
because it can’t see and interpret the road ahead of you… at least not yet!
But here’s my thought: claims of an upcoming AI winter aside, it feels to me like we’re making faster progress in technologies related to human-computer interaction – voice and natural-language interfaces, popularised by virtual assistants like Siri and Alexa and by chatbots – than we are in technologies related to universal computer interoperability. Voice-controlled computers are hip and exciting and attract a lot of investment, but interoperable systems are hampered by two major things. The first thing holding back interoperability is business interests: for the longest while, for example, you couldn’t use Amazon Prime Video on a Google Chromecast because the two companies couldn’t play nice. The second thing is a lack of interest by manufacturers in developing open standards: every smart home appliance manufacturer wants you to use their app, and so your smart speaker manufacturer needs to implement code to talk to each and every one of them, and when they stop supporting one… well, suddenly your thermostat jumps permanently from smart mode to dumb mode.
A thing that annoys me is that, from a technical perspective, making an open standard should be a much easier task than making an AI that can understand what a human is asking for or drive a car safely or whatever we’re using them for this week. That’s not to say that technical standards aren’t difficult to get right – they absolutely are! – but we’ve been practising doing it for many, many decades! The very existence of the Internet over which you’ve been delivered this article is proof that computer interoperability is a solvable problem. For anybody who thinks that the interoperability brought about by the Internet was inevitable or didn’t take lots of hard work, I direct you to Darius Kazemi’s re-reading of the early standards discussions, which I first plugged a year ago. The important thing is that people were working on it, and that’s something we’re not really seeing in the Internet of Things space.
On our current trajectory, it’s absolutely possible that our virtual assistants will reach a point of becoming perfectly “human” communicators long before we can reach agreements about
how they should communicate with one another. If that’s the case, those virtual assistants will probably fall back on using English-language voice communication as their lingua
franca. In that case, it’s not unbelievable that ten to twenty years from now, the following series of events might occur:
You want to go to your friends’ house, so you say out loud “Alexa, drive me to Bob’s house in five minutes.” Alexa responds “I’m on it; I’ll let you know more in a few
minutes.”
Alexa doesn’t know where Bob’s house is, but it knows it can get it from your netbook. It opens a voice channel over your wireless network (so you don’t have to “hear” it) and says
“Hey Google, it’s Alexa [and here’s my credentials]; can you give me the address that [your name] means when they say ‘Bob’s house’?” And your netbook responds by reading
out the address details, which Alexa then understands.
Alexa doesn’t know where your self-driving car is right now and whether anybody’s using it, but it has a voice control system and a cellular network connection, so Alexa phones up your car and says: “Hey SmartCar, it’s Alexa [and here’s my credentials]; where are you and when were you last used?” The car replies “I’m on the driveway, I’m fully-charged, and I was last used three hours ago by [your name].” So Alexa says “Okay, boot up, turn on climate control, and prepare to make a journey to [Bob’s address].” (In this future world, most voice communication over telephones is done by robots: your virtual assistant calls your doctor’s virtual assistant to make you an appointment, and you and your doctor just get events in your calendars, for example, because nobody manages to come up with a universal API for medical appointments.)
Alexa responds “Okay, your SmartCar is ready to take you to Bob’s house.” And you have no idea about the conversations that your robots have been having behind your back.
I’m not saying that this is a desirable state of affairs. I’m not even convinced that it’s likely. But it’s certainly possible if IoT development keeps focussing on shiny friendly conversational interfaces at the expense of practical, powerful technical standards. Our already
topsy-turvy technologies might get weirder before they get saner.
But if English does become the “universal API” for robot-to-robot communication, despite all engineering common
sense, I suggest that we call it “sandwichware”.
It’s worth noting here that the idea that a parent should be a caretaker, educator, and entertainer rolled into one is not only historically, but also culturally specific. “There
are lots of cultures where [parent-child play is] considered absolutely inappropriate—a parent would never get down on their knees and play with the children. Playing is something
children do, not something adults do,” developmental psychologist Angeline Lillard said in an interview. “And that’s just
fine. There’s no requirement for playing.”
Differences in practices around parent-child play exist within American subgroups, too. Sociologist Annette Lareau has
observed a gap in beliefs about parent-child play between working-class/poor parents and middle-class parents in the United States. Working-class and poor parents in her study
held a view that they were responsible for “supervision in custodial matters” (Did the child get to sleep on time? Does the child have sneakers that fit?) and “autonomy in leisure
matters,” while the middle-class parents engaging in what Lareau termed “concerted cultivation” invested themselves heavily in children’s play. Ultimately, the poorer kids, Lareau
found, “tended to show more creativity, spontaneity, enjoyment, and initiative in their leisure pastimes than we saw among middle-class children at play in organized activities.”
…
Interesting article (about 10 minutes reading), so long as you come at it from at least a little bit of an academic, anthropological perspective and so aren’t expecting to come
out of it with concrete, actionable parenting advice!
Engaging in some kinds of play with your kids can be difficult. I’ve lost count of the hours spent in imaginative play with our 6-year-old, trying to follow along with the complex narrative and characters she’s assembled and to ad-lib along (and how many times she’s told me off for my character not making the choices she’d hoped they would, because she’s at
least a little controlling over the stories she tells!). But I feel like it’s also a great way to engage with them, so it’s worth putting your devices out of sight, getting down on the
carpet, and playing along… at least some of the time. The challenge is finding the balance between being their perpetual playmate and ensuring that they’re encouraged to “make their own
fun”, which can be an important skill in being able to fight off boredom for the rest of their lives.
If I ever come up with a perfect formula, I’ll tell you; don’t hold your breath! In the meantime, reading this article might help reassure you that despite there almost-certainly not
being a “right way”, there are plenty of “pretty good ways”, and the generally-good human values of authenticity and imagination and cooperation are great starting points for
playing with your children, just like they are for so many other endeavours. Your kids are probably going to be okay.
I’m posting this on the last day of 2019. As I write it, the second post I ever made on
meyerweb says it was published “20 years, 6 days ago”. It was published on the second-to-last day of 1999, which was 20 years and one day ago.
What I realized, once the discrepancy was pointed out to me (hat tip: Eric Portis), is that the five-day error is there because in the two decades since I posted it, there have been five leap days. When I wrote the code to construct those relative-time strings, I either didn’t think about leap days, or if I did, I decided a day or two here and there wouldn’t matter all that much.
Which is to say, I failed to think about the consequences of my code running over long periods of time. Maybe a day or two of error isn’t all that big a deal, in human-friendly relative-time output. If a post was six years and two days ago but the code says six years and one day, well, nobody will really care that much even if they notice. But five days is noticeable, and what’s more, it’s a little human-unfriendly. It jars.
…
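The discrepancy Eric describes is easy to reproduce. Here’s a minimal sketch in Python (not his actual code, which I haven’t seen) contrasting naive 365-day arithmetic with a calendar-aware version:

```python
from datetime import date

published = date(1999, 12, 30)  # the second-to-last day of 1999
today = date(2019, 12, 31)      # the last day of 2019

elapsed = (today - published).days  # 7306 days, leap days included

# Naive arithmetic: pretend every year is exactly 365 days long.
years, days = divmod(elapsed, 365)
print(f"{years} years, {days} days ago")  # "20 years, 6 days ago"

# Calendar-aware arithmetic: count whole years, then the remainder.
anniversary = published.replace(year=published.year + 20)
print(f"20 years, {(today - anniversary).days} day(s) ago")  # "20 years, 1 day(s) ago"
```

The five leap days between 1999 and 2019 are exactly the five days of drift.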
As I mentioned in my comments on a repost last week, I work to try to make the things I publish to this site last. But that’s not to say that problems can’t creep in, either because of fundamental bugs left unnoticed until later on (such as the image recompression problem that’s recently led to some of my older images going wonky; I’m working on it) or else because of environmental changes, e.g. in the technologies that are supported and the ways in which they’re used. The latter are helped by standards and by an adherence to them, but the former will trip over Web developers time and time again, and it’s possible that there’s nothing we can do about it.
No system is perfect, and we don’t have time to engineer every system, every site, every page in a way that near-guarantees its longevity; not by a long shot. I tripped myself over just
the year before last when I added Content-Security-Policy headers to my site and promptly broke every embedded YouTube or Vimeo video
because they didn’t fit the newly (and retroactively) enforced pattern of allowable content. Such problems are easy to create when you’re maintaining a long-running system with a lot of
data. I’m only talking about my blog, but larger, older and/or more-complex systems (of which I’ve worked on a few!) come with their own multitudinous challenges.
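For illustration, here’s the shape of the problem (a hypothetical policy, not my actual header): a Content-Security-Policy whose frame-src doesn’t list the video hosts silently blocks every embedded player, including those in posts written years before the header existed.

```
# Too strict: historical YouTube/Vimeo <iframe> embeds stop rendering,
# because frame-src falls back to default-src ('self' only).
Content-Security-Policy: default-src 'self'

# Relaxed to permit those embeds (hosts shown are illustrative):
Content-Security-Policy: default-src 'self'; frame-src 'self' https://www.youtube.com https://player.vimeo.com
```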
That said, the Web has demonstrated a resilience that surpasses most of what is expected in consumer computing. If you want to run a video game from 1994 or even 2001 on a modern computer, you’re likely to find that you have to put in considerably more work than you would have on the day it was released! But even some of the oldest webpages still in existence remain usable today.
Occasionally, though, a “hip” modern technology without the backing of widespread browser standards comes along and creates a dark age. Flash created such a dark age; now there are millions of Flash-dependent web pages that simply don’t work any longer. Java created another. And I worry that the unnecessary overuse of front-end rendering technologies is creating a third that we’re living through right now, oblivious to the data we’re creating and losing.
Update 23 November 2022: This isn’t how I consume The Far Side in my RSS reader any more. There’s an updated guide.
Prior to his retirement in 1995 I managed to amass a collection of almost all of Gary Larson’s The Far Side books as well as a couple of calendars and other thingamabobs. After 24 years of silence I didn’t expect to hear anything more from him, and so I was as surprised as most of the Internet was when he re-emerged last year with a brand new website – his first ever. Woah.
Larson’s hinted that there might be new and original content there someday, but for the time being I’m just loving that I can read The Far Side comics (legitimately) via the
Web for the first time! The site’s currently publishing a “Daily Dose” of classic strips, which is awesome. But… I don’t want to have to go to a website to get comics every day. Nor do I want to have to remember which days I’ve caught up with, either. That’s a job for computers, right? And it’s a solved problem: RSS (which has been around for almost as long as Larson hasn’t) and similar technologies allow a website to publicise that it’s got updates available in a way that people can “subscribe” to, so I should just use that, right?
Except… the new The Far Side website doesn’t have an RSS feed. Boo! Luckily, I’m not above automating the creation of feeds for websites that I wish had them, even (or
perhaps especially) where that involves a little reverse-engineering of online comics. So with a
little thanks to my RSS middleware RSSey… I can now read daily The Far
Side comics in the way that’s most-convenient to me: right alongside my other subscriptions in my feed reader.
I’m afraid I’m not going to publicly share1 a ready-to-go feed URL for this one, unlike my BBC News Without The Sport
feed, because a necessary side-effect of the way it works is that the ads are removed. And if I were to republish a feed containing The Far Side website cartoons but with
the ads stripped I’d be guilty of, like, all the ethical and legal faults that Larson was trying to mitigate by putting his new website up in the first place! I love The
Far Side and I certainly don’t want to violate its copyright!
But – at least until Larson’s web developer puts up a proper feed (with or without ads) – for those of us who like our comics delivered fresh to us every morning, here’s the source code (as an RSSey feed definition) you
could use to run your own personal-use-only “give me The Far Side Daily Dose as an RSS feed” middleware.
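If you’re not an RSSey user, the general shape of the trick translates into a few lines of any scripting language. Here’s a rough Python sketch of the approach – the URL is real, but the CSS selector is a guessed placeholder that you’d need to adapt to the site’s actual markup:

```python
# Rough sketch, for personal use only: scrape the "Daily Dose" page and
# emit a minimal RSS feed. The selector below is a placeholder guess.
from xml.sax.saxutils import escape

import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.thefarside.com/")
soup = BeautifulSoup(page.text, "html.parser")

items = []
for img in soup.select(".daily-dose img"):  # placeholder selector!
    src = img.get("data-src") or img.get("src")
    if src:
        html = escape(f'<img src="{src}" alt="The Far Side">')
        items.append(
            "<item><title>The Far Side: Daily Dose</title>"
            f"<description>{html}</description></item>"
        )

feed = (
    "<?xml version='1.0' encoding='UTF-8'?>"
    "<rss version='2.0'><channel>"
    "<title>The Far Side: Daily Dose</title>"
    "<link>https://www.thefarside.com/</link>"
    "<description>Personal-use feed</description>"
    + "".join(items)
    + "</channel></rss>"
)
print(feed)
```

Point your feed reader at the output of something like that (served locally) and you get your Daily Dose alongside everything else.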
Thanks for deciding to join us on the Internet, Gary. I hear it’s going to be a big thing, someday!
Footnotes
1 Friends are welcome to contact me off-blog for an address if they like, if they promise
to be nice and ethical about it.
Ah, scissors. They’re important enough that we have an emoji for them. On your device, it appears as ✂️. Unlike the real world tool it represents, the emoji’s job is to convey the
idea, especially at small sizes. It doesn’t need to be able to swing or cut things. Nevertheless, let’s judge them on that irrelevant criterion.
…
I’ve watched from afar as the Internet collapsed in on itself during a debate about what a
cheeseburger looks like and I’ve raged a little myself at the popular depiction of juggling, but this newly-identified emoji failure is a
whole new thing. Sure, emoji are supposed to be representative, not realistic. But if they have to cover sufficient diversity to include gender-neutral representations (which they absolutely should, and should have done from the start, but at least we’re fixing some of the issues in hindsight, like Ido did), then perhaps they could also include sufficient attention to detail that the tools they depict would actually, y’know, work?
West Germany’s 1974 World Cup victory happened closer to the first World Cup in 1930 than to today.
The Wonder Years aired from 1988 to 1993 and depicted the years between 1968 and 1973. When I watched the show, it felt like it was set in a time long ago. If a new Wonder Years premiered today, it would cover the years between 2000 and 2005.
Also, remember when Jurassic Park, The Lion King, and Forrest Gump came out in theaters? Closer to the moon landing than today.
…
These things come around now and again, but I’m not sure of the universal validity of observing that a memorable event is now closer to another memorable event than it is to the present day. I don’t think that the relevance of events is as linear as that. Instead, perhaps, relevance holds roughly steady for a while and then falls away sharply. Where the drop-off in relevance occurs is hard to pinpoint and it probably varies a lot by the type of event that’s being remembered: nobody seems to care about what damn terrible thing Trump did last month or the month before when there’s some new terrible thing he did just this morning, for example (I haven’t looked at the news yet this morning, but honestly whenever you read this post he’ll probably have done something awful).
Nonetheless, this post on Wait But Why was a fun distraction, even if it’s been done before. Maybe the last time it happened was so long ago it’s irrelevant now?
Since I reported last summer that I’d accepted a job offer with Automattic I’ve
been writing about my experience of joining and working with Automattic on a nice, round 128-day schedule.
My first post covered the first 128 days: starting from the day I decided (after 15 years of
watching-from-afar) that I should apply to work there through to 51 days before my start date. It described my recruitment process, which is famously comprehensive and intensive. For me
this alone was hugely broadening! My first post spanned the period up until I started getting access to Automattic’s internal systems, a month and a half before my start date. If you’re
interested in my experience of recruitment at Automattic, you should go and read that post. This post, though, focusses on my induction,
onboarding, and work during my first two months.
With a month to go before I started, I thought it time to start setting up my new “office” for my teleworking. Automattic offered to buy me a new desk and chair, but I’m not ready to take them up on that yet: I’m waiting until after my (hopefully-)upcoming house move, so I know how much space I’ve got to work with and what I need! There’s still plenty for a new
developer to do, though: plugging in and testing my new laptop, monitor, and accessories, and doing all of the opinionated tweaks that make one’s digital environment one’s own –
preferred text editor, browser, plugins, shell, tab width, mouse sensitivity, cursor blink rate… important stuff like that.
For me, this was the cause of the first of many learning experiences, because nowadays I’m working on a MacBook! Automattic doesn’t
require you to use a Mac, but a large proportion of the company does, and I figured that learning to use a Mac effectively would be easier than learning my new codebase on a different platform from the one most of my colleagues use.
I’ve owned a couple of Intel Macs (and a couple of Hackintoshes) but I’ve never gotten on with them well enough to warrant
becoming an advanced user, until now. I’ll probably write in the future about my experience of making serious use of a Mac after a history of mostly *nix and Windows machines.
Automattic also encouraged me to kit myself up with a stack of freebies to show off my affiliation, so I’ve got a wardrobe-load of new t-shirts and stickers too. It’s hard to argue that
we’re a company and not a cult when we’re all dressed alike, and that’s not even mentioning a colleague of mine with two WordPress-related tattoos, but there we have it.
Role and Company
I should take a moment to say what I do. The very simple version, which I came up with to very briefly describe my new job to JTA‘s mother, is: I write software that powers an online shop that sells software that powers online shops.
You want the long version? I’m a Code Magician (you may say it’s a silly job title,
I say it’s beautiful… but I don’t necessarily disagree that it’s silly too) with Team Alpha at Automattic. We’re the engineering team behind WooCommerce.com, which provides downloads of the Web’s most-popular eCommerce platform… plus hundreds of
free and premium extensions.
There’s a lot of stuff I’d love to tell you about my role and my new employer, but there’s enough to say here about my induction, so I’ll be saving the following topics for a future post:
Chaos: how Automattic produces order out of entropy, seemingly against all odds,
Transparency and communication: what it’s like to work in an environment of radical communication and a focus on transparency,
People and culture: my co-workers, our distributed team, and what is lost by not being able to “meet around the coffee machine” (and how we work to artificially
recreate that kind of experience),
Distributed working: this is my second foray into a nearly-100% remote-working environment; how’s it different to before?
To be continued, then.
Onboarding (days 1 through 12)
I wasn’t sure how my onboarding at Automattic could compare to that which I got when I started at the Bodleian. There, my then-line manager Alison‘s obsession with preparation had me arrive at a thoroughly-planned breakdown of everything I needed to know and everybody I needed to meet over the course of my first few weeks. That’s not necessarily a bad thing, but it
leaves little breathing room in an already intense period!
By comparison, my induction at Automattic was far more self-guided: each day in my first fortnight saw me tackling an agenda of things to work on and – in a pleasing touch I’ve seen nowhere else – a list of expectations resulting from that day. Defined day-by-day expectations are an especially good tool for gauging one’s progress, and they’re a nice touch that I’ll be adapting should I ever have to write an induction plan for somebody else.
Skipping the usual induction topics of where the fire escapes and toilets are (it’s your house; you tell us!), how to dial an outside line (yeah, we don’t really do that here),
what to do to get a key to the bike shed and so on saves time, of course! But it also removes an avenue for more-casual interpersonal contact (“So how long’ve you been working here?”)
and ad-hoc learning (“So I use that login on this system, right?”). Automattic’s aware of this and has an entire culture about making information accessible, but it
takes additional work on the part of a new hire to proactively seek out the answers they need, when they need them: searching the relevant resources, or else finding out who to ask… and
being sure to check their timezone before expecting an immediate response.
Onboarding at Automattic is necessarily at least somewhat self-driven, and it’s clear in hindsight that the recruitment process is geared towards selection of individuals who can work
in this way because it’s an essential part of how we work in general. I appreciated the freedom to carve my own path as I learned the ropes, but it took me a little while to
get over my initial intimidation about pinging a stranger to ask for a video/voice chat to talk through something!
Meetup (days 14 through 21)
I’d tried to arrange my migration to Automattic to occur just before their 2019 Grand Meetup, when virtually the entire company gets together in one place for an infrequent but important gathering, but I couldn’t make it work and just barely missed it. Luckily, though, my team had
planned a smaller get-together in South Africa which coincided with my second/third week, so I jetted off to get some
facetime with my colleagues.
My colleague and fellow newbie Berislav‘s contract started a few hours after he landed in Cape Town, and it was helpful to my
journey to see how far I’d come over the last fortnight through his eyes! He was, after all, on the same adventure as me, only a couple of weeks behind, and it was reassuring
to see that I’d already learned so much as well as to be able to join in with helping him get up-to-speed, too.
By the time I left the meetup I’d learned as much again as I had in the two weeks prior about my new role and my place in the team. I’d also
learned that I’m pretty terrible at surfing, but luckily that’s not among the skills I have to master in order to become a valuable
developer to Automattic.
Happiness Rotation (days 23 through 35)
A quirk of Automattic – and indeed something that attracted me to them, philosophically – is that everybody spends two weeks early in their first year and a week in every subsequent year working on the Happiness Team. Happiness at Automattic is what almost any other company would call “tech support”, because Automattic’s full of job titles and team names that are, frankly, a bit silly (sorry: flipping awesome). I like this “Happiness Rotation” as a concept because it keeps the entire company focussed on customer issues and the things that
really matter at the coal face. It also fosters a broader understanding of our products and how they’re used in the real world, which is particularly valuable to us developers who can
otherwise sometimes forget that the things we produce have to be usable by real people with real needs!
One of the things that made my Happiness Rotation the hardest was also one of the things that made it the most-rewarding: that I didn’t really know most of the products I was
supporting! This was a valuable experience because I was able to learn as-I-went-along, working alongside my (amazingly supportive and understanding) Happiness Team co-workers: the
people who do this stuff all the time. But simultaneously, it was immensely challenging! My background in WordPress in general, plenty of tech support practice at Three Rings, and even my experience of email support at Samaritans put me in a strong position in-general…
but I found that I could very-quickly find myself out of my depth when helping somebody with the nitty-gritty of a problem with a specific WooCommerce extension.
Portering and getting DRI (days 60 through 67)
I’ve also had the opportunity during my brief time so far with Automattic to take on a few extra responsibilities within my team. My team rotates weekly responsibility for what they
call the Porter role. The Porter is responsible for triaging pull requests and monitoring blocking issues and acting as a first point-of-call to stakeholders: you know, the
stuff that’s important for developer velocity but that few developers want to do all the time. Starting to find my feet in my team by now, I made it my mission during my first
shift as Porter to get my team to experiment with an approach for keeping momentum on long-running issues, with moderate success (as a proper continuous-integration shop, velocity is
important and measurable). It’s pleased me so far to feel like I’m part of a team where my opinion matters, even though I’m “the new guy”.
I also took on my first project as a Directly Responsible Individual, which is our fancy term for the person who makes sure the project runs to schedule, reports on progress etc.
Because Automattic more strongly than any other place I’ve ever worked subscribes to a dogfooding strategy, the
woocommerce.com online store for which I share responsibility runs on – you guessed it – WooCommerce! And so the first project for which I’m directly responsible is the upgrade of woocommerce.com to the latest version of
WooCommerce, which went into beta last month. Fingers crossed for a smooth deployment.
There’s so much I’d love to say about Automattic’s culture, approach to development, people, products, philosophy, and creed, but that’ll
have to wait for another time. For now, suffice to say that I’m enjoying this exciting and challenging new environment and I’m looking forward to reporting on them in another 128 days
or so.