Moving around is what we do as creatures, and for that we need horizons. COVID-19 has erased many of the spatial and temporal horizons we rely on, even if we don’t notice them very
often. We don’t know how the economy will look, how social life will go on, how our home routines will be changed, how work will be organized, how universities or the arts or local
commerce will survive.
What unsettles us is not only fear of change. It’s that, if we can no longer trust in the future, many things become irrelevant, retrospectively pointless. And by that we mean from
the perspective of a future whose basic shape we can no longer take for granted. This fundamentally disrupts how we weigh the value of what we are doing right now. It becomes
especially hard under these conditions to hold on to the value in activities that, by their very nature, are future-directed, such as education or institution-building.
That’s what many of us are feeling. That’s today’s acedia.
…
In a blog post far from his usual topics, Schneier shares a word – albeit an arguably-archaic one! – that captures the feeling of listlessness that many of us are experiencing as the
coronavirus pandemic continues to unfold.
Over the last six years I’ve been on a handful of geohashing expeditions, setting out to functionally-random GPS coordinates to see if I can get there, and documenting what I find when I do. The comic that inspired the
sport was already six years old by the time I embarked on my first outing, and I’m far from the most-active member
of the ‘hasher community, but I’ve a certain closeness to them as a result of my work to resurrect and host the “official” website. Either way: I love the sport.
But even when I’ve not been ‘hashing, it occurs to me that I’ve been tracking my location a lot. Three mechanisms in particular dominate:
Google’s somewhat-invasive monitoring of my phones’ locations (which can be exported via Google Takeout)
My personal GPSr logs (I carry the device moderately often, and it provides excellent precision)
The personal μlogger server I’ve been running for the last few years (it’s like Google’s system, but – y’know –
self-hosted, tweakable, and less-creepy)
If I could mine all of that data, I might be able to answer the question… have I ever accidentally visited a geohashpoint?
Let’s find out.
Data mining my own movements
To begin with, I needed to get all of my data into μLogger. The Android app syncs to it automatically and uploading from my GPSr was
simple. The data from Google Takeout was a little harder.
I found a setting in Google Takeout to export past location data in KML, rather than JSON, format. KML is understood by GPSBabel which
can convert it into GPX. I can “cut up” the resulting GPX file using a little grep-fu (relevant xkcd?) to get month-long files and import them into
μLogger. Easy!
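For reference, the GPSBabel conversion is a one-liner. Here’s a minimal sketch in Ruby, shelling out to gpsbabel (which I’m assuming is on your PATH; the filenames are just examples):

# Convert a Google Takeout KML export into GPX using GPSBabel.
# Assumes gpsbabel is installed and on the PATH; filenames are placeholders.
system(
  "gpsbabel",
  "-i", "kml", "-f", "location-history.kml",   # input format and file
  "-o", "gpx", "-F", "location-history.gpx"    # output format and file
) or abort("gpsbabel failed")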
Well… μLogger’s web interface sometimes times out if you upload enormous files like a whole month of Google Takeout logs. So instead I wrote a Nokogiri script to convert the GPX into SQL
to inject directly into μLogger’s database.
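The script itself is nothing special; the gist of it is something like this (a sketch only: the column names follow μLogger’s usual positions table, and the user and device IDs are placeholders for your own):

#!/usr/bin/env ruby
# Sketch: turn GPX trackpoints into SQL INSERTs for μLogger's positions table.
# Assumes each <trkpt> carries lat/lon attributes and a <time> child element.
require "nokogiri"
require "time"

doc = Nokogiri::XML(File.read(ARGV[0]))
doc.remove_namespaces!

doc.xpath("//trkpt").each do |pt|
  time = Time.parse(pt.at("time").text).strftime("%Y-%m-%d %H:%M:%S")
  puts "INSERT INTO positions (user_id, device_id, time, latitude, longitude) " \
       "VALUES (1, 1, '#{time}', #{pt['lat'].to_f}, #{pt['lon'].to_f});"
end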
Next, I got a set of hashpoint offsets. I only had personal positional data going back to around 2010, so I didn’t need to account for the pre-2008 absence of the 30W time zone rule. I’ve had only one trip to the Southern hemisphere in that period, and I
checked that manually. A little rounding and grouping in SQL gave me each graticule I’d been in on every date.
Unsurprisingly, I spend most of my time in the 51 -1 graticule. Adding (or subtracting, for the Western
hemisphere) the offset provided the coordinates for each graticule that I visited for the date that I was in that graticule. Nice.
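In case it’s useful to anyone attempting something similar, here’s roughly what those last two steps look like. This is a sketch only: the query assumes μLogger’s usual positions table on a MySQL backend, and the offsets function simply implements the standard xkcd algorithm, taking the relevant Dow opening price as an input rather than fetching it:

require "digest/md5"
require "mysql2"   # assumption: μLogger backed by MySQL/MariaDB

client = Mysql2::Client.new(host: "localhost", username: "ulogger", database: "ulogger")

# One row per date-and-graticule combination I've recorded positions in.
# TRUNCATE() rather than FLOOR() so that e.g. longitude -1.3 lands in graticule -1, not -2
# (geohashing's "-0" graticules would need their sign tracking separately; glossed over here).
graticules = client.query(<<~SQL)
  SELECT DISTINCT DATE(time)             AS day,
                  TRUNCATE(latitude, 0)  AS graticule_lat,
                  TRUNCATE(longitude, 0) AS graticule_lon
  FROM positions
SQL

# Standard xkcd offsets for a date, given the relevant DJIA opening price as a string
# (East of 30°W that's the previous day's opening: the "30W rule" mentioned above).
def hash_offsets(date, djia_open)
  digest = Digest::MD5.hexdigest("#{date}-#{djia_open}")
  [digest[0, 16].to_i(16) / 16.0**16, digest[16, 16].to_i(16) / 16.0**16]
end

# Apply an offset by moving away from zero: graticule -1 with offset 0.25 becomes -1.25.
def hashpoint_coordinate(graticule, offset)
  graticule.negative? ? graticule - offset : graticule + offset
end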
The correct way to find the proximity of my positions to each geohashpoint is, of course, to use WGS84. That’s an
easy thing to do if you’re using a database that supports it. My database… doesn’t. So I just used Pythagoras’ theorem to find positions I’d visited that were within 0.15° of that day’s hashpoint.
Using Pythagoras for geopositional geometry is, of course, wrong. Why? Because the physical length of a “degree” varies dependent on latitude, and – more importantly – a degree of
latitude is not the same distance as a degree of longitude. The ratio varies by latitude: only an idealised equatorial graticule would be square!
But for this case, I don’t care: the data’s going to be fuzzy and require some interpretation anyway. Not least because Google’s positioning has the tendency to, for example, spot a
passing train’s WiFi and assume I’ve briefly teleported to Euston Station, which is apparently where Google thinks that hotspot “lives”.
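Concretely, the candidate-finding query ends up looking something like this: a sketch, reusing the database connection from the earlier snippet, and with a hypothetical hashpoints table standing in for wherever you’ve stored each day’s hashpoint coordinates:

# For each day, find any recorded position within ~0.15° of that day's hashpoint.
# Plain Pythagoras on degrees: knowingly wrong, but good enough for finding candidates.
candidates = client.query(<<~SQL)
  SELECT p.time, p.latitude, p.longitude, h.hash_lat, h.hash_lon
  FROM positions p
  JOIN hashpoints h
    ON h.date          = DATE(p.time)
   AND h.graticule_lat = TRUNCATE(p.latitude, 0)
   AND h.graticule_lon = TRUNCATE(p.longitude, 0)
  WHERE POW(p.latitude - h.hash_lat, 2) + POW(p.longitude - h.hash_lon, 2) < POW(0.15, 2)
SQL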
I assumed that my algorithm would detect all of my actual geohash finds, and yes: all of these appeared as-expected in my results. This was a good confirmation that my approach
worked.
And, crucially: about a dozen additional candidate points showed up in my search. Most of these – listed at the end of this post – were 50m+ away from the hashpoint and
involved me driving or cycling past on a nearby road… but one hashpoint stuck out.
Hashing by accident
In August 2015 we took a trip up to Edinburgh to see a play of Ruth’s brother Robin’s. I don’t remember
much about the play because I was on keeping-the-toddler-entertained duty and so had to excuse myself pretty early on. After the play we drove South, dropping Tom off at Lanark station.
We exited Lanark via the Hyndford Bridge… which is – according to the map – tantalisingly-close to the 2015-08-22 55 -3
hashpoint: only about 23 metres away!
That doesn’t feel quite close enough to justify retroactively claiming the geohash, tempting though it would be to use it as a vehicle to my easy geohash ribbon. Google doesn’t provide error bars for their exported location data so I can’t draw a circle of uncertainty,
but it seems unlikely that I passed through this very close hashpoint.
Pity. But a fun exercise. This was the nearest of my near misses, but plenty more turned up in my search, too:
2013-09-28 54 -2 (9,000m)
Near a campsite on the River Eden. I drove past on the M6 with Ruth on the way to Loch Lomond for a mini-break to celebrate our sixth anniversary. I never came closer than 9,000 metres
to the hashpoint, but Google clearly had a moment when it couldn’t get good satellite signal and tried to trilaterate my position from cell masts and coincidentally guessed, for a
few seconds, that I was much closer. There are a few such erroneous points in my data but they’re pretty obvious and easy to spot, so my manual filtering process caught them.
2019-09-13 52 -0 (719m)
A600, near Cardington Airstrip, south of Bedford. I drove past on the A421 on my way to Three Rings’ “GDPR Camp”, which was more fun than it sounds, I promise.
2014-03-29 53 -1 (630m)
Spen Farm, near Bramham Interchange on the A1(M). I drove past while heading to the Nightline Association Conference to talk about Three Rings. Curiously, I came much closer to the hashpoint the previous week when I drove a neighbouring road on my way to York for my friend
Matt’s wedding.
2020-05-06 51 -1 (346m)
Inside Kidlington Police Station! Short of getting arrested, I can’t imagine how I’d easily have gotten to this one, but it’s moot anyway because I didn’t try! I’d taken the day off
work to help with child-wrangling (as our normal childcare provisions had been scrambled by COVID-19), and at some point during the day we took a walk and came somewhat near to the
hashpoint.
2016-02-05 51 -1 (340m)
Garden of a house on The Moors, Kidlington. I drove past (twice) on my way to and from the kids’ old nursery. Bonus fact: the house directly opposite the one whose garden contained
the hashpoint is a house that I looked at buying (and visited), once, but didn’t think it was worth the asking price.
2017-08-30 51 -1 (318m)
St. Frideswide Farm, between Oxford and Kidlington. I cycled past on Banbury Road twice – once on my way to and once on my way from work.
2015-01-25 51 -1 (314m)
Templar Road, Cutteslowe, Oxford. I’ve cycled and driven along this road many times, but on the day in question the closest I came was cycling past on nearby Banbury Road while on the
way to work.
2018-01-28 51 -1 (198m)
Stratfield Brake, Kidlington. I took our youngest by bike trailer that morning to his Monkey Music class: normally at this point in history Ruth would have been the one to take him,
but she had a work-related event that she couldn’t miss in the morning. I cycled right by the entrance to this nature reserve: it could have been an ideal location for a geohash!
2014-01-24 51 -1 (114m)
On the Marston Cyclepath. I used to cycle along this route on the way to and from work most days back when I lived in Marston, but by 2014 I lived in Kidlington and so I’d only cycle
past the end of it. So it was that I cycled past the Linacre College end of the path, around 114m away from the hashpoint, on that day.
2015-06-10 51 -1 (112m)
Meadow near Peartree Interchange, Oxford. I stopped at the filling station on the opposite side of the roundabout, presumably to refuel a car.
2020-02-27 51 -1 (70m)
This was a genuine attempt at a hashpoint that I failed to reach, and I was so sad about it that I never bothered to finish writing it up. The hashpoint was very close to (but, it turns out, just out of sight of) a geocache I’d hidden in the vicinity, and I was hopeful that I might be able to score the most-epic/demonstrable déjà vu/hash collision
achievement ever, not least because I had pre-existing video evidence that I’d been at the
coordinates before! Unfortunately it wasn’t to be: I had inadequate footwear for the heavy rains that had fallen in the days preceding the expedition, and I was in a hurry to get
home, get changed, and catch a train to see the Goo Goo Dolls in concert. So I gave up and quit the expedition. This turned out to be the right decision: going to
the concert was one of the last “normal” activities I got to do before the COVID-19 lockdown made everybody’s lives weird.
2014-05-23 51 -1 (61m)
White Way, Kidlington, near the Bicester Road to Green Road footpath. I passed close by while cycling to work, but I’ve since walked through this hashpoint many times: it’s on a route
that our eldest sometimes used to take when walking home from her school! With the exception only of the very-near-miss in Lanark, this was my nearest “near miss”.
As I approach my first full year as an Automattician, I find myself looking back on everything I’ve learned… but also looking around at all the things I still don’t understand! I’m not
learning something new every day any more… but I’m still learning something new most weeks.
This summer I’ve been getting up-close and personal with Gutenberg components. I’d mostly managed to avoid learning the React (eww; JSX, bad documentation, and an elephantine payload…) necessary to hack Gutenberg, but in
helping to implement new tools for WooCommerce.com I’ve discovered that it’s… not quite as painful as I’d thought. There are even some bits I quite like. But I don’t expect to
fall in love with React any time soon. This autumn I’ve been mostly working on search and personalisation, integrating customer analytics data with our marketplace to help understand
what people look for on our sites and using that to guide their future experience (and that of others “like” them). There’s always something new.
My team continues to grow, with two newmatticians this month and a third starting in January. In fact, my team’s planning to fork into two closely-linked subteams: one with a focus on
customers and vendors, the other geared towards infrastructure. It’s exciting to see my role grow and change, but I worry about the risk of gradually pigeon-holing myself into an
increasingly narrow specialisation. Which wouldn’t suit me: I like to keep a finger in all the pies. Still: my manager reassures me that this isn’t likely to be the case and that
our plans are going in the “right” direction.
On the side of my various project work, I’ve occasionally found the opportunity for more-creative things. Last month, I did some data-mining over the company’s “kudos” history of the
last five years and ran it through vis.js to try to find a new angle on understanding how Automattic’s staff, teams, and divisions interact with one
another. It led to some interesting results: panning through time, for example, you can see the separate island of Tumblr staff who joined us
during the acquisition gradually become more-interconnected with the rest of the organisation over the course of
the last year.
The biggest disappointment of my time at Automattic so far is that I’ve not managed to go to a GM! The 2019 one – which looked awesome – took place only a couple of weeks before my contract started (despite my best efforts to wrangle
my contract dates with the Bodleian and Automattic to try to work around that), but people reassured me that it was okay because I’d make it
to the next one. Well… 2020 makes fools of us all, I guess, because of course there’s no in-person GM this year. Maybe, hopefully, if and
when the world goes back to normal I’ll get to spend time in-person with my colleagues once in a while… but for now, we’re having to make do with Internet-based socialisation only, just
like the rest of the world.
I scratched an itch of mine this week and wanted to share the results with you, in case you happen to be one of the few dozen other people on Earth who will cry “finally!” to discover
that this is now a thing.
I’ve used ProtonMail as my primary personal email provider for about four years, and I love it. Seamless PGP/GPG for proper end-to-end encryption, privacy as standard, etc. At first, I used their web and mobile app interfaces but over time
I’ve come to rediscover my love affair with “proper” email clients, and I’ve been mostly using Thunderbird for my desktop mail. It’s been
great: lightning-fast search, offline capabilities, and thanks to IMAP (provided by ProtonMail Bridge) my mail’s still just as accessible when I fall back on the web or mobile clients because I’m out and about.
But the one thing this set-up lacked was the ability to easily see which emails had been delivered encrypted versus those which had merely been delivered “in the clear” (like
most emails) and then encrypted for storage on ProtonMail’s servers. So I fixed it.
I’ve just released my first ever Thunderbird plugin. If you’re using
ProtonMail Bridge, it adds a notification to the corner of every email to say whether it was encrypted in transit or not. That’s all.
And of course it’s open source with a permissive license (and a doddle to compile using your
standard operating system tools, if you want to build it yourself). If you’re using Thunderbird and ProtonMail Bridge you should give it a whirl. And if you’re not then… maybe you
should consider it?
Stupid thing of the day to try on your favourite Slack channel:
1. Make an image of yourself bordered by the edge of a speech bubble. Make the image an exact multiple of 32 pixels in each dimension (this one is 128 × 96):
2. Use ImageMagick to cut the image into 32 × 32 pixel tiles, e.g. like this: magick convert dan-qs-stupid-head.png -crop 32x32
"dan-q-says-%02d.png". Pick a sensible output filename to use as a Slack emoji shortcode.
3. Log into Slack and customise your emoji by adding each of the tiles you’ve created to it. This is where you’ll be glad you named the
file sensibly because it saves you typing the shortcode out each time.
4. Type a message using your custom emoji! Because it sits in-line with text, you can type alongside or around it (unlike normally embedded images or /giphy integration) along with
styling, mentioning, and hyperlink options. You can also copy-paste and edit on-the-fly, so you can keep a copy of the message in your self-channel and adjust whenever you need.
5. Profit!!!
Why not make a whole set of different faces showing your different emotions – perhaps from photos – so you can react appropriately to your colleagues! Slack don’t seem to impose any
limit on the number of custom emoji you can add, so the only limit is your imagination (and the tolerance of your Slack administrator for such high jinks).
Or why not cut up an animated GIF? Slack preloads emoji into the client so they play in sync, allowing you to run animations that span multiple emoji.
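If you fancy trying that, the cropping needs slightly more care than the static version, because you have to coalesce the animation’s frames and cut each tile region out separately. Here’s a sketch (the filenames and dimensions are examples, and I’m assuming ImageMagick 7’s magick binary):

# Cut a 128x96 animated GIF into 32x32 animated tiles, one GIF per tile.
width, height, tile = 128, 96, 32

(0...height).step(tile) do |y|
  (0...width).step(tile) do |x|
    system(
      "magick", "animation.gif",
      "-coalesce",                          # flatten frame optimisations so crops line up
      "-crop", "#{tile}x#{tile}+#{x}+#{y}", # this tile's region, in every frame
      "+repage",                            # reset the canvas to the tile's own size
      format("tile-%02d-%02d.gif", y / tile, x / tile)
    )
  end
end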
Hiking vlogger Dave shares his expedition around the Snowdon Horseshoe back in March. It’s a fantastic ridge walk that I’ve taken a few times myself. But on this particular expedition,
hampered by strong winds and thick cloud cover, a serious accident (very similar to the one that killed my father) occurred. Because Dave
was wearing his GoPro we’ve got amazing first-hand footage of the work he and the other climbers on the hill that day did to stabilise the casualty until mountain rescue could come and
assist. The whole thing’s pretty epic.
Speaking of which, did you see the jet-suits that are being tested by the Great North Air Ambulance Service? That’d
have made getting to my dad faster (though possibly not to any benefit)! Still: immensely cool idea to have jet-propelled paramedics zipping up Lake District slopes; I love it.
I’m sure that the graveyard of over-optimism is littered with the corpses of parents who planned to help their children learn self-moderation by showing them the wonders of nature,
but who realized too late that fields of wheat don’t stand a chance against Rocket League. I’m hoping that we can agree that computer games are good, but other things are good too,
cf fields of wheat. I don’t want to have to sneak in my own gaming time after my son has gone to bed. I also don’t want to be a hypocrite; at least, I don’t want Oscar to know that
I’m a hypocrite. Maybe we can play together and use it as father-son bonding time. This might work until he’s ten and after he’s twenty-five.
…
Robert Heaton, of Programming Projects for Advanced Beginners fame and of reverse-engineering device drivers that spy on you (which I’ve talked about
before), has also been blogging lately about his experience of Dadding, with the same dry/sarcastic tone you might be used to. This long post is a great example of the meandering
thoughts of a (techie) parent in these (interesting) times, and it’s good enough for that alone. But it’s the raw, genuine “honesty and dark thoughts” section towards the end of the
article that really makes it stand out.
The most important feature of Sublime Text is that it doesn’t change. In the modern world, everything changes at a crazy pace. We get new OSes and new phones every year, Google opens and closes its products monthly, many physical devices get announced, produced, and disappear in an interval
shorter than the Sublime Text release cycle. I have two problems with that.
…
I love Sublime Text. It was the editor for which I finally broke my long, long emacs habit
(another editor that “doesn’t change”). Like emacs, Sublime is simple but powerful. Unlike Atom, it doesn’t eat all the RAM in the
universe. And unlike VS Code, I can rely on it being fundamentally the same today, tomorrow, and next year.
I’ve been watching the output that people – well, machines – around the Internet have been producing using GPT-3 (and its cousins), an AI model that can produce long-form “human-like”
text. Here are some things I’ve enjoyed recently:
I played for a bit with AI Dungeon’s (premium) Dragon engine, which came up with Dan and the
Spider’s Curse when used as a virtual DM/GM. I pitched an idea to Robin lately that one could run a vlog series based on AI Dungeon-generated adventures: coming up with a “scene”, performing it, publishing it, and taking
suggestions via the comments for the direction in which the adventure might go next (but leaving the AI to do the real
writing).
Today is Spaceship Day is a Plotagon-powered machinima based on a script written by Botnik’s AI. So not technically GPT-3 if you’re being picky, but still amusing to see how and
what the AI‘s creative mind has come up with.
Language contains the map to a better world. Those that are most skilled at removing obstacles, misdirection, and lies from language, that reveal the maps that are hidden within, are
the guides that will lead us to happiness.
Yesterday, The Guardian published the op-ed piece A robot wrote this entire article.
Are you scared yet, human? It’s edited together from half a dozen or so essays produced by the AI from the same
starting prompt, but the editor insists that this took less time than the editing process on most human-authored op-eds. It’s good stuff. I found myself reminded of Nobody Knows You’re A Machine, a short story I wrote about eight years ago and was never entirely happy with but which I’ve put online in
order to allow you to see for yourself what I mean.
But my favourite so far must be GPT-3’s attempt to write its own version of Expert judgment on markers to
deter inadvertent human intrusion into the Waste Isolation Pilot Plant, which occasionally circulates the Internet retitled with its line This place is not a place of
honor…no highly esteemed deed is commemorated here… nothing valued is here. The original document was a report into how humans might mark a nuclear waste disposal site in order to
discourage deliberate or accidental tampering with the waste stored there: a massive challenge, given that the waste will remain dangerous for many thousands of years! The original
paper’s worth a read, of course, but mostly as a preface to reading a post by Janelle Shane (whose work I’ve mentioned before) about teaching GPT-3 to write nuclear waste site area denial strategies. It’s pretty special.
As effective conversational AI becomes increasingly accessible, I become increasingly convinced that we might eventually see
a sandwichware future, where it’s cheaper for an appliance developer to install an AI
into the device (to allow it to learn how to communicate with your other appliances, in a human language, just like you will) rather than rely on a static and universal underlying
computer protocol as an API. Time will tell.
Meanwhile: I promise that this post was written by a human!
Everything you see when you use “Inspect Element” was already downloaded to your computer, you just hadn’t asked Chrome to show it to you yet. Just like how the cogs were already in
the watch, you just hadn’t opened it up to look.
But let us dispense with frivolous cog talk. Cheap tricks such as “Inspect Element” are used by programmers to try and understand how the website works. This is ultimately futile:
Nobody can understand how websites work. Unfortunately, it kinda looks like hacking the first time you see it.
…while I practice, I have to simultaneously read, listen, think, translate. Every synapse of my brain is so utterly overwhelmed, there is no capacity left to think about the world
out there.
…
When Christoph Niemann published this piece about learning to play the piano during the most-lockdown-y parts of the Coronavirus lockdown, it struck a chord with me (hah!). I, too, have
experimented with learning to play the piano this spring/summer, and found a similar kind of Zen-like focussed calm emerge out of the frustration of staring at a piece of sheet music
and wondering why I couldn’t for the life of me get my fingers to remember what to do when they got to that point.
I started out with – after following some random links off the back of finishing the last bit of work for my recent masters degree –
a free course in music theory by the OU, because I figured that coming in from a theoretical perspective would help with the way my brain thinks about this kind of thing. I supplemented that with a
book we got for the kids to use to learn to play, and I’ve now graduated to very gradually hunt-and-pecking my way through Disney’s back catalogue. I can play Go The
Distance, Colors of the Wind and most of Can You Feel The Love Tonight barely well enough that I don’t feel the need to tear my own ears off, so I guess I’m
making progress, though I still fall over my own hands every time I try to play any bloody thing from Moana. 20 minutes at a time, here and there, and I’m getting there. I
don’t expect to ever be good at it, but I’m enjoying it nonetheless.
But anyway: this piece in the NYT Magazine really spoke to me, and to hear that somebody with far more music experience than me can
struggle with all the same things I do when getting started with the piano was really reassuring.
Oh my god I’m so excited. I’m afraid they might fuck up the story even more than David Lynch did in 1984 (not that I don’t
love that film, too, but in a very different way than the books). I mean: I’d have hoped a modern adaptation would have a bigger part for Chani than it clearly does. And I know nothing at all about the lead,
Timothée Chalamet. If only there was something I could do about these fears?
I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it
has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.
Yeah, that’s the kind of thing.
The supporting cast look excellent. I think Josh Brolin will make an awesome Gurney Halleck, Jason Momoa will rock Duncan Idaho, and I’m looking forward to seeing Stephen
McKinley Henderson play Thufir Hawat. But if there’s just one thing you should watch the trailer for… it’s to listen to fragments of Hans Zimmer’s haunting, simplistic choral
adaptation of Pink Floyd’s Eclipse.