VPNs have long been essential online tools that provide security, freedom, and most importantly, privacy. Each day, hundreds of millions of internet users connect to a VPN to
prevent their online activities from being tracked and monitored so that they can privately access web resources. In other words, the very purpose of a VPN is to prevent the
type of surveillance that Google engages in on a massive and unprecedented scale.
Google knows this, and in their whitepaper discussing VPN by Google One, Google acknowledges that VPN usage is becoming mainstream and that “up to 25% of all internet users accessed a VPN
within the last month of 2019.” Increasing VPN usage unfortunately poses a significant problem for Google, by making it more difficult to track users across the internet, mine their
data, and target them with advertisements. In short, VPNs undermine Google’s power.
So yeah, it turns out that Google are launching a VPN service. I just checked,
and it’s not available to me anyway because it’s US-only (apparently nobody explained to Google the irony of having a VPN service that’s geofenced), but that’s pretty academic because I wasn’t going to touch it with a barge pole in the first place.
Google already collect data on your browsing habits if you use their products. And I’m not just talking about Chrome, which of course continues to track you using your Google Account even after you log out and clear your cookies, and Google’s ubiquitous Web
tools, but also the tracking pixels hidden on every other website thanks to Google Analytics, AdWords, reCAPTCHA, Google Fonts, and the like. Sure, you can use e.g. uMatrix to stop all of these (although I’m in need of a
replacement), but that’s not a solution for, y’know, normal people. Container tabs help and you should
absolutely use them, but they don’t quite go far enough. It’s a challenge.
Switch to their VPN, though, and they’re suddenly able to track all of your browsing activity, in any browser on your
device. And probably many of the desktop applications you run, too, as most of them “phone home” for updates or functionality. And because it’s a paid-for VPN service, this data can be instantly linked to your real-world identity. By a company that’s demonstrated its willingness to misuse that data for their own benefit (or for the benefit of overreaching law enforcement agencies). Yeah: no deal, Google.
Perhaps the only company I’d trust less to provide a VPN service would be Facebook, because you just know they’d be
doing so exclusively to undermine individual privacy. Oh wait;
that’s exactly what they did. Sigh.
But even when I’ve not been ’hashing, it occurs to me that I’ve been tracking my location a lot. Three mechanisms in particular dominate:
Google’s somewhat-invasive monitoring of my phones’ locations (which can be exported via Google Takeout)
My personal GPSr logs (I carry the device moderately often, and it provides excellent precision)
The personal μLogger server I’ve been running for the last few years (it’s like Google’s system, but – y’know –
self-hosted, tweakable, and less-creepy)
If I could mine all of that data, I might be able to answer the question… have I ever accidentally visited a geohashpoint?
Let’s find out.
Data mining my own movements
To begin with, I needed to get all of my data into μLogger. The Android app syncs to it automatically and uploading from my GPSr was
simple. The data from Google Takeout was a little harder.
I found a setting in Google Takeout to export past location data in KML, rather than JSON, format. KML is understood by GPSBabel which
can convert it into GPX. I can “cut up” the resulting GPX file using a little grep-fu (relevant xkcd?) to get month-long files and import them into μLogger. Well… almost: μLogger’s web interface sometimes times out if you upload enormous files like a whole month of Google Takeout logs, so instead I wrote a Nokogiri script to convert the GPX into SQL to inject directly into μLogger’s database.
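If you fancy doing something similar, the conversion step might look like the minimal sketch below. The gpsbabel invocation is standard; the table and column names in the generated SQL are guesses at a μLogger-like schema rather than the real thing, so check your own database first.

```ruby
#!/usr/bin/env ruby
# First, convert the Takeout KML to GPX:
#   gpsbabel -i kml -f location-history.kml -o gpx -F location-history.gpx
# Then run this script over the GPX to emit SQL. The table/column names
# (positions, user_id, time, latitude, longitude) are assumptions about a
# μLogger-like schema – adjust them to match your actual database.
require 'nokogiri'
require 'time'

doc = Nokogiri::XML(File.read(ARGV[0]))
doc.remove_namespaces! # strip the GPX namespace so the XPath below stays simple

doc.xpath('//trkpt').each do |pt|
  time_el = pt.at_xpath('time')
  next unless time_el # skip any trackpoint without a timestamp

  lat  = pt['lat'].to_f
  lon  = pt['lon'].to_f
  time = Time.parse(time_el.text).to_i

  # One INSERT per trackpoint; pipe the output straight into the database.
  puts "INSERT INTO positions (user_id, time, latitude, longitude) " \
       "VALUES (1, FROM_UNIXTIME(#{time}), #{lat}, #{lon});"
end
```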
Next, I got a set of hashpoint offsets. I only had personal positional data going back to around 2010, so I didn’t need to account for the pre-2008 absence of the 30W time zone rule. I’ve had only one trip to the Southern hemisphere in that period, and I
checked that manually. A little rounding and grouping in SQL gave me each graticule I’d been in on every date.
Unsurprisingly, I spend most of my time in the 51 -1 graticule. Adding (or subtracting, for the Western
hemisphere) the offset gave me the day’s hashpoint coordinates for each graticule I was in, on each date that I was in it. Nice.
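As a sketch of that rounding-and-grouping and the offset arithmetic (again with assumed table and column names, and a hypothetical offsets_for lookup standing in for however you fetch the day’s hashpoint offsets):

```ruby
# Graticules are named by truncating coordinates towards zero, so we want
# TRUNCATE() rather than FLOOR() – though note that this can't distinguish
# the "-0" graticule (0 to -1° longitude) from "0", which needs special-casing.
require 'mysql2'

db = Mysql2::Client.new(host: 'localhost', database: 'ulogger') # hypothetical connection

rows = db.query(<<~SQL)
  SELECT DATE(time)             AS day,
         TRUNCATE(latitude, 0)  AS grat_lat,
         TRUNCATE(longitude, 0) AS grat_lon
  FROM   positions
  GROUP  BY day, grat_lat, grat_lon
SQL

rows.each do |r|
  grat_lat = r['grat_lat'].to_f
  grat_lon = r['grat_lon'].to_f
  lat_off, lon_off = offsets_for(r['day']) # hypothetical: that day's hashpoint offsets

  # Apply the offset away from the equator/prime meridian: subtract in the West/South.
  hash_lat = grat_lat + (grat_lat.negative? ? -lat_off : lat_off)
  hash_lon = grat_lon + (grat_lon.negative? ? -lon_off : lon_off)
  puts "#{r['day']}: candidate hashpoint #{hash_lat}, #{hash_lon}"
end
```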
The correct way to find the proximity of my positions to each geohashpoint is, of course, to use WGS84. That’s an easy thing to do if you’re using a database that supports it. My database… doesn’t. So I just used Pythagoras’ theorem to find positions I’d visited that were within 0.15° of a hashpoint.
Using Pythagoras for geopositional geometry is, of course, wrong. Why? Because the physical length of a “degree” varies with latitude, and – more importantly – a degree of
latitude is not the same distance as a degree of longitude. The ratio varies by latitude: only an idealised equatorial graticule would be square!
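For illustration, here’s the naive test as a sketch, alongside (in a comment) the correction it skips; the example coordinates at the bottom are made up, not from my actual data:

```ruby
# The naive proximity test: plain Pythagoras on raw degrees, with a 0.15°
# threshold. At ~51°N a degree of longitude is only cos(51°) ≈ 0.63 as long
# as a degree of latitude, so a slightly-less-wrong test would scale first.
def near_hashpoint?(lat, lon, hash_lat, hash_lon, threshold = 0.15)
  dlat = lat - hash_lat
  dlon = lon - hash_lon
  # Fairer: dlon *= Math.cos(lat * Math::PI / 180) before measuring
  Math.sqrt(dlat**2 + dlon**2) < threshold
end

# Made-up example coordinates:
near_hashpoint?(51.756, -1.255, 51.802, -1.311) # => true: close enough to eyeball
```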
But for this case, I don’t care: the data’s going to be fuzzy and require some interpretation anyway. Not least because Google’s positioning has the tendency to, for example, spot a
passing train’s WiFi and assume I’ve briefly teleported to Euston Station, which is apparently where Google thinks that hotspot “lives”.
I assumed that my algorithm would detect all of my actual geohash finds, and yes: all of these appeared as-expected in my results. This was a good confirmation that my approach worked.
And, crucially: about a dozen additional candidate points showed up in my search. Most of these – listed at the end of this post – were 50m+ away from the hashpoint and
involved me driving or cycling past on a nearby road… but one hashpoint stuck out.
Hashing by accident
In August 2015 we took a trip up to Edinburgh to see a play of Ruth’s brother Robin’s. I don’t remember
much about the play because I was on keeping-the-toddler-entertained duty and so had to excuse myself pretty early on. After the play we drove South, dropping Tom off at Lanark station.
We exited Lanark via the Hyndford Bridge… which is – according to the map – tantalisingly-close to the 2015-08-22 55 -3
hashpoint: only about 23 metres away!
That doesn’t feel quite close enough to justify retroactively claiming the geohash, tempting though it would be to use it as a vehicle to my easy geohash ribbon. Google doesn’t provide error bars for their exported location data so I can’t draw a circle of uncertainty,
but it seems unlikely that I actually passed through the hashpoint itself.
Pity. But a fun exercise. This was the nearest of my near misses, but plenty more turned up in my search, too:
2013-09-28 54 -2 (9,000m)
Near a campsite on the River Eden. I drove past on the M6 with Ruth on the way to Loch Lomond for a mini-break to celebrate our sixth anniversary. I was never more than 9,000 metres
from the hashpoint, but Google clearly had a moment when it couldn’t get a good satellite signal and tried to trilaterate my position from cell masts and coincidentally guessed, for a
few seconds, that I was much closer. There are a few such erroneous points in my data but they’re pretty obvious and easy to spot, so my manual filtering process caught them.
2019-09-13 52 -0 (719m)
A600, near Cardington Airstrip, south of Bedford. I drove past on the A421 on my way to Three Rings’ “GDPR Camp”, which was more fun than it sounds, I promise.
2014-03-29 53 -1 (630m)
Spen Farm, near Bramham Interchange on the A1(M). I drove past while heading to the Nightline Association Conference to talk about Three Rings. Curiously, I came much closer to the hashpoint the previous week when I drove a neighbouring road on my way to York.
2020-05-06 51 -1 (346m)
Inside Kidlington Police Station! Short of getting arrested, I can’t imagine how I’d easily have gotten to this one, but it’s moot anyway because I didn’t try! I’d taken the day off
work to help with child-wrangling (as our normal childcare provisions had been scrambled by COVID-19), and at some point during the day we took a walk and came somewhat near to the hashpoint.
2016-02-05 51 -1 (340m)
Garden of a house on The Moors, Kidlington. I drove past (twice) on my way to and from the kids’ old nursery. Bonus fact: the house directly opposite the one whose garden contained
the hashpoint is one that I once looked at buying (and visited), but didn’t think it was worth the asking price.
2017-08-30 51 -1 (318m)
St. Frideswide Farm, between Oxford and Kidlington. I cycled past on Banbury Road twice – once on my way to and once on my way from work.
2015-01-25 51 -1 (314m)
Templar Road, Cutteslowe, Oxford. I’ve cycled and driven along this road many times, but on the day in question the closest I came was cycling past on nearby Banbury Road while on the
way to work.
2018-01-28 51 -1 (198m)
Stratfield Brake, Kidlington. I took our youngest by bike trailer that morning to his Monkey Music class: normally at this point in history Ruth would have been the one to take him,
but she had a work-related event that she couldn’t miss in the morning. I cycled right by the entrance to this nature reserve: it could have been an ideal location for a geohash!
2014-01-24 51 -1 (114m)
On the Marston Cyclepath. I used to cycle along this route on the way to and from work most days back when I lived in Marston, but by 2014 I lived in Kidlington and so I’d only cycle
past the end of it. So it was that I cycled past the Linacre College end of the path, around 114m away from the hashpoint, on this day.
2015-06-10 51 -1 (112m)
Meadow near Peartree Interchange, Oxford. I stopped at the filling station on the opposite side of the roundabout, presumably to refuel a car.
2020-02-27 51 -1 (70m)
This was a genuine attempt at a hashpoint that I failed to reach, and was so sad about it that I never bothered to finish writing it up. The hashpoint was very close to (but, it turns out, just out of sight of) a geocache I’d hidden in the vicinity, and I was hopeful that I might be able to score the most-epic/demonstrable déjà vu/hash collision
achievement ever, not least because I had pre-existing video evidence that I’d been at the
coordinates before! Unfortunately it wasn’t to be: I had inadequate footwear for the heavy rains that had fallen in the days that preceded the expedition and I was in a hurry to get
home, get changed, and go catch a train to go and see the Goo Goo Dolls in concert. So I gave up and quit the expedition. This turned out to be the right decision: going to the concert was one of the last “normal” activities I got to do before the COVID-19 lockdown made everybody’s lives weird.
2014-05-23 51 -1 (61m)
White Way, Kidlington, near the Bicester Road to Green Road footpath. I passed close by while cycling to work, but I’ve since walked through this hashpoint many times: it’s on a route
that our eldest sometimes used to take when walking home from her school! With the exception only of the very-near-miss in Lanark, this was my nearest “near miss”.
A new email-based extortion scheme apparently is making the rounds, targeting Web site owners serving banner ads through Google’s AdSense program. In this scam, the
fraudsters demand bitcoin in exchange for a promise not to flood the publisher’s ads with so much bot and junk traffic that Google’s automated anti-fraud systems suspend the user’s
AdSense account for suspicious traffic.
The shape of our digital world grows increasingly strange. As anti-DoS techniques grow better and more and more uptime-critical
websites hide behind edge caches, zombie network operators remain one step ahead and find new and imaginative ways to extort money from their victims. In this new attack, the criminal
demands payment (in cryptocurrency) under threat that, if it’s not delivered, they’ll unleash an army of bots that make it look as though the victim is trying to scam their own advertising network, thereby getting the victim’s site demonetised.
#TodayILearned that if you get a perfect 100-100-100-100 + #PWA score from @Google‘s site performance/accessibility/best practice/SEO tool “@____lighthouse” then you’re treated
to an animated fireworks display alongside your scores. (Yes, this is for danq.me.)
Big news! This site is no longer using Google Analytics and I’ve switched to a self-hosted version of brand new analytics product Fathom.
Fathom is very simple. It only tracks 4 things: Unique Visitors, Page Views, Time on Site, and Bounce Rate. It shows me a chart of
page views and visitors and then gives me a break down of referrers and top performing content. That’s it. And to be quite honest, that’s about all I need from my blog analytics.
You know what, Dave: me too! I’ve been running Google Analytics since forever and Piwik/Matomo (in parallel with it) for about a year and honestly: I
get more than enough of what I need from the latter. So you’ve inspired me to cut the line with Google: after all, all I was doing was selling them my friends’ data in exchange for some
analytics I wasn’t really paying attention to… and I’d frankly rather not.
So: for the first time in a decade or so, there’s no Google Analytics on this site. Woop!
Update 2023-12-13: I eventually went further still and dropped all analytics, even self-hosted variants, and it feels great.
We here at unlike kinds decided that we had to implement Google AMP. We have to be in the Top Stories section because otherwise we’re punted down the page and away from potential
readers. We didn’t really want to; our site is already fast because we made it fast, largely with a combination of clever caching and minimal code. But hey, maybe AMP would speed
things up. Maybe Google’s new future is bright.
It isn’t. According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87.
The non-AMP versions? 95. (Note: I updated these numbers recently with an average after running the test 6 times per version.)
I’ve complained about AMP before plenty – starting here, for example – but it’s even harder to
try to see the alleged “good sides” of the technology when it doesn’t even deliver the one thing it was supposed to. The Internet should be boycotting this shit, not drinking the Kool-Aid.
How could things be better? For a start, Google could make a better commitment to open-source and developing standards rather than platforms. But if you don’t think you can trust them
to do that – and you can’t – then the only solution for individuals is to use fewer Google products to break the Google-monoculture. Encourage the competition to weaken their
position, and break free from silos in general where it’s possible to do so.
148+ projects and services dead. But hey, we’re getting Stadia so everything’s okay, right? <sigh>
Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.
This may sound melodramatic, but it’s not. The “browser engines” — Chromium from Google and Gecko Quantum from Mozilla — are “inside baseball” pieces of software that actually
determine a great deal of what each of us can do online. They determine core capabilities such as which content we as consumers can see, how secure we are when we watch content, and
how much control we have over what websites and services can do to us. Microsoft’s decision gives Google more ability to single-handedly decide what possibilities are available to
each one of us.
From a business point of view Microsoft’s decision may well make sense. Google is so close to almost complete control of the infrastructure of our online lives that it may not be
profitable to continue to fight this. The interests of Microsoft’s shareholders may well be served by giving up on the freedom and choice that the internet once offered us. Google is
a fierce competitor with highly talented employees and a monopolistic hold on unique assets. Google’s dominance across search, advertising, smartphones, and data capture creates a
vastly tilted playing field that works against the rest of us.
From a social, civic and individual empowerment perspective ceding control of fundamental online infrastructure to a single company is terrible. This is why Mozilla exists. We compete with Google not because it’s a good business opportunity. We compete with Google because the health
of the internet and online life depend on competition and choice. They depend on consumers being able to decide we want something better and to take action.
Will Microsoft’s decision make it harder for Firefox to prosper? It could. Making Google more powerful is risky on many fronts. And a big part of the answer depends on what the web
developers and businesses who create services and websites do. If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide
not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was
released. And it could happen again.
If you care about what’s happening with online life today, take another look at Firefox. It’s radically better than it was 18 months ago — Firefox once again holds its own when it
comes to speed and performance. Try Firefox as your default browser for a week and then decide. Making Firefox
stronger won’t solve all the problems of online life — browsers are only one part of the equation. But if you find Firefox is a good product for you, then your use makes Firefox
stronger. Your use helps web developers and businesses think beyond Chrome. And this helps Firefox and Mozilla make overall life on the internet better — more choice, more security
options, more competition.
Scathing but well-deserved dig at Microsoft by Mozilla, following on from the Edge-switch-to-Chromium I’ve been going on about. Chris is right:
more people should try Firefox (it’s been my general-purpose browser on desktop and mobile ever since Opera threw in the towel and joined the Chromium hivemind in 2013, and on-and-off
plenty before then) – not just because it’s a great browser (and it is!) but also now because it’s important for the diversity and
health of the Web.
We used to have much more diversity in terms of browser engines years ago than we do today. This is easy to understand as the Web in 2018 is far more complex than it was in the early
noughties. It is very costly to develop and maintain a Web engine and few companies have the necessary talent and cash to do it. Microsoft is one of those companies but the fact that
it might be throwing in the towel on its engine signals a bad development for all of us.
Further evaluation of the dangers of the disappearing diversity on the Web, following in the theme of my thoughts the other day about
Microsoft’s adoption of Chromium instead of EdgeHTML in its browser.
Andre raises a real point: how will we fight for a private and decentralised Web when it becomes “the Google Web”?
I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.
We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.
This essay follows-up nicely on my concerns about Microsoft’s move from EdgeHTML to Chromium in Edge, but goes further to discuss some of the
bigger problems of a homogeneous web, especially one under Google’s influence.
The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or
something similar (and certainly not Edge) for development and debugging anyway, etc. The problem comes perhaps because they’re too young to remember the First Browser War and its aftermath. Let me summarise:
Once upon a time – let’s call it the mid-1990s – there were several web browsers: Netscape Navigator, Internet Explorer, Opera, etc. They all used different rendering
engines and so development was sometimes a bit of a pain, but only if you wanted to use the latest most cutting-edge features: if you were happy with the standard, established
features of the Web then your site would work anywhere, as has always been the case.
Then, everybody started using just one browser: following some shady dealings and monopoly abuse, 90%+ of Web users started using just one web browser, Internet Explorer. By the time anybody took
notice, their rivals had been economically crippled beyond any reasonable chance of recovery, but the worst had yet to come…
Developers started targeting only that one browser: instead of making websites, developers started making “Internet Explorer sites” which were only tested in that one
browser or, worse yet, only worked at all in that browser, actively undermining the Web’s position as an open platform. As the grip of the monopoly grew tighter,
technological innovation was centred around this single platform, leading to decade-long knock-on effects.
The Web ceased to grow new features: from the release of Internet Explorer 6 there were no significant developments in the technology of the Web for many years.
The lack of competition pushed us into a period of stagnation. A
decade and a half later, we’re only just (finally) finishing shaking off this unpleasant bit of our history.
History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too.
Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites.
More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the
switch from Chrome to e.g. Firefox (and you absolutely should) you might find that
YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.
So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.
If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.
And it’s not likely to be replaced.
Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that
could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even
ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today.
So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?
We have already lost one.
We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.
Three left. Spend them wisely.
As much as I don’t like having to work-around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web circa
most of the first decade of this century. Please help keep browsers diverse: nobody wants to see “this site works best in Chrome” badges become a thing again.
This article is a follow-up to my article “Why Google AMP is a threat to the Open
Web”. In the comments of that article I promised I’d soon provide a follow-up, and for reasons I’ll get into, that has not been possible until now – but now I’m finally able to.
Back in February I wrote an article saying how I believed Google AMP has been imposed on the web by Google as a ‘standard’ for developing fast webpages, and my dismay about that.
Google apparently developed this as an internal project without any open collaboration, and avoiding the W3C standardization processes. Google made implementation of Google AMP a
requirement to show at the top of the search results for common news searches.
To many of us open web folk, Google’s AMP violated the widely held principle of search engines not putting bias into search results, and/or the principle of web standards (take your
pick – it would not be bias if it was a standardized approach that the wider web community had agreed upon).
You know how I feel about AMP. I’m not alone, and others are doing a pretty good job of talking to Google about our concerns. Unfortunately, Google aren’t listening.
Now that Google+ has been shuttered, I should air my dirty laundry on how awful the project and exec team was.
I’m still pissed about the bait and switch they pulled by telling me I’d be working on Chrome, then putting me on this god forsaken piece of shit on day one.
This will be a super slow burn that goes back many years. I’ll continue to add to it over the next couple of days. I’ll preface it with a bunch of backstory and explain what I had left
behind, which made me more unhappy about the culture I had come into.
I spent most of my early career working for two radical sister non-profit orgs. I was the only designer working on anywhere from 4-5 different products at the same time. All centered
around activism and used by millions of people.
It’s how I cut my teeth. Learned to be the designer that I am today. Most importantly, the people I worked for are imho some of the greatest people on the planet. Highly intelligent,
empathetic, caring, and true role models for a young me. I adore them.
You might not know who they are, but if you’re reading this then you have definitely seen their work. Maybe OpenCongress, or Miro, or maybe Amara which is Vimeo’s partner
transcription service. Definitely Fight for the Future, our internet defenders, which was shortly after me.
I married the love of my life in 2008, started a family, and at some point realized that I simply needed to make a better living. No matter how prolific, non-profits usually can’t
provide the type of income that you need for a growing family with huge ambitions.
So as I gained visibility – via @dribbble – I started to field recruiters and consider new opportunities. Mostly little startups. I interviewed at one (Rockmelt) and they passed on me
(hi, @iamxande).
Got an email from Kickstarter (hi, @amotion). Schlepped to New York and wasted days of time to be passed on by their founders. Then they unfollowed me on twitter. At least I ate some deli.
Then Google reached out. I remember that “holy shit” moment. “Me!? Are they kidding?? The schmuck who tested out of high school and dropped out of college??” They told me I’d
interview to work on Chrome. I was over the moon. I remember Manda tearing up. God I love her.
They gave me a little bit of time for a design exercise. You can see it here in all its dated glory: morganallanknutson.com/google/ Click and hold for the overlay. More schlepping
from LA and an interview at their silly college-like campus. I was a nervous wreck.
The process felt very haphazard. At one point a front-end dev with a bow-tie grilled me on CSS and asked some super dumb questions. My advocate (a sweetheart named Peter) seemed to be
rushing people through, quelling their fears. I still appreciate his belief in me to this day.
I felt like I had done ok. The last two interviews that I failed at were real shots to the heart. I took this one incredibly seriously. I wanted this job so badly. I wanted to prove I belonged.
Weeks went by and I heard nothing. I accepted the inevitable and started responding to other recruiters. It was ok. I wasn’t joining the big leagues. I could play triple-A ball for
longer. As long as I got up to SF where the opportunities were.
I took a gig with a failing news startup (lol) called Ongo (hi, @bethdean). They got me up here. I guess it was a bit of a Hail Mary for them. In a couple of months I knocked out
more work than they could have built in a year with their eng team. Then…
Google got back in touch almost 3-4 months after the interview (who does this??).
I got the job.
To be continued…
If you ever thought that Google+, despite its laudable aims, was rolled out like a shitstorm… imagine what it must have been like behind the scenes. Actually, imagine no more – read this thread for a taste of the full horror.