Why using Google VPN is a terrible idea

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

VPNs have long been essential online tools that provide security, freedom, and most importantly, privacy. Each day, hundreds of millions of internet users connect to a VPN to prevent their online activities from being tracked and monitored so that they can privately access web resources. In other words, the very purpose of a VPN is to prevent the type of surveillance that Google engages in on a massive and unprecedented scale.

Google knows this, and in their whitepaper discussing VPN by Google One, Google acknowledges that VPN usage is becoming mainstream and that “up to 25% of all internet users accessed a VPN within the last month of 2019.” Increasing VPN usage unfortunately poses a significant problem for Google, by making it more difficult to track users across the internet, mine their data, and target them with advertisements. In short, VPNs undermine Google’s power.

So yeah, it turns out that Google are launching a VPN service. I just checked, and it’s not available to me anyway because it’s US-only (apparently nobody explained to Google the irony of having a VPN service that’s geofenced), but that’s pretty academic because I wasn’t going to touch it with a barge pole in the first place.

Google One VPN announcement, featuring the words "US Only"
Is it 1 April already, Google?

Google already collect data on your browsing habits if you use their products. And I’m not just talking about Chrome, which of course continues to track you using your Google Account even after you log out and clear your cookies, and Google’s ubiquitous Web tools, but also the tracking pixels hidden on every other website thanks to Google Analytics, AdWords, reCAPTCHA, Google Fonts, and the like. Sure, you can use e.g. uMatrix to stop all of these (although I’m in need of a replacement), but that’s not a solution for, y’know, normal people. Container tabs help and you should absolutely use them, but they don’t quite go far enough. It’s a challenge.

Switch to their VPN, though, and they’re suddenly able to track all of your browsing activity, in any browser on your device. And probably many of the desktop applications you run, too, as most of them “phone home” for updates or functionality. And because it’s a paid-for VPN service, this data can be instantly linked to your real-world identity. By a company that’s demonstrated its willingness to misuse that data for their own benefit (or for the benefit of overreaching law enforcement agencies). Yeah: no deal, Google.

Perhaps the only company I’d trust less to provide a VPN service would be Facebook, because you just know they’d be doing so exclusively to undermine individual privacy. Oh wait; that’s exactly what they did. Sigh.

Google One VPN announcement, featuring the words "US Only"×

Accidental Geohashing

Over the last six years I’ve been on a handful of geohashing expeditions, setting out to functionally-random GPS coordinates to see if I can get there, and documenting what I find when I do. The comic that inspired the sport was already six years old by the time I embarked on my first outing, and I’m far from the most-active member of the ‘hasher community, but I’ve a certain closeness to them as a result of my work to resurrect and host the “official” website. Either way: I love the sport.

Dan, Ruth, and baby Annabel at geohashpoint 2014-04-21 51 -1
I even managed to drag-along Ruth and Annabel to a hashpoint (2014-04-21 51 -1) once.

But even when I’ve not been ‘hashing, it occurs to me that I’ve been tracking my location a lot. Three mechanisms in particular dominate:

  • Google’s somewhat-invasive monitoring of my phones’ locations (which can be exported via Google Takeout)
  • My personal GPSr logs (I carry the device moderately often, and it provides excellent precision)
  • The personal μlogger server I’ve been running for the last few years (it’s like Google’s system, but – y’know – self-hosted, tweakable, and less-creepy)

If I could mine all of that data, I might be able to answer the question… have I ever accidentally visited a geohashpoint?

Let’s find out.

KML from Google is converted into GPX, where it joins GPX from my GPSr and real-time position data from μLogger in a MySQL database. This is queried against historic hashpoints to produce a list of candidate accidental hashpoints.
There’s a lot to my process, but it’s technically quite simple.

Data mining my own movements

To begin with, I needed to get all of my data into μLogger. The Android app syncs to it automatically and uploading from my GPSr was simple. The data from Google Takeout was a little harder.

I found a setting in Google Takeout to export past location data in KML, rather than JSON, format. KML is understood by GPSBabel which can convert it into GPX. I can “cut up” the resulting GPX file using a little grep-fu (relevant xkcd?) to get month-long files and import them into μLogger. Easy!

Requesting KML rather than JSON from Google Takeout
It’s slightly hidden, but Google Takeout lets you choose your geoposition output format (from a limited selection).
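
For anyone wanting to reproduce that step, here’s a rough sketch of the idea (not my exact commands: the file names are illustrative, it assumes GPSBabel is installed, and it assumes the converted GPX contains trkpt elements with timestamps):

```python
# Convert Google Takeout KML to GPX, then split the track points into
# month-sized GPX files. A sketch only: paths and names are placeholders.
import subprocess
import xml.etree.ElementTree as ET
from collections import defaultdict

# 1. Let GPSBabel do the format conversion (it must be on your PATH).
subprocess.run(
    ["gpsbabel", "-i", "kml", "-f", "takeout.kml", "-o", "gpx", "-F", "takeout.gpx"],
    check=True,
)

# 2. Group the track points by month and write one small GPX file per month.
tree = ET.parse("takeout.gpx")
root = tree.getroot()
ns = root.tag.split("}")[0].strip("{")        # whichever GPX namespace GPSBabel used
ET.register_namespace("", ns)
points_by_month = defaultdict(list)

for trkpt in root.iter(f"{{{ns}}}trkpt"):
    time_el = trkpt.find(f"{{{ns}}}time")
    if time_el is not None and time_el.text:
        points_by_month[time_el.text[:7]].append(trkpt)   # key on "YYYY-MM"

for month, points in points_by_month.items():
    gpx = ET.Element(f"{{{ns}}}gpx", version="1.1", creator="takeout-split")
    seg = ET.SubElement(ET.SubElement(gpx, f"{{{ns}}}trk"), f"{{{ns}}}trkseg")
    seg.extend(points)
    ET.ElementTree(gpx).write(f"takeout-{month}.gpx",
                              xml_declaration=True, encoding="UTF-8")
```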

Well… μLogger’s web interface sometimes times out if you upload enormous files like a whole month of Google Takeout logs. So instead I wrote a Nokogiri script to convert the GPX into SQL to inject directly into μLogger’s database.
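
My script used Nokogiri, but the idea is simple enough to sketch in anything; here’s roughly the same thing in Python. The table and column names are placeholders rather than μLogger’s actual schema, so treat it as illustrative only:

```python
# Turn a GPX file into INSERT statements for a positions-style table.
# A sketch: "positions" and its columns are placeholder names, and a real
# import would need valid user/track ids and proper SQL escaping.
import sys
import xml.etree.ElementTree as ET

def gpx_to_sql(gpx_path, user_id=1, track_id=1):
    """Yield one INSERT statement per track point in the GPX file."""
    root = ET.parse(gpx_path).getroot()
    ns = root.tag.split("}")[0].strip("{")
    for trkpt in root.iter(f"{{{ns}}}trkpt"):
        time_el = trkpt.find(f"{{{ns}}}time")
        if time_el is None or not time_el.text:
            continue
        yield ("INSERT INTO positions (user_id, track_id, time, latitude, longitude) "
               f"VALUES ({user_id}, {track_id}, '{time_el.text}', "
               f"{float(trkpt.get('lat'))}, {float(trkpt.get('lon'))});")

if __name__ == "__main__":
    for statement in gpx_to_sql(sys.argv[1]):
        print(statement)
    # e.g.  python gpx_to_sql.py takeout-2015-08.gpx | mysql ulogger
```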

Next, I got a set of hashpoint offsets. I only had personal positional data going back to around 2010, so I didn’t need to account for the pre-2008 absence of the 30W time zone rule. I’ve had only one trip to the Southern hemisphere in that period, and I checked that manually. A little rounding and grouping in SQL gave me each graticule I’d been in on every date. Unsurprisingly, I spend most of my time in the 51 -1 graticule. Adding (or, for the Western hemisphere, subtracting) the offset gave me that day’s hashpoint coordinates for each graticule I’d visited. Nice.

SQL retrieving hashpoints for every graticule I've been to in the last 10 years, grouped by date.
Preloading the offsets into a temporary table made light work of listing all the hashpoints in all the graticules I’d visited, by date. Note that some dates (e.g. 2011-08-04, above) saw me visit multiple graticules.
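
For anyone unfamiliar, the offsets come from the standard xkcd geohashing construction: an MD5 hash of the date plus that day’s applicable Dow Jones opening yields two fractions, which get appended to the graticule’s integer coordinates. A quick sketch of that (the DJIA figure below is made up for illustration; I used ready-made offsets rather than computing my own):

```python
# The standard geohashing offset construction (xkcd 426), sketched in Python.
# Graticules east of 30°W use the previous trading day's Dow opening (the
# "30W rule"); the DJIA value in the example below is made up.
import hashlib

def hash_fractions(date: str, djia_open: str) -> tuple[float, float]:
    """MD5("YYYY-MM-DD-<DJIA open>") -> (latitude fraction, longitude fraction)."""
    digest = hashlib.md5(f"{date}-{djia_open}".encode()).hexdigest()
    return int(digest[:16], 16) / 16**16, int(digest[16:], 16) / 16**16

def hashpoint(grat_lat: str, grat_lon: str, lat_frac: float, lon_frac: float):
    """Append each fraction to the graticule, away from zero ("-1" -> -1.xxx)."""
    def combine(grat: str, frac: float) -> float:
        sign = -1 if grat.startswith("-") else 1   # strings keep "-0" distinct from "0"
        return sign * (abs(int(grat)) + frac)
    return combine(grat_lat, lat_frac), combine(grat_lon, lon_frac)

lat_f, lon_f = hash_fractions("2015-08-22", "17215.23")   # illustrative DJIA open
print(hashpoint("55", "-3", lat_f, lon_f))
```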

The correct way to find the proximity of my positions to each geohashpoint is, of course, to use WGS84. That’s an easy thing to do if you’re using a database that supports it. My database… doesn’t. So I just used Pythagoras’ theorem to find positions I’d visited that were within 0.15° of that day’s hashpoint.

Using Pythagoras for geopositional geometry is, of course, wrong. Why? Because the physical length of a “degree” varies depending on latitude, and – more importantly – a degree of latitude is not the same distance as a degree of longitude. The ratio varies by latitude: only an idealised equatorial graticule would be square!
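
To make the trade-off concrete, here’s a sketch of both checks: the kind of cheap degrees-based Pythagoras filter I used (mine lived in SQL rather than Python), and a haversine calculation for turning a candidate into an approximate distance in metres afterwards. The coordinates are arbitrary examples:

```python
# Crude first-pass filter vs. a proper great-circle distance, for comparison.
from math import radians, sin, cos, asin, sqrt, hypot

def near_by_pythagoras(lat1, lon1, lat2, lon2, threshold_deg=0.15):
    # Treats a degree of latitude and a degree of longitude as the same length:
    # wrong, but a cheap first pass when the database can't do WGS84 geometry.
    return hypot(lat1 - lat2, lon1 - lon2) <= threshold_deg

def haversine_metres(lat1, lon1, lat2, lon2):
    # Great-circle distance on a spherical Earth (radius ~6,371 km).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

print(near_by_pythagoras(55.001, -3.001, 55.0, -3.0))    # True
print(haversine_metres(55.0, -3.0, 55.0, -3.15))         # ~9,570 m  (0.15° of longitude at 55°N)
print(haversine_metres(55.0, -3.0, 55.15, -3.0))         # ~16,680 m (0.15° of latitude)
```

The two distances at the bottom illustrate the squashed graticule: at this latitude, 0.15° east–west is barely more than half the distance of 0.15° north–south.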

But for this case, I don’t care: the data’s going to be fuzzy and require some interpretation anyway. Not least because Google’s positioning has the tendency to, for example, spot a passing train’s WiFi and assume I’ve briefly teleported to Euston Station, which is apparently where Google thinks that hotspot “lives”.

GIF animation comparing routes recorded by Google My Location with those recorded by my GPSr: they're almost identical
I overlaid randomly-selected Google My Location and GPSr routes to ensure that they coincided, as an accuracy test. It’s interesting to note that my GPSr points cluster when I was moving slower, suggesting it polls on a timer. Conversely, Google’s points cluster when I was using data (can you see the bit where I used a chat app?), suggesting that Google Location Services ramps up the accuracy and poll frequency when you’re actively using your device.

I assumed that my algorithm would detect all of my actual geohash finds, and yes: all of these appeared as-expected in my results. This was a good confirmation that my approach worked.

And, crucially: about a dozen additional candidate points showed up in my search. Most of these – listed at the end of this post – were 50m+ away from the hashpoint and involved me driving or cycling past on a nearby road… but one hashpoint stuck out.

Hashing by accident

Annabel riding on Tom's shoulders in Edinburgh.
We all had our roles to play in our trip to Edinburgh. Tom… was our pack mule.

In August 2015 we took a trip up to Edinburgh to see a play of Ruth‘s brother Robin‘s. I don’t remember much about the play because I was on keeping-the-toddler-entertained duty and so had to excuse myself pretty early on. After the play we drove South, dropping Tom off at Lanark station.

We exited Lanark via the Hyndford Bridge… which is – according to the map – tantalisingly-close to the 2015-08-22 55 -3 hashpoint: only about 23 metres away!

The 2015-08-22 55 -3
Google puts the centre of the road I drove down only 23m from the 2015-08-22 55 -3 hashpoint (of course, I was actually driving on the near side of the road and may have been closer still).

That doesn’t feel quite close enough to justify retroactively claiming the geohash, tempting though it would be to use it as a vehicle to my easy geohash ribbon. Google doesn’t provide error bars for their exported location data so I can’t draw a circle of uncertainty, but it seems unlikely that I passed through this very close hashpoint.

Pity. But a fun exercise. This was the nearest of my near misses, but plenty more turned up in my search, too:

  1. 2013-09-28 54 -2 (9,000m)
    Near a campsite on the River Eden. I drove past on the M6 with Ruth on the way to Loch Lomond for a mini-break to celebrate our sixth anniversary. I was never more than 9,000 metres from the hashpoint, but Google clearly had a moment when it couldn’t get good satellite signal and tried to trilaterate my position from cell masts, coincidentally guessing, for a few seconds, that I was much closer. There are a few such erroneous points in my data but they’re pretty obvious and easy to spot, so my manual filtering process caught them.
  2. 2019-09-13 52 -0 (719m)
    A600, near Cardington Airstrip, south of Bedford. I drove past on the A421 on my way to Three Rings‘ “GDPR Camp”, which was more fun than it sounds, I promise.
  3. 2014-03-29 53 -1 (630m)
    Spen Farm, near Bramham Interchange on the A1(M). I drove past while heading to the Nightline Association Conference to talk about Three Rings. Curiously, I came much closer to the hashpoint the previous week when I drove a neighbouring road on my way to York for my friend Matt’s wedding.
  4. 2020-05-06 51 -1 (346m)
    Inside Kidlington Police Station! Short of getting arrested, I can’t imagine how I’d easily have gotten to this one, but it’s moot anyway because I didn’t try! I’d taken the day off work to help with child-wrangling (as our normal childcare provisions had been scrambled by COVID-19), and at some point during the day we took a walk and came somewhat near to the hashpoint.
  5. 2016-02-05 51 -1 (340m)
    Garden of a house on The Moors, Kidlington. I drove past (twice) on my way to and from the kids’ old nursery. Bonus fact: the house directly opposite the one whose garden contained the hashpoint is a house that I looked at buying (and visited), once, but didn’t think it was worth the asking price.
  6. 2017-08-30 51 -1 (318m)
    St. Frieswide Farm, between Oxford and Kidlington. I cycled past on Banbury Road twice – once on my way to and once on my way from work.
  7. 2015-01-25 51 -1 (314m)
    Templar Road, Cutteslowe, Oxford. I’ve cycled and driven along this road many times, but on the day in question the closest I came was cycling past on nearby Banbury Road while on the way to work.
  8. 2018-01-28 51 -1 (198m)
    Stratfield Brake, Kidlington. I took our youngest by bike trailer this morning to his Monkey Music class: normally at this point in history Ruth would have been the one to take him, but she had a work-related event that she couldn’t miss in the morning. I cycled right by the entrance to this nature reserve: it could have been an ideal location for a geohash!
  9. 2014-01-24 51 -1 (114m)
    On the Marston Cyclepath. I used to cycle along this route on the way to and from work most days back when I lived in Marston, but by 2014 I lived in Kidlington and so I’d only cycle past the end of it. So it was that I cycled past the Linacre College end of the path, around 114m away from the hashpoint, on this day.
  10. 2015-06-10 51 -1 (112m)
    Meadow near Peartree Interchange, Oxford. I stopped at the filling station on the opposite side of the roundabout, presumably to refuel a car.
  11. 2020-02-27 51 -1 (70m)
    This was a genuine attempt at a hashpoint that I failed to reach and was so sad about that I never bothered to finish writing up. The hashpoint was very close to (but, it turns out, just out of sight of) a geocache I’d hidden in the vicinity, and I was hopeful that I might be able to score the most-epic/demonstrable déjà vu/hash collision achievement ever, not least because I had pre-existing video evidence that I’d been at the coordinates before! Unfortunately it wasn’t to be: I had inadequate footwear for the heavy rains that had fallen in the days that preceded the expedition and I was in a hurry to get home, get changed, and go catch a train to go and see the Goo Goo Dolls in concert. So I gave up and quit the expedition. This turned out to be the right decision: going to the concert was one of the last “normal” activities I got to do before the COVID-19 lockdown made everybody’s lives weird.
  12. 2014-05-23 51 -1 (61m)
    White Way, Kidlington, near the Bicester Road to Green Road footpath. I passed close by while cycling to work, but I’ve since walked through this hashpoint many times: it’s on a route that our eldest sometimes used to take when walking home from her school! With the exception only of the very-near-miss in Lanark, this was my nearest “near miss”.
  13. 2015-08-22 55 -3 (23m)
    So near, yet so far. 🙄
Rain in Lanark, seen through a car window
No silly grin, but coincidentally – perhaps by accident – I took a picture out of the car window shortly after we passed the hashpoint. This is what Lanark looks like when you drive through it in the rain.

Pay Up, Or We’ll Make Google Ban Your Ads

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A new email-based extortion scheme apparently is making the rounds, targeting Web site owners serving banner ads through Google’s AdSense program. In this scam, the fraudsters demand bitcoin in exchange for a promise not to flood the publisher’s ads with so much bot and junk traffic that Google’s automated anti-fraud systems suspend the user’s AdSense account for suspicious traffic.

The shape of our digital world grows increasingly strange. As anti-DoS techniques grow better and more and more uptime-critical websites hide behind edge caches, zombie network operators remain one step ahead and find new and imaginative ways to extort money from their victims. In this new attack, the criminal demands payment (in cryptocurrency) under threat that, if it’s not delivered, they’ll unleash an army of bots that make it look as if the victim is trying to scam their own advertising network, thereby getting the victim’s site demonetised.

Note #14218

DanQ.me scoring 100/100 in all four Lighthouse categories

#TodayILearned that if you get a perfect 100-100-100-100 + #PWA score from @Google‘s site performance/accessibility/best practice/SEO tool “@____lighthouse” then you’re treated to an animated fireworks display alongside your scores. (Yes, this is for danq.me.)

Goodbye Google Analytics, Hello Fathom

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Big news! This site is no longer using Google Analytics and I’ve switched to a self-hosted version of brand new analytics product Fathom.

Fathom Analytics dashboard

 

Fathom is very simple. It only tracks 4 things: Unique Visitors, Page Views, Time on Site, and Bounce Rate. It shows me a chart of page views and visitors and then gives me a breakdown of referrers and top performing content. That’s it. And to be quite honest, that’s about all I need from my blog analytics.

You know what, Dave: me too! I’ve been running Google Analytics since forever and Piwik/Matomo (in parallel with it) for about a year and honestly: I get more than enough of what I need from the latter. So you’ve inspired me to cut the line with Google: after all, all I was doing was selling them my friends’ data in exchange for some analytics I wasn’t really paying attention to… and I’d frankly rather not.

So: for the first time in a decade or so, there’s no Google Analytics on this site. Woop!

Update 2023-12-13: I eventually went further still and dropped all analytics, even self-hosted variants, and it feels great.

Google AMP lowered our page speed, and there’s no choice but to use it – unlike kinds

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

We here at unlike kinds decided that we had to implement Google AMP. We have to be in the Top Stories section because otherwise we’re punted down the page and away from potential readers. We didn’t really want to; our site is already fast because we made it fast, largely with a combination of clever caching and minimal code. But hey, maybe AMP would speed things up. Maybe Google’s new future is bright.

It isn’t. According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95. (Note: I updated these numbers recently with an average after running the test 6 times per version.)

I’ve complained about AMP before plenty – starting here, for example – but it’s even harder to try to see the alleged “good sides” of the technology when it doesn’t even deliver the one thing it was supposed to. The Internet should be boycotting this shit, not drinking the Kool-Aid.

Google’s Three Gender Emoji Future

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Coming to Android this year: a third gender option for emojis such as Police Officer, Zombie, Person Facepalming, Construction Worker and People With Bunny Ears.

Revealed by Google in a submission to the Unicode Consortium last week, these changes signal a new direction from Google which has in recent years played ball with other vendors in overlooking Unicode guidelines, in favor of cross platform compatibility.

Above: Google will introduce a distinct appearance for emojis which don’t specify any gender in 2019. Image: Google designs / Emojipedia composite.

In giving public notice via Unicode, Google hopes that other vendors will join them in this effort to standardize many of the emoji which don’t specify a gender.

This builds on an initial few gender inclusive revisions made by Google in 2018.

Killed by Google – The Google Graveyard & Cemetery

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Fusion Tables… Fabric… Inbox… Google+… goo.gl… Goggles… Site Search… Glass… Now… Code… Bump!… Gears… Desktop Search…

Just some of the projects and services that Google has offered and then killed; this site aims to catalogue them all. Some, like Wave, were given to the community (Wave lived on for a while as an Apache project but is now basically dead), but most, like Reader, were assassinated in a misguided attempt to drive traffic to other services (ultimately, Reader was killed perhaps to try to get people onto Google+, which was then also killed).

Google can’t be trusted to maintain the services of theirs that you depend upon (relevant XKCD?). That’s not a phenomenon that’s unique to Google, of course: it’s perhaps just that they produce so many new and often-experimental services that they inevitably cease supporting more of them than some of the many other providers who’ve killed the silos that people depended upon.

How could things be better? For a start, Google could make a better commitment to open-source and developing standards rather than platforms. But if you don’t think you can trust them to do that – and you can’t – then the only solution for individuals is to use fewer Google products to break the Google-monoculture. Encourage the competition to weaken their position, and break free from silos in general where it’s possible to do so.

148+ projects and services dead. But hey, we’re getting Stadia so everything’s okay, right? <sigh>

Minimal Google Analytics Snippet

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

<script>
(function(a,b,c){var d=a.history,e=document,f=navigator||{},g=localStorage,
h=encodeURIComponent,i=d.pushState,k=function(){return Math.random().toString(36)},
l=function(){return g.cid||(g.cid=k()),g.cid},m=function(r){var s=[];for(var t in r)
r.hasOwnProperty(t)&&void 0!==r[t]&&s.push(h(t)+"="+h(r[t]));return s.join("&")},
n=function(r,s,t,u,v,w,x){var z="https://www.google-analytics.com/collect",
A=m({v:"1",ds:"web",aip:c.anonymizeIp?1:void 0,tid:b,cid:l(),t:r||"pageview",
sd:c.colorDepth&&screen.colorDepth?screen.colorDepth+"-bits":void 0,dr:e.referrer||
void 0,dt:e.title,dl:e.location.origin+e.location.pathname+e.location.search,ul:c.language?
(f.language||"").toLowerCase():void 0,de:c.characterSet?e.characterSet:void 0,
sr:c.screenSize?(a.screen||{}).width+"x"+(a.screen||{}).height:void 0,vp:c.screenSize&&
a.visualViewport?(a.visualViewport||{}).width+"x"+(a.visualViewport||{}).height:void 0,
ec:s||void 0,ea:t||void 0,el:u||void 0,ev:v||void 0,exd:w||void 0,exf:"undefined"!=typeof x&&
!1==!!x?0:void 0});if(f.sendBeacon)f.sendBeacon(z,A);else{var y=new XMLHttpRequest;
y.open("POST",z,!0),y.send(A)}};d.pushState=function(r){return"function"==typeof d.onpushstate&&
d.onpushstate({state:r}),setTimeout(n,c.delay||10),i.apply(d,arguments)},n(),
a.ma={trackEvent:function o(r,s,t,u){return n("event",r,s,t,u)},
trackException:function q(r,s){return n("exception",null,null,null,null,r,s)}}})
(window,"XX-XXXXXXXXX-X",{anonymizeIp:true,colorDepth:true,characterSet:true,screenSize:true,language:true});
</script>

This is cute: a Google Analytics code snippet that results in a payload about a fiftieth of the size of the one provided by Google but still provides most of the important features.

Goodbye, EdgeHTML

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

This may sound melodramatic, but it’s not. The “browser engines” — Chromium from Google and Gecko Quantum from Mozilla — are “inside baseball” pieces of software that actually determine a great deal of what each of us can do online. They determine core capabilities such as which content we as consumers can see, how secure we are when we watch content, and how much control we have over what websites and services can do to us. Microsoft’s decision gives Google more ability to single-handedly decide what possibilities are available to each one of us.

From a business point of view Microsoft’s decision may well make sense. Google is so close to almost complete control of the infrastructure of our online lives that it may not be profitable to continue to fight this. The interests of Microsoft’s shareholders may well be served by giving up on the freedom and choice that the internet once offered us. Google is a fierce competitor with highly talented employees and a monopolistic hold on unique assets. Google’s dominance across search, advertising, smartphones, and data capture creates a vastly tilted playing field that works against the rest of us.

From a social, civic and individual empowerment perspective ceding control of fundamental online infrastructure to a single company is terrible. This is why Mozilla exists. We compete with Google not because it’s a good business opportunity. We compete with Google because the health of the internet and online life depend on competition and choice. They depend on consumers being able to decide we want something better and to take action.

Will Microsoft’s decision make it harder for Firefox to prosper? It could. Making Google more powerful is risky on many fronts. And a big part of the answer depends on what the web developers and businesses who create services and websites do. If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was released. And it could happen again.

If you care about what’s happening with online life today, take another look at Firefox. It’s radically better than it was 18 months ago — Firefox once again holds its own when it comes to speed and performance. Try Firefox as your default browser for a week and then decide. Making Firefox stronger won’t solve all the problems of online life — browsers are only one part of the equation. But if you find Firefox is a good product for you, then your use makes Firefox stronger. Your use helps web developers and businesses think beyond Chrome. And this helps Firefox and Mozilla make overall life on the internet better — more choice, more security options, more competition.

Scathing but well-deserved dig at Microsoft by Mozilla, following on from the Edge-switch-to-Chromium I’ve been going on about. Chris is right: more people should try Firefox (it’s been my general-purpose browser on desktop and mobile ever since Opera threw in the towel and joined the Chromium hivemind in 2013, and on-and-off plenty before then) – not just because it’s a great browser (and it is!) but also now because it’s important for the diversity and health of the Web.

(Reprinted in full under a creative commons license.)

While we Blink, we loose [sic] the Web

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

We used to have much more diversity in terms of browser engines years ago than we do today. This is easy to understand as the Web in 2018 is far more complex than it was in the early noughties. It is very costly to develop and maintain a Web engine and few companies have the necessary talent and cash to do it. Microsoft is one of those companies but the fact that it might be throwing in the towel on its engine signals a bad development for all of us.

Further evaluation of the dangers of the disappearing diversity on the Web, following in the theme of my thoughts the other day about Microsoft’s adoption of Chromium instead of EdgeHTML in its browser.

Andre raises a real point: how will we fight for a private and decentralised Web when it becomes “the Google Web”?

Risking a Homogeneous Web

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.

We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.

This essay follows-up nicely on my concerns about Microsoft’s move from EdgeHTML to Chromium in Edge, but goes further to discuss some of the bigger problems of a homogeneous web, especially one under Google’s influence.

Edge may be becoming Chromium-powered, and that’s terrible

Microsoft engineers have been spotted committing code to Chromium, the backend of Google Chrome and many other web browsers. This, among other things, has led to speculation that Microsoft’s browser, Edge, might be planned to switch from its current rendering engine (EdgeHTML) to Blink (Chromium’s). This is bad news.

This page in Microsoft Edge
This post, as it would appear if you were looking at it in Edge. Which you might be, I suppose.

The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or something similar (and certainly not Edge) for development and debugging anyway, etc. The problem comes perhaps because they’re too young to remember the First Browser War and its aftermath. Let me summarise:

  1. Once upon a time – let’s call it the mid-1990s – there were several web browsers: Netscape Navigator, Internet Explorer, Opera, etc. They all used different rendering engines and so development was sometimes a bit of a pain, but only if you wanted to use the latest most cutting-edge features: if you were happy with the standard, established features of the Web then your site would work anywhere, as has always been the case.
    Best viewed with... any damn browser
  2. Then, everybody started using just one browser: following some shady dealings and monopoly abuse, 90%+ of Web users started using just one web browser, Internet Explorer. By the time anybody took notice, their rivals had been economically crippled beyond any reasonable chance of recovery, but the worst had yet to come…
    Best viewed with Internet Explorer
  3. Developers started targeting only that one browser: instead of making websites, developers started making “Internet Explorer sites” which were only tested in that one browser or, worse yet, only worked at all in that browser, actively undermining the Web’s position as an open platform. As the grip of the monopoly grew tighter, technological innovation was centred around this single platform, leading to decade-long knock-on effects.
  4. The Web ceased to grow new features: from the release of Internet Explorer 6 there were no significant developments in the technology of the Web for many years. The lack of competition pushed us into a period of stagnation. A decade and a half later, we’re only just (finally) finishing shaking off this unpleasant bit of our history.
    "Netscape sux"

History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too. Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites. More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the switch from Chrome to e.g. Firefox (and you absolutely should) you might find that YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.

Chrome is becoming the new Internet Explorer 6, and that’s a huge problem. Rachel Nabors wrote in her excellent article The Ecological Impact of Browser Diversity:

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

The Circle of Browsers, by Rachel Nabors

Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today. So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?

We have already lost one.

We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.

Three left. Spend them wisely.

As much as I don’t like having to work-around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web circa most of the first decade of this century. Please help keep browsers diverse: nobody wants to start seeing this shit –

Best viewed with Google Chrome

Update: this is now confirmed. A sad day for the Web.


Rehabilitating Google AMP: My failed attempt

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This article is a follow-up to my article “Why Google AMP is a threat to the Open Web”. In the comments of that article I promised I’d soon provide a follow-up, and for reasons I’ll get into, that has not been possible until now – but now I’m finally providing it.

Back in February I wrote an article saying how I believed Google AMP has been imposed on the web by Google as a ‘standard’ for developing fast webpages, and my dismay about that. Google apparently developed this as an internal project without any open collaboration, and avoiding the W3C standardization processes. Google made implementation of Google AMP a requirement to show at the top of the search results for common news searches.

To many of us open web folk, Google’s AMP violated the widely held principle of search engines not putting bias into search results, and/or the principle of web standards (take your pick – it would not be bias if it was a standardized approach that the wider web community had agreed upon).

You know how I feel about AMP. I’m not alone, and others are doing a pretty good job of talking to Google about our concerns. Unfortunately, Google aren’t listening.

Now that Google+ has been shuttered, I should air my dirty laundry…

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Now that Google+ has been shuttered, I should air my dirty laundry on how awful the project and exec team was.

I’m still pissed about the bait and switch they pulled by telling me I’d be working on Chrome, then putting me on this god forsaken piece of shit on day one.

This will be a super slow burn that goes back many years. I’ll continue to add to it over the next couple of days. I’ll preface it with a bunch of backstory and explain what I had left behind, which made me more unhappy about the culture I had come into.

I spent most of my early career working for two radical sister non-profit orgs. I was the only designer working on anywhere from 4-5 different products at the same time. All centered around activism and used by millions of people.

It’s how I cut my teeth. Learned to be the designer that I am today. Most importantly, the people I worked for are imho some of the greatest people on the planet. Highly intelligent, empathetic, caring, and true role models for a young me. I adore them.

You might not know who they are, but if you’re reading this then you have definitely seen their work. Maybe OpenCongress, or Miro, or maybe Amara which is Vimeo’s partner transcription service. Definitely Fight for the Future, our internet defenders, which was shortly after me.

I married the love of my life in 2008, started a family, and at some point realized that I simply needed to make a better living. No matter how prolific, non-profits usually can’t provide the type of income that you need for a growing family with huge ambitions.

So as I gained visibility – via @dribbble – I started to field recruiters and consider new opportunities. Mostly little startups. I interviewed at one (Rockmelt) and they passed on me (hi, @iamxande ?).

Got an email from Kickstarter (hi, @amotion ?). Schlepped to New York and wasted days of time to be passed on by their founders. Then they unfollowed me on twitter. At least I ate some deli. ?

Then Google reached out. I remember that “holy shit” moment. “Me!? Are they kidding?? The schmuck who tested out of high school and dropped out of college??” They told me I’d interview to work on Chrome. I was over the moon. I remember Manda tearing up. God I love her.

They gave me a little bit of time for a design exercise. You can see it here in all its dated glory: morganallanknutson.com/google/ Click and hold for the overlay. More schlepping from LA and an interview at their silly college-like campus. I was a nervous wreck.

The process felt very haphazard. At one point a front-end dev with a bow-tie grilled me on CSS and asked some super dumb questions. My advocate (a sweetheart named Peter) seemed to be rushing people through, quelling their fears. I still appreciate his belief in me to this day.

I felt like I had done ok. The last two interviews that I failed at were real shots to the heart. I took this one incredibly seriously. I wanted this job so badly. I wanted to prove I was worthy.

Weeks went by and I heard nothing. I accepted the inevitable and started responding to other recruiters. It was ok. I wasn’t joining the big leagues. I could play triple-A ball for longer. As long as I got up to SF where the opportunities were.

I took a gig with a failing news startup (lol) called Ongo (hi, @bethdean ?). They got me up here. I guess it was a bit of a Hail Mary for them. In a couple of months I knocked out more work than they could have built in a year with their eng team. Then…

Google got back in touch almost 3-4 months after the interview (who does this??).

I got the job.

To be continued…

If you ever thought that Google+, despite its laudable aims, was rolled out like a shitstorm… imagine what it must have been like behind the scenes. Actually, imagine no more – read this thread for a taste of the full horror.