This article is a follow-up to my article “Why Google AMP is a threat to the Open Web”. In the comments of that article I promised I’d provide a follow-up and, for reasons I’ll get into, that hasn’t been possible until now – but here it is at last.
Back in February I wrote an article explaining why I believed Google AMP had been imposed on the web by Google as a ‘standard’ for developing fast webpages, and my dismay about that. Google apparently developed it as an internal project, without any open collaboration and bypassing the W3C standardization processes, and then made implementation of Google AMP a requirement for appearing at the top of the search results for common news searches.
To many of us open web folk, Google’s AMP violated the widely held principle that search engines shouldn’t put bias into search results, and/or the principle of web standards (take your pick – it would not be bias if it were a standardized approach that the wider web community had agreed upon).
…
You know how I feel about AMP. I’m not alone, and others are doing a pretty good job of talking to Google about our concerns. Unfortunately, Google aren’t
listening.
Now that Google+ has been shuttered, I should air my dirty laundry on how awful the project and exec team was.
I’m still pissed about the bait and switch they pulled by telling me I’d be working on Chrome, then putting me on this god forsaken piece of shit on day one.
This will be a super slow burn that goes back many years. I’ll continue to add to it over the next couple of days. I’ll preface it with a bunch of backstory and explain what I had left behind, which made me more unhappy about the culture I had come into.
I spent most of my early career working for two radical sister non-profit orgs. I was the only designer working on anywhere from 4-5 different products at the same time. All centered
around activism and used by millions of people.
It’s how I cut my teeth. Learned to be the designer that I am today. Most importantly, the people I worked for are imho some of the greatest people on the planet. Highly intelligent,
empathetic, caring, and true role models for a young me. I adore them.
You might not know who they are, but if you’re reading this then you have definitely seen their work. Maybe OpenCongress, or Miro, or maybe Amara which is Vimeo’s partner
transcription service. Definitely Fight for the Future, our internet defenders, which was shortly after me.
I married the love of my life in 2008, started a family, and at some point realized that I simply needed to make a better living. No matter how prolific, non-profits usually can’t
provide the type of income that you need for a growing family with huge ambitions.
So as I gained visibility – via @dribbble – I started to field recruiters and consider new opportunities. Mostly little startups. I interviewed at one (Rockmelt) and they passed on me (hi, @iamxande).
Got an email from Kickstarter (hi, @amotion). Schlepped to New York and wasted days of time to be passed on by their founders. Then they unfollowed me on twitter. At least I ate some deli.
Then Google reached out. I remember that “holy shit” moment. “Me!? Are they kidding?? The schmuck who tested out of high school and dropped out of college??” They told me I’d interview to work on Chrome. I was over the moon. I remember Manda tearing up. God I love her.
They gave me a little bit of time for a design exercise. You can see it here in all its dated glory: morganallanknutson.com/google/ Click and hold for the overlay. More schlepping from LA and an interview at their silly college-like campus. I was a nervous wreck.
The process felt very haphazard. At one point a front-end dev with a bow-tie grilled me on CSS and asked some super dumb questions. My advocate (a sweetheart named Peter) seemed to be
rushing people through, quelling their fears. I still appreciate his belief in me to this day.
I felt like I had done ok. The last two interviews that I failed at were real shots to the heart. I took this one incredibly seriously. I wanted this job so badly. I wanted to prove I
was worthy.
Weeks went by and I heard nothing. I accepted the inevitable and started responding to other recruiters. It was ok. I wasn’t joining the big leagues. I could play triple-A ball for
longer. As long as I got up to SF where the opportunities were.
I took a gig with a failing news startup (lol) called Ongo (hi, @bethdean). They got me up here. I guess it was a bit of a Hail Mary for them. In a couple of months I knocked out more work than they could have built in a year with their eng team. Then…
Google got back in touch almost 3-4 months after the interview (who does this??).
I got the job.
To be continued…
…
If you ever thought that Google+, despite its laudable aims, was rolled out like a shitstorm… imagine what it must have been like behind the scenes. Actually, imagine no more – read this thread for a taste of the full horror.
I’ve come to believe that the goal of any good framework should be to make itself unnecessary.
Brian said it explicitly of his PhoneGap project:
The ultimate purpose of PhoneGap is to cease to exist.
That makes total sense, especially if your code is a polyfill—those solutions are temporary by d…
As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that
there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.
Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting
their “regular” pages put on weight.
In fact, AMP’s evolution has made it a viable solution to build entire websites.
On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even
those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that
Google are keen to change).
But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance
…
You all know my feelings on AMP already, I’m sure. As Jeremy points out, our optimistic ideas that these problems might go away as AMP “made itself redundant” are turning out not to be true, and Google continues to abuse its monopoly on search to push its walled garden further into the mainstream. Read his full article…
There’s a story that young network engineers are sometimes told to help them understand network stacks and/or the OSI model, and it goes something like this:
You overhear a conversation between two scientists on the subject of some topic relevant to their field of interest. But as you listen more-closely, you realise that the scientists
aren’t in the same place at all but are talking to one another over the telephone (presumably on speakerphone, given that you can hear them both, I guess). As you pay more attention
still, you realise that it isn’t the scientists on the phone call at all but their translators: each scientist speaks to their translator in the scientist’s own language, and the
translators are translating what they say into a neutral language shared with the other translator, who translates it into the language spoken by the other scientist. Ultimately, the
two scientists are communicating with one another, but they’re doing so via a “stack” at their end which only needs to be conceptually the same as the “stack” at the other end as far
up as the step-below-them (the “first link” in their communication, with the translator). Below this point, they’re entrusting the lower protocols (the languages, the telephone
system, etc.), in which they have no interest, to handle the nitty-gritty on their behalf.
This kind of delegation to shared intermediary protocols is common in networking and telecommunications. The reason relates to opportunity cost, or – for those of you who are Discworld fans – the Sam Vimes’ “Boots” Theory.
Obviously an efficiency could be gained here if all scientists learned a lingua franca, a universal shared second
language for their purposes… but most-often, we’re looking for a short-term solution to solve a problem today, and the short-term solution is to find a work-around that fits
with what we’ve already got: in the case above, that’s translators who share a common language. For any given pair of people communicating, it’s more-efficient to use a translator, even
though solving the global problem might be better accomplished by a universal second language (perhaps Esperanto, for valid if Eurocentric reasons!).
The phenomenon isn’t limited to communications, though. Consider self-driving cars. If you look back to autonomous vehicle designs of the 1950s (because yes, we’ve been talking about
how cool self-driving cars would be for a long, long time), they’re distinctly different from the ideas we see today. Futurism of the 1950s focussed on adapting the roads themselves to
make them more-suitable for self-driving vehicles, typically by implanting magnets or electronics into the road surface itself or by installing radio beacons alongside highways to allow
the car to understand its position and surroundings. The modern approach, on the other hand, sees self-driving cars use LiDAR and/or digital cameras to survey their surroundings and complex computer hardware to interpret the data.
This difference isn’t just a matter of the available technology (although technological developments certainly inspired the new approach): it’s a fundamentally-different outlook! Early
proposals for self-driving cars aimed to overhaul the infrastructure of the road network: a “big solution” on the scale of teaching everybody a shared second language. But nowadays we
instead say “let’s leave the roads as they are and teach cars to understand them in the same way that people do.” The “big solution” is too big, too hard, and asking everybody
to chip in a little towards outfitting every road with a standardised machine-readable marking is a harder idea to swallow than just asking each person who wants to become an early
adopter of self-driving technology to pay a lot to implement a more-complex solution that works on the roads we already have.
This week, Google showed off Duplex, a technology that they claim can perform the same kind of delegated-integration for our
existing telephone lives. Let’s ignore for a moment the fact that this is clearly going to be
overhyped and focus on the theoretical potential of this technology, which (even if it’s not truly possible today) is probably inevitable as chatbot technology improves: what does
this mean for us? Instead of calling up the hairdresser to make an appointment, Google claim, you’ll be able to ask Google Assistant to do it for you. The robot will call the
hairdresser and make an appointment on your behalf, presumably being mindful of your availability (which it knows, thanks to your calendar) and travel distance. Effectively, Google
Assistant becomes your personal concierge, making all of those boring phone calls so that you don’t have to. Personally, I’d be more than happy to outsource to a computer every time
I’ve had to sit in a telephone queue, giving the machine a summary of my query and asking it to start relaying that summary to the human agent at the other end while I make my
way back to the phone. There are obviously ethical
considerations here too: I don’t like being hounded by robot callers and so I wouldn’t want to inflict that upon service providers… and I genuinely don’t know if it’s better or
worse if they can’t tell whether they’re talking to a machine or not.
But ignoring the technology and the hype and the ethics, there’s still another question that this kind of technology raises for me: what will our society look like when this kind of
technology is widely-available? As chatbots become increasingly human-like, smarter, and cheaper, what kinds of ways can we expect to interact with them and with one another? By
the time I’m able to ask my digital concierge to order me a pizza (safe in the knowledge that it knows what I like and will ask me if it’s unsure, has my credit card details, and is
happy to make decisions about special offers on my behalf where it has a high degree of confidence), we’ll probably already be at a point at which my local takeaway also has a
chatbot on-staff, answering queries by Internet and telephone. So in the end, my chatbot will talk to their chatbot… in English… and work it out between the two of them.
Let that sink in for a moment: because we’ve a tendency to solve small problems often rather than big problems rarely and we’ve an affinity for backwards-compatibility, we will probably
reach the point within the lifetimes of people alive today that a human might ask a chatbot to call another chatbot: a colossally-inefficient way to exchange information built
by instalments on that which came before. If you’re still sceptical that the technology could evolve this way, I’d urge you to take a look at how the technologies underpinning the
Internet work and you’ll see that this is exactly the kind of evolution we already see in our communications technology: everything gets stacked on top of a popular existing
protocol, even if it’s not-quite the right tool for the job, because it makes one fewer problem to solve today.
Hacky solutions on top of hacky solutions work: the most believable thing about Max Headroom’s appearance in
Ready Player One (the book, not the film: the latter presumably couldn’t get the rights to the character) as a digital assistant was the versatility of his conversational
interface.
By the time we’re talking about a “digital concierge” that knows you better than anyone, there’s no reason that it couldn’t be acting on your behalf in other matters. Perhaps in the
future your assistant, imbued with intimate knowledge about your needs and interests and empowered to negotiate on your behalf, will be sent out on virtual “dates” with other people’s
assistants! Only if it and the other assistant agree that their owners would probably get along will it suggest that you and the other human meet in the real world. Or you could have
your virtual assistant go job-hunting for you, keeping an eye out for positions you might be interested in and applying on your behalf… after contacting the employer to ask the kinds of
questions that it anticipates that you’d like to know: about compensation, work/life balance, training and advancement opportunities, or whatever it thinks matter to you.
We quickly find ourselves colliding with ethical questions again, of course: is it okay that those who have access to more-sophisticated digital assistants will have an advantage?
Should a robot be required to identify itself as a robot when acting on behalf of a human? I don’t have the answers.
But one thing I think we can say, based on our history of putting hacky solutions atop our existing ways of working and the direction in which digital assistants are headed, is
that voice interfaces are going to dominate chatbot development for a while… even where the machines end up talking to one another!
Over the past year, we’ve been comparing Google Maps and Apple Maps in New York,
San Francisco, and London—but some of the biggest differences are outside of large cities.
Take my childhood neighborhood in rural Illinois. Here the maps are strikingly different, and Apple’s looks empty compared to Google’s:
Similar to what we saw earlier this year at Patricia’s Green in San
Francisco, Apple’s parks are missing their green shapes. But perhaps the biggest difference is the building footprints: Google
seems to have them all, while Apple doesn’t have any.
Before the year 2014, there were many people using Google, Facebook, and Amazon. Today, there are still many people using services from those three tech giants (respectively, GOOG,
FB, AMZN). Not much has changed, and quite literally the user interface and features on those sites have remained mostly untouched. However, the underlying dynamics of power on the Web
have drastically changed, and those three companies are at the center of a fundamental transformation of the Web.
It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.
Internet activity itself hasn’t slowed down. It maintains a steady growth, both in amount of users and amount of websites…
If you’re a web developer and you haven’t come across the Google AMP project yet… then what
stone have you been living under? But just in case you have been living under such a stone – or you’re not a web developer – I’ll fill you in. If you believe Google’s elevator
pitch, AMP is “…an open-source initiative aiming to make the web better for all… consistently fast, beautiful and high-performing across devices and distribution platforms.”
I believe that AMP is fucking poisonous and that the people who’ve come out against it by saying it’s “controversial” so far don’t go remotely
far enough. Let me tell you about why.
When you configure your website for AMP – like the BBC, The Guardian, Reddit, and Medium already have – you deliver copies of your pages written using AMP HTML and AMP JS rather than the HTML and Javascript that you normally would. This provides a subset of the functionality you’re used to, but it’s quite a rich subset and gives you a lot of power with minimal effort, whether you’re trying to make carousels, video players, social sharing features, or whatever. Then, when your site is found via Google Search on a mobile device, instead of delivering the user to your AMP HTML page or its regular-HTML alternative… Google serves your site for you, as an ultra-fast precached copy, from their own network. So far, a mixed bag, right? Wrong.
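To make that concrete, here’s roughly what a minimal AMP HTML page looks like – a sketch based on AMP’s published boilerplate, with a placeholder canonical URL and the lengthy mandatory inline styles abbreviated to a comment:

```html
<!doctype html>
<html ⚡ lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <!-- Every AMP page must point back at its "regular" canonical version -->
    <link rel="canonical" href="https://example.com/my-article/">
    <title>My article</title>
    <!-- The mandatory amp-boilerplate <style> rules are omitted here for brevity -->
    <!-- The AMP JS runtime, loaded from Google's CDN -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
  </head>
  <body>
    <h1>My article</h1>
    <!-- A plain <img> isn't allowed; you use AMP's components instead -->
    <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
  </body>
</html>
```

Your “regular” page, meanwhile, advertises its AMP counterpart with a <link rel="amphtml" href="…"> element in its head, which is how Google’s crawler finds the AMP copy and pulls it into its cache.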
Google’s stated plan to favour pages that use AMP creates a publishers’ arms race in which
content creators are incentivised to produce content in the (open-source but) Google-controlled AMP format to rank higher in the search results, or at least regain parity, versus their
competitors. Ultimately, if everybody supported AMP then – ignoring the speed benefits for mobile users (more on that in a moment) – the only winner is Google. Google, who would then
have a walled garden of Facebook-beating proportions around the web. Once Google delivers all of your content, there’s no such thing as a free and open Internet any more.
So what about those speed increases? Yes, the mobile web is slower than we’d like, and AMP improves that. But with the exception of the precaching – which is something that could be achieved by other means – everything that AMP provides can be done using existing technologies. AMP makes it quick and easy for lazy developers to make their pages faster, but if speed on mobile devices is the metric for success, then let’s just start making more mobile-friendly pages! We can make the mobile web better and still let it be our Web: we don’t need to give control of it to Google in order to shave a few milliseconds off the load time.
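As a rough illustration of what I mean by “existing technologies” (the origin, file names, and paths below are made-up placeholders), plain HTML already gives you responsive images, non-blocking scripts, and connection warm-up without handing your pages over to anybody’s cache:

```html
<head>
  <!-- Warm up the connection to a third-party origin we know we'll use -->
  <link rel="preconnect" href="https://images.example.com">
  <!-- Inline only the critical CSS; nothing here waits on a framework -->
  <style>/* critical above-the-fold styles */</style>
  <!-- Don't block HTML parsing while JavaScript downloads -->
  <script src="/js/site.js" defer></script>
</head>
<body>
  <!-- Responsive images: the browser fetches the smallest suitable file -->
  <img src="photo-800.jpg"
       srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
       sizes="(max-width: 600px) 100vw, 600px"
       width="800" height="533" alt="A photo of the thing the article is about">
</body>
```

Each of these degrades gracefully in browsers that don’t support it, and none of it needs a Google-controlled runtime.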
We need to reject AMP, and we need to reject it hard. Right now, it
might be sufficient to stand up to your boss and say “no, implementing AMP on our sites is a bad idea.” But one day, it might mean avoiding the use of AMP entirely (there’ll be browser
plugins to help you, don’t worry). And if it means putting up with a slightly-slower mobile web while web developers remain lazy, so be it: that’s a sacrifice I’m willing to make to
help keep our web free and open. And I hope you will be, too.
Like others, I’m just hoping that Sir Tim will feel the
urge to say something about this development soon.
Google has made much of their Accelerated Mobile Pages project as a solution to bloated websites and frustrated users. But could AMP actually be
bad news for the web, bad news for news, and part of a trend of news distribution that is bad for society in general?
I didn’t start out as strongly anti-AMP. Providing tools for making websites faster is always great, as is supporting users in developing countries with lighter-weight pages that
don’t cost them a month’s wages. It’s totally true that today webpages are in a pretty sorry state…
There’s a very famous Neil Gaiman quote among librarians and lovers of libraries: “Google will bring you back, you know, a hundred thousand answers. A librarian will bring you back
the right one.”…
Sometimes it’s really like we’re living in the future. Exciting new technologies keep appearing, and people just keep… using them as if they’d always been there. If tomorrow we perfected the jetpack, the flying car, and the silver jumpsuit, I’d bet that nobody would think twice about it.
Recently, I’ve had two occasions to use Google+ Hangouts, and I’ve been incredibly impressed.
The first was at Eurovision Night 2012, which was quite a while ago now. Adam did a particularly spectacular job of putting together some wonderful pre-Eurovision entertainments, which were synched-up between our
two houses. Meanwhile, he and I (and Rory and Gareth and occasionally other people) linked up our webcams and spare screens via a Google+ hangout, and… it worked.
It just worked. Now I know that the technology behind this isn’t new: back in 2004, I upgraded the Troma Night set-up in Aberystwyth
to add a second webcam to the Troma Night live feed. But that was one-way, and we didn’t do sound (for lack of bandwidth and concerns about accidental piracy of the soundtracks to the movies we were watching, of all things, rather than for any particularly good reason). This, though, really did “just work”, and we were able to wave at each other and chat
to each other and – mostly – just “share in the moment” of enjoying the Eurovision Song Contest together, just like we would have in person when we lived in the same town.
At the weekend, I was originally supposed to be in Lancashire, hanging out with my family, but owing to a series of unfortunate disasters (by the way: I’m walking with a stick right now
– but that’s not interesting enough to be worth blogging about), I was stuck in Oxford. Despite torrential rain where I was, Preston was quite sunny, and my family decided to have a
barbeque.
I was invited… via Google+. They didn’t have Internet access, so they used a mobile dongle plugged into a laptop. I connected in from my desktop computer and then – later – from my
mobile phone. So yes, this was at times a genuine mobile-to-mobile multi-party video conference, and it was simple enough that my mother was able to set it up by herself.
It’s like stepping back in time through videogaming history. And also sideways, into a parallel universe of knights and dragons.
It’s like Google Maps, but in the style of retro top-down, turn-based RPGs. It’s really quite impressive: it’s presumably being generated at least semi-dynamically (as it covers the whole world). It sometimes makes mistakes with rivers – perhaps where their visibility from the air is low – but it’s nonetheless an interesting feat from a technical perspective.
You just can’t rely on GMail’s “contacts” search any more. Look what it came up with:
With apologies to those of you who won’t “get” this: the person who came up in the search results is a name that is far, far away, in my mind, from the word “virgin”.
In not-completely-unrelated news, I use a program called SwiftKey X on my phone, which uses Markov chains (as I’ve described before) to
intelligently suggest word completion and entire words and phrases based on the language I naturally use. I had the software thoroughly parse my text messages, emails, and even this
blog to help it learn my language patterns. And recently, while writing a text message to my housemate Paul, it suggested
the following sentence as the content of my message:
I am a beautiful person.
I have no idea where it got the idea that that’s something I’m liable to say with any regularity. Except now that it’s appeared on my blog, it will. It’s all gone a little recursive.
The first of the two apps mentioned in this article – “Gmail Notifier” – sounds perfect, but doesn’t seem to exist any
more.
GMail Notifier + Widgets looks like it might do it (it’s designed to show different icons depending on labels). Does anybody have any experience with this?
Or any other suggestions? I’m running CM7.1 on an HTC Sensation, in case it matters.
You may remember that earlier this year I wrote a letter to Google suggesting that they ought
to publicise the number of Samaritans to people searching for suicide-related topics: sort-of like a free Featured Link, but
just advertising the phone number of a support service that, in particular, provides emotional support to those who are having suicidal thoughts.
Well, it seems that now they’ve done it (click on the image below to see a larger version).
I’d like to think that I played a small part in making this happen. Thanks, Google.
I sent a letter to Google today. Click to see it in large-o-vision.
In my letter, I suggest that the search giant should add a feature to their UK search, as they already have on their USA search, that would provide the details of an appropriate emotional support helpline service to people searching for suicide-related topics (such as “how to commit suicide”, etc.). This would cause minimal disruption to users merely interested in the topic, but could offer a critical lifeline to somebody in dire need.