Improving URLs for AMP pages

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Improving URLs for AMP pages (Accelerated Mobile Pages Project)

TL;DR: We are making changes to how AMP works in platforms such as Google Search that will enable linked pages to appear under publishers’ URLs instead of the google.com/amp URL space while maintaining the performance and privacy benefits of AMP Cache serving.

When we first launched AMP in Google Search we made a big trade-off: to achieve the user experience that users were telling us that they wanted, instant loading, we needed to start loading the page before the user clicked. As we detailed in a deep-dive blog post last year, privacy reasons make it basically impossible to load the page from the publisher’s server. Publishers shouldn’t know what people are interested in until they actively go to their pages. Instead, AMP pages are loaded from the Google AMP Cache but with that behavior the URLs changed to include the google.com/amp/ URL prefix.

We are huge fans of meaningful URLs ourselves and recognize that this isn’t ideal. Many of y’all agree. It is certainly the #1 piece of feedback we hear about AMP. We sought to ensure that these URLs show up in as few places as possible. Over time our Google Search native apps on Android and iOS started defaulting to showing the publishers’ URLs and we worked with browser vendors to share the publisher’s URL of an article where possible. We couldn’t, however, fix the state of URLs where it matters most: on the web and the browser URL bar.

Regular readers may recall that I’ve complained about AMP. This latest announcement by the project lead of the AMP team at Google goes some way to solving the worst of the problems with the AMP project, but it still leaves a lot to be desired: for example, while Google still favours AMP pages in search results, they’re building a walled garden and penalising people who don’t choose to be inside it, and it’s a walled garden with fewer features than the rest of the web and a lock-in effect once you’re there. We’ve seen this before with “app culture” and with Facebook, but Google have the power to do a huge amount more damage.

Enabling IPv6 Support in nginx

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This is going to be a really short post, but for someone it could save an hour of their life.

So, you’ve nothing to do and you’ve decided to play around with IPv6, or maybe you happen to be an administrator of a web service that needs to support IPv6 connectivity and you need to make your nginx server work nicely with this protocol.

The first thing you need to do is enable IPv6 in nginx by recompiling it with the --with-ipv6 configure option and reinstalling it. If you use a pre-built package, check whether your nginx already has this option enabled by running nginx -V.
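For reference, here’s a minimal sketch of what the relevant configuration might then look like (the server name and paths are placeholders of mine, not from the original post):

    # Check the build first: `nginx -V` should list --with-ipv6.
    # A server block listening on both IPv4 and IPv6:
    server {
        listen 80;                    # IPv4
        listen [::]:80 ipv6only=on;   # IPv6; ipv6only keeps the two sockets separate
        server_name example.com;
        root /var/www/example;
    }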

The Web began dying in 2014, here’s how

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Before the year 2014, there were many people using Google, Facebook, and Amazon. Today, there are still many people using services from those three tech giants (respectively, GOOG, FB, AMZN). Not much has changed, and quite literally the user interface and features on those sites have remained mostly untouched. However, the underlying dynamics of power on the Web have drastically changed, and those three companies are at the center of a fundamental transformation of the Web.

It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.

Internet activity itself hasn’t slowed down. It maintains a steady growth, both in amount of users and amount of websites…

Fnorders

I’ve not posted much recently: I’ve had a lot of Complicated Life Stuff going on, sorry.

But I did make a thing: fnorders.com. You’re welcome.

Fnord.

Sub-Pixel Problems in CSS

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Something that jumped out at me recently was a rendering dilemma that browsers have to encounter, and gracefully handle, on a day-by-day basis with little to no standardization.

Take the following page for example. You have 4 floated divs, each with a width of 25%, contained within a parent div of width 50px. Here’s the question: How wide are each of the divs?

The problem lies in the fact that each div should be, approximately, 12.5px wide and since technology isn’t at a level where we can start rendering at the sub-pixel level we tend to have to round off the number. The problem then becomes: Which way do you round the number? Up, down, or a mixture of the two? I think the results will surprise you, as they did me…
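If you want to see what your own browser does with that half pixel, here’s a minimal sketch of my own (not from the article) that you can paste into a browser console; it builds the example and logs both the rounded and the fractional widths:

    // Build the example: a 50px-wide parent containing four 25%-wide floats.
    const parent = document.createElement('div');
    parent.style.width = '50px';
    for (let i = 0; i < 4; i++) {
      const child = document.createElement('div');
      child.style.cssText = 'float: left; width: 25%; height: 10px;';
      parent.appendChild(child);
    }
    document.body.appendChild(parent);

    // 25% of 50px is 12.5px. offsetWidth is rounded to a whole pixel;
    // getBoundingClientRect() reports the fractional value the layout
    // engine actually used, so you can see which way it rounded.
    for (const child of parent.children) {
      console.log(child.offsetWidth, child.getBoundingClientRect().width);
    }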

AMP Is Poisonous

If you’re a web developer and you haven’t come across the Google AMP project yet… then what stone have you been living under? But just in case you have been living under such a stone – or you’re not a web developer – I’ll fill you in. If you believe Google’s elevator pitch, AMP is “…an open-source initiative aiming to make the web better for all… consistently fast, beautiful and high-performing across devices and distribution platforms.”

I believe that AMP is fucking poisonous and that the people who’ve come out against it by saying it’s “controversial” so far don’t go remotely far enough. Let me tell you about why.

AMP logo in handcuffs

When you configure your website for AMP – like the BBC, The Guardian, Reddit, and Medium already have – you deliver copies of your pages written using AMP HTML and AMP JS rather than the HTML and Javascript that you normally would. This provides a subset of the functionality you’re used to, but it’s quite a rich subset and gives you a lot of power with minimal effort, whether you’re trying to make carousels, video players, social sharing features, or whatever. Then, when your site is found via Google Search on a mobile device, instead of delivering the user to your AMP HTML page or its regular-HTML alternative… Google delivers your site for you as an ultra-fast precached copy via their own network. So far, a mixed bag, right? Wrong.

What’s poisonous about Google AMP

Ignoring the facts that you can get locked-in if you try it once, it makes the fake news problem worse than ever, and it breaks the core concepts of a linkable web, the thing that worries me the most is that AMP represents the most-subtle threat to Net Neutrality I’ve ever seen… and it’s from an organisation that is nominally in favour of a free and open Internet but that stands to benefit from a more-closed Internet so long as it’s one that they control.

Google’s stated plan to favour pages that use AMP creates a publishers’ arms race in which content creators are incentivised to produce content in the (open-source but) Google-controlled AMP format to rank higher in the search results, or at least regain parity, versus their competitors. Ultimately, if everybody supported AMP then – ignoring the speed benefits for mobile users (more on that in a moment) – the only winner is Google. Google, who would then have a walled garden of Facebook-beating proportions around the web. Once Google delivers all of your content, there’s no such thing as a free and open Internet any more.

So what about those speed increases? Yes, the mobile web is slower than we’d like and AMP improves that. But with the exception of the precaching – which is something that could be achieved by other means – everything that AMP provides can be done using existing technologies. AMP makes it easy for lazy developers to make their pages faster, quickly, but if speed on mobile devices is the metric for your success: let’s just start making more mobile-friendly pages! We can make the mobile web better and still let it be our Web: we don’t need to give control of it to Google in order to shave a few milliseconds off the load time.

We need to reject AMP, and we need to reject it hard. Right now, it might be sufficient to stand up to your boss and say “no, implementing AMP on our sites is a bad idea.” But one day, it might mean avoiding the use of AMP entirely (there’ll be browser plugins to help you, don’t worry). And if it means putting up with a slightly-slower mobile web while web developers remain lazy, so be it: that’s a sacrifice I’m willing to make to help keep our web free and open. And I hope you will be, too.

Like others, I’m just hoping that Sir Tim will feel the urge to say something about this development soon.

Let them paste passwords

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Anti-copy/paste Javascript code, on a wall.

One of the things people often tweet to us @ncsc is examples of websites which prevent you pasting in a password. Why do websites do this? The debate has raged – with most commentators raging about how annoying it is.

So why do organisations do this? Often no reason is given, but when one is, that reason is ‘security’. The NCSC don’t think the reasons add up. We think that stopping password pasting (or SPP) is a bad thing that reduces security. We think customers should be allowed to paste their passwords into forms, and that it improves security…

AMP: breaking news

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Google has made much of their Accelerated Mobile Pages project as a solution to bloated websites and frustrated users. But could AMP actually be bad news for the web, bad news for news, and part of a trend of news distribution that is bad for society in general?

I didn’t start out as strongly anti-AMP. Providing tools for making websites faster is always great, as is supporting users in developing countries with lighter-weight pages that don’t cost them a month’s wages. It’s totally true that today webpages are in a pretty sorry state…

Rails is f*cking boring! I love it.

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Together with a friend I recently built Dropshare Cloud. We offer online storage for the file and screenshot sharing app Dropshare for macOS/iOS. After trying out Django for getting started (we both had some experience using Django) I decided to rewrite the codebase in Rails. My past experience developing in Rails made the process quick — and boring…

TLS 1.3 FTW

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In common slang, FTW is an acronym for “for the win”, and while that’s appropriate here, I think a better expansion is “for the world.”

We’re pleased to announce that we have sponsored the development of TLS 1.3 in OpenSSL. As it is one of the most widely-used TLS libraries, it is a good investment for the overall health and security of the Internet, so that everyone is able to deploy TLS 1.3 as soon as possible…

Against DNSSEC

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

All secure crypto on the Internet assumes that the DNS lookup from names to IP addresses are insecure. Securing those DNS lookups therefore enables no meaningful security. DNSSEC does make some attacks against insecure sites harder. But it doesn’t make those attacks infeasible, so sites still need to adopt secure transports like TLS. With TLS properly configured, DNSSEC adds nothing…

Troy Hunt: HTTPS adoption has reached the tipping point

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

That’s it – I’m calling it – HTTPS adoption has now reached the moment of critical mass where it’s gathering enough momentum that it will very shortly become “the norm” rather than the exception it so frequently was in the past. In just the last few months, there’s been some really significant things happen that have caused me to make this call, here’s why I think we’re now at that tipping point…

Steer! An Experimental Canvas/Websocket Game

As you may know, I’ve lately found an excuse to play with some new web technologies, and I’ve also taken the opportunity to try to gain a deeper understanding of some less bleeding-edge technologies that I think have some interesting potential. And so it was that, while I was staffing the Three Rings stall at last week’s NCVO conference, I made use of the time that the conference delegates were all off listening to a presentation to throw together a tech demo I call Steer!

Animated GIF from a video, showing a player using their mobile phone to steer a car on a desktop computer screen, all using the web browsers on both devices.
A player uses their mobile phone to steer a car on a desktop computer, using nothing more than a web browser.

As you can see from the GIF above, Steer! is a driving game. The track and your car are displayed in a web browser on a large screen, for example a desktop or laptop computer, television, or tablet, and your mobile phone is used to steer the car by tilting it to swerve around a gradually-narrowing weaving road. It’s pretty fun, but what really makes it interesting to me is the combination of moderately-new technologies I’ve woven together to make it possible, specifically:

  • The Device Orientation API, which enables a web application to detect the angle at which you’re holding your mobile phone
  • Websockets as a mechanism to send that data in near-real-time from the phone to the browser, via a web server: for the fastest, laziest possible development, I used Firebase for this, but I’m aware that I could probably get better performance by running a local server on the LAN shared by both devices
  • The Canvas API to draw the output to the screen

Infographic showing how Steer! works. Phone accelerometer determines orientation, pushes to Firebase (up to 60 times/sec), which pushes to browser (via Websocket), which updates screen.
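The code below is my rough reconstruction of that flow rather than Steer!’s actual source: it assumes the Firebase Realtime Database JavaScript SDK is already loaded and initialised on both devices, and the ‘steering’ path is an invented name for illustration.

    // Phone side: read the left/right tilt and push it to Firebase.
    const db = firebase.database();   // assumes firebase.initializeApp(...) already ran
    window.addEventListener('deviceorientation', (event) => {
      // event.gamma is the left/right tilt in degrees (roughly -90 to 90)
      db.ref('steering').set(event.gamma);
    });

    // Desktop side: subscribe to the same path and keep the most recent
    // value around for the render loop to read.
    let steeringAngle = 0;
    db.ref('steering').on('value', (snapshot) => {
      steeringAngle = snapshot.val() || 0;
    });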

The desktop browser does all of the real work: it takes the orientation of the device and uses that, and the car’s current speed, to determine how its position changes over the time that’s elapsed since the screen was last refreshed: we’re aiming for 60 frames a second, of course, but we don’t want the car to travel slower when the game is played on a slower computer, so we use requestAnimationFrame to get the fastest rate possible and calculate the time between renderings to work out how much of a change has occurred this ‘tick’. We leave the car’s sprite close to the bottom of the screen at all times but change how much it rotates from side to side, and we use its rotation to decide how much of its motion is lateral versus the amount that’s “along the track”. The latter value determines how much track we move down the screen “behind” it.
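Here’s a sketch of that frame-rate-independent update (my reconstruction, not the game’s actual code): speed, car, trackOffset and render() are stand-in names, and steeringAngle is the value kept up to date by the Firebase subscription sketched above.

    const speed = 300;           // forward speed in pixels per second (made-up value)
    const car = { x: 0 };        // lateral position of the car's sprite
    let trackOffset = 0;         // how far down the track we've travelled
    let lastTime = null;

    function render(car, trackOffset) {
      // draw the road and the car's sprite to the canvas (omitted here)
    }

    function tick(now) {
      // Work out how long this 'tick' lasted, so movement is the same
      // regardless of how often requestAnimationFrame fires.
      const delta = lastTime === null ? 0 : (now - lastTime) / 1000;
      lastTime = now;

      // steeringAngle (degrees, from the phone) splits the car's motion
      // into a lateral part and an "along the track" part.
      const rotation = steeringAngle * Math.PI / 180;
      car.x += Math.sin(rotation) * speed * delta;        // sideways drift
      trackOffset += Math.cos(rotation) * speed * delta;  // road scrolled past

      render(car, trackOffset);
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);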

The track is generated very simply by the addition of three sine waves of different offset and frequency – a form of very basic procedural generation. Despite the predictability of mathematical curves, this results in a moderately organic-feeling road because the player only sees a fraction of the resulting curve at any given time: the illustration below shows how these three curves combine to make the resulting road. The difficulty is ramped up the further the player has travelled by increasing the amplitude of the resulting wave (i.e. making the curves gradually more-aggressive) and by making the road itself gradually narrower. The same mathematics are used to determine whether the car is mostly on the tarmac or mostly on the grass and adjust its maximum speed accordingly.

Sum of sine waves as used to generate the track for Steer!
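In code, that might look something like the following sketch (the frequencies, offsets and difficulty ramp here are invented numbers for illustration, not the values Steer! actually uses):

    // The road's centre at a given distance down the track is the sum of
    // three sine waves of different frequency and offset; the amplitude
    // grows with distance travelled to ramp up the difficulty.
    function roadCentreX(distance, canvasWidth) {
      const difficulty = Math.min(1, distance / 10000);      // ramps from 0 to 1
      const amplitude = (canvasWidth / 4) * (0.5 + difficulty / 2);
      return canvasWidth / 2 + (amplitude / 3) * (
        Math.sin(distance / 300) +
        Math.sin(distance / 170 + 1.5) +
        Math.sin(distance / 90 + 4.2)
      );
    }

    // The road also narrows the further the player has travelled...
    function roadWidth(distance, canvasWidth) {
      const difficulty = Math.min(1, distance / 10000);
      return canvasWidth * (0.6 - 0.3 * difficulty);
    }

    // ...and the same maths decides whether the car is on tarmac or grass.
    function onTarmac(carX, distance, canvasWidth) {
      return Math.abs(carX - roadCentreX(distance, canvasWidth)) <= roadWidth(distance, canvasWidth) / 2;
    }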

In order to help provide a visual sense of the player’s speed, I added dashed lines down the road (dividing it into three lanes to begin with and two later on) which zip past the car and provide a sense of acceleration, deceleration, overall speed, and the impact of turning ‘sideways’ (which of course reduces the forward momentum to nothing).

This isn’t meant to be a finished game: it’s an experimental prototype to help explore some technologies that I’d not had time to look seriously at before now. However, you’re welcome to take a copy – it’s all open source – and adapt or expand it. Particular ways in which it’d be fun to improve it might include:

  • Allowing the player more control, e.g. over their accelerator and brakes
  • Adding hazards (trees, lamp posts, and other cars) which must be avoided
  • Adding bonuses like speed boosts
  • Making it challenging, e.g. giving time limits to get through checkpoints
  • Day and night cycles (with headlights!)
  • Multiplayer capability, like a real race?
  • Smarter handling of multiple simultaneous users: right now they’d share control of the car (which is the major reason I haven’t given you a live online version to play with and you have to download it yourself!), but it’d be better if they could “queue” until it was their turn, or else each play in their own split-screen view or something
  • Improving the graphics with textures
  • Increasing the entropy of the curves used to generate the road, and perhaps adding pre-scripted scenery or points of interest on a mathematically-different procedural generation algorithm
  • Switching to a local LAN websocket server, allowing better performance than the dog-leg via Firebase
  • Greater compatibility: I haven’t tried it on an iPhone, but I gather that iOS devices report their orientation differently from Android ones… and I’ve done nothing to try to make Steer! handle more-unusual screen sizes and shapes
  • Anything else? (Don’t expect me to have time to enhance it, though: but if you do so, I’d love to hear about it!)