DNDle (Wordle, but with D&D monster stats)

Don’t have time to read? Just start playing:

Play DNDle

There’s a Wordle clone for everybody

Am I too late to get onto the “making Wordle clones” bandwagon? Probably: there are already quite a few of them out there.

Screenshot showing a WhatsApp conversation. Somebody shares a Wordle-like "solution" board but it's got six columns, not five. A second person comments "Hang on a minute... that's not Wordle!"
I’m sure that by now all your social feeds are full of people playing Wordle. But the cool nerds are playing something new…

Now, a Wordle clone for D&D players!

But you know what hasn’t been seen before today? A Wordle clone where you have to guess a creature from the Dungeons & Dragons (5e) Monster Manual by putting numeric values into a character sheet (STR, DEX, CON, INT, WIS, CHA):

Screenshot of DNDle, showing two guesses made already.
Just because nobody’s asking for a game doesn’t mean you shouldn’t make it anyway.

What are you waiting for? Go give DNDle a try (I pronounce it “dindle”, but you can pronounce it however you like). A new monster appears at 10:00 UTC each day.

And because it’s me, of course it’s open source and works offline.

The boring techy bit

  • Like Wordle, everything happens in your browser: this is a “backendless” web application.
  • I’ve used ReefJS for state management, because I wanted something I could throw together quickly but I didn’t want to drown myself (or my players) in a heavyweight monster library. If you’ve not used Reef before, you should give it a go: it’s basically like React but a tenth of the footprint.
  • A cache-first/background-updating service worker means that it can run completely offline: you can install it to your homescreen in the same way as Wordle, but once you’ve visited it once it can work indefinitely even if you never go online again (see the sketch after this list).
  • I don’t like to use a buildchain that’s any more-complicated than is absolutely necessary, so the only development dependency is rollup. It resolves my import statements and bundles a single JS file for the browser.
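
If you’ve not built a cache-first service worker before, the pattern looks roughly like this. This is a minimal sketch, not DNDle’s actual code: the cache name and precache list are illustrative.

// sw.js – cache-first with background refresh (illustrative sketch)
const CACHE = 'dndle-v1';
const PRECACHE = ['/', '/index.html', '/app.js', '/style.css'];

// On install, precache the application shell.
self.addEventListener('install', event => {
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(PRECACHE)));
});

// On fetch, answer from the cache immediately, then refresh the cached copy
// from the network in the background ready for next time.
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.open(CACHE).then(async cache => {
      const cached = await cache.match(event.request);
      const network = fetch(event.request)
        .then(response => {
          if (response.ok) cache.put(event.request, response.clone());
          return response;
        })
        .catch(() => cached); // offline: fall back to whatever we already have
      return cached || network;
    })
  );
});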

Can I use HTTP Basic Auth in URLs?

Web standards sometimes disappear

Sometimes a web standard disappears quickly at the whim of some company, perhaps to a great deal of complaint (and at least one joke).

But sometimes, they disappear slowly, like this kind of web address:

http://username:password@example.com/somewhere

If you’ve not seen a URL like that before, that’s fine, because the answer to the question “Can I still use HTTP Basic Auth in URLs?” is, I’m afraid: no, you probably can’t.

But by way of a history lesson, let’s go back and look at what these URLs were, why they died out, and how web browsers handle them today. Thanks to Ruth who asked the original question that inspired this post.

Basic authentication

The early Web wasn’t built for authentication. A resource on the Web was theoretically accessible to all of humankind: if you didn’t want it in the public eye, you didn’t put it on the Web! A reliable method wouldn’t become available until the concept of state was provided by Netscape’s invention of HTTP cookies in 1994, and even that wouldn’t see widespread use for several years, not least because implementing a CGI (or similar) program to perform authentication was a complex and computationally-expensive option for all but the biggest websites.

Comic showing a conversation between a web browser and server. Browser: "Show me that page. (GET /)" Server: "No, take a ticket and fill this form. (Redirect, Set-Cookie)" Browser: "I've filled your form and here's your ticket (POST request with Cookie set)" Server: "Okay, Keep hold of your ticket. (Redirect, Set-Cookie)" Browser: "Show me that page, here's my ticket. (GET /, Cookie:)"
A simplified view of the form-and-cookie based authentication system used by virtually every website today, but which was too computationally-expensive for many sites in the 1990s.

1996’s HTTP/1.0 specification tried to simplify things, though, with the introduction of the WWW-Authenticate header. The idea was that when a browser tried to access something that required authentication, the server would send a 401 Unauthorized response along with a WWW-Authenticate header explaining how the browser could authenticate itself. Then, the browser would send a fresh request, this time with an Authorization: header attached providing the required credentials. Initially, only “basic authentication” was available, which basically involved sending a username and password in-the-clear unless SSL (HTTPS) was in use, but later, digest authentication and a host of others would appear.

Comic showing conversation between web browser and server. Browser: "Show me that page (GET /)" Server: "No. Send me credentials. (HTTP 401, WWW-Authenticate)" Browser: "Show me that page. I enclose credentials (Authorization)" Server: "Okay (HTTP 200)"
For all its faults, HTTP Basic Authentication (and its near cousins) are certainly elegant.
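
The Authorization: header’s value for Basic authentication is simply the word “Basic” followed by a base64 encoding of “username:password”. By way of illustration (this isn’t any real site’s code), a client could construct one like this:

// Construct and send an HTTP Basic Authentication header by hand (illustrative only).
const username = 'alpha';
const password = 'beta';
const credentials = btoa(`${username}:${password}`); // "YWxwaGE6YmV0YQ=="

fetch('https://example.com/protected/', {
  headers: { 'Authorization': `Basic ${credentials}` }
}).then(response => console.log(response.status));

Note that base64 is an encoding, not encryption: anybody who can see the header can recover the password, which is why Basic authentication over anything other than HTTPS is a bad idea.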

Webserver software quickly added support for this new feature and as a result web authors who lacked the technical know-how (or permission from the server administrator) to implement more-sophisticated authentication systems could quickly implement HTTP Basic Authentication, often simply by adding a .htaccess file to the relevant directory. .htaccess files would later go on to serve many other purposes, but their original and perhaps best-known purpose – and the one that gives them their name – was access control.

Credentials in the URL

A separate specification, not specific to the Web (but one of Tim Berners-Lee’s most important contributions to it), described the general structure of URLs as follows:

<scheme>://<username>:<password>@<host>:<port>/<url-path>#<fragment>

At the time that specification was written, the Web didn’t have a mechanism for passing usernames and passwords: this general case was intended only to apply to protocols that did have these credentials. An example is given in the specification, and clarified with “An optional user name. Some schemes (e.g., ftp) allow the specification of a user name.”

But once web browsers had WWW-Authenticate, virtually all of them added support for including the username and password in the web address too. This allowed for e.g. hyperlinks with credentials embedded in them, which made for very convenient bookmarks, or partial credentials (e.g. just the username) to be included in a link, with the user being prompted for the password on arrival at the destination. So far, so good.

Comic showing conversation between web browser and server. Browser asks for a page, providing an Authorization: header outright; server responds with the page immediately.
Encoding authentication into the URL provided an incredible shortcut at a time when Web round-trip times were much longer owing to higher latencies and no keep-alives.

This is why we can’t have nice things

The technique fell out of favour as soon as it started being used for nefarious purposes. It didn’t take long for scammers to realise that they could create links like this:

https://YourBank.com@HackersSite.com/

Everything we were teaching users about checking for “https://” followed by the domain name of their bank… was undermined by this user interface choice. The poor victim would actually be connecting to e.g. HackersSite.com, but a quick glance at their address bar would leave them convinced that they were talking to YourBank.com!

Theoretically: widespread adoption of EV certificates coupled with sensible user interface choices (that were never made) could have solved this problem, but a far simpler solution was just to not show usernames in the address bar. Web developers were by now far more excited about forms and cookies for authentication anyway, so browsers started curtailing the “credentials in addresses” feature.

Internet Explorer window showing https://YourBank.com@786590867/ in the address bar.
Users trained to look for “https://” followed by the site they wanted would often fall for scams like this one: the real domain name is after the @-sign. (This attacker is also using dword notation to obfuscate their IP address; this dated technique wasn’t often employed alongside this kind of scam, but it’s another historical oddity I enjoy so I’m shoehorning it in.)

(There are other reasons this particular implementation of HTTP Basic Authentication was less-than-ideal, but this reason is the big one that explains why things had to change.)

One by one, browsers made the change. But here’s the interesting bit: the browsers didn’t always make the change in the same way.

How different browsers handle basic authentication in URLs

Let’s examine some popular browsers. To run these tests I threw together a tiny web application that outputs the Authorization: header passed to it, if present, and can optionally send a 401 Unauthorized response along with a WWW-Authenticate: Basic realm="Test Site" header in order to trigger basic authentication. Why both? So that I can test not only how browsers handle URLs containing credentials when an authentication request is received, but how they handle them when one is not. This is relevant because some addresses – often API endpoints – have optional HTTP authentication, and it’s sometimes important for a user agent (albeit typically a library or command-line one) to pass credentials without first being prompted.
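
My actual test application isn’t shown here, but something roughly equivalent can be thrown together in a few lines of Node.js (a sketch, not the code I used; the paths and realm name are assumptions):

// A tiny test server in the spirit of the one described above (not the original).
// GET /optional  – reports any Authorization: header, but never demands one.
// GET /mandatory – demands Basic authentication via a 401 + WWW-Authenticate:.
const http = require('http');

http.createServer((req, res) => {
  const auth = req.headers['authorization'];

  if (req.url.startsWith('/mandatory') && !auth) {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="Test Site"' });
    return res.end('Authentication required.');
  }

  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(auth ? 'Header: ' + auth + '\n' : 'Header:\n');
}).listen(8080);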

In each case, I tried each of the following tests in a fresh browser instance:

  1. Go to http://<username>:<password>@<domain>/optional (authentication is optional).
  2. Go to http://<username>:<password>@<domain>/mandatory (authentication is mandatory).
  3. Experiment 1, then follow relative hyperlinks (which should correctly retain the credentials) to /mandatory.
  4. Experiment 2, then follow relative hyperlinks to /optional.

I’m only testing over the http scheme, because I’ve no reason to believe that any of the browsers under test treat the https scheme differently.

Chromium desktop family

Chrome at an "Auth Optional" page, showing no header sent.
Chrome 93 and Edge 93 both immediately suppressed the username and password from the address bar, along with the “http://”, as we’ve come to expect of them. Like the “http://”, though, the plaintext username and password are still there. You can retrieve them by copy-pasting the entire address.

Opera 78 similarly suppressed the username, password, and scheme, but didn’t retain the username and password in a way that could be copy-pasted out.

Authentication was passed only when landing on a “mandatory” page; never when landing on an “optional” page. Refreshing the page or re-entering the address with its credentials did not change this.

Navigating from the “optional” page to the “mandatory” page using only relative links retained the username and password and submitted it to the server when it became mandatory, even Opera which didn’t initially appear to retain the credentials at all.

Navigating from the “mandatory” to the “optional” page using only relative links, or even entering the “optional” page address with credentials after visiting the “mandatory” page, does not result in authentication being passed to the “optional” page. However, it’s interesting to note that once authentication has occurred on a mandatory page, pressing enter at the end of the address bar on the optional page, with credentials in the address bar (whether visible or hidden from the user) does result in the credentials being passed to the optional page! They continue to be passed on each subsequent load of the “optional” page until the browsing session is ended.

Firefox desktop

Firefox window with popup reading "You are about to log in to the site 192.168.0.11 with the username alpha, but the web site does not require authentication. This may be an attempt to trick you."
Firefox 91 does a clever thing, very much in line with its image as a browser that puts decision-making authority into the hands of its user. When going to the “optional” page first, it presents a dialog warning the user that they’re going to a site that does not specifically request a username, but that they’re providing one anyway. If the user says no, navigation ceases (the GET request for the page takes place either way; this happens before the dialog appears). Strangely, regardless of whether the user selects yes or no, the credentials are not passed to the “optional” page. The credentials (although not the “http://”) appear in the address bar while the user makes their decision.

Similar to Opera, the credentials do not appear in the address bar thereafter, but they’re clearly still being stored: if the refresh button is pressed the dialog appears again. It does not appear if the user selects the address bar and presses enter.

Firefox dialog reading "You are about to log in to the site 192.168.0.11 with the username alpha".
Similarly, going to the “mandatory” page in Firefox results in an informative dialog warning the user that credentials are being passed. I like this approach: not only does it help protect the user from the use of authentication as a tracking technique (an old technique that I’ve not seen used in well over a decade, mind), it also helps the user be sure that they’re logging in using the account they mean to when following a link for that purpose. Again, clicking cancel stops navigation, although the initial request (with no credentials) and the 401 response have already occurred.

Visiting any page within the scope of the realm of the authentication after visiting the “mandatory” page results in credentials being sent, whether or not they’re included in the address. This is probably the implementation most true to the expectations of the standard that I’ve found in a modern graphical browser.

Safari desktop

Safari showing a dialog "Log in" / "Your password will be sent unencrypted."
Safari 14 never displays or uses credentials provided via the web address, whether or not authentication is mandatory. Mandatory authentication is always met by a pop-up dialog, even if credentials were provided in the address bar. Boo!

Once passed, credentials are later provided automatically to other addresses within the same realm (i.e. optional pages).

Older browsers

Let’s try some older browsers.

Internet Explorer 8 showing the error message "Windows cannot find http://alpha:beta@10.0.2.2/optional. Check the spelling and try again."
From version 7 onwards – right up to the final version 11 – Internet Explorer fails to even recognise addresses with authentication credentials in them as legitimate web addresses, regardless of whether or not authentication is requested by the server. It’s easy to assume that this is yet another missing feature in the browser we all love to hate, but it’s interesting to note that credentials-in-addresses is permitted for ftp:// URLs…

Internet Explorer 5 showing credentials in the address bar being passed to the server.
…and if you go back a little way, Internet Explorer 6 and below supported credentials in the address bar pretty much as you’d expect based on the standard. The error message seen in IE7 and above is a deliberate design decision, albeit a somewhat knee-jerk reaction to the security issues posed by the feature (compare to the more-careful approach of other browsers).

These older versions of IE even (correctly) retain the credentials through relative hyperlinks, allowing them to be passed when they become mandatory. They’re not passed on optional pages unless a mandatory page within the same realm has already been encountered.

Netscape Communicator 4.7 showing credentials in a URL, passed to a server.
Pre-Mozilla Netscape behaved the same way. Truly this was the de facto standard for a long period on the Web, and the varied approaches we see today are the anomaly. That’s a strange observation to make, considering how much the Web of the 1990s was dominated by incompatible implementations of different Web features (I’ve written about the <blink> and <marquee> tags before, which was perhaps the most-visible division between the Microsoft and Netscape camps, but there were many, many more).

Screenshot showing Netscape 7.2, with a popup saying "You are about to log in to site 192.168.0.11 with the username alpha, but the website does not require authentication. This may be an attempt to trick you." The username and password are visible in the address bar.
Interestingly, by Netscape 7.2 the browser’s behaviour had evolved to be the same as modern Firefox’s, except that it still displayed the credentials in the address bar for all to see.

Screenshot of Opera 5 showing credentials in a web address with the password masked, being passed to the server on an optional page.
Now here’s a real gem: pre-Chromium Opera. It would send credentials to “mandatory” pages and remember them for the duration of the browsing session, which is great. But it would also send credentials when passed in a web address to “optional” pages. However, it wouldn’t remember them on optional pages unless they remained in the address bar: this feels to me like an optimum balance of features for power users. Plus, it’s one of very few browsers that permitted you to change credentials mid-session: just by changing them in the address bar! Most other browsers, even to this day, ignore changes to HTTP Authentication credentials, which was sometimes a source of frustration back in the day.

Finally, classic Opera was the only browser I’ve seen to mask the password in the address bar, turning it into a series of asterisks. This ensures the user knows that a password was used, but doesn’t leak any sensitive information to shoulder-surfers (the masked password was always the same length, too, so it didn’t even reveal the length of the real password). Altogether a spectacular design and a great example of why classic Opera was way ahead of its time.

The Command-Line

Most people using web addresses with credentials embedded within them nowadays are probably working with code, APIs, or the command line, so it’s unsurprising to see that this is where the most “traditional” standards-compliance is found.

I was unsurprised to discover that giving curl a username and password in the URL meant that the username and password were sent to the server (using Basic authentication, of course, if no authentication was requested):

$ curl http://alpha:beta@localhost/optional
Header: Basic YWxwaGE6YmV0YQ==
$ curl http://alpha:beta@localhost/mandatory
Header: Basic YWxwaGE6YmV0YQ==

However, wget did catch me out. Hitting the same addresses with wget didn’t result in the credentials being sent except where it was mandatory (i.e. where an HTTP 401 response and a WWW-Authenticate: header were received on the initial attempt). Forcing wget to send credentials when they haven’t been asked for requires the use of the --http-user and --http-password switches:

$ wget http://alpha:beta@localhost/optional -qO-
Header:
$ wget http://alpha:beta@localhost/mandatory -qO-
Header: Basic YWxwaGE6YmV0YQ==

lynx does a cute and clever thing. Like most modern browsers, it does not submit credentials unless specifically requested, but if they’re in the address bar when they become mandatory (e.g. because of following relative hyperlinks or hyperlinks containing credentials) it prompts for the username and password, but pre-fills the form with the details from the URL. Nice.

Lynx browser following a link from an optional-authentication to a mandatory-authentication page. The browser prompts for a username but it's pre-filled with the one provided by the URL.

What’s the status of HTTP (Basic) Authentication?

HTTP Basic Authentication and its close cousin Digest Authentication (which overcomes some of the security limitations of running Basic Authentication over an unencrypted connection) are very much alive, but their use in hyperlinks can’t be relied upon: some browsers (e.g. IE, Safari) completely munge such links while others don’t behave as you might expect. Other mechanisms like Bearer see widespread use in APIs, but nowhere else.
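
For comparison, a typical API call using Bearer authentication looks something like this (the endpoint and token here are made up):

// Bearer authentication, as commonly used by APIs (endpoint and token are made up).
fetch('https://api.example.com/v1/monsters', {
  headers: { 'Authorization': 'Bearer 0123456789abcdef' }
})
  .then(response => response.json())
  .then(data => console.log(data));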

The WWW-Authenticate: and Authorization: headers are, in some ways, an example of the best possible way to implement authentication on the Web: as an underlying standard independent of support for forms (and, increasingly, Javascript), cookies, and complex multi-part conversations. It’s easy to imagine an alternative timeline where these standards continued to be collaboratively developed and maintained and their shortfalls – e.g. not being able to easily log out when using most graphical browsers! – were overcome. A timeline in which one might write a login form like this, knowing that a hypothetical “authenticate” attribute would instruct the browser to send credentials using an Authorization: header:

<form method="get" action="/" authenticate="Basic">
<label for="username">Username:</label> <input type="text" id="username" authenticate="username">
<label for="password">Password:</label> <input type="text" id="password" authenticate="password">
<input type="submit" value="Log In">
</form>

In such a world, more-complex authentication strategies (e.g. multi-factor authentication) could involve encoding forms as JSON. And single-sign-on systems would simply involve the browser collecting a token from the authentication provider and passing it on to the third-party service, directly through browser headers, with no need for backwards-and-forwards redirects with stacks of information in GET parameters as is the case today. Client-side certificates – long a powerful but neglected authentication mechanism in their own right – could act as first class citizens directly alongside such a system, providing transparent second-factor authentication wherever it was required. You wouldn’t have to accept a tracking cookie from a site in order to log in (or stay logged in), and if your browser-integrated password safe supported it you could log on and off from any site simply by toggling that account’s “switch”, without even visiting the site: all you’d be changing is whether or not your credentials would be sent when the time came.

The Web has long been on a constant push for the next new shiny thing, and that’s sometimes meant that established standards have been neglected prematurely or have failed to evolve for longer than we’d have liked. Consider how long it took us to get the <video> and <audio> elements because the “new shiny” Flash came to dominate, how the Web Payments API is only just beginning to mature despite over 25 years of ecommerce on the Web, or how we still can’t use Link: headers for all the things we can use <link> elements for despite them being semantically-equivalent!

The new model for Web features seems to be that new features first come from a popular JavaScript implementation, and then eventually evolve into a native browser feature: for example HTML form validations, which for the longest time could only be done client-side using scripting languages. I’d love to see somebody re-think HTTP Authentication in this way, but sadly we’ll never get a 100% solution in JavaScript alone (distributed SSO is almost certainly off the table, for example, owing to cross-domain limitations).

Or maybe it’s just a problem that’s waiting for somebody cleverer than I to come and solve it. Want to give it a go?


Higher/Lower Datepicker

I’ve written before about the trend in web development to take what the web gives you for free, throw it away, and then rebuild it in Javascript. The rebuilt version is invariably worse in many ways – less-accessible, higher-bandwidth, reduced features, more fragile, etc. – but it’s more convenient for developers. Personally, I try not to value developer convenience at the expense of user experience, but that’s an unpopular opinion lately.

Screenshot showing a hovered hyperlink to "Digital Forest" on a list of green hosting providers in France.
Here’s a perfect example I bumped into earlier this week, courtesy of The Green Web Foundation. This looks like a hyperlink… but if you open it in a new tab/window, you see a page (not even a 404 page!) with the text “It looks like nothing was found at this location.”

In the site shown in the screenshot above, the developer took something the web gave them for free (a hyperlink), threw it away (by making it a link-to-nowhere), and rebuilt its functionality with Javascript (without thinking about the fact that you can do more with hyperlinks than click them: you can click-and-drag them, you can bookmark them, you can share them, you can open them in new tabs etc.). Ugh.

Date pickers

Particularly egregious are the date pickers. Entering your date of birth on a web form ought to be pretty simple: gov.uk pretty much solved it based on user testing they did in 2013.

Here’s the short of it:

  • Something you can clearly type a numeric day, month and year into is best.
  • Three dropdowns are slightly worse, but at least if you use native HTML <select> elements keyboard users can still “type” to filter.
  • Everything else – including things that look like <select>s but are really funky React <div>s – is pretty terrible.
Calendar datepicker with slider-based timepicker and no text-based fallback.
Calendars can be great for choosing your holiday date range. But pressing “Prev” ~480 times to get to my month of birth isn’t good. Also: what’s with the time “sliders”? (Yes, I know I’ve implemented these myself, in the past, and I’m sorry.)

My fellow Automattician Enfys recently tweeted:

People designing webforms that require me to enter my birthdate:

I am begging you: just let me type it in.

Typing it in is 6-8 quick keystrokes. Trying to navigate a little calendar or spinny wheels back to the 1970s is time-consuming, frustrating and unnecessary.

They’re right. Those little spinny wheels are a pain in the arse if you’ve got to use one to go back 40+ years.

Date "spinner" currently showing 20 December 2012.
These things are okay (I guess) on mobile/touchscreen devices, though I’d still prefer the option to type in my date of birth. But send one to my desktop and I will curse your name.

Can we do worse?

If there’s one thing we learned from making the worst volume control in the world, the other year, it’s that you can always find a worse UI metaphor. So here’s my attempt at making a date of birth field that’s somehow even worse than “date spinners”:

My datepicker implements a game of “higher/lower”. Starting from bounds specified in the HTML code and a random guess, it narrows-down its guess as to what your date of birth is as you click the up or down buttons. If you make a mistake you can start over with the restart button.
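
Under the hood it’s just a binary search over a range of dates. A stripped-down sketch of the idea (not my actual code; the bounds and element IDs are illustrative):

// Higher/lower date-of-birth guessing as a binary search (illustrative sketch).
const DAY = 24 * 60 * 60 * 1000;
let low  = new Date('1900-01-01').getTime();
let high = new Date('2022-01-01').getTime();
let current;

function guess() {
  // Midpoint of the remaining range, rounded to a whole day.
  current = low + Math.floor((high - low) / 2 / DAY) * DAY;
  document.getElementById('dob').value = new Date(current).toISOString().slice(0, 10);
}

guess(); // initial guess

document.getElementById('higher').addEventListener('click', () => {
  low = current + DAY;  // the real date is later than the current guess
  guess();
});

document.getElementById('lower').addEventListener('click', () => {
  high = current - DAY; // the real date is earlier than the current guess
  guess();
});

Each click halves the remaining range, so even a 120-year window narrows to a single day in around sixteen presses.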

Amazingly, this isn’t actually the worst datepicker into which I’ve entered my date of birth! It’s cognitively challenging compared to most, but it’s relatively fast at narrowing down the options from any starting point. Plus, I accidentally implemented some good features that make it better than plenty of the datepickers out there:

  • It’s progressively enhanced – if the Javascript doesn’t load, you can still enter your date of birth in a sensible way.
  • Because it leans on an <input type="date"> control, your browser takes responsibility for localising, so if you’re from one of those weird countries that prefers mm-dd-yyyy then that’s what you should see.
  • It’s moderately accessible, all things considered, and it could easily be improved further.

It turns out that even when you try to make something terrible, so long as you’re building on top of the solid principles the web gives you for free, you can accidentally end up with something not-so-bad. Who knew?


The Fourth Way to Inject CSS

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Last week, Jeremy Keith wrote an excellent summary of the three ways to inject CSS into a web document. In short, he said:

There are three ways—that I know of—to associate styles with markup.

External CSS

<link rel="stylesheet" href="/path/to/styles.css">

Embedded CSS

<style>element { property: value; }</style>

Inline CSS

<element style="property: value"></element>

While talking about external CSS, he hinted at what I consider to be a distinct fourth way with its own unique use cases: using the Link: HTTP header. I’d like to share with you how it works and why I think it needs to be kept in people’s minds, even if it’s not suitable for widespread deployment today.

Injecting CSS using the Link: HTTP Header

Every one of Jeremy’s suggestions involves adding markup to the HTML document itself. Which makes sense; you almost always want to associate styles with a document regardless of where it’s stored or the medium over which it’s transmitted. The most popular approach to adding CSS to a page uses the <link> HTML element, but did you know… the <link> element has a semantically-equivalent HTTP header, Link:.

Bash shell running the command "curl -i https://hell.meiert.org/core/php/link.php". The response includes a HTTP Link: header referencing a CSS stylesheet; this is highlighted.
The only active examples I’ve been able to find are test pages like Jens Oliver Meiert’s (pictured), Louis Lazaris’s, and Anne van Kesteren’s, but it’s possible that others are hiding elsewhere on the Web.

According to the specifications, the following HTTP responses are equivalent in terms of the CSS that would be loaded and applied to the document:

Traditional external CSS injection:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

<!doctype html>
<html>
  <head>
    <title>My page</title>
    <link rel="stylesheet" href="/style/main.css">
  </head>
  <body>
    <h1>My page</h1>
  </body>
</html>

Link: header CSS injection:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Link: </style/main.css>; rel="stylesheet"

<!doctype html>
<html>
  <head>
    <title>My page</title>
  </head>
  <body>
    <h1>My page</h1>
  </body>
</html>
Diagram showing a web browser requesting a document from a web server, the web server finding the document and returning it after attaching HTTP headers.
A webserver adds headers when it serves a document anyway. Adding one more is no big deal.
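
Emitting the header from a server is trivial. Here’s a sketch using a bare Node.js handler (in practice you might equally set it from Apache, Nginx, or a reverse proxy configuration):

// Serving a document with a Link: header that references a stylesheet (sketch).
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/html; charset=UTF-8',
    'Link': '</style/main.css>; rel="stylesheet"'
  });
  res.end('<!doctype html><html><head><title>My page</title></head>' +
          '<body><h1>My page</h1></body></html>');
}).listen(8080);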

Why is this important?

This isn’t something you should put on your website right now. This (21-year-old!) standard is still only really supported in Firefox and pre-Blink Opera, so you lose perhaps 95% of the Web (it could be argued that because CSS ought to be considered progressive enhancement, it’s tolerable so long as your HTML is properly-written).

If it were widely-supported, though, that would be a really good thing: HTTP headers beat meta/link tags for configurability, performance management, and separation of concerns. Need some specific examples? Sure: here’s what you could use HTTP stylesheet linking for:

Diagram showing a reverse proxy server modifying the headers set by an upstream web server in response to a request by a web browser.
You have no idea how many times in my career I’d have injected CSS Link: headers using a reverse proxy server, if only the standard were universally-implemented. This technique would have made one of my final projects at the Bodleian so much easier…
  • Performance improvement using aggressively preloaded “top” stylesheets before the DOM parser even fires up.
  • Stylesheet injection by edge caches to provide regionalised/localised changes to brand identity.
  • Strong separation of content and design by hosting content and design elements in different systems.
  • Branding your staff intranet differently when it’s accessed from outside the network than inside it.
  • Rebranding proprietary services on your LAN without deep inspection, using reverse proxies.
  • Less-destructive user stylesheet injection by plugins etc. that doesn’t risk breaking icky on-page Javascript (e.g. theme switchers).
  • Browser detection? 😂 You could use this technique today to detect Firefox. But you absolutely shouldn’t; if you think you need browser detection in CSS, use this instead.

Unfortunately right now though, stylesheet Link: headers remain consigned to the bin of “cool stylesheet standards that we could probably use if it weren’t for fucking Google”; see also alternate stylesheets.

Update: I later used this technique to make a seemingly-empty web page.


Watching Films Together… Apart

This weekend I announced and then hosted Homa Night II, an effort to use technology to help bridge the chasms that’ve formed between my diaspora of friends as a result mostly of COVID. To a lesser extent we’ve been made to feel distant from one another for a while as a result of our very diverse locations and lifestyles, but the resulting isolation was certainly compounded by lockdowns and quarantines.

Mark, Sian, Alec, Paul, Kit, Adam, Dan and Claire at Troma Night V.
Long gone are the days when I could put up a blog post to say “Troma Night tonight?” and expect half a dozen friends to turn up at my house.

Back in the day we used to have a regular weekly film night called Troma Night, named after the studio who dominated our early events and whose… genre… influenced many of our choices thereafter. We had over 300 such film nights, by my count, before I eventually left our shared hometown of Aberystwyth ten years ago. I wasn’t the last one of the Troma Night regulars to leave town, but more left before me than after.

Sour Grapes: participants share "hearts" with Ruth
Observant readers will spot a previous effort I made this year at hosting a party online.

Earlier this year I hosted Sour Grapes, a murder mystery party (an irregular highlight of our Aberystwyth social calendar, with thanks to Ruth) run entirely online using a mixture of video chat and “second screen” technologies. In some ways that could be seen as the predecessor to Homa Night, although I’d come up with most of the underlying technology to make Homa Night possible on a whim much earlier in the year!

WhatsApp chat: Dan proposes "Troma Night Remote"; Matt suggests calling it "Troma at Homa"; Dan settles on "Homa Night".
The idea spun out of a few conversations on WhatsApp but the final name – Homa Night – wasn’t agreed until early in November.

How best to make such a thing happen? When I first started thinking about it, during the first of the UK’s lockdowns, I considered a few options:

  • Streaming video over a telemeeting service (Zoom, Google Meet, etc.)
    Very simple to set up, but the quality – as anybody who’s tried this before will attest – is appalling. Being optimised for speech rather than music and sound effects gives the audio a flat, scratchy sound, video compression artefacts that are tolerable when you’re chatting to your boss are really annoying when they stop you reading a crucial subtitle, audio and video often get desynchronised in a way that’s frankly infuriating, and everybody’s download speed is limited by the upload speed of the host, among other issues. The major benefit of these platforms – full-duplex audio – is destroyed by feedback so everybody needs to stay muted while watching anyway. No thanks!
  • Teleparty or a similar tool
    Teleparty (formerly Netflix Party, but it now supports more services) is a pretty clever way to get almost exactly what I want: synchronised video streaming plus chat alongside. But it only works on Chrome (and some related browsers) and doesn’t work on tablets, web-enabled TVs, etc., which would exclude some of my friends. Everybody requires an account on the service you’re streaming from, potentially further limiting usability, and that also means you’re strictly limited to the media available on those platforms (and further limited again if your party spans multiple geographic distribution regions for that service). There’s definitely things I can learn from Teleparty, but it’s not the right tool for Homa Night.
  • “Press play… now!”
    The relatively low-tech solution might have been to distribute video files in advance, have people download them, and get everybody to press “play” at the same time! That’s at least slightly less-convenient because people can’t just “turn up”, they have to plan their attendance and set up in advance, but it would certainly have worked and I seriously considered it. There are other downsides, though: if anybody has a technical issue and needs to e.g. restart their player then they’re basically doomed in any attempt to get back in-sync again. We can do better…
  • A custom-made synchronised streaming service…?
Homa Night architecture: S3 delivers static content to browsers, browsers exchange real-time information via Firebase.
A custom solution that leveraged existing infrastructure for the “hard bits” proved to be the right answer.

So obviously I ended up implementing my own streaming service. It wasn’t even that hard. In case you want to try your own, here’s how I did it:

Media preparation

First, I used Adobe Premiere to create a video file containing both of the night’s films, bookended and separated by “filler” content to provide an introduction/lobby, an intermission, and a closing “you should have stopped watching by now” message. I made sure that the “intro” was a nice round duration (90s) and suitable for looping because I planned to hold people there until we were all ready to start the film. Thanks to Boris & Oliver for the background music!

Dan uses a green screen to add to the intermission.
Honestly, the intermission was just an excuse to keep my chroma key gear out following its most-recent use.

Next, I ran the output through Handbrake to produce “web optimized” versions in 1080p and 720p output sizes. “Web optimized” in this case means that metadata gets added to the start of the file to allow it to start playing without downloading the entire file (streaming) and to allow the calculation of what-part-of-the-file corresponds to what-part-of-the-timeline: the latter, when coupled with a suitable webserver, allows browsers to “skip” to any point in the video without having to watch the intervening part. Naturally I’m encoding with H.264 for the widest possible compatibility.

Handbrake preparing to transcode Premiere's output.
Even using my multi-GPU computer for the transcoding I had time to get up and walk around a bit.

Real-Time Synchronisation

To keep everybody’s viewing experience in-sync, I set up a Firebase account for the application: Firebase provides an easy-to-use Websockets platform with built-in data synchronisation. Ignoring the authentication and chat features, there wasn’t much shared here: just the currentTime of the video in seconds, whether or not introMode was engaged (i.e. everybody should loop the first 90 seconds, for now), and whether or not the video was paused:

Firebase database showing shared currentTime, introMode, and paused values.
Firebase makes schemaless real-time databases pretty easy.

To reduce development effort, I never got around to implementing an administrative front-end; I just manually went into the Firebase database and acknowledged “my” computer as being an administrator, after I’d connected to it, and then ran a little Javascript in my browser’s debugger to tell it to start pushing my video’s currentTime to the server every few seconds. Anything else I needed to edit I just edited directly from the Firebase interface.
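
That console snippet would have been something along these lines (a sketch, not the original code; it assumes the Firebase v8 “namespaced” SDK and the field names shown in the screenshot above):

// Run in the "admin" browser's console: push local playback state to Firebase
// every few seconds so that the other clients can follow along.
// (Sketch only; the 'state' path is an assumption.)
const stateRef = firebase.database().ref('state');
const video = document.querySelector('video');

setInterval(() => {
  stateRef.update({
    currentTime: video.currentTime,
    paused: video.paused
  });
}, 3000);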

Other web clients had Javascript to instruct them to monitor these variables from the Firebase database and, if they were desynchronised by more than 5 seconds, “jump” to the correct point in the video file. The hard part of the code… wasn’t really that hard:

// Rewind if we're past the end of the intro loop
function introModeLoopCheck() {
  if (!introMode) return;
  if (video.currentTime > introDuration) video.currentTime = 0;
}

function fixPlayStatus() {
  // Handle "intro loop" mode
  if (remotelyControlled && introMode) {
    if (video.paused) video.play(); // always play
    introModeLoopCheck();
    return; // don't look at the rest
  }

  // Fix current time
  const desync = Math.abs(lastCurrentTime - video.currentTime);
  if (
    (video.paused && desync > DESYNC_TOLERANCE_WHEN_PAUSED) ||
    (!video.paused && desync > DESYNC_TOLERANCE_WHEN_PLAYING)
  ) {
    video.currentTime = lastCurrentTime;
  }
  // Fix play status
  if (remotelyControlled) {
    if (lastPaused && !video.paused) {
      video.pause();
    } else if (!lastPaused && video.paused) {
      video.play();
    }
  }
  // Show/hide paused notification
  updatePausedNotification();
}
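
On each viewer’s side, the counterpart subscribes to the shared values and keeps the variables that fixPlayStatus() reads up to date. Something like this (again a sketch, under the same assumptions as above):

// Subscribe to the shared state and re-run the sync check whenever it changes.
// (Sketch; assumes the same Firebase SDK and database layout as above.)
// These are the globals referenced by fixPlayStatus(), declared here for completeness.
let lastCurrentTime = 0;
let lastPaused = false;
let introMode = false;

firebase.database().ref('state').on('value', snapshot => {
  const state = snapshot.val();
  lastCurrentTime = state.currentTime;
  lastPaused = state.paused;
  introMode = state.introMode;
  fixPlayStatus();
});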

Web front-end

Finally, there needed to be a web page everybody could go to to get access to this. As I was hosting the video on S3+CloudFront anyway, I put the HTML/CSS/JS there too.

Configuring a Homa Night video player.
I decided to carry the background theme of the video through to the web interface too.

I tested in Firefox, Edge, Chrome, and Safari on desktop, and (slightly less) on Firefox, Chrome and Safari on mobile. There were a few quirks to work around, mostly to do with browsers not letting videos make sound until the page has been interacted with after the video element has been rendered, which I carefully worked-around by putting a popup “over” the video to “enable sync”, but mostly it “just worked”.
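
The “enable sync” workaround itself is tiny: gate un-muted playback behind a click on an overlay, because browsers only permit sound after a user gesture. Roughly (a sketch; the element IDs are made up):

// Browsers only allow un-muted playback after a user gesture, so playback is
// gated behind a click on an "enable sync" overlay. (Element IDs are made up.)
const overlay = document.getElementById('enable-sync');
const video = document.querySelector('video');

overlay.addEventListener('click', () => {
  video.muted = false;
  video.play();
  overlay.remove(); // out of the way; synchronisation takes over from here
});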

Delivery

On the night I shared the web address and we kicked off! There were a few hiccups: some people’s browsers got disconnected early on and tried to start playing the film before it was time, and one of these, even once fixed, ran about a minute behind the others, leading to minor spoilers leaking via the rest of us riffing about them! But on the whole, it worked. I’ve had lots of useful feedback to improve on it for the next version, and I might even try to tidy up my code a bit and open-source the results if this kind of thing might be useful to anybody else.


Endpoint Encabulator

(This video is also available on YouTube.)

I’ve been working as part of the team behind the new application framework called the Endpoint Encabulator and wanted to share with you what I think makes our project so exciting: I promise it’ll make for two minutes of your time you won’t soon forget!

Naturally, this project wouldn’t have been possible without the pioneering work that preceded it by John Hellins Quick, Bud Haggart, and others. Nothing’s invented in a vacuum. However, my fellow developers and I think that our work is the first viable encabulator implementation to provide inverse reactive data binding suitable for deployment in front of a blockchain-driven backend cache. I’m not saying that all digital content will one day be delivered through Endpoint Encabulator, but… well; maybe it will.

If the technical aspects go over your head, pass it on to a geeky friend who might be able to make use of my work. Sharing is caring!

<blink> and <marquee>

I was chatting with a fellow web developer recently and made a joke about the HTML <blink> and <marquee> tags, only to discover that he had no idea what I was talking about. They’re a part of web history that’s fallen off the radar and younger developers are unlikely to have ever come across them. But for a little while, back in the 90s, they were a big deal.

Macromedia Dreamweaver 3 code editor window showing a <h2> heading wrapped in <marquee> and <blink> tags, for emphasis.
Even Macromedia Dreamweaver, which embodied the essence of 1990s web design, seemed to treat wrapping <blink> in <marquee> as an antipattern.

Invention of the <blink> element is often credited to Lou Montulli, who wrote the pioneering web browser Lynx before joining Netscape in 1994. He insists that he didn’t write any of the code that eventually became the first implementation of <blink>. Instead, he claims, while out at a bar (on the evening he’d first meet his wife!), he pointed out that many of the fancy new stylistic elements the other Netscape engineers were proposing wouldn’t work in Lynx, which is a text-only browser. The fanciest conceivable effect that would work across both browsers would be making the text flash on and off, he joked. Then another engineer – who he doesn’t identify – pulled a late night hack session and added it.

And so it was that when Netscape Navigator 2.0 was released in 1995 it added support for the <blink> tag, along with animated GIFs and the first inklings of JavaScript, which collectively would go on to define the “personal website” experience for years to come. Here’s how you’d use it:

<BLINK>This is my blinking text!</BLINK>

With no attributes, it was clear from the outset that this tag was supposed to be a joke. By the time HTML4 was published as a recommendation two years later, it was documented as being a joke. But the Web of the late 1990s saw it used a lot. If you wanted somebody to notice the “latest updates” section on your personal home page, you’d wrap a <blink> tag around the title (or, if you were a sadist, the entire block).

Cameron's World website, screenshot, showing GIFS and bright pallette
If you missed this particular chapter of the Web’s history, you can simulate it at Cameron’s World.

In the same year as Netscape Navigator 2.0 was released, Microsoft released Internet Explorer 2.0. At this point, Internet Explorer was still very-much playing catch-up with the features the Netscape team had implemented, but clearly some senior Microsoft engineer took a look at the <blink> tag, refused to play along with the joke, but had an innovation of their own: the <marquee> tag! It had a whole suite of attributes to control the scroll direction, speed, and whether it looped or bounced backwards and forwards. While <blink> encouraged disgusting and inaccessible design as a joke, <marquee> did it on purpose.

<MARQUEE>Oh my god this still works in most modern browsers!</MARQUEE>

Oh my god this still works in most modern browsers!

If you see the text above moving… you’re looking at a living fossil in browser history.

But here’s the interesting bit: for a while in the late 1990s, it became a somewhat common practice to wrap content that you wanted to emphasise with animation in both a <blink> and a <marquee> tag. That way, the Netscape users would see it flash, the IE users would see it scroll or bounce. Like this:

<MARQUEE><BLINK>This is my really important message!</BLINK></MARQUEE>
Internet Explorer 5 showing a marquee effect.
Wrap a <blink> inside a <marquee> and IE users will see the marquee. Delightful.

The web has always been built on Postel’s Law: a web browser should assume that it won’t understand everything it reads, but it should provide a best-effort rendering for the benefit of its user anyway. Ever wondered why the modern <video> element is a block rather than a self-closing tag? It’s so you can embed within it code that an earlier browser – one that doesn’t understand <video> – can read (a browser’s default state when seeing a new element it doesn’t understand is to ignore it and carry on). So embedding a <blink> in a <marquee> gave you the best of both worlds, right? (welll…)

Netscape Navigator 5 showing a blink effect.
Wrap a <blink> inside a <marquee> and Netscape users will see the blink. Joy.

Better yet, you were safe in the knowledge that anybody using a browser that didn’t understand either of these tags could still read your content. Used properly, the web is about progressive enhancement. Implement for everybody, enhance for those who support the shiny features. JavaScript and CSS can be applied with the same rules, and doing so pays dividends in maintainability and accessibility (though, sadly, that doesn’t stop people writing sites that needlessly require these technologies).

Opera 5 showing no blinking nor marquee text.
Personally, I was a (paying! – back when people used to pay for web browsers!) Opera user so I mostly saw neither <blink> nor <marquee> elements. I don’t feel like I missed out.

I remember, though, the first time I tried Netscape 7, in 2002. Netscape 7 and its close descendants are, as far as I can tell, the only web browsers to support both <blink> and <marquee>. Even then, it was picky about the order in which they were presented and the elements wrapped-within them. But support was good enough that some people’s personal web pages suddenly began to exhibit the most ugly effect imaginable: the combination of both scrolling and flashing text.

Netscape 7 showing text that both blinks and marquee-scrolls.
If Netscape 7’s UI didn’t already make your eyes bleed (I’ve toned it down here by installing the “classic skin”), its simultaneous rendering of <blink> and <marquee> would.

The <blink> tag is very-definitely dead (hurrah!), but you can bring it back with pure CSS if you must. <marquee>, amazingly, still survives, not only in polyfills but natively, as you might be able to see above. However, if you’re in any doubt as to whether or not you should use it: you shouldn’t. If you’re looking for digital nostalgia, there’s a whole rabbit hole to dive down, but you don’t need to inflict <marquee> on the rest of us.


CSS Logical Properties

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

.my-element {
  margin-inline-start: 1em;
}

What this now does is instead of saying “add margin to the left”, it says “regardless of direction, put margin on the starting side”. If the language of the document was right to left, like Arabic, that margin would be on the right hand side.

This is clever. If you use e.g. margin-left on every list element after the first to put space “between” them, the spacing isn’t quite right when the order of the elements is reversed, for example because your page has been automatically translated into a language that reads in the opposite direction (e.g. right-to-left, rather than left-to-right). When you use margin-left in this way you’re imposing a language-direction-centric bias on your content, and there’s no need: margin-inline-start and its friends are widely-supported and say what you mean: “place a margin before this element”. I’ll be trying to remember to use this where it’s appropriate from now on.

Loading CSS Asynchronously Without JS Dependency

tl;dr? skip to the proof-of-concept/demo of lazy-loading CSS where possible while still loading it “conventionally” to users without Javascript

In a “daily tip” a couple of days ago, the excellent Chris Ferdinandi recommended an approach to loading CSS asynchronously based on a refined technique by Scott Jehl. The short of it is that you load your stylesheets like this:

<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">

You see what that’s doing? It’s loading the stylesheet for the print medium, but then when the document finishes loading it’s switching the media type from “print” to “all”. Because it didn’t apply to begin with the stylesheet isn’t render-blocking. You can use this to delay secondary styles so the page essentials can load at full speed.

This website's Lighthouse score showing a Total Blocking Time of 0ms.
Reducing blocking times, like I have on this page, is one of many steps in optimising perceived page performance.

I don’t like this approach. I mean: I love the elegance… I just don’t like the implications.

Why I don’t like lazy-loading CSS using Javascript

Using Javascript to load CSS, in order to prevent that CSS blocking rendering, feels to me like it conceptually breaks the Web. It certainly violates the expectations of progressive enhancement, because it introduces a level of fault-intolerance that I consider (mostly) unacceptable.

CSS and Javascript are independent of one another. A well-designed progressively-enhanced page should function with HTML only, HTML-and-CSS only, HTML-and-JS only, or all three. CSS adds style, and JS adds behaviour to a page; when you insist that the user agent uses Javascript in order to load stylistic elements, you violate the separation of these technologies (I’m looking at you, the majority of heavyweight front-end frameworks!).

If you’re thinking that the only people affected are nerds like me who browse with Javascript wholly or partially disabled, you’re wrong: gov.uk research shows that around 1% of your visitors have Javascript fail for some reason or another: because it’s disabled (whether for preference, privacy, compatibility with accessibility technologies, or whatever), blocked, firewalled, or because they’re using a browser that you didn’t expect.

The Web Pyramid. In the style of a "food pyramid", shows Text Worth Reading at the bottom, supporting Markup, supporting Images, supporting CSS, supporting (a small amount of) Scripts.
Maciej Cegłowski‘s 2015 talk “Website Obesity” draws the boundaries firmly, using this great diagram.

Can we lazy-load CSS in a way that doesn’t depend on Javascript? (spoiler: yes)

Chris’s daily tip got me thinking: could there exist a way to load CSS in a non-render-blocking way but which degraded gracefully in the event that Javascript was unavailable? I.e. if Javascript is working, lazy-load CSS, otherwise: load conventionally as a fallback. It turns out, there is!

In principle, it’s this:

  1. Link your stylesheet from within a <noscript> block, thereby only exposing it where Javascript is disabled. Give it a custom attribute to make it easy to find later, e.g. <noscript lazyload> (if you’re a standards purist, you might prefer to use a data- attribute).
  2. Have your Javascript extract the contents of these <noscript> blocks and reinject them. In modern browsers, this is as simple as e.g. [...document.querySelectorAll('noscript[lazyload]')].forEach(ns=>ns.outerHTML=ns.innerHTML).
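
Put together, the markup and the script look something like this (a sketch of the approach; the lazyload attribute and file names are whatever you choose):

// In the HTML, secondary stylesheets are linked only inside <noscript>, e.g.:
//   <noscript lazyload>
//     <link rel="stylesheet" href="/path/to/secondary.css">
//   </noscript>
//
// With Javascript available, lift those links out of their <noscript> wrappers
// once the document has been parsed; only then do they load (non-blockingly).
window.addEventListener('DOMContentLoaded', () => {
  [...document.querySelectorAll('noscript[lazyload]')].forEach(
    ns => ns.outerHTML = ns.innerHTML
  );
});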

If you need support for Internet Explorer, you need a little more work, because Internet Explorer doesn’t expose <noscript> blocks to the DOM in a helpful way. There are a variety of possible workarounds; I’ve implemented one but not put too much thought into it because I rarely have to think about Internet Explorer these days.

In any case, I’ve implemented a proof of concept/demonstration if you’d like to see it in action: just take a look and view source (or read the page) for details. Or view the source alone via this gist.

Lazy-loading CSS using my approach provides most of the benefits of other approaches… but works properly in environments without Javascript too.

Update: Chris Ferdinandi’s refined this into an even cleaner approach that takes the best of both worlds.


Unreal Engine 5 is meant to ridicule web developers

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Our sources report that the underlying reason behind the impressive tech demo for Unreal Engine 5 by Epic Games is to ridicule web developers.

According to the Washington Post, the tech demo includes a new dynamic lighting system and a rendering approach with a much higher geometric detail for both shapes and textures. For example, a single statue in the demo can be rendered with 33 million triangles, giving it a truly unprecedented level of detail and visual density.

Turns out that the level of computational optimization and sheer power of this incredible technology is meant to make fun of web developers, who struggle to maintain 15fps while scrolling a single-page application on a $2000 MacBook Pro, while enjoying 800ms delays typing the corresponding code into their Electron-based text editors.

Funny but sadly true. However, the Web can be fast. What makes it slow is bloated, kitchen-sink-and-all frontend frameworks, pushing computational effort to the browser with overcomplicated DOM trees and unnecessarily rich CSS rules, developer privilege, and blindness to the lower-powered devices that make up most of the browsing world. Oh, and of course embedding a million third-party scripts to get you all the analytics, advertising, etc. you think you need doesn’t help, either.

The Web will never be as fast as native, for obvious reasons. But it can be fast; blazingly so. It just requires a little thought and consideration. I’ve talked about this recently.

Visitors, Developers, or Machines

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The performance tradeoff isn’t about where the bottleneck is. It’s about who has to carry the burden. It’s one thing for a developer to push the burden onto a server they control. It’s another thing entirely to expect visitors to carry that load when connectivity and device performance isn’t a constant.

This is another great take on the kind of thing I was talking about the other day: some developers who favour heavy frameworks (e.g. React) argue for the performance benefits, both in development velocity and TTFB. But TTFB alone is not a valid metric of a user’s perception of an application’s performance: if you’re sending a fast payload that then requires extensive execution and/or additional calls to the server-side, it stands to reason that you’re not solving a performance bottleneck, you’re just moving it.

I, for one, generally disfavour solutions that move a Web application’s bottleneck to the user’s device (unless there are other compelling reasons for it to be there, for example as part of an Offline First implementation, and even then it should be done with care). Moving the bottleneck to the user’s device disadvantages those on slower or older devices and limits your ability to scale performance improvements through carefully-engineered precaching, e.g. static compilation. It also produces a tendency towards a thick-client solution, which is sometimes exactly what you need but more often just means a larger initial payload and more power consumption on the (probably mobile) user’s device.

Next time you improve performance, ask yourself where the time saved has gone. Some performance gains are genuine; others are just moving the problem around.

Giving a Damn at Scale

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

How many more developers have to point out how bloated we’ve made the web with our frameworks, tracking scripts, and other 3rd party solutions before we take things seriously? We’ve been banging on about this for ages. It’s like a plague!

This is just what I was thinking when I wrote about that terribly bloated digital climate activism website the other day. I hadn’t meant to single them out specifically because the problem’s… well, it’s a pandemic!

And as Bridget goes on to say, it’s at this unusual time, with many people confined to home and using the Internet to try to get accurate and up-to-date information and resources (and sometimes overwhelming major websites in the process), that performance, accessibility, and availability matter most.

There’ll be many lessons to learn from the coronavirus crisis. But these lessons aren’t just related to healthcare, or work, or supermarket logistics. No field will be left untouched. And one of the things that I hope my field learns, when this is over, is the importance of things working even when things get tough. Test the sad paths, test your site under heavy load, test your site with your CDNs simulated as “down”, and develop accordingly. Because this isn’t the worst kind of crisis we could face, and we have to be ready for the worse ones that could come.

Digital Climate Strike’s Carbon Footprint

Ironically, the web page promoting the “Digital Climate Strike” is among the dirtiest on the Internet, based on the CO2 footprint of visiting it.

Global Climate Strike's "Take Action" webpage
Save your bandwidth: just look at this screenshot of the site instead of visiting.

Going to that page results in about 14 MB of data being transmitted from their server to your device (which you’ll pay for if you’re on a metered connection). For comparison, reading my recent post about pronouns results in about 356 KB of data. In other words, their page is forty times more bandwidth-consuming, despite the fact that my page has about four times the word count. The page you’re reading right now, thanks to its images, weighs in at about 650 KB: you could still download it more than twenty times over while you were waiting for theirs.

globalclimatestrike.net/action: "Uh oh! This web page is dirtier than 97% of web pages tested. Oh my, 7.74g of CO2 is produced every time someone visits this web page."
Well that’s got to be pretty embarrassing.

Worse still, the most-heavyweight of the content they deliver is stuff that’s arguably strictly optional and doesn’t add to the message:

  • Eight different font files are served from three different domains (the fonts alone consume about 140 KB) – seven more are queued but not used.
  • Among the biggest JavaScript files they serve is that of Hotjar analytics: I understand the importance of measuring your impact, but making your visitors – and the planet – pay for it is a little ironic.
  • The biggest JavaScript file seems to be for Mapbox, which as far as I can see is never actually used: that map on the page is a static image which, incidentally, I was able to reduce from 0.5 MB to 0.2 MB just by running it through a free online image compressor.
Image compression comparison for the map image. Before: 536K, after: 201K (-63%).
This took me literally seconds to do but would save about a twelfth of a second for every single typical 4G visitor to their site. And it’s not even the worst culprit.

And because the site sets virtually no caching headers, even if you’ve visited the website before you’re likely to have to download the whole thing again. Every single time.
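For contrast, here’s a rough sketch (using Node’s built-in http and fs modules; the filename, fingerprint, and max-age value are assumptions for illustration, not details of the site in question) of the sort of long-lived caching header that lets a returning visitor skip the re-download entirely:

  // Illustrative only: serve a fingerprinted stylesheet with a year-long cache
  // lifetime. Because the fingerprint in the filename changes whenever the
  // content does, it's safe to tell browsers never to re-check this URL.
  const http = require('http');
  const fs = require('fs');

  http.createServer((req, res) => {
    if (req.url === '/css/main.abc123.css') {  // hypothetical fingerprinted asset
      res.setHeader('Content-Type', 'text/css');
      res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
      fs.createReadStream('./css/main.abc123.css').pipe(res);
    } else {
      res.statusCode = 404;
      res.end('Not found');
    }
  }).listen(8080);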

It’s not just about bandwidth: all of those fonts, that JavaScript, and their 60 KB of CSS (this page sent you 13 KB) all have to be parsed and interpreted by your device. If you’re on a mobile device or a laptop, that means you’re burning through lithium (a non-renewable resource whose extraction and disposal are highly polluting), and whatever device you’re using, you’re using more electricity to visit their site than you need to. Coding antipatterns like document.write() and active event listeners that execute every time you scroll the page keep your processor working hard, turning electricity into waste heat.

It took me over 12 seconds on a high-end smartphone and a good 4G connection to load their page to the point of usability. That’s 12 seconds of a bright screen, a processor running full tilt, a data connection working its hardest, and a battery ticking away. And I assume I’m not the only person visiting the website today.
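On the scroll-listener point above, the fix is at least cheap. Here’s a rough sketch: batch the work to one update per animation frame with requestAnimationFrame, and mark the listener passive so it can never block scrolling. The updateHeader function is a made-up placeholder for whatever the handler actually needs to do:

  // Illustrative sketch (not taken from the site in question).
  function updateHeader(y) {
    // Placeholder for the real per-scroll work, e.g. toggling a class.
    document.body.classList.toggle('scrolled', y > 0);
  }

  let scheduled = false;
  window.addEventListener('scroll', () => {
    if (scheduled) return;             // an update is already queued for this frame
    scheduled = true;
    requestAnimationFrame(() => {
      updateHeader(window.scrollY);    // do the work at most once per frame
      scheduled = false;
    });
  }, { passive: true });               // promise the browser we won't call preventDefault()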

This isn’t really about this particular website, of course (and I certainly don’t want to discourage anybody from the important cause of saving the planet!). It’s about the bigger picture: there’s a widespread and long-standing trend in web development towards bigger, heavier, more power-hungry websites, built on top of heavyweight frameworks that push the hard work onto the user’s device and which favour developer happiness over user experience. This is pretty terrible: it makes the Web slow, and brittle, and it increases the digital divide as people on slower connections and older devices get left behind.

(Bonus reading: luckily there’s a counterculture of lean web developers…)

But this trend is also bad for the environment, and when your website exists to try to save it, that’s more than a little bit sad.


Why the GOV.UK Design System team changed the input type for numbers

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Android number pad

Using <input type="text" inputmode="numeric" pattern="[0-9]*"> allows for a degree of separation between how the user enters data (“input mode”), what the browser expects the user input to contain (the pattern attribute), and potentially how it tries to validate it.

I’ve sung the praises of the GDS research team before, and it’s for things like this that I respect them the most: they’re known for taking a deep-dive, user-centric approach to understanding usability issues, and they deliver valuable, actionable answers off the back of it.

If you’ve got Web forms that ask people for numbers, this is how you should be doing it. If you’re doing so specifically for 2FA purposes, see that post I shared last month on a similar topic.
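For reference, here’s a minimal sketch of a form field along those lines (the label, name, and id are invented for this example):

  <!-- Illustrative markup only: a numeric field that brings up the number pad
       on mobile keyboards without the quirks of type="number". -->
  <label for="account-number">Account number</label>
  <input type="text" id="account-number" name="account-number"
         inputmode="numeric" pattern="[0-9]*">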


BingO Bakery

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Don’t understand why Web accessibility is important? Need a quick and easily-digestible guide to the top things you should be looking into in order to make your web applications screenreader ready? Try this fun, video-game-themed 5 minute video from Microsoft.

There’s a lot more to accessibility than is covered here, and it’s perhaps a little over-focussed on screenreaders, but it’s still a pretty awesome introduction.