Browsers are pretty good at loading pages, it turns out


The <a> tag is one of the most important building blocks of the Internet. It lets you create a hyperlink: a piece of text, usually colored blue, that you can use to go to a new page. When you click on a hyperlink, your web browser downloads the new page from the server and displays it on the screen. Most web browsers also store the pages you previously visited so you can quickly go back to them. The best part is, the <a> tag gives you all of that behavior for free! Just tell the browser where you want to go, and it handles the rest.

Lately, though, that hasn’t been enough for website developers. The new fad is “client-side navigation”, where instead of relying on the browser to load new pages for you, you write a bunch of JavaScript code to do it instead. It’s actually really hard to get it right—loading the new page is simple enough, but you also have to write code to display a loading bar, make the Back and Forward buttons work, show an error page if the connection drops, and so on.

For a while, I didn’t understand why anyone did this. Was it just silly make-work, like how every social network redesigns their website every couple years for no discernible reason? Do <a> tags interfere with some creepy ad-tracking technique? Was there some really complicated technical reason why you shouldn’t use them?

Spoiler: good old-fashioned <a> hyperlinks tend to outperform Javascript-driven client-side navigation. We already learned about one reason for this – that adding more Javascript code just to get back what the browser gives you for free increases the payload you deliver to the user – but Carter demonstrates that progressive rendering goes a long way to explaining it, too. You see: browsers understand traditional navigation and are well-equipped with a suite of shortcuts to help them optimise for it. They can start rendering content before it’s all downloaded, defer (hinted-at) asynchronous data for later, and of course they already contain a pretty solid caching engine that you don’t even have to implement yourself.
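
To give a sense of quite how much of that free behaviour has to be rebuilt, here’s a deliberately minimal sketch of client-side navigation (the <main> element, the same-origin check, and the fallback behaviour are illustrative assumptions; a real implementation also needs loading indicators, scroll restoration, focus management, and more):

// A minimal, hypothetical client-side navigation layer. Real-world versions
// also need loading bars, error pages, scroll restoration, focus management...
document.addEventListener('click', async (event) => {
  const link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return; // only intercept same-origin links
  event.preventDefault();                               // suppress the browser's built-in navigation

  try {
    const response = await fetch(link.href);
    const html = await response.text();
    const doc = new DOMParser().parseFromString(html, 'text/html');
    document.querySelector('main').innerHTML = doc.querySelector('main').innerHTML;
    document.title = doc.title;
    history.pushState({}, '', link.href);               // keep the address bar honest
  } catch (err) {
    location.href = link.href;                          // on any failure, fall back to a real navigation
  }
});

// ...and Back/Forward no longer work unless you handle them yourself, too:
window.addEventListener('popstate', () => location.reload());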

JavaScript frameworks are better for accessibility (and other myths)


The other day, I saw someone on Twitter say (I’m not linking to the original tweet because I don’t want to pile-on the author):

I don’t bother with frameworks, I just use vanilla JS.

Roughly translated:

I’m smarter than the thousands of people who tried to solve the problems I’m about to solve. I’m an expert on security, a11y, browser support, and perf. I don’t care about ROI, I just want to code.

Here’s the thing: frameworks don’t really help you with this stuff.

Earlier this year, WebAIM conducted a survey of the top million sites on the web and found those that use frameworks are actually more likely to have accessibility issues.

Very much this. People who use Javascript frameworks because they think they protect them from common web development pitfalls are simply trading away a set of known, solvable problems and taking on a different set of unknown, unsolvable ones.

I’m not anti-framework, but I am pro-informed-developer. If security, accessibility, performance, and browser support are things you care about – and they absolutely should be – then you need to know the impact that the tools you choose have upon those things. It’s easy to learn the impact that vanilla JS has on them, but it’s harder to understand exactly what impact a framework might have or how that impact might be affected by interactions between it and all of the other frameworks and libraries you mix-in. And many developers don’t bother to learn.

Use frameworks if they’re the right tool for your job. But you should work towards understanding your tools. Incidentally: in doing so, you’ll probably come to discover that frameworks are the right tool for fewer jobs than you thought.

Reply to: A modern font loading strategy with the vanilla JS FontFaceSet.load() method

Chris Ferdinandi’s daily tip for yesterday addressed a common problem familiar to Web developers using custom fonts (i.e. basically all of them):

In many browsers, if a custom typeface is declared but hasn’t finished downloading and parsing yet, browsers will leave space for the text but not render it until the file is ready.

This is often called a Flash Of Invisible Text (or FOIT).

In a now slightly outdated article, Ilya Grigorik, a web performance engineer at Google, reports:

29% of page loads on Chrome for Android displayed blank text: the user agent knew the text it needed to paint, but was blocked from doing so due to the unavailable font resource. In the median case the blank text time was ~350 ms, ~750 ms for the 75th percentile, and a scary ~2300 ms for the 95th.

To make matters worse, some mobile browsers never timeout a failed font file, and therefore never show text in a fallback typeface if the custom one fails to load. You get nothing at all.

Let’s talk about how to fix that.

Chris is right…

He’s right that the FOIT is annoying, and he’s right that for most text (and especially body text) the best result would be if a fallback system font was used immediately and swapped-out for the designer’s preferred font as soon as it becomes available: this maximises usability, especially on slower devices and connections. His solution is this:

  1. Set the font to a fallback font initially.
  2. Set the font to the preferred font once a CSS class is applied to a root element.
  3. Use Javascript to apply that CSS class either when FontFaceSet.load() indicates that the font is available, or (via a cookie) for as long as the font file is expected to appear in the browser cache.
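
In code, that approach looks something like this (a sketch of the technique rather than Chris’s exact implementation; the fonts-loaded class, cookie name, and lifetime are illustrative):

// Sketch of the FontFaceSet.load() + cookie approach (illustrative names).
if ('fonts' in document) {
  if (document.cookie.indexOf('fontsLoaded=true') !== -1) {
    // We expect the font file to still be in the browser's cache:
    document.documentElement.classList.add('fonts-loaded');
  } else {
    document.fonts.load('1em "Raleway"').then(function () {
      document.documentElement.classList.add('fonts-loaded');
      // Remember, for roughly as long as we expect the cached file to last:
      document.cookie = 'fontsLoaded=true; max-age=604800; path=/';
    });
  }
}

The stylesheet then applies the custom font only when the fonts-loaded class is present on the root element, falling back to a system font otherwise.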

This approach is not without its problems. It requires Javascript (users for whom Javascript fails for some reason won’t see the font at all, but may still have to download the font file!), conflates cookie lifetime with cache lifetime (the two can be cleared independently, cookies can sometimes be synchronised across devices that don’t necessarily share caches, etc.), and uses Javascript techniques that don’t work in some browsers: Edge and Internet Explorer are both capable of showing custom web fonts, but both will use the fallback font unless either (a) further Javascript is added (which Chris doesn’t supply) or (b) browser detection and/or conditional comments are used to trigger different behaviour in these browsers (which is icky).

…but he’s also wrong…

If only there was a better way to prevent the FOIT. One which degrades gracefully in older browsers, doesn’t require Javascript, doesn’t make assumptions about user cookie/cache configuration, and ideally involves a lot less code. It turns out, there is!

The font-display CSS directive exists to solve this exact issue [MDN]. Here’s what it looks like being used to solve the problem Chris presents (example taken from my blog’s CSS!):

@font-face{
  font-family:"Raleway";
  font-style:normal;
  font-weight:400;
  src: local("Raleway"),
       local("Raleway-Regular"),
       url(/wp-content/themes/q18/fonts/raleway-v11-latin-regular.woff2) format("woff2"),
       url(/wp-content/themes/q18/fonts/raleway-v11-latin-regular.woff) format("woff");
  font-display:swap;
}

Setting font-display: swap in the @font-face block tells the browser to use fallback fonts in place of this font while it loads. That’s probably exactly what you want for text fonts and especially body text; it means that the user sees the text as soon as possible and it’s swapped-out for the preferred font the moment it becomes available: no Javascript necessary! Conversely, font-display: block is a better choice for icon fonts where you want to force the browser to wait as long as possible for the font file to load (because any content rendered using it makes no sense otherwise).

font-display works out-of-the-box with Chrome, Firefox, and Safari and with the next version of Edge; older versions of Edge and Internet Explorer will simply fall-back to their default behaviour (FOIT where-necessary) – this is a progressive enhancement technique. But instead of a couple of dozen lines of Javascript, it’s a single line of CSS.

The only downside is that Google Web Fonts won’t add this directive, so you’ll need to self-host your font files (which is really easy, by the way: there’s a tool that’ll show you how). You should consider doing this anyway, of course: CDNs introduce a number of problems and no longer provide the relative performance benefits they used to. So self-host your fonts, add font-display: swap, and enjoy the most-lightweight and well-standardised approach possible to combatting the FOIT.

The problem with single page apps


Single-page apps (or SPAs as they’re sometimes called) serve all of the code for an entire multi-UI app from a single index.html file.

They use JavaScript to handle URL routing with real URLs. For this to work, you need to:

  1. Configure the server to point all paths on a domain back to the root index.html file. For example, todolist.com and todolist.com/lists should both point to the same file.
  2. Suppress the default behavior when someone clicks a link that points to another page in the app.
  3. Use more JavaScript—history.pushState()—to update the URL without triggering a page reload.
  4. Match the URL against a map of routes, and serve the right content based on it.
  5. If your URL has variable information in it (like a todolist ID, for example), parse that data out of the URL.
  6. Detect when someone clicks the browser’s back button/forward button, and update the URL and UI.
  7. Update the title element on the page.
  8. Use even more JavaScript to dynamically focus the content area when the content changes (for screen-reader users).

(Shoutout to Ashley Bischoff for those last two!)
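
To make steps 4 to 8 concrete, here’s a hedged sketch of the kind of glue code involved (the route map, render functions, and <main> element are all hypothetical):

// Hypothetical SPA routing glue: a route map, URL parameter parsing, title
// updates, and focus management, all re-implemented by hand.
const renderHome     = ()   => { document.querySelector('main').innerHTML = '<h1>Todo lists</h1>'; };
const renderList     = (id) => { document.querySelector('main').innerHTML = '<h1>List ' + id + '</h1>'; };
const renderNotFound = ()   => { document.querySelector('main').innerHTML = '<h1>Not found</h1>'; };

const routes = [
  { pattern: /^\/$/,             render: renderHome,  title: 'Todo lists' },
  { pattern: /^\/lists\/(\d+)$/, render: renderList,  title: 'A list'     },
];

function showRoute(path) {
  for (const route of routes) {
    const match = path.match(route.pattern);
    if (!match) continue;
    route.render(match[1]);                    // e.g. the todolist ID parsed out of the URL
    document.title = route.title;              // step 7: update the title element
    const main = document.querySelector('main');
    main.setAttribute('tabindex', '-1');
    main.focus();                              // step 8: move focus for screen-reader users
    return;
  }
  renderNotFound();
}

window.addEventListener('popstate', () => showRoute(location.pathname)); // step 6: back/forward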

You end up recreating with JavaScript a lot of the features the browser gives you out-of-the-box.

This becomes more code to maintain, more complexity to manage, and more things to break. It makes the whole app more fragile and bug-prone than it has to be.

I’m going to share some alternatives that I prefer.

Like – it seems – Chris Ferdinandi, I’ve got nothing against Single Page Applications in their place.

My biggest concern with SPAs is that they’re routinely seen as an inevitable progression of web development: that is, that an increasing number of web developers have been brainwashed into thinking that they’re intrinsically superior to traditional multi-page websites. As Adam Silver observed the other year, using your heavyweight Javascript framework to Ajaxify your page loads does make the application feel faster… but only because the download and processing time of the heavyweight Javascript framework made it feel slow in the first place! The net result: web bloat, penalising of mobile users, and brittle applications with many failure points.

Whenever I see a new front-end framework sing the praises of its routing engine I wonder how we got to this point. After all: the Web’s had a routing engine since 1990, and most efforts to reinvent it invariably make it worse: less-accessible, less-archivable, less-sharable, less-discoverable, less-reliable, or several of these.

Reducing motion with the picture element


I was just talking with Dave about the accessibility of moving images on the web, and he said:

hm… I wonder if you could use picture + prefers-reduced-motion?

He then sends the following code:

<picture>
  <source srcset="no-motion.jpg" media="(prefers-reduced-motion: reduce)"></source> 
  <img srcset="animated.gif" alt="brick wall"/>
</picture>

I copied the code, dropped it into a post of mine, created a static image of an animated GIF, and turned on the “reduce motion” preference (System Preferences > Accessibility > Display). And then BOOM. Just worked. In real time!

I added reduced-motion support to DanQ.me earlier this year, but I only bothered to pay attention to the animated parts of the layout and design itself (the “bounce” on the menus and the cutesy motion of the logo, for example) and considered the (few) GIF animations and the like that I’ve added to be out-of-scope. But this approach is really quite simple and elegant, and I’ll bear it in mind if I ever have need of such a thing!

Accessibility Resources


As it’s Global Accessibility Awareness Day, I’m sharing a list of accessibility resources that I regularly refer to. Happy reading, watching and listening!

A fabulously-useful concise list of some of the hottest articles, books, and webinars on accessibility in web development; saved for my later convenience.

JavaScript and the mobile-only user


It’s 2019. No one disables JavaScript.

Actually (cue Adam Conover!)… some people do. They represent about a tenth of a percent (0.1%) of the people who don’t get your JavaScript file, though, and they’ve chosen to browse the web that way, so let’s ignore them for a second.

I see this argument—that no one disables JS in 2019—as an argument for not bothering to care or worry about progressive enhancement. But it’s wrong!

There are plenty of other reasons why people don’t get your JavaScript.

  • Your CDN fails
  • An ad blocker or firewall got a little overly aggressive
  • A JS error (either in your code or something third-party) stops all of the JS from rendering
  • The file times out because it’s too big or too slow to parse

Today, I want to focus on that last one.

So very much this. Web bloat is becoming a huge issue (incidentally, I was pleased to see that DanQ.me’s homepage Web Bloat Score is in the region of a nice, clean 0.1, but I’m confident that there’s still plenty I could do to improve it); it’s easy to see how developers on their powerful desktops and laptops and with their WiFi-connected high-end smartphones might overlook people on older, less-powerful devices and slower, lower-bandwidth connections.

I’m not saying that Javascript is bad: it’s not! I’m saying that where functionality exists in simpler, more-lightweight technologies (like good old-fashioned links and regular <input> elements, both of which are routinely reimplemented in the front-end), then those technologies should be used in the first instance. If you want to build on top of that with Javascript, that’s great! But starting from the ground floor when building for the web is the surest way to support the widest diversity of potential users (and it makes it easier to achieve your accessibility goals, too!)

Blogging with semantic insertions and deletions

When I write a blog post, it generally becomes a static thing: its content usually stays the same for the rest of its life (which is, in my case, pretty much forever). But sometimes, I go back and make an amendment. When I make minor changes that don’t affect the overall meaning of the work, like fixing spelling mistakes and repointing broken links, I just edit the page, but for more-significant changes I try to make it clear what’s changed and how.

An insertion and a deletion on a 2007 blog post announcing Troma Night plans.
This blog post from 2007, for example, was amended after its publication with the insertion of content at the top and the deletion of content within.

Historically, I’d usually marked up deletions with the HTML <strike>/<s> elements (or other visually-similar approaches) and insertions by clearly stating that a change had been made (usually accompanied by the date and/or time of the change), but this isn’t a good example of semantic code. It also introduces an ambiguity when it clashes with the times I use <s> for comedic effect in the Web equivalent of the old caret-notation joke:

Be nice to this fool^H^H^H^Hgentleman, he's visiting from corporate HQ.

Better, then, to use the <ins> and <del> elements, which were designed for exactly this purpose and even accept attributes to specify the date/time of the modification and to cite a resource that explains the change, e.g. <ins datetime="2019-05-03T09:00:00+00:00" cite="https://alices-blog.example.com/2019/05/03/speaking.html">The last speaker slot has now been filled; thanks Alice</ins>. I’ve worked to retroactively add such semantic markup to my historical posts where possible, but it’ll be an easier task going forwards.

Of course, no browser I’m aware of supports these attributes, which is a pity because the metadata they hold may well have value to a reader. In order to expose them I’ve added a little bit of CSS that looks a little like this, which makes their details (where available) visible as a sort-of tooltip when hovering over or tapping on an affected area. Give it a go with the edits at the top of this post!

ins[datetime], del[datetime] {
  position: relative;
}

ins[datetime]::before, del[datetime]::before {
  position: absolute;
  top: -24px;
  font-size: 12px;
  color: #fff;
  border-radius: 4px;
  padding: 2px 6px;
  opacity: 0;
  transition: opacity 0.25s;
  hyphens: none;                    /* suppresses sitewide line break hyphenation rules */
  white-space: nowrap;              /* suppresses extraneous line breaks in Chrome      */
}

ins[datetime]:hover::before, del[datetime]:hover::before {
  opacity: 0.75;
}

ins[datetime]::before {
  content: 'inserted ' attr(datetime) ' ' attr(cite);
  background: #050;                 /* insertions are white-on-green                    */
}

del[datetime]::before {
  content: 'deleted ' attr(datetime) ' ' attr(cite);
  background: #500;                 /* deletions are white-on-red                       */
}
CSS facilitating the display of <ins>/<del> datetimes and citations on hover or touch.

I’m aware that the intended use-case of <ins>/<del> is change management, and that the expectation is that the “final” version of a document wouldn’t be expected to show all of the changes that had been made to it. Such a thing could be simulated, I suppose, by appropriately hiding and styling the <ins>/<del> blocks on the client-side, and that’s something I might look into in future, but in practice my edits are typically small and rare enough that nobody would feel inconvenienced by their inclusion/highlighting: after all, nobody’s complained so far and I’ve been doing exactly that, albeit in a non-semantic way, for many years!

I’m also slightly conscious that my approach to the “tooltip” might cause it to obstruct interactivity with something directly above an insertion or deletion: e.g. making a hyperlink inaccessible. I’ve tested with a variety of browsers and devices and it doesn’t seem to happen (my line height works in my favour) but it’s something I’ll need to be mindful of if I change my typographic design significantly in the future.

A final observation: I love the CSS attr() function, and I’ve been using it (and counter()) for all kinds of interesting things lately, but it annoys me that I can only use it in a content: statement. It’d be amazingly valuable to be able to treat integer-like attribute values as integers and combine it with a calc() in order to facilitate more-dynamic styling of arbitrary sets of HTML elements. Maybe one day…

For the time being, I’m happy enough with my new insertion/deletion markers. If you’d like to see them in use in their natural environment, see the final paragraph of my 2012 review of The Signal and The Noise.

How many people are missing out on JavaScript enhancement?


A few weeks back, we were chatting about the architecture of the Individual Electoral Registration web service. We started discussing the pros and cons of an approach that would provide a significantly different interaction for any people not running JavaScript.

“What proportion of people is that?” an inquisitive mind asked.

Silence.

We didn’t really have any idea how many people are experiencing UK government web services without the enhancement of JavaScript. That’s a bad thing for a team that is evangelical about data driven design, so I thought we should find out.

The answer is:

1.1% of people aren’t getting Javascript enhancements (1 in 93)

This article by the GDS is six years old now, but its fundamental point is still as valid as ever: a small proportion (probably in the region of 1%) of your users won’t experience some or all of the whizzy Javascript stuff on your website, and it’s not because they’re a power user who disables Javascript.

There are so many reasons a user won’t run your Javascript, including:

  • They’re using a browser that doesn’t support Javascript (or doesn’t support the version you’re using)
  • They, or somebody they share their device with, has consciously turned-off Javascript either wholesale or selectively, in order to (for example) save bandwidth, improve speed, reinforce security, or improve compatibility with their accessibility technologies
  • They’re viewing a locally-saved, backed-up, or archived version of your page (possibly in the far future long after your site is gone)
  • Their virus scanner mis-classified your Javascript as potentially malicious
  • One or more of your Javascript files contains a bug which, on their environment, stops execution
  • One or more of your Javascript files failed to be delivered, for example owing to routing errors, CDN downtime, censorship, cryptographic handshake failures, shaky connections, cross-domain issues, stale caches…
  • On their device, your Javascript takes too long to execute or consumes too many resources and is stopped by the browser

Fundamentally, you can’t depend on Javascript and so you shouldn’t depend on it being there, 100% of the time, when it’s possible not to. Luckily, the Web already gives us all the tools we need to develop the vast, vast majority of web content in a way that doesn’t depend on Javascript. Back in the 1990s we just called it “web development”, but nowadays Javascript (and other optional/under-continuous-development web technologies like your favourite so-very-2019 CSS hack) is so ubiquitous that we give it the special name “progressive enhancement” and make a whole practice out of it.
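
As a miniature, hypothetical example of that practice: imagine a comment form that submits perfectly well with an ordinary page load; a script can then upgrade it to an in-page submission, but only after checking that everything it depends upon actually exists.

// Progressive enhancement in miniature (hypothetical form and selectors):
// without this script, the <form> still submits and works the old-fashioned way.
const form = document.querySelector('form.comments');
if (form && 'fetch' in window && 'FormData' in window) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    try {
      await fetch(form.action, { method: 'POST', body: new FormData(form) });
      form.insertAdjacentHTML('afterend', '<p role="status">Thanks for your comment!</p>');
    } catch (err) {
      form.submit(); // anything goes wrong? let the browser do it the traditional way
    }
  });
}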

The Web was designed for forwards- and backwards-compatibility. When you break that, you betray your users and you make work for yourself.

(by the way: I know I plugged the unpoly framework already, the other day, but you should really give it a look if you’re just learning how to pull off progressive enhancement)

I Used The Web For A Day On Internet Explorer 8


Who In The World Uses IE8?

Before we start, a disclaimer: I am not about to tell you that you need to start supporting IE8.

There’s every reason to not support IE8. Microsoft officially stopped supporting IE8, IE9 and IE10 over three years ago, and the Microsoft executives are even telling you to stop using Internet Explorer 11.

But as much as we developers hope for it to go away, it just. Won’t. Die. IE8 continues to show up in browser stats, especially outside of the bubble of the Western world.

Sure, you aren’t developing for IE8 any more. But you should be developing with progressive enhancement, and if you do that right, you get all kinds of compatibility, accessibility, future- and past-proofing built-in. This isn’t just about supporting the (many) African countries where IE8 usage remains at over 1%… it’s about supporting the Web’s openness and archivability and following best-practice in your support of new technologies.

The “Backendification” of Frontend Development



Asynchronous JavaScript in the form of Single Page Applications (SPA) offers an incredible opportunity for improving the user experience of your web applications. CSS frameworks like Bootstrap enable developers to quickly contribute styling as they’re working on the structure and behaviour of things.

Unfortunately, SPA and CSS frameworks tend to result in relatively complex solutions where traditionally separated concerns – HTML-structure, CSS-style, and JS-behaviour – are blended together as a matter of course, counter to the lessons learned by previous generations.

This blending of concerns can prevent entry-level developers and valued specialists (e.g. in visual design, accessibility, search engine optimization, and internationalization) from making meaningful contributions to a project.

In addition to the increasing cost of the few developers somewhat capable of juggling all of these concerns, it can also result in other real world business implications.

What is a front-end developer? Does anybody know, any more? And more-importantly, how did we get to the point where we’re actively encouraging young developers into habits like writing (cough React cough) files containing a bloaty, icky mixture of content, HTML (markup), CSS (style), and Javascript (behaviour)? Yes, I get that the idea is that individual components should be packaged together (if you’re thinking in a React-like worldview), but that alone doesn’t justify this kind of bullshit antipattern.

It seems like the Web used to have developers. Then it got complex, so we started differentiating back-end from front-end developers and described those who, like me, spanned the divide as full-stack developers. We gradually became a minority as more and more new developers, deprived of the opportunity to learn each new facet organically in this newly-complicated landscape, specialised in one end or the other; but that’s fine. But then… we started treating the front-end as the only end, and introducing all kinds of problems as a result… and most people don’t seem to have noticed, yet, exactly how much damage we’re doing to Web applications’ security, maintainability, future-proofability, archivability, addressability…

Modern CSS on DanQ.me

The current iteration of my blog diverges from an architectural principle common to most of its previous versions over the last 20 years. While each previous change in design and layout was intended to provide a single monolithic upgrade, this version tries to provide me with a platform for continuous, ongoing experimentation and change.

Debug console on DanQ.me showing Dan's head and a speech bubble.
Earlier this year I added experimental console art, for example. Click through for more details.

I’ve been trying to make better use of my blog as a vehicle for experimenting with web technologies, as I used to with personal sites back in the 1990s and early 2000s; to see a vanity site like this one as a living playground rather than something that – like most of the sites I’m paid to work on – has a design which is, for the most part, static for long periods of time.

"Blog" dropdown menu on DanQ.me.
The “popular” flag and associated background colour in the “Blog” top-level menu became permanent after a period of A/B testing. Thanks, unwitting testers!

Among the things I’ve added prior to the initial launch of this version of the design are gracefully-degrading grids, reduced-motion support, and dark-mode support – three CSS features of increasing levels of “cutting edge”-ness but each of which is capable of being implemented in a way that does not break the site’s compatibility. This site’s pages are readable using (simulations of) ancient rendering engines or even in completely text-based browsers, and that’s just great.

Here’s how I’ve implemented those three features:

Gracefully-degrading grids

Grid of recent notes and shares on DanQ.me
I’m not entirely happy with the design of these boxes, but that’s a job for another day.

The grid of recent notes, shares, checkins and videos on my homepage is powered by the display: grid; CSS directive. The number of columns varies by screen width from six on the widest screens down to three or just one on increasingly small screens. Crucially, grid-auto-flow: dense; is used to ensure an even left-to-right filling of the available space even if one of the “larger” blocks (with grid-column: span 2; grid-row: span 2;) is forced for space reasons to run onto the next line. This means that content might occasionally be displayed in a different order from that in which it is written in the HTML (which is reverse order of publication), but in exchange the items are flush with both sides.

Grid sample showing impact of dense flow.
The large “5 Feb” item in this illustration should, reverse-chronologically, appear before the “3 Feb” item, but there isn’t room for it on the previous line. grid-auto-flow: dense; means that the “3 Feb” item is allowed to bubble-up and fill the gap, appearing out-of-order but flush with the edge.

Not all web browsers support display: grid; and while that’s often only a problem of design rather than of readability (these browsers will fall back to usually-very-safe default display modes like block and inline, as appropriate), sometimes there are bigger problems. In Internet Explorer 11, for example, I found (with thanks to @_ignatg) a problem with my directives specifying the size of these cells (which are actually <li> elements because, well, semantics matter). Because it understood the directives that ought to impact the sizing of the list items but not the one that redeclared its display type, IE made… a bit of a mess of things…

Internet Explorer scrambles a list/grid combination.
Thanks, Internet Explorer. That’s totally what I was looking for.

Do websites need to look the same in every browser? No. But the content should be readable regardless, and here my CSS was rendering my content unreadable. Given that Internet Explorer users represent a little under 0.1% of visitors to my site I don’t feel the need to hack it to have the same look-and-feel: I just need it to have the same content readability. CSS Feature Queries to the rescue!

CSS Feature Queries – the @supports selector – make it possible to apply parts of your stylesheet if and only if the browser supports specific CSS features, for example grids. Better yet, using it in a positive manner (i.e. “apply these rules only if the browser supports this feature”) is progressive enhancement, because browsers that don’t understand the  @supports selector act in the same way as those that understand it but don’t support the specified feature. Fencing off the relevant parts of my stylesheet in a @supports (display: grid) { ... } block instructed IE to fall back to displaying that content as a boring old list: exactly what I needed.
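
Put together, the pattern looks roughly like this (illustrative class names; my real stylesheet is rather more involved):

/* Safe default for every browser: a plain old list. */
.recent-posts { list-style: none; padding: 0; }

/* Only browsers that understand both @supports and grid ever apply this: */
@supports (display: grid) {
  .recent-posts {
    display: grid;
    grid-template-columns: repeat(6, 1fr); /* reduced at narrower widths via media queries */
    grid-auto-flow: dense;                 /* later items may bubble up to fill gaps       */
    grid-gap: 1em;
  }
  .recent-posts .large {
    grid-column: span 2;
    grid-row: span 2;
  }
}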

Internet Explorer's view of the "grid" on the DanQ.me homepage.
It isn’t pretty, but it’s pretty usable!

Reduced-motion support

I like to put a few “fun” features into each design for my blog, and while it’s nowhere near as quirky as having my head play peek-a-boo when you hover your cursor over it, the current header’s animations are in the same ballpark: hover over or click on some of the items in the header menu to see for yourself.

Main menu with "Dan Q" title in its "bounced" position.
I’m most-pleased with the playful “bounce” of the letter Q when you hover over my name.

These kinds of animations are fun, but they can also be problematic. People with inner ear disorders (as well as people who’re just trying to maximise the battery life on their portable devices!) might prefer not to see them, and web designers ought to respect that choice where possible. Luckily, there’s an emerging standard to acknowledge that: prefers-reduced-motion. Alongside its cousins inverted-colors, prefers-reduced-transparency, prefers-contrast and prefers-color-scheme (see below for that last one!), these new CSS tools allow developers to optimise based on the accessibility features activated by the user within their operating system.

Motion-reducing controls in Windows 10 and MacOS X.
In Windows you turn off animations while in MacOS you turn on not-having animations, but the principle’s the same.

If you’ve tweaked your accessibility settings to reduce the amount of animation your operating system shows you, this website will respect that choice as well by not animating the contents of the title, menu, or the homepage “tiles” any more than is absolutely necessary… so long as you’re using a supported browser, which right now means Safari or Firefox (or the “next” version of Chrome). Making the change itself is pretty simple: I just added a @media screen and (prefers-reduced-motion: reduce) { ... } block to disable or otherwise cut-down on the relevant animations.
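
In other words, something along these lines (the selectors here are illustrative stand-ins for my real ones):

/* Keep the playful animations for everybody else... */
.site-title .q { transition: transform 0.2s; }
.site-title:hover .q { transform: translateY(-0.15em); }

/* ...but switch them off for users who've asked for less motion. */
@media screen and (prefers-reduced-motion: reduce) {
  .site-title .q,
  .main-menu a,
  .homepage-tiles li {
    transition: none;
    animation: none;
  }
}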

Dark-mode support

DanQ.me in dark mode.

Similarly, operating systems are beginning to support “dark mode”, designed for people trying to avoid eyestrain when using their computer at night. It’s possible for your browser to respect this and try to “fix” web pages for you, of course, but it’s better still if the developer of those pages has anticipated your need and designed them to acknowledge your choice for you. It’s only supported in Firefox and Safari so far and only on recent versions of Windows and MacOS, but it’s a start and a helpful touch for those nocturnal websurfers out there.

Enabling Dark Mode on Windows 10 and MacOS X
Come to the dark side, Luke. Or just get f.lux, I suppose.

It’s pretty simple to implement. In my case, I just stacked some overrides into a @media (prefers-color-scheme: dark) { ... } block, inverting the background and primary foreground colours, softening the contrast, removing a few “bright” borders, and darkening rather than lightening background images used on homepage tiles. And again, it’s an example of progressive enhancement: the majority (!) of users whose operating systems and/or browsers don’t yet support this feature won’t be impacted by its inclusion in my stylesheet, but those who can make use of it can appreciate its benefits.
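
A trimmed-down, illustrative version of that block might look like this (not my exact palette or selectors):

@media (prefers-color-scheme: dark) {
  body {
    background: #111;           /* invert the background...                          */
    color: #ccc;                /* ...and soften, rather than maximise, the contrast */
  }
  a { color: #9db; }
  .homepage-tile {
    border-color: transparent;  /* lose the "bright" borders                         */
  }
  .homepage-tile img {
    filter: brightness(0.7);    /* darken, rather than lighten, tile imagery         */
  }
}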

This isn’t the end of the story of CSS experimentation on my blog, but it’s a part of it that I hope you’ve enjoyed.

Cache-Control for Civilians


Cache-Control for Civilians – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts

One of the most common and effective ways to manage the caching of your assets is via the Cache-Control HTTP header. This header applies to individual assets, meaning everything on our pages can have a very bespoke and granular cache policy. The amount of control we’re granted makes for very intricate and powerful caching strategies.

A Cache-Control header might look something like this:

Cache-Control: public, max-age=31536000

Cache-Control is the header, and each of public and max-age=31536000 are directives. The Cache-Control header can accept one or more directives, and it is these directives, what they really mean, and their optimum use-cases that I want to cover in this post.

A great reference for configuring your HTTP caching headers.
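
As a taste of that granularity, two illustrative policies (not a recommendation for your site): a fingerprinted asset, whose filename changes whenever its content does, might safely be served with

Cache-Control: public, max-age=31536000, immutable

while an HTML page at a stable URL, which ought to be revalidated on every request, might instead get

Cache-Control: no-cache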

WorldWideWeb, 30 years on

This month, a collection of some of my favourite geeks got invited to CERN in Geneva to participate in a week-long hackathon with the aim of reimplementing WorldWideWeb – the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed to produce.

DanQ.me as displayed by the reimagined WorldWideWeb browser circa 1990
With the exception of a few character entity quirks, this site remains perfectly usable in the simulated WorldWideWeb browser. Clearly I wasn’t the only person to try this vanity-check…

This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that didn’t require a NeXT computer to run it and so a leap forward in mainstream appeal. In some ways, you might expect reimplementing WorldWideWeb to be easier, because its functionality is more-similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM and so there are distinct processing differences that must be considered to get a truly authentic experience.

Geeks hacking on WorldWideWeb reborn
It’s just like any other hackathon, if you ignore the enormous particle collider underneath it.

Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).

WorldWideWeb screenshot by Sir Tim Berners-Lee
When Sir Tim took this screenshot, he could never have predicted the way the Web would change, technically, over the next 25-30 years. But I’m almost more-interested in how it’s stayed the same.

In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible (sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that adherence to the same solid standards gives us a chance of writing pages today that look just as good in 30 years to come. Compare that to a proprietary technology like Flash whose heyday 15 years ago is overshadowed by its imminent death (not to mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).

The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.

Update 24 February 2019: After I submitted news of the browser to MetaFilter, I (and others) spotted a bug. So I came up with a fix…