…every major OS vendor has been adhering to the convention that checkboxes are square and radio buttons are round.
…
Apple is the first major operating system vendor to abandon a four-decades-long tradition. Their new visionOS — for the first time in the history of Apple — will have round
checkboxes.
Anyway, with Apple’s betrayal, I think it’s fair to say there’s no hope for this tradition to continue.
I therefore officially announce 2024 to be the year when the square checkbox has finally died.
The Web did a bad enough job of making checkboxes and radio buttons inconsistent. I’m not saying you can’t style them, Web developers, but let’s at least keep the fundamental
shape of them the way that they have been for decades so that users can understand them!
But yeah, Apple’s new designs could spell the beginning of the end of this long-established standard. Sad times.
This is an alternate history of the Web. The premise is true, but the story diverges from our timeline and looks at an alternative “Web that might have been”.
Prehistory
This is the story of P3P, one of the greatest Web standards whose history has been forgotten1, and how the abject failure of its first versions paved the
way for its bright future decades later. But I’m getting ahead of myself…
Drafted in 2002 in the wake of growing concern about the death of privacy on the Internet, P3P 1.0 aimed to make the collection of personally-identifiable data online transparent. Hurrah, right?
Initially, the principle was sound. The specification was weak. The implementation was appalling. But P3P 1.1
could have worked well.
Developers are lazy3 and soon converged on the simplest possible solution: add a garbage HTTP header like P3P: CP="See our website for our privacy policy." and your cookies work just fine! Ignore the problem, ignore the
proposed solution, just do what gets the project shipped.
Without any meaningful enforcement it was also perfectly feasible to, y’know, just lie about how well you treat user data. Seeing the way the wind was blowing, Mozilla dropped
support for P3P, and Microsoft’s support – which had always been half-baked and lacked even the most basic user-facing
controls or customisation options – languished in obscurity.
For a while, it seemed like P3P was dying. Maybe, in some alternate timeline, it did die: vanishing into
nothing like VRML, WAP, and XBAP.
But fortunately for us, we don’t live in that timeline.
Revival
In 2009, the European Union revisited the Privacy and Electronic Communications
Directive. The initial regulations, published in 2002, required that Web users be able to opt-out of tracking cookies, but the amendment required that sites ensure that
users opted-in.
As-written, this confusing new regulation posed an
immediate problem: if a user clicked the button to say “no, I don’t want cookies”, and you didn’t want to ask for their consent again on every page load… you had to give them a cookie
(or use some other technique
legally-indistinguishable from cookies). Now you’re stuck in an endless cookie-circle.4
This, and other factors of informed consent, quickly introduced a new pattern among those websites that were fastest to react to the legislative change:
The cookie consent banner, with all its confusing language and dark patterns, looked like it was going to become the new normal for web users in the early 2010s. But thankfully, our
saviour had been waiting in the wings all along.
Web users rebelled. These ugly overlays felt like a regression to a time when popup ads and splash pages were commonplace. “If only,” people cried out, “there were a better way to do
this!”
It was Professor Lorrie Cranor, one of the original authors of the underloved P3P specification and a respected champion of usable privacy and security, whose rallying cry gave us hope. Her CNET article, “Why
the EU Cookie Directive is a solved problem”5, inspired a new generation of development on what would become known as P3P 2.0.
While maintaining backwards compatibility, this new standard:
deprecated those horrible XML documents in favour of HTTP
headers and <link> tags alone,
removed support for Set-Cookie2: headers, which nobody used anyway, and
added features by which the provenance and purpose of cookies could be stated in a way that dramatically simplified adoption in browsers
Internet Explorer at this point was still used by a majority of Web users. It still supported the older
version of the standard, and – as perhaps the greatest gift that the much-maligned browser ever gave us – provided a reference implementation as well as a stepping-stone to wider
adoption.
Opera, then Firefox, then “new kid” Chrome each adopted P3P 2.0; Microsoft finally got on board with IE 8 SP 1. Now the latest versions of all the mainstream browsers had a solid
implementation6
well before the European data protection regulators began fining companies that misused tracking cookies.
Nowadays, we’ve pretty-well standardised on the address bar being the place where all cookie and privacy information and settings are stored. Can you imagine if things had
gone any other way?
But the story of P3P’s successes shone brightest in 2016, with the passing of the GDPR. The W3C realised that P3P could simplify both the expression and understanding of privacy policies for users, and formed a group to work on version 2.1. And that’s
the version you use today.
When you launch a new service, you probably use one of the many free wizard-driven tools to express your privacy policy and the bases for your data processing, and it spits out a
template privacy policy. You need the human-readable version, of course, since the 2020 German court ruling that you cannot rely on a machine-readable privacy policy alone, but
the real gem is the P3P: 2.1 header version.
Assuming you don’t have any unusual quirks in your data processing (ask your lawyer!), you can just paste the relevant code into your server configuration and you’re good to go. Site
users get a warning if their personal data preferences conflict with your data policies, and can choose how to act: not using your service, choosing which of your
features to opt in or out of, or – hopefully! – granting an exception to your site (possibly with caveats, such as sandboxing your cookies or clearing them immediately after closing
the browser tab).
Sure, what we’ve got isn’t perfect. Sometimes companies outright lie about their use of information or use illicit methods to track user behaviour. There’ll always be bad guys out there. That’s what laws are there to deal with.
But what we’ve got today is so seamless, it’s hard to imagine a world in which we somehow all… collectively decided that the correct solution to the privacy problem might have been to
throw endless popovers into users’ faces, bury consent-based choices under dark patterns, and make humans do the work that should from the outset have been done by machines. What a
strange and terrible timeline that would have been.
Footnotes
1 If you know P3P’s
history, regardless of what timeline you’re in: congratulations! You win One Internet Point.
2 Techbros have been trying to solve political problems using technology since long before
the word “techbro” was used in its current context. See also: (a) there aren’t enough mental health professionals, let’s make an AI app? (b) we don’t have enough ventilators for this
pandemic, let’s 3D print air pumps? (c) banks keep failing, let’s make a cryptocurrency? (d) we need less carbon in the atmosphere or we’re going to go extinct, better hope direct
carbon capture tech pans out eh? (e) we have any problem at all, let’s somehow shoehorn blockchain into some far-fetched idea about how to solve it without me having to get out of my
chair why not?
3 Note to self: find a citation for this when you can be bothered.
4 I can’t decide whether “endless cookie circle” is the name of the New Wave band I want
to form, or a description of the way I want to eventually die. Perhaps both.
6 Implementation details varied, but that’s part of the joy of the Web. Firefox favoured
“conservative” defaults; Chrome and IE had “permissive” ones; and Opera provided an ultra-configurable matrix of options by which a user could specify exactly which kinds of cookies to
accept, linked to which kinds of personal data, from which sites, all somehow backed by an extended regular expression parser that was only truly understood by three people, two of
whom were Opera developers.
What happens when you give Gutenberg and Elementor to complete Beginners? In this challenge, Meg and Lily (two of my daughters) are tasked with re-creating a webpage. They’ve never
used Elementor or Gutenberg before, and I only gave them 30 minutes each.
…
Jamie of Pootlepress challenged his daughters – who are presumably both digital natives, but have no WordPress experience – to build a page to a specific design using both Gutenberg and Elementor. In 30 minutes.
Regardless of what you think about the products under test or the competitors in the challenge (Lily + Gutenberg clearly seems to be the fan favourite, which I’d sort-of expect because
IMO Gutenberg’s learning curve is much flatter than Elementor’s), this is a fantastic example of “thinking aloud” (“talkalong”)
UX testing. And with (only) a £20 prize on offer, it’s possibly the best-value testing of its type I’ve ever seen too! Both the
participants do an excellent job of expressing their praise of and frustration with different parts of the interface of their assigned editing platform, and the developers of both – and
other systems besides – could learn a lot from watching this video.
Specifically, this video shows how enormous the gulf is between how developers try to express concepts that are essential to web design and how beginner users assume things will work.
Concepts like thinking in terms of “blocks” that can resize or reposition dynamically, breakpoints, assets as cross-references rather than strictly embedded within documents, style as
an overarching concept in preference to something applied to individual elements, etc… seem like second nature once you’re sixteen levels deep into the DOM and you’ve been doing it for years! But they’re rarely intuitive… or, perhaps, not expressed in a way that makes them intuitive… to new users.
Here’s a perfect example I bumped into earlier this week, courtesy of The Green Web Foundation. This looks like a
hyperlink… but if you open it in a new tab/window, you see a page (not even a 404 page!) with the text “It looks like nothing was found at this location.”
In the site shown in the screenshot above, the developer took something the web gave them for free (a hyperlink), threw it away (by making it a link-to-nowhere), and rebuilt its
functionality with Javascript (without thinking about the fact that you can do more with hyperlinks than click them: you can click-and-drag them, you can bookmark them, you can share
them, you can open them in new tabs etc.). Ugh.
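For illustration only – I haven’t dug into their actual code, and navigateTo() is a made-up stand-in for whatever their script does – the difference is something like:

<!-- The antipattern: a link-to-nowhere, re-wired with Javascript -->
<a href="#" onclick="navigateTo('/tools'); return false;">Our tools</a>

<!-- What the web gives you for free: middle-clickable, bookmarkable, shareable -->
<a href="/tools">Our tools</a>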
Something you can clearly type a numeric day, month and year into is best.
Three dropdowns are slightly worse, but at least if you use native HTML <select> elements keyboard
users can still “type” to filter.
Everything else – including things that look like <select>s but are really funky React <div>s – is pretty terrible.
Calendars can be great for choosing your holiday date range. But pressing “Prev” ~480 times to get to my month of birth isn’t good. Also: what’s with the time “sliders”? (Yes, I know I’ve implemented these myself, in the past, and I’m sorry.)
People designing webforms that require me to enter my birthdate:
I am begging you: just let me type it in.
Typing it in is 6-8 quick keystrokes. Trying to navigate a little calendar or spinny wheels back to the 1970s is time-consuming, frustrating and unnecessary.
They’re right. Those little spinny wheels are a pain in the arse if you’ve got to use one to go back 40+ years.
These things are okay (I guess) on mobile/touchscreen devices, though I’d still prefer the option to type in my date of birth. But send one to my desktop and I will
curse your name.
Can we do worse?
If there’s one thing we learned from making the worst volume control in the world, the other
year, it’s that you can always find a worse UI metaphor. So here’s my attempt at making a date of birth field that’s somehow
even worse than “date spinners”:
My datepicker implements a game of “higher/lower”. Starting from bounds specified in the HTML code and a random guess, it
narrows-down its guess as to what your date of birth is as you click the up or down buttons. If you make a mistake you can start over with the restart button.
Amazingly, this isn’t actually the worst datepicker into which I’ve entered my date of birth! It’s cognitively challenging compared to most, but it’s relatively fast at
narrowing down the options from any starting point. Plus, I accidentally implemented some good features that make it better than plenty of the datepickers out there:
It’s progressively enhanced – if the Javascript doesn’t load, you can still enter your date of birth in a sensible way (see the sketch after this list).
Because it leans on an <input type="date"> control, your browser takes responsibility for localising, so if you’re from one of those weird countries that prefers
mm-dd-yyyy then that’s what you should see.
It’s moderately accessible, all things considered, and it could easily be improved further.
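For the curious, the non-enhanced baseline is nothing more exotic than a native date field; a simplified sketch (the bounds are arbitrary, and this isn’t the demo’s exact markup):

<label for="dob">Date of birth</label>
<input type="date" id="dob" name="dob" min="1900-01-01" max="2024-12-31">
<!-- Where Javascript runs, it replaces this field with the higher/lower game. -->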
It turns out that even when you try to make something terrible, so long as you’re building on top of the solid principles the web gives you for free, you can accidentally end
up with something not-so-bad. Who knew?
I was chatting with a fellow web developer recently and made a joke about the HTML <blink> and
<marquee> tags, only to discover that he had no idea what I was talking about. They’re a part of web history that’s fallen off the radar and younger developers are
unlikely to have ever come across them. But for a little while, back in the 90s, they were a big deal.
Even Macromedia Dreamweaver, which embodied the essence of 1990s web design, seemed to treat wrapping
<blink> in <marquee> as an antipattern.
Invention of the <blink> element is often credited to Lou Montulli, who wrote pioneering web browser Lynx before joining Netscape in 1994. He insists that he didn’t write any
of the code that eventually became the first implementation of <blink>. Instead, he claims: while out at a bar (on the evening he’d first meet his wife!), he
pointed out that many of the fancy new stylistic elements the other Netscape engineers were proposing wouldn’t work in Lynx, which is a text-only browser. The fanciest conceivable
effect that would work across both browsers would be making the text flash on and off, he joked. Then another engineer – who he doesn’t identify – pulled a late night hack session and
added it.
And so it was that when Netscape Navigator 2.0 was released in 1995 it added support for
the <blink> tag. Also animated GIFs and the first inklings of JavaScript, which collectively
would go on to define the “personal website” experience for years to come. Here’s how you’d use it:
<BLINK>This is my blinking text!</BLINK>
With no attributes, it was clear from the outset that this tag was supposed to be a joke. By the time HTML4 was
published as a recommendation two years later, it was documented as being a joke. But the Web of the late 1990s
saw it used a lot. If you wanted somebody to notice the “latest updates” section on your personal home page, you’d wrap a <blink> tag around the title (or,
if you were a sadist, the entire block).
If you missed this particular chapter of the Web’s history, you can simulate it at Cameron’s World.
In the same year as Netscape Navigator 2.0 was released, Microsoft released Internet Explorer
2.0. At this point, Internet Explorer was still very-much playing catch-up with the features the Netscape team had implemented, but clearly some senior Microsoft engineer took a
look at the <blink> tag, refused to play along with the joke, and instead had an innovation of their own: the <marquee> tag! It had a whole suite of attributes to control the scroll direction, speed, and whether it looped or bounced backwards and forwards. While
<blink> encouraged disgusting and inaccessible design as a joke, <marquee> did it on purpose.
<MARQUEE>Oh my god this still works in most modern browsers!</MARQUEE>
But here’s the interesting bit: for a while in the late 1990s, it became a somewhat common practice to wrap content that you wanted to emphasise with animation in both a
<blink> and a <marquee> tag. That way, the Netscape users would see it flash, the IE users
would see it scroll or bounce. Like this:
<MARQUEE><BLINK>This is my really important message!</BLINK></MARQUEE>
Wrap a <blink> inside a <marquee> and IE users will see the marquee. Delightful.
The web has always been built on Postel’s Law: a web browser should assume that it won’t understand everything it reads,
but it should provide a best-effort rendering for the benefit of its user anyway. Ever wondered why the modern <video> element is a block rather than a self-closing
tag? It’s so you can embed within it code that an earlier browser – one that doesn’t understand <video> – can read (a browser’s default state when seeing a
new element it doesn’t understand is to ignore it and carry on). So embedding a <blink> in a <marquee> gave you the best of both worlds, right?
(welll…)
Wrap a <blink> inside a <marquee> and Netscape users will see the blink. Joy.
Better yet, you were safe in the knowledge that anybody using a browser that didn’t understand either of these tags could still read your content. Used properly, the
web is about progressive enhancement. Implement for everybody, enhance for those who support the shiny features. JavaScript and CSS can be applied with the same rules, and doing so pays dividends in maintainability and accessibility (though, sadly, that doesn’t stop people writing
sites that needlessly require these technologies).
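The <video> fallback mentioned above still works in exactly this way; a minimal sketch (filename invented):

<video src="cat.webm" controls>
  <!-- Only browsers that don't understand <video> render this content: -->
  <a href="cat.webm">Download the video instead.</a>
</video>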
Personally, I was a (paying! – back when people used to pay for web browsers!) Opera user so I mostly saw neither <blink> nor <marquee> elements.
I don’t feel like I missed out.
I remember, though, the first time I tried Netscape 7, in 2002. Netscape 7 and its close descendants are, as far as I can tell, the only web browsers to support both <blink> and <marquee>. Even then, it was picky about the order in which they were presented and the elements wrapped-within them. But support was
good enough that some people’s personal web pages suddenly began to exhibit the most ugly effect imaginable: the combination of both scrolling and flashing text.
If Netscape 7’s UI didn’t already make your eyes bleed (I’ve toned it down here by installing the “classic skin”), its simultaneous
rendering of <blink> and <marquee> would.
The <blink> tag is very-definitely dead (hurrah!), but you can bring it back with pure CSS if you must.
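If you really must, the usual pure-CSS substitute is a sketch like this one:

.blink {
  animation: blink 1s steps(2, start) infinite; /* toggles visibility every half-second */
}
@keyframes blink {
  to { visibility: hidden; }
}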
<marquee>, amazingly, still survives, not only in polyfills but natively, as you might be able to see above. However, if you’re in any doubt as to whether or not
you should use it: you shouldn’t. If you’re looking for digital nostalgia, there’s a whole
rabbit hole to dive down, but you don’t need to inflict <marquee> on the rest of us.
What this now does is instead of saying “add margin to the left”, it says “regardless of direction, put margin on the starting side”. If the language of the document was
right to left, like Arabic, that margin would be on the right hand side.
…
This is clever. If you use e.g. margin-left on every list element after the first to put space “between” them, the spacing isn’t quite right when the order of the elements
is reversed, for example because your page has been automatically translated into a language that reads in the opposite direction (e.g. right-to-left, rather than left-to-right). When
you use margin-left in this way you’re imposing a language-direction-centric bias on your content, and there’s no need: margin-inline-start and its friends
are widely-supported and say what you mean: “place a margin before this element”. I’ll be trying to remember to
use this where it’s appropriate from now on.
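For instance, to space out a row of list items in a direction-agnostic way:

/* Before: bakes in a left-to-right assumption */
li + li { margin-left: 1em; }

/* After: margin goes on the “starting side”, whatever the writing direction */
li + li { margin-inline-start: 1em; }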
You see what that’s doing? It’s loading the stylesheet for the print medium, but then when the document finishes loading it’s switching the media type from “print” to “all”.
Because it didn’t apply to begin with the stylesheet isn’t render-blocking. You can use this to delay secondary styles so the page essentials can load at full speed.
Reducing blocking times, like I have on this page, is one of many steps in optimising perceived page performance.
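The technique being described looks something like this sketch (filename invented); the onload fires once the stylesheet itself has arrived:

<link rel="stylesheet" href="secondary.css" media="print" onload="this.media='all'">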
I don’t like this approach. I mean: I love the elegance… I just don’t like the implications.
Why I don’t like lazy-loading CSS using Javascript
Using Javascript to load CSS, in order to prevent that CSS
blocking rendering, feels to me like it conceptually breaks the Web. It certainly violates the expectations of progressive enhancement, because it introduces a level of
fault-intolerance that I consider (mostly) unacceptable.
CSS and Javascript are independent of one another. A well-designed progressively-enhanced page should function with
HTML only, HTML-and-CSS only, HTML-and-JS only, or all
three. CSS adds style, and JS adds behaviour to a page; and when
you insist that the user agent uses Javascript in order to load stylistic elements, you violate the separation of these technologies (I’m looking at you, the majority of heavyweight
front-end frameworks!).
If you’re thinking that the only people affected are nerds like me who browse with Javascript wholly or partially disabled, you’re wrong: gov.uk research shows that around 1% of your visitors have Javascript fail for some reason or another: because it’s disabled
(whether for preference, privacy, compatibility with accessibility technologies, or whatever), blocked, firewalled, or they’re using a browser that you didn’t expect.
Can we lazy-load CSS in a way that doesn’t depend on Javascript? (spoiler: yes)
Chris’s daily tip got me thinking: could there exist a way to load CSS in a non-render-blocking way but which degraded
gracefully in the event that Javascript was unavailable? I.e. if Javascript is working, lazy-load CSS, otherwise: load
conventionally as a fallback. It turns out, there is!
In principle, it’s this:
Link your stylesheet from within a <noscript> block, thereby only exposing it where Javascript is disabled. Give it a custom attribute to make it easy to find
later, e.g. <noscript lazyload> (if you’re a standards purist, you might prefer to use a data- attribute).
Have your Javascript extract the contents of these <noscript> blocks and reinject them. In modern browsers, this is as simple as e.g.
[...document.querySelectorAll('noscript[lazyload]')].forEach(ns=>ns.outerHTML=ns.innerHTML).
If you need support for Internet Explorer, you need a little more work, because Internet Explorer doesn’t expose <noscript> blocks to the DOM in a helpful way. There are a variety of possible workarounds; I’ve implemented one but not put too much thought into it because I rarely have to
think about Internet Explorer these days.
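Assembled, the whole approach sketches out like this for modern browsers (not the exact proof-of-concept code):

<noscript lazyload>
  <link rel="stylesheet" href="secondary.css">
</noscript>
<script>
  // Where Javascript works, lift the stylesheet out of its <noscript>
  // wrapper once the page has rendered; where it doesn't, the <noscript>
  // content applies conventionally, as a fallback.
  window.addEventListener('load', () =>
    [...document.querySelectorAll('noscript[lazyload]')]
      .forEach(ns => ns.outerHTML = ns.innerHTML)
  );
</script>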
In any case, I’ve implemented a proof of concept/demonstration if you’d like to see it in action: just take a look and view source (or read the page)
for details. Or view the source alone via this gist.
Lazy-loading CSS using my approach provides most of the benefits of other approaches… but works properly in environments without
Javascript too.
Update: Chris Ferdinandi’s refined this into an even cleaner approach that takes the best of both worlds.
Save your bandwidth: just look at this screenshot of the site instead of visiting.
Going to that page results in about 14 Mb of data being transmitted from their server to your device (which you’ll pay for
if you’re on a metered connection). For comparison, reading my recent post about pronouns results in about 356 Kb of data. In other words, their page is forty times more bandwidth-consuming, despite the fact that my page has about four times the word count. The page
you’re reading right now, thanks to its images, weighs in at about 650 Kb: you could still download it more than twenty times while
you were waiting for theirs.
Well that’s got to be pretty embarrassing.
Worse still, the most-heavyweight of the content they deliver is stuff that’s arguably strictly optional and doesn’t add to the message:
Eight different font files are served from three different domains (the fonts alone consume about 140 Kb) – seven more are
queued but not used.
Among the biggest JavaScript files they serve is that of Hotjar analytics: I understand the importance of measuring your impact, but making
your visitors – and the planet – pay for it is a little ironic.
The biggest JavaScript file seems to be for Mapbox, which as far as I can see is never actually used: that map on the page is a static
image which, incidentally, I was able to reduce from 0.5 Mb to 0.2 Mb just by running it through
a free online image compressor.
This took me literally seconds to do but would save about a twelfth of a second for every single typical 4G user to their site. And it’s not even the worst culprit.
And because the site sets virtually no caching headers, even if you’ve visited the website before you’re likely to have to download the whole thing again. Every single
time.
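For contrast, the bones of a sensible caching policy amount to a couple of response headers (values here are illustrative):

# Fingerprinted static assets (fonts, JS, CSS) can be cached “forever”:
Cache-Control: public, max-age=31536000, immutable
# The HTML itself can revalidate cheaply instead of re-downloading:
Cache-Control: no-cache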
It’s not just about bandwidth: all of those fonts, that JavaScript, their 60 Kb of CSS (this page sent you 13 Kb) all have to be parsed and interpreted by your device. If you’re on a mobile device or a laptop, that means you’re burning through lithium (a non-renewable resource whose extraction and disposal is highly polluting) and
regardless of the device you’re using, you’re using more electricity to visit their site than you need to. Coding antipatterns like document.write() and active event
listeners that execute every time you scroll the page keep your processor working hard, turning electricity into waste heat. It took me over 12 seconds on a high-end smartphone and a
good 4G connection to load this page to the point of usability. That’s 12 seconds of a bright screen, a processor running full tilt, a data connection working its hardest, and a
battery ticking away. And I assume I’m not the only person visiting the website today.
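(The scroll-listener problem, at least, has a cheap partial mitigation: a passive listener – sketched here with a hypothetical handler – promises the browser that it needn’t wait for your code before it scrolls the page.)

window.addEventListener('scroll', onScroll, { passive: true });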
This isn’t really about this particular website, of course (and I certainly don’t want to discourage anybody from the important cause of saving the planet!). It’s about the
bigger picture: there’s a widespread and long-standing trend in web development towards bigger, heavier, more power-hungry websites, built on top of heavyweight frameworks that push the
hard work onto the user’s device and which favour developer happiness over user experience. This is pretty terrible: it makes the Web slow, and brittle, and it increases the digital
divide as people on slower connections and older devices get left behind.
Using <input type="text" inputmode="numeric" pattern="[0-9]*"> allows for a degree of separation between how the user enters data (“input mode”), what the browser
expects the user input to contain (type equals number), and potentially how it tries to validate it.
…
I’ve sung the praises of the GDS research team before, and it’s for things like this that I respect them the most: they’re
known for taking a deep-dive user-centric approach to understanding usability issues, and they deliver valuable actionable answers off the back of it.
If you’ve got Web forms that ask people for numbers, this is how you should be doing it. If you’re doing so specifically for 2FA purposes, see that post I shared last month on a similar topic.
Don’t understand why Web accessibility is important? Need a quick and easily-digestible guide to the top things you should be looking into in order to make your web applications
screenreader ready? Try this fun, video-game-themed 5 minute video from Microsoft.
There’s a lot more to accessibility than is covered here, and it’s perhaps a little over-focussed on screenreaders, but it’s still a pretty awesome introduction.
Google’s built-in testing tool Lighthouse judges the accessibility of our websites with a score between 0 and 100. It’s laudable to try to get a high grading, but a score of 100
doesn’t mean that the site is perfectly accessible. To prove that I carried out a little experiment.
…
Manuel Matuzovic wrote a web page that’s pretty-much inaccessible to everybody: it doesn’t work with keyboard navigation, touchscreens, or mice. It doesn’t work with screen
readers. Even if you fix the other problems, its contrast is bad enough that almost nobody could read it. It fails ungracefully if CSS or JavaScript is unavailable. Even the source code is illegible. This took a special kind of evil.
But it scores 100% for accessibility on Lighthouse! I earned my firework show for this site last year but I know better than to let that lull
me into complacency: accessibility isn’t something a machine can test for you, only something that (at best) it can give you guidance on.
There are plenty of opportunities for friction in the user experience when logging in, particularly while entering a two factor authentication code. As developers we should be
building applications that support the need for account security but don’t detract from the user experience. Sometimes it can feel as though these requirements are in a battle
against each other.
In this post we will look at the humble <input> element and the HTML attributes that will help speed up our users’ two factor authentication experience.
…
Summary: simple changes like giving your TOTP-receiving <input> an
inputmode="numeric" attribute give user-agents solid hints about what kind of data is expected, allowing mobile phones to show a numeric keypad rather than a full keyboard, while
setting autocomplete="one-time-code" hints to password managers and autocomplete tools that what’s being collected needn’t be stored for future use as it’ll expire (and can
also help indicate to authenticators where they should auto-type).
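Put together, a sketch of a friendlier 2FA code field might look like this:

<label for="otp">Verification code</label>
<input id="otp" type="text" inputmode="numeric" pattern="[0-9]*" autocomplete="one-time-code">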
As my current research project will show, the user experience of multifactor authentication is a barrier to entry for many users who might otherwise benefit from it. Let’s lower that
barrier.
I think that CSS would be greatly helped if we solemnly state that “CSS4 is here!” In this post I’ll try to convince you of my viewpoint.
I am proposing that we web developers, supported by the W3C CSS WG, start saying “CSS4 is here!” and excitedly chatter about how it will hit the market any moment now and transform
the practice of CSS.
Of course “CSS4” has no technical meaning whatsoever. All current CSS specifications have their own specific
versions ranging from 1 to 4, but CSS as a whole does not have a version, and it doesn’t need one, either.
Regardless of what we say or do, CSS 4 will not hit the market and will not transform anything. It also does not describe any technical reality.
But if you’ve got more than a little web savvy you might still be surprised to hear me say that CSS4 is here, or even
that it’s a “thing” at all. Welll… that’s because it isn’t. Not officially. Just like JavaScript’s versioning has gone all evergreen these last few years,
CSS has gone the same way, with different “modules” each making their way through the standards and implementation processes
independently. Which is great, in general, by the way – we’re seeing faster development of long-overdue features now than we have through most of the Web’s history – but it
does make it hard to keep track of what’s “current” unless you follow along watching very closely. Who’s got time for that?
When CSS2 gained prominence at around the turn of the millennium it was revolutionary, and part of the reason for that
– aside from the fact that it gave us all some features we’d wanted for a long time – was that it gave us a term to rally behind. This browser already supports it, that browser’s
getting there, this other browser supports it but has a f**ked-up box model (you all know the one I’m talking about)… we at last had an overarching term to discuss what was supported,
what was new, what was ready for people to write articles and books about. Nobody’s going to buy a book that promises to teach them “CSS3 Selectors Level 3, Fonts Level 3, Writing Modes
Level 3, and Containment Level 1”: that title’s not even going to fit on the cover. But if we wrapped up a snapshot of what’s current and called it CSS4… now that’s going to sell.
Can we show the CSS WG that there’s mileage in this idea and make it happen? Oh, I hope so. Because while the
modular approach to CSS is beautiful and elegant and progressive… I’m afraid that we can’t use it to inspire junior developers.
Also: I don’t want this joke to forever remain among the top results
when searching for CSS4…
Chris Ferdinandi’s daily tip
for yesterday addressed a problem familiar to Web developers using custom fonts (i.e. basically all of them):
In many browsers, if a custom typeface is declared but hasn’t finished downloading and parsing yet, browsers will leave space for the text but not render it until the file is ready.
This is often called a Flash Of Invisible Text (or FOIT).
29% of page loads on Chrome for Android displayed blank text: the user agent knew the text it needed to paint, but was blocked from doing so due to the unavailable font resource. In
the median case the blank text time was ~350 ms, ~750 ms for the 75th percentile, and a scary ~2300 ms for the 95th.
To make matters worse, some mobile browsers never timeout a failed font file, and therefore never show text in a fallback typeface if the custom one fails to load. You get nothing at
all.
Let’s talk about how to fix that.
…
Chris is right…
He’s right that the FOIT is annoying, and he’s right that for most text (and especially body text) the best result would be if
a fallback system font was used immediately and swapped-out for the designer’s preferred font as soon as it becomes available: this maximises usability, especially on slower devices and
connections. His solution is this:
Set the font to a fallback font initially.
Set the font to the preferred font once a CSS class is applied to a root element.
Use Javascript to apply that CSS class either when FontFaceSet.load() indicates that the font is available,
or (via a cookie) for as long as the font file is expected to appear in the browser cache.
This approach is not without its problems. It requires Javascript (users for whom Javascript fails for some reason won’t see the font at all, but may still have to download the font
file!), conflates cookie lifetime with cache lifetime (the two can be cleared independently, cookies can sometimes be synchronised across devices that don’t necessarily share caches,
etc.), and uses Javascript techniques that don’t work in some browsers (Edge and Internet Explorer are both capable of showing custom web fonts but both will use the fallback font
unless either (a) further Javascript is added (which Chris doesn’t supply) or (b) browser detection and/or conditional comments are used to trigger different behaviour in these
browsers (which is icky)).
…but he’s also wrong…
If only there was a better way to prevent the FOIT. One which degrades gracefully in older browsers, doesn’t require
Javascript, doesn’t make assumptions about user cookie/cache configuration, and ideally involves a lot less code. It turns out, there is!
The font-display CSS directive exists to solve this exact issue [MDN]. Here’s what it looks like being used to solve the problem Chris presents (example taken from my
blog’s CSS!):
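(The typeface name and file path below are stand-ins rather than this site’s real ones, but the shape of the rule is the same:)

@font-face {
  font-family: "Body Font";                           /* placeholder name */
  src: url("/fonts/body-font.woff2") format("woff2"); /* placeholder path */
  font-display: swap; /* render fallback text immediately, swap in this font when it arrives */
}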
Setting font-display: swap in the @font-face block tells the browser to use fallback fonts in place of this font while it loads. That’s probably exactly what
you want for text fonts and especially body text; it means that the user sees the text as soon as possible and it’s swapped-out for the preferred font the moment it becomes available:
no Javascript necessary! Conversely, font-display: block is a better choice for icon fonts where you want to force the browser to wait as long as possible for the
font file to load (because any content rendered using it makes no sense otherwise).
font-display works out-of-the-box with Chrome, Firefox, and Safari and with the next version of Edge; older
versions of Edge and Internet Explorer will simply fall-back to their default behaviour (FOIT where-necessary) – this is a
progressive enhancement technique. But instead of a couple of dozen lines of Javascript, it’s a single line of CSS.
I copied the code, dropped it into a post of mine, created a static image of an animated GIF, and
turned on the “reduce motion” preference (System Preferences > Accessibility > Display).
And then BOOM. Just worked. In real time!
…
I added reduced-motion support to DanQ.me earlier this year, but I only bothered to pay attention to the animated parts of the layout
and design itself (the “bounce” on the menus and the cutesy motion of the logo, for example) and considered the (few) GIF animations and the like that I’ve added to be out-of-scope. But
this approach is really quite simple and elegant, and I’ll bear it in mind if I ever have need of such a thing!
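The heart of the trick is delightfully declarative; a minimal sketch (filenames invented):

<picture>
  <!-- Served when the user has asked their OS/browser for reduced motion: -->
  <source srcset="dancing-cat-still.png" media="(prefers-reduced-motion: reduce)">
  <img src="dancing-cat.gif" alt="A cat dancing">
</picture>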