The feature here is that you can take a color you already have and manipulate its components. Which things you can change vary by the color space you choose, so for an RGB color you
can change the red, green, blue, and alpha channels, for an HSL color you can change hue, saturation, lightness, and alpha, and for
my beloved OKLCH you can change lightness, chroma, hue, and, yes, opacity.
The syntax if you wanted to use this and not change anything about the color is:
oklch(from var(--color) l c h / 1)
But of course you can change each component, either swapping them entirely as with this which sets the lightness to 20%:
oklch(from var(--color) 20% c h / 1)
…
This is really something. I was aware that new colour functions were coming to CSS but kinda dropped the ball and didn’t notice that oklch(...) is, for the most
part, usable in any modern browser. That’s a huge deal!
The OKLCH colour model makes more sense than RGB, covers a wider gamut than HSL, and – on screens that support it – describes a (much) larger spectrum, providing access to a wider
array of colours (with sensible fallbacks where they’re not supported). But more than that, the oklch(...) function provides good colour adaptation.
If you’ve ever used e.g. Sass’s darken(...) function and been disappointed when it seems to have a bigger impact on some colours than others… that’s because simple
mathematical colour models don’t accurately reflect the complexities of human vision: some colours just look brighter, to us, thanks to quirks of biochemistry, psychology, and
evolution!
This colour vision curve feels to me a little like how pianos aren’t always tuned to equal temperament – i.e. how the maths of harmonics says they should be – but are instead tuned
so that the lowest notes are tuned slightly flat and the highest notes slightly sharp to compensate for
inharmonicity resulting from the varying stiffness of the strings. This means that their taut length alone doesn’t dictate what note humans think they hear: my understanding is
that at these extremes, the difference in the way the wave propagates within the string results in an inharmonic overtone that makes these notes sound out-of-tune with the rest
of the instrument unless compensated for with careful off-tuning! Humans experience something other than what the simple maths predicts, and so we compensate for it! (The quirk isn’t
unique to the piano, but it’s most-obvious in plucked or struck strings, rather than in bowed strings, and for instruments with a wide range, of which a piano is of course both!)
OKLCH is like that. And with it as a model (and a quick calc(...) function), you can tell your
CSS “make this colour 20% lighter” and get something that, for most humans, will actually look “20% lighter”, regardless of the initial hue. That’s cool.
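For example, a minimal sketch (the --brand custom property and the 0.2 offset here are illustrative, not taken from the article):

```css
:root {
  --brand: oklch(55% 0.15 250);
}

.lighter {
  /* Relative colour syntax: keep the chroma (c) and hue (h), but lift
     the lightness (l) by 0.2 – i.e. 20 percentage points on OKLCH's
     0–1 lightness scale. */
  background: oklch(from var(--brand) calc(l + 0.2) c h);
}
```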
I spent way too long playing with this colour picker while getting my head around this concept. And now I want to use it
everywhere!
In anticipation of WWW Day on 1 August, some work colleagues and I were
sharing pictures of the first (or early) websites we worked on. I was pleased to be able to pull out a screenshot of how my blog looked back in 1999!
Because I’m such a digital preservationist, many of those ancient posts are still available on my blog, so I also shared a photo of me browsing the same content on my
blog as it is today, side-by-side with that 25+-year-old screenshot.1
I’ve even applied img { image-rendering: crisp-edges; } to try to compensate for modern browsers’ capability for subpixel rendering when rescaling images: let them
eat pixels!5
I’ve added 1999 Mode to my April Fools gags so, like this year, if you happen to visit my site on or around 1 April,
there’s a chance you’ll see it in 1999 mode anyway. What fun!
I think there’s a possible future blog post about Web design challenges of the 1990s. Things like: what if the user agent doesn’t support images? What if it supports GIFs, but not
animated ones (some browsers would just show the first frame, so you’d want to choose your first frame appropriately)? How do I ensure that people see the right content if they skip my
frameset? Which browser-specific features can I safely use, and where do I need a fallback6? Will this
work well on all resolutions down to 640×480 (minus browser chrome)? And so on.
Any interest in that particular rabbit hole of digital history?
Footnotes
1 Some of the addresses have changed, but from Summer 2003 onwards I’ve had a solid chain
of redirects in place to try to keep content available via whatever address it was at. Because Cool URIs Don’t Change. This occasionally turns out to be useful!
2 Actually, the entire theme is just a CSS change, so no tables are added. But I’ve tried to make it look like I’m using tables for layout, because that (and spacer GIFs) were all
we had back in the day.
3 Obviously the title saying “Dan Q” is modern, because that
wasn’t even my name back then, but this is more a reimagining of how my site would have looked if I were transported back to 1999 and made to do it all again.
4 I was slightly obsessed for a couple of years in the late 90s with flaming text on black
marble backgrounds. The hit counter in my screenshot above – with numbers on fire – was one I made, not a third-party one; and because my host was the only one among my friends’ that would let me run CGIs, my Perl script powered the hit counters for most of my friends’ sites too.
5 I considered, but couldn’t be bothered with, implementing an SVG CSS filter to posterize my images down to 8-bit colour, for that real
“I’m on an old graphics card” feel! If anybody’s already implemented such a thing under a license that I can use, let me know and I’ll integrate it!
Yesterday, I wrote the stupidest bit of CSS of my entire career.
Owners of online shops powered by WooCommerce can optionally “connect” their stores back to Woo.com. This enables them to manage their subscriptions to
any extensions they use to enhance their store1. They can also browse a
marketplace of additional extensions they might like to consider, which is somewhat-tailored to them based on e.g. their geographical location.2
In the future, we’ll be adding sponsored products to the marketplace listing, but we want to be transparent about it so yesterday I was working on some code that would determine from
the appropriate API whether an extension was sponsored and then style it differently to make this clear. I took
a look at the proposal from the designer attached to the project, which called for
the word “Sponsored” to appear alongside the name of the extension’s developer,
a stripe at the top in the brand colour of the extension, and
a strange green blob alongside it.
That third thing seemed like an odd choice, but I figured that probably I just didn’t have the design or marketing expertise to understand it, and I diligently wrote some appropriate code.3
After some minor tweaks, my change was approved. The designer even swung by and gave it a thumbs-up. All I needed to do was wait for the automated end-to-end tests to complete, and I’d
be able to add it to WooCommerce ready to be included in the next-but-one release. Nice.
In the meantime, I got started on my next bit of work. This one also included some design work by the same designer, and wouldn’t you know it… this one also had a little green
blob on it?
Then it hit me. The blobs weren’t part of the design at all, but the designer’s way of saying “look at this bit, it’s important!”. Whoops!
So I got to rush over to my (already-approved, somehow!) changeset and rip out
the offending CSS: the stupidest bit of CSS of my entire career.
Not bad code per se, but reasonable code resulting from a damn-stupid misinterpretation of a designer’s wishes. Brilliant.
3 A fun side-effect of working on open-source software is that my silly mistake gets
immortalised somewhere where you can go and see it any time you like!
You can click an image and see a full-window popup dialog box containing a larger version of the image.
The larger version of the image isn’t loaded until it’s needed.
You can close the larger version with a close button. You can also use your browser’s back button.
You can click again to download the larger version/use your browser to zoom in further.
You can share/bookmark etc. the URL of a zoomed-in image and the recipient will see the same image (and return to the
image, in the right blog post, if they press the close button).
No HTTP round trip is required when opening/closing a lightbox: it’s functionally-instantaneous.2
No JavaScript is used at all.
Here’s how it works –
The Markup
For each lightboxed image in a post, a <dialog> for that image is appended to the post. That dialog contains a larger copy of the image (set to
loading="lazy" so the browser have to download it until it’s needed), and a “close” button.
The image in the post contains an anchor link to the dialog; the close button in the dialog links back to the image in the post.3 I wrap the lightbox image itself in a link to the full version of the
image, which makes it easier for users to zoom in further using their browser’s own tools, if they like.
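Put concretely, the markup looks something like this sketch (ids, filenames, and class names are illustrative):

```html
<!-- In the post: the image, wrapped in an anchor link to the dialog. -->
<a href="#lightbox-1" id="image-1">
  <img src="photo-thumbnail.jpg" alt="A photo.">
</a>

<!-- Appended to the post: the dialog, containing the larger copy
     (itself linked to the full version) and a close button. -->
<dialog id="lightbox-1" class="lightbox">
  <a href="photo-full.jpg">
    <img src="photo-large.jpg" loading="lazy" alt="A photo (larger).">
  </a>
  <a href="#image-1" class="close">Close</a>
</dialog>
```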
Even without CSS, this works (albeit with “scrolling” up and down to the larger image). But the clever bit’s yet to
come:
The Style
Lightboxes are hidden by default (display: none), but configured to fill the window when shown.
They’re shown by the selector .lightbox:target, which is triggered by the id of the <dialog> being referenced by the anchor part of
the URL in your address bar!
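A sketch of the styles, assuming the markup above:

```css
.lightbox {
  display: none;    /* hidden by default */
}

.lightbox:target {
  display: flex;    /* shown when the dialog's id is the URL's anchor */
  position: fixed;
  inset: 0;         /* fill the window */
  width: 100%;
  height: 100%;
  margin: 0;
  border: none;
}
```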
Summary
It’s neither the most-elegant nor cleanest solution to the problem, but for me it hits a sweet spot between developer experience and user experience. I’m always disappointed when
somebody’s “lightbox” requires some heavyweight third-party JavaScript (often loaded from a CDN), because that seems to be the
epitome of the “take what the Web gives you for free, throw it away, and reimplement it badly in JavaScript” antipattern.
There are things I’ve considered adding to my lightbox. Progressively-enhanced JavaScript that adds extra value and/or uses the Popover API where available, perhaps? View Transitions to animate the image “blowing up” to the larger size, while the full-size image loads in the
background? Optimistic preloading when hovering over the image4? “Previous/next” image links when lightboxing a gallery? There’s lots of potential to expand it
without breaking the core concept here.
I’d also like to take a deeper dive into the accessibility implications of this approach: I think it’s pretty good, but accessibility is a big topic and there’s always more to
learn.
I hope the idea’s of use to somebody else looking to achieve this kind of thing, too.
Footnotes
1 Where JavaScript is absolutely necessary, I (a) host it on the same domain, for
performance and privacy-respecting reasons, and (b) try to provide a functional alternative that doesn’t require JavaScript, ideally seamlessly.
2 In practice, the lightbox images get lazy-loaded, so there can be a short round
trip to fetch the image the first time. But after that, it’s instantaneous.
3 The pair – post image and lightbox image – work basically the same way as footnotes,
like this one.
4 I already do this with links in general using the excellent instant.page.
It all started when I saw no-ht.ml, Terence Eden‘s hilarious response to Salma
Alam-Naylor‘s excellent HTML is all you need to make a website. The latter is an
argument not only against the silly amount of JavaScript with which websites routinely burden their users, but even against depending on CSS. As a fan of CSS Naked Day and a firm
believer in using JS only for progressive enhancement, I’m obviously in favour.
Terence’s site works by delivering a document with a
claimed MIME type of text/html, but which contains only the (invalid) “HTML” code
<!doctype UNICODE><meta charset="UTF-8"><plaintext> (to work around browsers’ wish to treat the page as HTML). This is followed by a block of UTF-8 plain text making use of spacing
and emoji to illustrate and decorate the content. It’s frankly very silly, and I love it.1
I think it’s possible to go one step further, though, and create a web page with no code whatsoever. That is, one that you can read as if it were a regular web page, but where
using View Source or e.g. downloading the page with curl will show you… nothing.
I present: The Page With No Code! (It’ll probably only work if you’re using Firefox, for reasons that will become apparent later.)
Once you’ve had a look for yourself and had a chance to form an opinion, here’s an explanation of the black magic that makes this atrocity possible:
The page is blank. It’s delivered with Content-Type: text/html. Your browser interprets a completely-blank page as faulty and corrects it to a functionally-blank
minimal HTML page: <html><head></head><body></body></html>.
<body> and <html> elements can be styled with CSS; this includes the ability to add content via the ::before and ::after pseudo-elements of each element. If only we could load a stylesheet, then content injection would be possible.
We use the fourth way to inject
CSS – a Link: HTTP header – to deliver a CSS payload (this, unfortunately, only works in Firefox). To further obfuscate what’s happening and remove the need for a round-trip, this is encoded
as a data: URI.
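In other words, the entire response amounts to something like this sketch, where the data: URI carries the whole base64-encoded stylesheet:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Link: <data:text/css;base64,aHRtbDo6YmVmb3Jl…>; rel="stylesheet"
```

(That truncated payload decodes to html::before…, which is where the injected content comes from.)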
My server-side implementation of this broke in 2023 after I upgraded Nginx; my new version doesn’t support the super-long Link: header needed
to make this hack work, so I’ve updated the page to use the Link: to reference the CSS file rather than embed it via a data URI. It’s not as cool, but it at least means you can
still see the page. Thanks to Thomas Bradshaw for pointing out the problem.
Footnotes
1 My first reaction was “why not just deliver something with Content-Type:
text/plain; charset=utf-8 and dispense with the invalid code?”, but perhaps that’s just me overthinking the non-existent problem.
While talking about external CSS, he hinted at what I consider to be a distinct fourth way with its own unique use
cases: using the Link: HTTP header. I’d like to share with you how it works and why I think it needs to be
kept in people’s minds, even if it’s not suitable for widespread deployment today.
Injecting CSS using the Link: HTTP Header
Every one of Jeremy’s suggestions involves adding markup to the HTML document itself. Which makes sense; you almost always want to associate styles with a document regardless of the location at which it’s stored or the medium over which it’s transmitted. The most popular approach to adding CSS to a page uses the <link> HTML element, but did you know… the <link> element has a semantically-equivalent HTTP header, Link:.
According to the specifications, the following HTTP
responses are equivalent in terms of the CSS that would be loaded and applied to the document:
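That is, something like these two (paths illustrative):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8

<!doctype html>
<html>
  <head><link rel="stylesheet" href="/style.css"></head>
  <body>…</body>
</html>
```

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Link: </style.css>; rel="stylesheet"

<!doctype html>
<html>
  <head></head>
  <body>…</body>
</html>
```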
This isn’t something you should put on your website right now. This (21-year-old!) standard is still only really supported in Firefox and pre-Blink Opera, so you lose perhaps 95% of the
Web (it could be argued that because CSS ought to be considered progressive enhancement, it’s tolerable so long as your
HTML is properly-written).
If it were widely-supported, though, that would be a really good thing: HTTP headers beat meta/link tags for configurability, performance management, and separation of concerns. Need some specific examples? Sure:
here’s what you could use HTTP stylesheet linking for:
Performance improvement using aggressively preloaded “top” stylesheets before the DOM parser even fires up.
Stylesheet injection by edge caches to provide regionalised/localised changes to brand identity.
Strong separation of content and design by hosting content and design elements in different systems.
Branding your staff intranet differently when it’s accessed from outside the network than inside it.
Rebranding proprietary services on your LAN without deep inspection, using reverse proxies.
Less-destructive user stylesheet injection by plugins etc. that doesn’t risk breaking icky on-page Javascript (e.g. theme switchers).
Browser detection? 😂 You could use this technique today to detect Firefox. But you absolutely
shouldn’t; if you think you need browser detection in CSS, use this instead.
Unfortunately right now though, stylesheet Link: headers remain consigned to the bin of “cool stylesheet standards that we could probably use if it weren’t for fucking Google”; see also
alternate stylesheets.
What this now does is instead of saying “add margin to the left”, it says “regardless of direction, put margin on the starting side”. If the language of the document was
right to left, like Arabic, that margin would be on the right hand side.
…
This is clever. If you use e.g. margin-left on every list element after the first to put space “between” them, the spacing isn’t quite right when the order of the elements
is reversed, for example because your page has been automatically translated into a language that reads in the opposite direction (e.g. right-to-left, rather than left-to-right). When
you use margin-left in this way you’re imposing a language-direction-centric bias on your content, and there’s no need: margin-inline-start and its friends
are widely-supported and say what you mean: “place a margin before this element”. I’ll be trying to remember to
use this where it’s appropriate from now on.
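As a sketch of the difference:

```css
/* Imposes a left-to-right bias: */
li + li { margin-left: 1em; }

/* Direction-agnostic: the margin goes on the starting side,
   whichever side that happens to be: */
li + li { margin-inline-start: 1em; }
```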
You see what that’s doing? It’s loading the stylesheet for the print medium, but then when the document finishes loading it’s switching the media type from “print” to “all”.
Because it didn’t apply to begin with the stylesheet isn’t render-blocking. You can use this to delay secondary styles so the page essentials can load at full speed.
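The pattern being described looks something like this (filename illustrative):

```html
<link rel="stylesheet" href="non-critical.css"
      media="print" onload="this.media='all'">
```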
I don’t like this approach. I mean: I love the elegance… I just don’t like the implications.
Why I don’t like lazy-loading CSS using Javascript
Using Javascript to load CSS, in order to prevent that CSS
blocking rendering, feels to me like it conceptually breaks the Web. It certainly violates the expectations of progressive enhancement, because it introduces a level of
fault-intolerance that I consider (mostly) unacceptable.
CSS and Javascript are independent of one another. A well-designed progressively-enhanced page should function with
HTML only, HTML-and-CSS only, HTML-and-JS only, or all
three. CSS adds style, and JS adds behaviour to a page; and when
you insist that the user agent uses Javascript in order to load stylistic elements, you violate the separation of these technologies (I’m looking at you, the majority of heavyweight
front-end frameworks!).
If you’re thinking that the only people affected are nerds like me who browse with Javascript wholly or partially disabled, you’re wrong: gov.uk research shows that around 1% of your visitors have Javascript fail for some reason or another: because it’s disabled
(whether for preference, privacy, compatibility with accessibility technologies, or whatever), blocked, firewalled, or they’re using a browser that you didn’t expect.
Can we lazy-load CSS in a way that doesn’t depend on Javascript? (spoiler: yes)
Chris’s daily tip got me thinking: could there exist a way to load CSS in a non-render-blocking way but which degraded
gracefully in the event that Javascript was unavailable? I.e. if Javascript is working, lazy-load CSS, otherwise: load
conventionally as a fallback. It turns out, there is!
In principle, it’s this:
Link your stylesheet from within a <noscript> block, thereby only exposing it where Javascript is disabled. Give it a custom attribute to make it easy to find
later, e.g. <noscript lazyload> (if you’re a standards purist, you might prefer to use a data- attribute).
Have your Javascript extract the contents of these <noscript> blocks and reinject them. In modern browsers, this is as simple as e.g.
[...document.querySelectorAll('noscript[lazyload]')].forEach(ns=>ns.outerHTML=ns.innerHTML).
If you need support for Internet Explorer, you need a little more work, because Internet Explorer doesn’t expose <noscript> blocks to the DOM in a helpful way. There are a variety of possible workarounds; I’ve implemented one but not put too much thought into it because I rarely have to
think about Internet Explorer these days.
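Putting the two steps together, a minimal sketch (stylesheet name illustrative):

```html
<noscript lazyload>
  <link rel="stylesheet" href="secondary.css">
</noscript>

<script>
  // Where Javascript runs, the <noscript> contents are inert text;
  // re-injecting them as real markup starts the stylesheet loading
  // now, after rendering has already begun:
  [...document.querySelectorAll('noscript[lazyload]')]
    .forEach(ns => ns.outerHTML = ns.innerHTML);
</script>
```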
In any case, I’ve implemented a proof of concept/demonstration if you’d like to see it in action: just take a look and view source (or read the page)
for details. Or view the source alone via this gist.
Lazy-loading CSS using my approach provides most of the benefits of other approaches… but works properly in environments without
Javascript too.
Update: Chris Ferdinandi’s refined this into an even cleaner approach that takes the best of both worlds.
I first got into web design/development in the late 90s, and only as I type this sentence do I realize how long ago that was.
And boy, it was horrendous. I mean, being able to make stuff and put it online where other people could see it was pretty slick, but we did not have very much to work with.
I’ve been taking for granted that most folks doing web stuff still remember those days, or at least the decade that followed, but I think that assumption might be a wee bit
out of date. Some time ago I encountered a tweet marvelling at what we had to do without
border-radius. I still remember waiting with bated breath for it to be unprefixed!
But then, I suspect I also know a number of folks who only tried web design in the old days, and assume nothing about it has changed since.
I’m here to tell all of you to get off my lawn. Here’s a history of CSS and web design, as I remember it.
(Please bear in mind that this post is a fine blend of memory and research, so I can’t guarantee any of it is actually correct, especially the bits about causality. You may
want to try the W3C’s history of CSS, which is considerably shorter,
has a better chance of matching reality, and contains significantly less swearing.)
(Also, this would benefit greatly from more diagrams, but it took long enough just to write.)
…
I too remember the bad-old days of the pre-CSS and early-CSS Web. Back
then, when we were developing for it, we thought that it was magical. We tolerated issues like having to copy-paste our navigation around a stack of static pages, manually change our
design all over the place etc…. but man… I wouldn’t want to go back to working that way!
This is an excellent long-read for an up-close-and-personal look at how CSS has changed over the decades. Well worth a look if
you’ve any interest in the topic.
I think that CSS would be greatly helped if we solemnly state that “CSS4 is here!” In this post I’ll try to convince you of my viewpoint.
I am proposing that we web developers, supported by the W3C CSS WG, start saying “CSS4 is here!” and excitedly chatter about how it will hit the market any moment now and transform
the practice of CSS.
Of course “CSS4” has no technical meaning whatsoever. All current CSS specifications have their own specific
versions ranging from 1 to 4, but CSS as a whole does not have a version, and it doesn’t need one, either.
Regardless of what we say or do, CSS 4 will not hit the market and will not transform anything. It also does not describe any technical reality.
But if you’ve got more than a little web savvy you might still be surprised to hear me say that CSS4 is here, or even
that it’s a “thing” at all. Well… that’s because it isn’t. Not officially. Just like JavaScript’s versioning has gone all evergreen these last few years,
CSS has gone the same way, with different “modules” each making their way through the standards and implementation processes
independently. Which is great, in general, by the way – we’re seeing faster development of long-overdue features now than we have through most of the Web’s history – but it
does make it hard to keep track of what’s “current” unless you follow along watching very closely. Who’s got time for that?
When CSS2 gained prominence at around the turn of the millennium it was revolutionary, and part of the reason for that
– aside from the fact that it gave us all some features we’d wanted for a long time – was that it gave us a term to rally behind. This browser already supports it, that browser’s
getting there, this other browser supports it but has a f**ked-up box model (you all know the one I’m talking about)… we at last had an overarching term to discuss what was supported,
what was new, what was ready for people to write articles and books about. Nobody’s going to buy a book that promises to teach them “CSS3 Selectors Level 3, Fonts Level 3, Writing Modes
Level 3, and Containment Level 1”: that title’s not even going to fit on the cover. But if we wrapped up a snapshot of what’s current and called it CSS4… now that’s going to sell.
Can we show the CSS WG that there’s mileage in this idea and make it happen? Oh, I hope so. Because while the
modular approach to CSS is beautiful and elegant and progressive… I’m afraid that we can’t use it to inspire junior developers.
Also: I don’t want this joke to forever remain among the top results
when searching for CSS4…
As part of the preparing to leave the Bodleian I’ve been revisiting a lot of the documentation I’ve written over the last eight
years. It occurred to me that I’ve never written publicly about how the Bodleian’s digital signage/interactives actually work; there are possible lessons to learn.
The Bodleian‘s digital signage is perhaps more-diverse, both in terms of technology and audience, than that of most organisations. We’ve got
signs in areas that are exclusively reader-facing to help students and academics find what they’re looking for, signs in publicly accessible rooms that advertise and educate, and signs
in gallery spaces upon which we try to present engaging and often-interactive content to support exhibitions.
Throughout those three spheres, we’ve routinely delivered a diversity of content (let’s just ignore the countdown clock, for now…). Traditional
directional signage, advertisements, games, digital exhibitions, interpretation, feedback surveys…
In the vast majority of cases – and this is where the Bodleian’s been unusual (though certainly not unique) among cultural sector institutions – we’ve created
those in-house rather than outsourcing them.
To do this economically – the volume of work on interactive signage is inconsistent throughout the year – we needed to align the skills required with skills used elsewhere in the
organisation. To do this, we use the web as our medium! Collectively, the Bodleian’s Digital Communications team already had at least some experience in programming, web design, graphic
design, research, user testing, copyediting etc.: the essential toolkit for web application development.
By shifting our digital signage platform to lean heavily on web technologies, we were able to leverage talented people we already had to produce things that we might otherwise
have had to outsource. This, in turn, meant that more exhibitions and displays get digital enhancement, on a shorter turnaround.
It also means that there’s a tighter integration between exhibition content and content for web and social media: it’s easier for us to re-use content across multiple platforms.
Sometimes we’ve even made our digital interactives, or adapted version of them, available directly online, allowing our exhibitions to reach people that can’t get to our physical spaces
at all.
On to the technology! We’re using a real mixture of tech: when it’s donated or reclaimed from previous projects (and when the bidding and acquisition processes are, well… as you’d
expect at the University of Oxford), you learn not to say no to freebies. Our fleet includes:
Samsung Android tablets with freestanding kiosk frames. We run the excellent-value Kiosk Browser Lockdown app on
these, which loads on boot and prevents access to anything but a specified website.
Onelan NTBs connected to a mixture of
touch and non-touch screens, wall-mounted or in kiosk frames. We use Onelan’s standard digital signage features as well as – for interactive content – their built-in touch-capable web
browser.
Dell PCs of the standard variety supplied by University IT services, connected to wall-mounted touch screens, running Google Chrome in Kiosk Mode. More on this below.
When you’re developing content for a very small number of browsers and a limited set of screen sizes, you quickly learn to throw a lot of “best practice” web development out of the
window. You’ll never come across a text browser or screen reader, so alt-text doesn’t matter. You’ll never have to rescale responsively, so you might as well absolutely-position almost
everything. The devices are all your own, so you never need to ask permission to store cookies. And because you control the platform, you can get away with making configuration tweaks
to e.g. allow autoplaying videos with audio. Coming from a conventional web developer background to producing digital signage content feels incredibly lazy.
Using Chrome to run digital signage requires, in the Bodleian’s case, a couple of configuration tweaks and the right command-line switches. We use:
chrome://flags/#overscroll-history-navigation – disabling this prevents users from triggering “back”/”forward” by swiping with two fingers
chrome://flags/#pull-to-refresh – disabling this prevents the user from triggering a “refresh” by scrolling up beyond the top of the page (this only happens on some
kinds of devices)
chrome://flags/#system-keyboard-lock – we don’t use attached keyboards, but if you do, you might want to set this flag so you can use the keyboard.lock()
API to intercept e.g. ALT+F4 so users can’t escape the application
running on startup with e.g. chrome --kiosk --noerrdialogs --allow-file-access-from-files --disable-touch-drag-drop --incognito https://example.com/some/url
Kiosk mode makes the browser run fullscreen and prevents e.g. opening additional tabs, giving an instant “app-like” experience. As we don’t have keyboards attached to our
digital signage, this also prevents visitors from closing Chrome.
Turning off error dialogs reduces the risk that an error will result in an unsightly message to the user.
Enabling “file access from files” allows content hosted at file:// addresses to access content at other file:// addresses, which makes it possible to write “offline” sites
(sometimes useful where we’re serving large videos or on previous occasions when WiFi has been shaky) that can still take advantage of features like the Fetch API.
Unless you need drag-and-drop, it’s simpler to disable it; this prevents a user long-press-and-dragging an image around the screen.
Incognito mode ensures that the browser doesn’t remember what site was showing last time it ran; our computers often end up switched off at the wall at the end of the day, and
without this the browser will offer to load the site it had open last time, when it runs.
We usually host our interactives directly on the web, at “secret” addresses, and this is generally preferable to us as we can more-easily make on-the-fly adjustments to
content (plus it makes it easier to hook up analytic tools).
Meanwhile, in the application’s CSS code, we set * { user-select: none; } to prevent the user from highlighting
text by selecting it with their finger. We also make heavy use of absolutely-sized/positioned, overflow: hidden blocks to ensure that scrollbars never appear, and
CSS animations to make content feel dynamic and to draw attention to particular elements.
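By way of illustration, the sort of rules involved (dimensions and class name are made up):

```css
* {
  user-select: none;   /* no long-press text selection */
}

.screen {
  position: absolute;  /* absolutely-sized/positioned layout */
  top: 0;
  left: 0;
  width: 1920px;
  height: 1080px;
  overflow: hidden;    /* scrollbars never appear */
}
```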
Altogether, this approach gives the Bodleian the capability to produce engaging interactive content at low cost and using the existing skills of their digital and exhibitions teams.
It’s not an approach that would work for every cultural institution: in particular, some of the Bodleian’s sister institutions already
outsource the technical parts of their web work, and so don’t have the expertise in-house to share with a web-powered digital signage solution.
But for those museums that can fit into this model – or can adapt to do so in future – using the web to produce interactive digital content and digital signage is a highly
cost-effective way to engage with visitors, even (or especially!) when dealing with short-lived and/or rotating displays.
It’s also been among my favourite parts of my job at the Bod these last 8½ years, and I’m sure I’ll miss it!
The vast majority of respondents are still using Sass and vanilla CSS? Wow! This made me pause and think. Because I feel there’s an analogy here between that unseen dark matter,
and the huge crowd of web developers who are using such “boring” technology stacks.
…
This! As a well-established developer who gets things done with a handful of solid, reliable, tried-and-tested toolsets, I’ve sometimes felt like I must be “falling behind” on the
hot-new-tech curve because I can’t keep up with whichever yet-another-Javascript-framework is supposed to be hip this week. Earlier in my career, I didn’t have this problem. And it’s not just that we’re inventing new libraries, frameworks, and (even) languages faster than ever before –
and I’m pretty sure we are – nor is it that my thirty-something brain is less-plastic than the brain of my twenty-something younger self… it’s simpler than that: it’s that the level of
productivity that’s expected of an engineer of my level of seniority precludes me from playing with more than a couple of new approaches each year. I try, and I manage, to get a working
understanding of a new language and a framework or two most years, and I appreciate that that’s more than I’m expected to do (and more than many will), but it still feels like a drop in
the ocean: there’s always a “new hotness”.
But when I take the time to learn a “new hotness”, these days, nine times out of ten it doesn’t “stick” for me. Why? Because most of the new technologies we seem to be
inventing don’t actually add anything to the vast majority of use cases. Hipper (and often smarter) developers than me might latch on to the latest post-relational database or
the most-heavyweight CSS-in-JS-powered realtime web framework, and they dominate the online discussion, but that doesn’t make their ideas right for my projects. They’re a loud
minority with a cool technology, and I’m a little bit jealous that they have the time to learn and play with it… but I’ll just keep delivering value with the tools I’ve got,
thanks.
Chris Ferdinandi‘s daily tip
for yesterday addressed a problem familiar to Web developers using custom fonts (i.e. basically all of them):
In many browsers, if a custom typeface is declared but hasn’t finished downloading and parsing yet, browsers will leave space for the text but not render it until the file is ready.
This is often called a Flash Of Invisible Text (or FOIT).
29% of page loads on Chrome for Android displayed blank text: the user agent knew the text it needed to paint, but was blocked from doing so due to the unavailable font resource. In
the median case the blank text time was ~350 ms, ~750 ms for the 75th percentile, and a scary ~2300 ms for the 95th.
To make matters worse, some mobile browsers never timeout a failed font file, and therefore never show text in a fallback typeface if the custom one fails to load. You get nothing at
all.
Let’s talk about how to fix that.
…
Chris is right…
He’s right that the FOIT is annoying, and he’s right that for most text (and especially body text) the best result would be if
a fallback system font was used immediately and swapped-out for the designer’s preferred font as soon as it becomes available: this maximises usability, especially on slower devices and
connections. His solution is this:
Set the font to a fallback font initially.
Set the font to the preferred font once a CSS class is applied to a root element.
Use Javascript to apply that CSS class: both when FontFaceSet.load() indicates that the font is available, and (via a cookie) for as long as the font file is expected to remain in the browser cache. A sketch follows this list.
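That is, something like this sketch (the class name, font, and cookie lifetime are illustrative; it’s not Chris’s exact code):

```js
// On a repeat visit, a cookie tells us the font file is (probably) cached:
if (document.cookie.includes('fontLoaded=1')) {
  document.documentElement.classList.add('font-loaded');
} else {
  // Otherwise, wait for the FontFaceSet to report the font as available:
  document.fonts.load('1em "Preferred Font"').then(() => {
    document.documentElement.classList.add('font-loaded');
    document.cookie = 'fontLoaded=1; max-age=604800'; // assume ~a week in cache
  });
}
```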
This approach is not without its problems. It requires Javascript (users for whom Javascript fails for some reason won’t see the font at all, but may still have to download the font
file!), conflates cookie lifetime with cache lifetime (the two can be cleared independently, cookies can sometimes be synchronised across devices that don’t necessarily share caches,
etc.), and uses Javascript techniques that don’t work in some browsers (Edge and Internet Explorer are both capable of showing custom web fonts but both will use the fallback font
unless either (a) further Javascript is added (which Chris doesn’t supply) or (b) browser detection and/or conditional comments are used to trigger different behaviour in these
browsers (which is icky)).
…but he’s also wrong…
If only there was a better way to prevent the FOIT. One which degrades gracefully in older browsers, doesn’t require
Javascript, doesn’t make assumptions about user cookie/cache configuration, and ideally involves a lot less code. It turns out, there is!
The font-display CSS directive exists to solve this exact issue [MDN]. Here’s what it looks like being used to solve the problem Chris presents (example taken from my
blog’s CSS!):
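In outline (the font name and URL here are placeholders rather than my real ones):

```css
@font-face {
  font-family: "Preferred Font";
  src: url("/fonts/preferred-font.woff2") format("woff2");
  font-display: swap; /* render fallback text immediately; swap when loaded */
}
```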
Setting font-display: swap in the @font-face block tells the browser to use fallback fonts in place of this font while it loads. That’s probably exactly what
you want for text fonts and especially body text; it means that the user sees the text as soon as possible and it’s swapped-out for the preferred font the moment it becomes available:
no Javascript necessary! Conversely, font-display: block is a better choice for icon fonts where you want to force the browser to wait as long as possible for the
font file to load (because any content rendered using it makes no sense otherwise).
font-display works out-of-the-box with Chrome, Firefox, and Safari and with the next version of Edge; older versions of Edge and Internet Explorer will simply fall-back to
their default behaviour (FOIT where-necessary) – this is a progressive enhancement technique. But instead of a couple of dozen
lines of Javascript, it’s a single line of CSS.