Ladybird Browser

I’ve been playing with the (pre-Alpha version of) Ladybird, and it fills me with such joy and excitement.

This page, as rendered by Ladybird.
As you can see, Ladybird does a perfectly adequate job of rendering this page, including most of its CSS and virtually all of its JavaScript.

Browser diversity

Back in 2018, while other Web developers were celebrating, I expressed my dismay at the news that Microsoft Edge was on the cusp of switching from using Microsoft’s own browser engine EdgeHTML to using Blink. Blink is the engine that powers almost all other mainstream browsers; all but Firefox, which continues to stand atop Gecko, and Safari, which runs on Blink’s ancestor WebKit.

The developers who celebrated this loss of rendering engine diversity were, I suppose, happy to have one fewer browser in which they must necessarily test their work. I guess these are the same developers who don’t test the sites they develop for accessibility (does your site work if you can’t see the images? what about with a keyboard but without a pointing device? how about if you’re colourblind?), or consider what might happen if a part of their site fails (what if the third-party CDN that hosts your JavaScript libraries goes down or is blocked by the user’s security software or their ISP?).

This blog post viewed in Lynx.
When was the last time you tested your site in a text-mode browser?

But I was sad, because – as I observed after Andre Alves Garzia succinctly spelled it out – browser engines are an endangered species. Building a new browser that supports the myriad complexities of the modern Web is such a huge endeavour that it’s unlikely to occur from scratch: from this point on, all “new” browsers are likely to be based upon an existing browser engine.

Engine diversity is important. Last time we had a lull in engine diversity, the Web got stuck, stagnating in the shadow of Internet Explorer 6’s dominance and under the thumb of Microsoft’s interests. I don’t want those days to come back; that’s a big part of why Firefox is my primary web browser.

A Ladybird book browser

Spoof cover for "The Ladybird Book of The Browser"
I actually still own a copy of the book from which I adapted this cover!

Ladybird is a genuine new browser engine. Y’know, that thing I said that we might never see happen again! So how’ve they made it happen?

It helps that it’s not quite starting from scratch. Its starting point is the HTML viewer component from SerenityOS. And… it’s pretty good. Its DOM processing is solid, and it seems to support enough JavaScript and CSS that the modern Web is usable, even if it’s not beautiful 100% of the time.

Acid3 test score of 97/100 in Ladybird.
I’ve certainly seen browsers do worse than this at Acid3 and related tests…

They’re not even expecting to make an Alpha release until next year! Right now if you want to use it at all, you’re going to need to compile the code for yourself and fight with a plethora of bugs, but it works and that, all by itself, is really exciting.

They’ve got four full-time engineers, funded off donations, with three more expected to join, plus a stack of volunteer contributors on GitHub. I’ve raised my first issue against the repo; sadly my C++ probably isn’t strong enough to be able to help more-directly, even if I somehow did have enough free time, which I don’t. But I’ll be watching, from afar, this wonderful, ambitious, and ideologically-sound initiative.

#100DaysToOffload

Woop! This is my 100th post of the year (stats), even using my more-conservative/pedant-friendly “don’t count checkins/reposts/etc.” rule. If you’re not a pedant, I achieved #100DaysToOffload when I found a geocache alongside Regent’s Canal while changing trains to go to Amsterdam, where I played games with my new work team, looked at windows and learned about how they’ve been taxed, and got nerdsniped by a bus depot. In any case: whether you’re a pedant or not, you’ve got to agree I’ve achieved it now!


Test your site in Lynx

When was the last time you tested your website in a text-only browser like Lynx (or ELinks, or one of several others)? Perhaps you should.

I’m a big fan of CSS Naked Day. I love the idea of JS Naked Day, although I missed it earlier this month (I was busy abroad, plus my aggressive caching, including in service workers, makes it hard to reliably make sweeping changes for short periods). I’m a big fan of the idea that, for the vast majority of websites, if it isn’t at least usable without any CSS or JavaScript, it should probably be considered broken.

This year, I thought I’d celebrate the events by testing DanQ.me in the most-limited browser I had to-hand: Lynx. Lynx has zero CSS or JavaScript support, along with limited-to-no support for heading levels, tables, images, etc. That may seem extreme, but it’s a reasonable analogue for the level of functionality you might routinely expect to see in the toughest environments in which your site is accessed: slow 2G connections from old mobile hardware, people on the other side of highly-restrictive firewalls or overenthusiastic privacy and security software, and of course users of accessibility technologies.

Here’s what broke (and some other observations):

<link rel="alternate">s at the top

I see the thinking behind Lynx (and, in an even more-extreme fashion, ELinks) showing “alternate versions” of a page at the top, but it’s not terribly helpful: most of mine are designed to help robots, not humans!

Screenshot showing four alternate links at the top of DanQ.me as viewed in Lynx.
Four alternates is pretty common for a WordPress site: post feed, comments feed, and two formats of oEmbed.

I wonder if switching from <link rel="alternate"> elements to Link: HTTP headers would indicate to Lynx that it shouldn’t be putting these URLs in humans’ faces, while still making them accessible to all the services that expect to find them? Doing so would require some changes to my caching logic, but might result in a cleaner, more human-readable HTML file as a side-effect. Possibly something worth investigating.
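For illustration, the HTTP header equivalent of one such element might look something like this (the feed URL and title are invented for the example):

Link: </feed/>; rel="alternate"; type="application/rss+xml"; title="Blog post feed"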

Fortunately, I ensure that my <link rel="alternate">s have a title attribute, which is respected by Lynx and ELinks and makes these scroll-past links slightly less-confusing.
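For reference, a titled alternate link is just the ordinary element plus a human-readable label (the title and URL here are illustrative):

<link rel="alternate" type="application/rss+xml"
      title="DanQ.me blog posts" href="/feed/">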

Lynx screenshot from IKEA.com, showing no fewer than 113 anonymous "alternate" links at the top of the page.
Not all sites title their alternate links. IKEA.com requires you to scroll through 113 anonymous links for their alternate language versions, because Lynx doesn’t understand the hreflang attribute.

Post list indentation

Posts on the homepage are structured a little like this:

<li>
  <a href="...">
    <h2>Post Title</h2>
    <p>...post metadata, image, and things...</p>
  </a>
</li>

Strictly-speaking, that’s not valid. Heading elements are only permitted within flow elements. I chose to implement it that way because it seemed to be the most semantically-correct way to describe the literal “list of posts”. But my use of <h2> here is probably not the best solution. Let’s see how Lynx handles it:

Screenshot from Lynx showing headings "outdented" from the list items they're children of.
Lynx “outdents” headings so they stand out, and “indents” lists so they look like lists. This causes a quirky clash where a heading is inside a list.

It’s not intolerable, but it’s a little ugly.
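For comparison, here’s a sketch of a stricter structure that would avoid the clash, at the cost of giving up the single full-card hyperlink (the trade-off that pushed me towards my current markup):

<li>
  <h2><a href="...">Post Title</a></h2>
  <p>...post metadata, image, and things...</p>
</li>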

CSS lightboxes add a step to images

I use a zero-JavaScript approach to image lightboxes: you can see it by clicking on any of the images in this post! It works by creating a (closed) <dialog> at the bottom of the page, for each image. Each <dialog> has a unique id, and the inline image links to that anchor.

Originally, I used a CSS :target selector to detect when the link had been clicked and show the <dialog>. I’ve since changed this to a :has(:target) and directed the link to an element within the dialog, because it works better on browsers without CSS support.
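Here’s a minimal sketch of that approach – the ids, filenames, and exact selector are illustrative rather than my production code:

<!-- the inline image links to an anchor *inside* the (closed) dialog -->
<a href="#lightbox-1-image"><img src="thumbnail.jpg" alt="A thumbnail"></a>

<dialog id="lightbox-1">
  <img id="lightbox-1-image" src="full-size.jpg" alt="The full-size image">
</dialog>

<style>
  /* open any dialog whose contents include the URL's target */
  dialog:has(:target) { display: block; }
</style>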

It’s not perfect: in Lynx, activating an inline image’s link scrolls down to the list of images at the bottom of the page and selects the current one; hitting the link again then offers to download the image. I wonder if I might be better off using a JavaScript-powered lightbox after all!

gopher: and finger: links work perfectly!

I was pleased to discover that gopher: and finger: links to alternate copies of a post… worked perfectly! That shouldn’t be a surprise – Lynx natively supports these protocols.

Lynx screenshot showing DanQ.me via the Finger protocol.
I can’t conceive that many people access DanQ.me via the Finger protocol, but the option’s there.

In a fun quirk, and unusually for a standard of its age, the Finger specification did not state the character encoding that ought to be used. I guess the authors just assumed everybody reading it would use ASCII. But both my WordPress-to-Finger bridge and Lynx instead assume that UTF-8 is acceptable (being a superset of ASCII, that seems fair!), which means that emoji work (as shown in the screenshot above). That’s nuts, isn’t it?

You can’t react to anything

Back in November I added the ability to “react” to a post by clicking an emoji, rather than typing out a full comment. Because I was feeling lazy, because the feature was (and remains) experimental, and because I didn’t consider it essential functionality, I implemented it mostly in JavaScript. Without JavaScript, all you can do is see what others have clicked.

Emoji reactions screenshot showing thumbs up, star eyes, surprise, muscle flex, nausea, and a dinosaur.
The available emoji vary from post to post; I sometimes like to throw a weird/fun one in there, knowing that it’ll invariably be Ruth that clicks it first.

In a browser with no JavaScript but with functional CSS, the buttons correctly appear disabled.

But with neither technology available, as in Lynx, they look like they should work, but just… don’t. Oops.

Screenshot from Lynx showing a "Bad HTML!! No form action defined." error when clicking an emoji reaction button.
Lynx is correct; this is sloppy code. Without CSS support, it even shows the instruction that implies the buttons will work, but they don’t.

If I decide to keep the reaction buttons long-term, I’ll probably reimplement them so that they function using plain-old HTML and HTTP, using a <form>, and refactor my JavaScript to properly progressively-enhance the buttons for those browsers that support it. For now, this’ll do.
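If I do, the no-JavaScript core might be as simple as this sketch (the endpoint and field names are hypothetical):

<form method="post" action="/reactions">
  <input type="hidden" name="post" value="12345">
  <!-- each button submits the form, even without JavaScript -->
  <button type="submit" name="emoji" value="👍">👍</button>
  <button type="submit" name="emoji" value="🦕">🦕</button>
</form>

JavaScript could then progressively-enhance the <form> by intercepting its submit event and reacting in-place.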

Comment form honeypot

The comment form on my blog posts works… but there’s a quirk:

At the end of the comments form, an additional <textarea> appears!

That’s an annoyance. It turns out it’s a honeypot added by Akismet: a fake comments field, normally hidden, that tries to trick spam bots into filling it (and thus giving themselves away): sort-of a “reverse CAPTCHA”, where the robots do something extra, unintentionally, to prove their inhumanity. Lynx doesn’t understand the code that Akismet uses to hide the field, and so it’s visible to humans, which is suboptimal both because it’s confusing and because a human who puts details into it is more-likely to be branded a spambot!
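For illustration, a generic honeypot follows this pattern – the field name and hiding technique here are made up, not Akismet’s actual markup:

<p style="display: none !important;">
  <label>If you're human, leave this field blank:
    <textarea name="hp_comment" cols="45" rows="8"></textarea>
  </label>
</p>

A browser that applies the inline style hides the field; Lynx, which doesn’t, shows it.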

I might look into suppressing Akismet adding its honeypot field in the first place, or else consider one of the alternative anti-spam plugins for WordPress. I’ve heard good things about Antispam Bee; I ought to try it at some point.

Overall, it’s pretty good

On the whole, DanQ.me works reasonably well in browsers without any JavaScript or CSS capability, with only a few optional features failing to function fully. There’s always room for improvement, of course, and I’ve got a few things now to add to my “one day” to-do list for my little digital garden.

Obviously, this isn’t really about supporting people using text-mode browsers, who probably represent a vanishingly-small minority. It’s about making a real commitment to the semantic web, to accessibility, and to progressive enhancement! That making your site resilient, performant, and accessible also helps make it function in even the most-uncommon of browsers is just a bonus.


Netscape’s Untold Webstories

I mentioned yesterday that during Bloganuary I’d put non-Bloganuary-prompt post ideas onto the backburner, and considered extending my daily streak by posting them in February. Here’s part of my attempt to do that:

Let’s take a trip into the Web of yesteryear, with thanks to our friends at the Internet Archive’s WayBack Machine.

The page we’re interested in used to live at http://www.netscape.com/comprod/columns/webstories/index.html, and promised to be a showcase for best practice in Web development. Back in October 1996, it looked like this:

Screenshot from Netscape Columns: Web Site Stories: a Coming Soon page which says "The series is scheduled to debut in November."

The page is a placeholder for Netscape Webstories (or Web Site Stories, in some places). It’s part of a digital magazine called Netscape Columns which published pieces written by Marc Andreessen, Jim Barksdale, and other bigwigs in the hugely-influential pre-AOL-acquisition Netscape Communications.

This new series would showcase best practice in designing and building Web sites[1], giving a voice to the technical folks best-placed to speak on that topic. That sounds cool!

Those white boxes above and below the paragraph of text aren’t missing images, by the way: they’re horizontal rules, using the little-known size attribute to specify a thickness: <hr size=4>![2]

Certainly you’re excited by this new column and you’ll come back in November 1996, right?

Screenshot from Netscape Columns: Web Site Stories: a Coming Soon page which says "The series is scheduled to begin in January."

Oh. The launch has been delayed, I guess. Now it’s coming in January.

The <hr>s look better now their size has been reduced, though, so clearly somebody’s paying attention to the page. But let’s take a moment and look at that page title. If you grew up writing web pages in the modern web, you might anticipate that it’s coded something like this:

<h2 style="font-variant: small-caps; text-align: center;">Coming Soon</h2>

There’s plenty of other ways to get that same effect. Perhaps you prefer font-feature-settings: 'smcp' in your chosen font; that’s perfectly valid. Maybe you’d use margin: 0 auto or something to centre it: I won’t judge.

But no, that’s not how this works. The actual code for that page title is:

<center>
  <h2>
    <font size="+3">C</font>OMING
    <font size="+3">S</font>OON
  </h2>
</center>

Back when this page was authored, we didn’t have CSS[3]. The only styling elements were woven right in amongst the semantic elements of a page[4]. It was simple to understand and easy to learn… but it was a total mess[5].

Anyway, let’s come back in January 1997 and see what this feature looks like when it’s up-and-running.

Screenshot from Netscape Columns: Web Site Stories: a Coming Soon page which says "The series is scheduled to begin in the spring."

Nope, now it’s pushed back to “the spring”.

Under Construction pages were all the rage back in the nineties. Everybody had one (or several), usually adorned with one or more of about a thousand different animated GIFs for that purpose.[6]

Rotating animated "under construction" banner.

Building “in public” was an act of commitment, a statement of intent, and an acceptance of the incompleteness of a digital garden. They’re sort-of coming back into fashion in the interpersonal Web, with the “garden and stream” metaphor[7] taking root. This isn’t anything new, of course – Mark Bernstein touched on the concepts in 1998 – but it’s not something that I can ever see returning to the “serious” modern corporate Web. If you’ve seen a genuine, non-ironic “under construction” page published to a non-root page of a company’s website within the last decade, please let me know!

Under construction banner with an animated yellow-and-black tape banner between two "men at work" signs.

RSS doesn’t exist yet (although here’s a fun fact: the very first version of RSS came out of Netscape!). We’re just going to have to bookmark the page and check back later in the year, I guess…

Screenshot from Netscape Columns: Web Site Stories: a Coming Soon page identical to the previous version but with a search box ("To search the Netscape Columns, type a word or phrase here:") beneath.

Okay, so February clearly isn’t Spring, but they’ve updated the page… to add a search form.

It’s a genuine <form> tag, too, not one of those old-fashioned <isindex> tags you’d still sometimes find even as late as 1997. Interestingly, it specifies enctype="application/x-www-form-urlencoded". Today’s developers probably don’t think about the enctype attribute except when they’re writing a form that handles file uploads and they know they need to switch it to enctype="multipart/form-data" (or their framework does this automatically for them!).

But these aren’t the only options, and some older browsers at this time still defaulted to enctype="text/plain". So long as you’re using the POST rather than the GET method, the distinction is mostly academic, but if your backend CGI program anticipated that special characters would come in encoded, back then you’d have been wise to specify that you wanted URL-encoding, or you might have got a nasty surprise when somebody turned up using LMB or something equally-exotic.
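To make that concrete, here’s a sketch of a period-appropriate form that spells out its encoding rather than trusting the browser’s default (the CGI endpoint is hypothetical):

<form method="post" action="/cgi-bin/search.cgi"
      enctype="application/x-www-form-urlencoded">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>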

Anyway, let’s come back in June. The content must surely be up by now:

Screenshot from Netscape Columns: Web Site Stories: a Coming Soon page which says "The series is scheduled to begin in August."

Oh come on! Now we’re waiting until August?

At least the page isn’t abandoned. Somebody’s coming back and editing it from time to time to let us know about the still-ongoing series of delays. And that’s not a trivial task: this isn’t a CMS. They’re probably editing the .html file itself in their favourite text editor, then putting the appropriate file:// address into their copy of Netscape Navigator (and maybe other browsers) to test it, then uploading the file – probably using FTP – to the webserver… all the while thanking their lucky stars that they’ve only got the one page they need to change.

We didn’t have scripting languages like PHP yet, you see[8]. We didn’t really have static site generators. Most servers didn’t implement server-side includes. So if you had to make a change to every page on a site, for example editing the main navigation menu, you’d probably have to open and edit dozens or even hundreds of pages. Little wonder that framesets caught on, despite their (many) faults, with their ability to render your navigation separately from your page content.

Okay, let’s come back in August I guess:

Screenshot from Netscape Columns: Web Site Stories: a Coming Soon page which says "The series is scheduled to begin in the spring." Again.

Now we’re told that we’re to come back… in the Spring again? That could mean Spring 1998, I suppose… or it could just be that somebody accidentally re-uploaded an old copy of the page.

Hey: the footer’s gone too? This is clearly a partial re-upload: somebody realised they were accidentally overwriting the page with the previous-but-one version, hit “cancel” in their FTP client (or yanked the cable out of the wall), and assumed that they’d successfully stopped the upload before any damage was done.

They had not.

Screenshot of a Windows 95 dialog box, asking "Are you sure you want to delete index.html?" The cursor hovers over the "Yes" button.

I didn’t mention that top menu, did I? It looks like it’s a series of links, styled to look like flat buttons, right? But you know that’s not possible because you can’t rely on having the right fonts available: plus you’d have to do some <table> trickery to lay it out, at which point you’d struggle to ensure that the menu was the same width as the banner above it. So how did they do it?

The menu is what’s known as a client-side imagemap. Here’s what the code looks like:

<a href="/comprod/columns/images/nav.map">
  <img src="/comprod/columns/images/websitestories_ban.gif" width=468 height=32 border=0 usemap="#maintopmap" ismap>
</a><map name="mainmap">
  <area coords="0,1,92,24" href="/comprod/columns/mainthing/index.html">
  <area coords="94,1,187,24" href="/comprod/columns/techvision/index.html">
  <area coords="189,1,278,24" href="/comprod/columns/webstories/index.html">
  <area coords="280,1,373,24" href="/comprod/columns/intranet/index.html">
  <area coords="375,1,467,24" href="/comprod/columns/newsgroup/index.html">
</map>

The image (which specifies border=0 because, back then, the default behaviour for graphical browsers was to put a thick border around images within hyperlinks) says usemap="#maintopmap" to cross-reference the <map> below it, which defines rectangular areas on the image and where they link to, if you click them! This ingenious and popular approach meant that you could transmit a single image – saving on HTTP round-trips, which were relatively time-consuming before widespread adoption of HTTP/1.1‘s persistent connections – along with a little metadata to indicate which pixels linked to which pages.

The ismap attribute is provided as a fallback for browsers that didn’t yet support client-side image maps but did support server-side image maps: there were a few! When you put ismap on an image within a hyperlink, clicking the image appends to the href a query parameter of the form ?123,456, where those digits are the horizontal and vertical coordinates, from the top-left, of the pixel that was clicked! These could then be decoded by the webserver via a .map file or handled by a CGI program. Server-side image maps were sometimes used where client-side maps were undesirable, e.g. when you want to record the actual coordinates picked in a spot-the-ball competition or where you don’t want to reveal in advance which hotspot leads to what destination, but mostly they were just used as a fallback.[9]
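In other words, a server-side imagemap needs nothing more than this (URL and numbers illustrative):

<a href="/cgi-bin/nav.map"><img src="banner.gif" ismap></a>
<!-- clicking 123 pixels across and 45 down requests /cgi-bin/nav.map?123,45 -->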

Both client-side and server-side image maps still function in every modern web browser, but I’ve not seen them used in the wild for a long time, not least because they’re hard (maybe impossible?) to make accessible and they can’t cope with images being resized, but also because nowadays if you really wanted to make a navigation “image” you’d probably cut it into a series of smaller images and make each its own link.

Anyway, let’s come back in October 1997 and see if they’ve fixed their now-incomplete page:

Screenshot from Netscape Columns: Web Site Stories: the Coming Soon page is now laid out in two columns, but the expected launch date has been removed.

Oh, they have! From the look of things, they’ve re-written the page from scratch, replacing the version that got scrambled by that other employee. They’ve swapped out the banner and menu for a new design, replaced the footer, and now the content’s laid out in a pair of columns.

There’s still no reliable CSS, so you’re not looking at CSS columns (no implementations until 2014) nor at display: flex (2010) here. What you’re looking at is… a fixed-width <table> with a single row and three columns! Yes: three – the middle column is only 10 pixels wide and provides the “gap” between the two columns of text.[10]
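A sketch of the technique – the widths are illustrative, though I’ve borrowed the 301-pixel columns and 10-pixel gutter from the <multicol> example below:

<table width=612 border=0 cellspacing=0 cellpadding=0>
  <tr>
    <td width=301>First column of text...</td>
    <td width=10>&nbsp;</td><!-- the "gap" -->
    <td width=301>Second column of text...</td>
  </tr>
</table>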

This wasn’t Netscape’s only option, though. Did you ever hear of the <multicol> tag? It was the closest thing the early Web had to a semantically-sound, progressively-enhanced multi-column layout! The author of this page could have written this:

<multicol cols=2 gutter=10 width=301>
  <p>
    Want to create the best possible web site? Join us as we explore the newest
    technologies, discover the coolest tricks, and learn the best secrets for
    designing, building, and maintaining successful web sites.
  </p>
  <p>
    Members of the Netscape web site team, recognized designers, and technical
    experts will share their insights and experiences in Web Site Stories. 
  </p>
</multicol>

That would have given them the exact same effect, but with less code and it would have degraded gracefully. Browsers ignore tags they don’t understand, so a browser without support for <multicol> would have simply rendered the two paragraphs one after the other. Genius!

So why didn’t they? Probably because <multicol> only ever worked in Netscape Navigator.

Introduced in 1996 for version 3.0, this feature was absolutely characteristic of the First Browser War. The two “superpowers”, Netscape and Microsoft, both engaged in unilateral changes to the HTML specification, adding new features and launching them without announcement in order to try to get the upper hand over the other. Both sides would often refuse to implement one-another’s new tags unless they were forced to by widespread adoption by page authors, instead promoting their own competing mechanisms[11].

Between adding this new language feature to their browser and writing this page, Netscape’s market share had fallen from around 80% to around 55%, and most of their losses were picked up by IE. Using <multicol> would have made their page look worse in Microsoft’s hot up-and-coming browser, which wouldn’t have helped them persuade more people to download a copy of Navigator and certainly wouldn’t be a good image on a soon-to-launch (any day now!) page about best-practice on the Web! So Netscape’s authors opted for the dominant, cross-platform solution on this page[12].

Anyway, let’s fast-forward a bit and see this project finally leave its “under construction” phase and launch!

Screenshot showing the homepage of Netscape Columns from 15 February 1998; the first recorded copy NOT to have a header link to the Webstories / Web Site Stories page.

Oh. It’s gone.

Sometime between October 1997 and February 1998 the long promised “Web Site Stories” section of Netscape Columns quietly disappeared from the website. Presumably, it never published a single article, instead remaining a perpetual “Coming Soon” page right up until the day it was deleted.

I’m not sure if there’s a better metaphor for Netscape’s general demeanour in 1998 – the year in which they finally ceased to be the dominant market leader in web browsers – than the quiet deletion of a page about how Netscape customers are making the best of the Web. This page might not have been important, or significant, or even completed, but its disappearance may represent Netscape’s refocus on trying to stay relevant in the face of existential threat.

Of course, Microsoft won the First Browser War. They did so by pouring a fortune’s worth of developer effort into staying technologically one step ahead, by refusing to adopt standards proposed by their rival, and through their unprecedented decision to give away their browser for free[13].

Footnotes

[1] Yes, we used to write “Web sites” as two words. We also used to consistently capitalise the words Web and Internet. Some of us still do so.

[2] In case it’s not clear, this blog post is going to be as much about little-known and archaic Web design techniques as it is about Netscape’s website.

[3] This is a white lie. CSS was first proposed almost at the same time as the Web! Microsoft Internet Explorer was first to deliver a partial implementation of the initial standard, late in 1996, but Netscape dragged their heels, perhaps in part because they’d originally backed a competing standard called JavaScript Style Sheets (JSSS). JSSS had a lot going for it: if it had enjoyed widespread adoption, for example, we’d have had the equivalent of CSS variables a full twenty years earlier! In any case, back in 1996 you definitely wouldn’t want to rely on CSS support.

[4] Wondering where the text and link colours come from? <body bgcolor="#ffffff" text="#000000" link="#0000ff" vlink="#ff0000" alink="#ff0000">. Yes really, that’s where we used to put our colours.

[5] Personally, I really loved the aesthetic Netscape touted when using Times New Roman (or whatever serif font was available on your computer: webfonts weren’t a thing yet) with temporary tweaks to font sizes, and I copied it in some of my own sites. If you look back at my 2018 blog post celebrating two decades of blogging, where I’ve got a screenshot of my blog as it looked circa 1999, you’ll see that I used exactly this technique for the ordinal suffixes on my post dates! On the same post, you’ll see that I somewhat replicated the “feel” of it again in my 2011 design, this time using a stylesheet.

[6] There’s a whole section of Cameron’s World dedicated to “under construction” banners, and that’s a beautiful thing!

[7] The idea of “garden and stream” is that you publish early and often, refining as you go, in your garden, which can act as an extension of whatever notetaking system you use already, but publish mostly “finished” content to your (chronological) stream. I see an increasing number of IndieWeb bloggers going down this route, but I’m not convinced that it’s for me.

[8] Another white lie. PHP was released way back in 1995 and even the very first version supported something a lot like server-side includes, using the syntax <!--include /file/name.html-->. But it was a little computationally-intensive to run willy-nilly.

[9] Server-side imagemaps are enjoying a bit of a renaissance on .onion services, whose visitors often keep JavaScript disabled, to make image-based CAPTCHAs. Simply show the visitor an image and describe the bit you want them to click on, e.g. “the blue pentagon with one side missing”, then compare the coordinates of the pixel they click on to the knowledge of the right answer. Highly-inaccessible, of course, but innovative from a purely-technical perspective.

[10] Nowadays, use of tables for layout – or, indeed, for anything other than tabular data – is very-much frowned upon: it’s often bad for accessibility and responsive design. But back before we had the features granted to us by the modern Web, it was literally the only way to get content to appear side-by-side on a page, and designers got incredibly creative about how they misused tables to lay out content, especially as browsers became more-sophisticated and began to support cells that spanned multiple rows or columns, tables “nested” within one another, and background images.

[11] It was a horrible time to be a web developer: having to make hacky workarounds in order to make use of the latest features but still support the widest array of browsers. But I’d still take that over the horrors of rendering engine monoculture!

[12] Or maybe they didn’t even think about it and just copy-pasted from somewhere else on their site. I’m speculating.

[13] This turned out to be the master-stroke: not only did it partially-extricate Microsoft from their agreement with Spyglass Inc., who licensed their browser engine to Microsoft in exchange for a percentage of sales value, but once Microsoft started bundling Internet Explorer with Windows it meant that virtually every computer came with their browser factory-installed! This strategy kept Microsoft on top until Firefox and Google Chrome kicked-off the Second Browser War in the early 2010s. But that’s another story.


<blink> and <marquee>

I was chatting with a fellow web developer recently and made a joke about the HTML <blink> and <marquee> tags, only to discover that he had no idea what I was talking about. They’re a part of web history that’s fallen off the radar and younger developers are unlikely to have ever come across them. But for a little while, back in the 90s, they were a big deal.

Macromedia Dreamweaver 3 code editor window showing a <h2> heading wrapped in <marquee> and <blink> tags, for emphasis.
Even Macromedia Dreamweaver, which embodied the essence of 1990s web design, seemed to treat wrapping <blink> in <marquee> as an antipattern.

Invention of the <blink> element is often credited to Lou Montulli, who wrote the pioneering web browser Lynx before joining Netscape in 1994. He insists that he didn’t write any of the code that eventually became the first implementation of <blink>. Instead, he claims, while out at a bar (on the evening he’d first meet his wife!) he pointed out that many of the fancy new stylistic elements the other Netscape engineers were proposing wouldn’t work in Lynx, which is a text-only browser. The fanciest conceivable effect that would work across both browsers would be making the text flash on and off, he joked. Then another engineer – whom he doesn’t identify – pulled a late-night hack session and added it.

And so it was that when Netscape Navigator 2.0 was released in 1995, it added support for the <blink> tag – along with animated GIFs and the first inklings of JavaScript, which collectively would go on to define the “personal website” experience for years to come. Here’s how you’d use it:

<BLINK>This is my blinking text!</BLINK>

With no attributes, it was clear from the outset that this tag was supposed to be a joke. By the time HTML4 was published as a recommendation two years later, it was documented as being a joke. But the Web of the late 1990s saw it used a lot. If you wanted somebody to notice the “latest updates” section on your personal home page, you’d wrap a <blink> tag around the title (or, if you were a sadist, the entire block).

Cameron’s World website, screenshot, showing GIFs and a bright palette
If you missed this particular chapter of the Web’s history, you can simulate it at Cameron’s World.

In the same year as Netscape Navigator 2.0 was released, Microsoft released Internet Explorer 2.0. At this point, Internet Explorer was still very-much playing catch-up with the features the Netscape team had implemented, but clearly some senior Microsoft engineer took a look at the <blink> tag, refused to play along with the joke, and instead came up with an innovation of their own: the <marquee> tag! It had a whole suite of attributes to control the scroll direction, speed, and whether it looped or bounced backwards and forwards. While <blink> encouraged disgusting and inaccessible design as a joke, <marquee> did it on purpose.

<MARQUEE>Oh my god this still works in most modern browsers!</MARQUEE>

Oh my god this still works in most modern browsers!

If you see the text above moving… you’re looking at a living fossil in browser history.

But here’s the interesting bit: for a while in the late 1990s, it became a somewhat common practice to wrap content that you wanted to emphasise with animation in both a <blink> and a <marquee> tag. That way, the Netscape users would see it flash, the IE users would see it scroll or bounce. Like this:

<MARQUEE><BLINK>This is my really important message!</BLINK></MARQUEE>
Internet Explorer 5 showing a marquee effect.
Wrap a <blink> inside a <marquee> and IE users will see the marquee. Delightful.

The web has always been built on Postel’s Law: a web browser should assume that it won’t understand everything it reads, but it should provide a best-effort rendering for the benefit of its user anyway. Ever wondered why the modern <video> element is a block rather than a self-closing tag? It’s so you can embed within it code that an earlier browser – one that doesn’t understand <video> – can read (a browser’s default state when seeing a new element it doesn’t understand is to ignore it and carry on). So embedding a <blink> in a <marquee> gave you the best of both worlds, right? (welll…)
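Here’s that fallback pattern in action (the filename is hypothetical):

<video src="clip.webm" controls>
  <!-- a browser that predates <video> ignores the unknown tags and renders this instead -->
  Your browser can't play this video: <a href="clip.webm">download the clip</a> instead.
</video>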

Netscape Navigator 5 showing a blink effect.
Wrap a <blink> inside a <marquee> and Netscape users will see the blink. Joy.

Better yet, you were safe in the knowledge that anybody using a browser that didn’t understand either of these tags could still read your content. Used properly, the web is about progressive enhancement. Implement for everybody, enhance for those who support the shiny features. JavaScript and CSS can be applied with the same rules, and doing so pays dividends in maintainability and accessibility (though, sadly, that doesn’t stop people writing sites that needlessly require these technologies).

Opera 5 showing no blinking nor marquee text.
Personally, I was a (paying! – back when people used to pay for web browsers!) Opera user so I mostly saw neither <blink> nor <marquee> elements. I don’t feel like I missed out.

I remember, though, the first time I tried Netscape 7, in 2002. Netscape 7 and its close descendants are, as far as I can tell, the only web browsers to support both <blink> and <marquee>. Even then, it was picky about the order in which they were presented and the elements wrapped within them. But support was good enough that some people’s personal web pages suddenly began to exhibit the most ugly effect imaginable: the combination of both scrolling and flashing text.

Netscape 7 showing text that both blinks and marquee-scrolls.
If Netscape 7’s UI didn’t already make your eyes bleed (I’ve toned it down here by installing the “classic skin”), its simultaneous rendering of <blink> and <marquee> would.

The <blink> tag is very-definitely dead (hurrah!), but you can bring it back with pure CSS if you must. <marquee>, amazingly, still survives, not only in polyfills but natively, as you might be able to see above. However, if you’re in any doubt as to whether or not you should use it: you shouldn’t. If you’re looking for digital nostalgia, there’s a whole rabbit hole to dive down, but you don’t need to inflict <marquee> on the rest of us.
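For the record, that pure-CSS resurrection needs only a few lines – a sketch, with a class name of your choosing:

<style>
  .blink { animation: blink 1s step-end infinite; }
  @keyframes blink {
    50% { visibility: hidden; } /* hidden for the second half of each second */
  }
</style>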


Microsoft exec riles Firefox faithful by telling Mozilla to embrace Chrome – CNET

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Microsoft staffer pokes finger into Twitter beehive by saying Firefox should join Chrome, not fight it (CNET)

Debate: Does Mozilla have more influence as a Chrome rival or ally?

“Thought: It’s time for @mozilla to get down from their philosophical ivory tower. The web is dominated by Chromium, if they really *cared* about the web they would be contributing instead of building a parallel universe that’s used by less than 5%?” He made it clear the viewpoint was his personal opinion, not Microsoft’s position.

Mozilla is indeed in a sticky situation, trying to improve the web when it comes to things like openness, privacy and new standards. That mission is harder with declining influence, though, and Firefox now accounts for 5 percent of web usage, according to analytics firm StatCounter. But without independent efforts like Firefox, and to an extent Apple’s Safari, the web will stop being an independent software foundation and become whatever Google says it is.

And plenty of people don’t like that one bit. Indeed, Mozilla defenders see the nonprofit’s mission as even more important with Chrome’s dominance.

“I couldn’t disagree with you more. It precisely *because* Chromium has such a large marketshare that is vital for Mozilla (or anyone else) to battle for diversity,” tweeted web developer Jeremy Keith in a response. “‘Building a parallel universe’? That *is* the contribution.”

Browsers

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

by Jeremy Keith

I’ve been using Firefox as my main browser for a while now, and I can heartily recommend it. You should try it (and maybe talk to your relatives about it at Christmas). At this point, which browser you use no longer feels like it’s just about personal choice—it feels part of something bigger; it’s about the shape of the web we want.

Very much this. I’ve been using Firefox as my primary browser since I began (gradually) switching from Opera in 2005, but it’s never been more important than it is now that people know about and use Firefox. The rest of his post, which summarises the news I was talking about the other week and everything people have said since, is well-worth reading too.

We need a new movement: a movement of developers, influencers, and tech enthusiasts who loudly, proudly, use Firefox as their primary web browser. We use it on our desktops. We use it on our laptops. We use it on our phones. All of us test sites in it. Some of us write plugins for it. The bravest of us write code for it. But none of us, not one, takes it for granted.

Chromedge and headcount

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

So Microsoft is going to retire EdgeHTML and use Chromium instead for Edge while not really answering the question if the web [is] better off with less engine diversity. This upset people, and Mozilla, especially, is worrying about the future:

Will Microsoft’s decision make it harder for Firefox to prosper? It could. Making Google more powerful is risky on many fronts. […] If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was released. And it could happen again.

Before you lament the return to a Microsoft-like monopoly, remember what happened to Microsoft’s monopoly. In fact, remember what happened to the lineal descendant of that monopoly just last week. Near-monopolies do not necessarily mean the end of the web.

Yet more in the “EdgeHTML to be replaced by Chromium” story, on which I already shared my opinions. Peter-Paul does a good job of illustrating the differences between the reduction of diversity in/increasing monopolisation of the browser space this time around and last time (when Internet Explorer 6 became the de facto standard way to surf the Web), using it to provide a slightly less-pessimistic outlook (albeit one not without its warnings).

Browser diversity starts with us

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Jeffrey Zeldman (zeldman.com)

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Diversity is as good for the web as it is for society. And it starts with us.

Yet more fallout from the Microsoft announcement that Edge will switch to Chromium, which I discussed earlier. This one’s pretty inspirational, and gives a good reminder about what our responsibilities are to the Web, as its developers.

Edge may be becoming Chromium-powered, and that’s terrible

Microsoft engineers have been spotted committing code to Chromium, the backend of Google Chrome and many other web browsers. This, among other things, has led to speculation that Microsoft’s browser, Edge, might be planned to switch from its current rendering engine (EdgeHTML) to Blink (Chromium’s). This is bad news.

This page in Microsoft Edge
This post, as it would appear if you were looking at it in Edge. Which you might be, I suppose.

The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or something similar (and certainly not Edge) for development and debugging anyway, etc. The problem comes perhaps because they’re too young to remember the First Browser War and its aftermath. Let me summarise:

  1. Once upon a time – let’s call it the mid-1990s – there were several web browsers: Netscape Navigator, Internet Explorer, Opera, etc. They all used different rendering engines and so development was sometimes a bit of a pain, but only if you wanted to use the latest most cutting-edge features: if you were happy with the standard, established features of the Web then your site would work anywhere, as has always been the case.
    Best viewed with... any damn browser
  2. Then, everybody started using just one browser: following some shady dealings and monopoly abuse, 90%+ of Web users started using just one web browser, Internet Explorer. By the time anybody took notice, their rivals had been economically crippled beyond any reasonable chance of recovery, but the worst had yet to come…
    Best viewed with Internet Explorer
  3. Developers started targeting only that one browser: instead of making websites, developers started making “Internet Explorer sites” which were only tested in that one browser or, worse yet, only worked at all in that browser, actively undermining the Web’s position as an open platform. As the grip of the monopoly grew tighter, technological innovation was centred around this single platform, leading to decade-long knock-on effects.
  4. The Web ceased to grow new features: from the release of Internet Explorer 6 there were no significant developments in the technology of the Web for many years. The lack of competition pushed us into a period of stagnation. A decade and a half later, we’re only just (finally) finishing shaking off this unpleasant bit of our history.
    "Netscape sux"

History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too. Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites. More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the switch from Chrome to e.g. Firefox (and you absolutely should) you might find that YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.

Chrome is becoming the new Internet Explorer 6, and that’s a huge problem. Rachel Nabors wrote in her excellent article The Ecological Impact of Browser Diversity:

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

The Circle of Browsers, by Rachel Nabors

Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today. So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?

We have already lost one.

We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.

Three left. Spend them wisely.

As much as I don’t like having to work-around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web circa most of the first decade of this century. Please help keep browsers diverse: nobody wants to start seeing this shit –

Best viewed with Google Chrome

Update: this is now confirmed. A sad day for the Web.


Tomorrow’s Web, Today

Maybe it’s because I was at Render Conf at the end of last month or perhaps it’s because Three Rings DevCamp – which always gets me inspired – was earlier this month, but I’ve been particularly excited lately to get the chance to play with some of the more “cutting edge” (or at least, relatively-new) web technologies that are appearing on the horizon. It feels like the Web is having a bit of a renaissance of development, spearheaded by the fact that it’s no longer Microsoft that are holding development back (but increasingly Apple) and, perhaps for the first time, the fact that the W3C are churning out standards “ahead” of where the browser vendors are managing to implement technical features, rather than simply reflecting what’s already happening in the world.

Ben Foxall at Render Conf 2017 discusses the accompanying JSOxford Hackathon. Hey, who’s that near the top-right?

It seems to me that HTML5 may well be the final version of HTML. Rather than making grand new releases to the core technology, we’re now – at last! – in a position where it’s possible to iteratively add new techniques in a resilient, progressive manner. We don’t need “HTML6” to deliver us any particular new feature, because the modern web is more-modular and is capable of having additional features bolted on. We’re in a world where browser detection has been replaced with feature detection, to the extent that you can even do non-hacky feature detection in pure CSS, now, and this (thanks to the nature of the Web as a loosely-coupled, resilient platform) means that it’s genuinely possible to progressively-enhance content and get on board with each hot new technology that comes along, if you want, while still delivering content to users on older browsers.
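That pure-CSS feature detection is the @supports at-rule; a minimal sketch:

@supports (display: grid) {
  /* only browsers that understand Grid get this layout;
     everybody else falls back to whatever came before */
  .layout { display: grid; grid-template-columns: 1fr 3fr; }
}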

And that’s the dream! A web of progressive-enhancement stays true to Sir Tim’s dream of universal interoperability while still moving forward technologically. I’ve no doubt that there’ll always be people who want to break the Web – even Google do it, sometimes – with single-page Javascript-only web apps, “app shell” websites, mobile-only or desktop-only experiences and “apps” that really ought to have been websites (and perhaps PWAs) to begin with… but the fact that the tools exist to make a genuinely “progressively-enhanced” web, and that those tools are mainstream, is a big deal. If you don’t think we’re at that point yet, I invite you to watch Rachel Andrew’s fantastic presentation, “Start Using CSS Grid Layout Today”.

Three Rings’ developers hard at work at this year’s DevCamp.

Some of the things I’ve been playing with recently include:

Intersection Observers

Only really supported in Chrome, but there’s a great polyfill, the Intersection Observer API is one of those technologies that make you say “why didn’t we have that already?” It’s very simple: all an Intersection Observer does is to provide event hooks for target objects entering or leaving the viewport, without resorting to polling or hacky code on scroll event captures.

Intersection Observer example (animated GIF)

What’s it for? Well the single most-obvious use case is lazy-loading images, a-la Medium or Google Image Search: delivering users a placeholder image or a low-resolution copy until they scroll far enough for the image to come into view (or almost into view) and then downloading the full-resolution version and dynamically replacing it. My first foray into Intersection Observers was to take Medium’s approach and then improve it with a Service Worker in order to make it behave nicely even if the user’s Internet connection was unreliable, but I’ve since applied it to my Reddit browser plugin MegaMegaMonitor: rather than hammering the browser with Javascript the plugin now waits until relevant content enters the viewport before performing resource-intensive tasks.
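Here’s a minimal sketch of that kind of lazy-loader – the data-src convention and the 200px margin are illustrative choices, not Medium’s or my exact implementation:

var observer = new IntersectionObserver(function(entries) {
  entries.forEach(function(entry) {
    if (!entry.isIntersecting) return;
    entry.target.src = entry.target.dataset.src; // swap in the full-resolution image
    observer.unobserve(entry.target);            // each image only needs upgrading once
  });
}, { rootMargin: '200px' }); // begin loading just before the image scrolls into view

document.querySelectorAll('img[data-src]').forEach(function(img) {
  observer.observe(img);
});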

Web Workers

I’d briefly played with Service Workers before, and indeed we’re adding a Service Worker to the next version of Three Rings, which, in conjunction with a manifest.json and the service’s (ongoing) delivery over HTTPS (over H2, where available, since last year), technically makes it a Progressive Web App… and I’ve been looking for opportunities to make use of Service Workers elsewhere in my work, too… but my first dive into Web Workers was in introducing one to the next upcoming version of MegaMegaMonitor.

MegaMegaMonitor v155a Lists feature
MegaMegaMonitor’s processor-intensive “Lists” feature sees the most benefit from Web Workers

Web Workers add true multithreading to Javascript, and in the case of MegaMegaMonitor this means the possibility of pushing the more-intensive work that the plugin has to do out of the main thread and into the background, allowing the user to enjoy an uninterrupted browsing experience while the heavy-lifting goes on behind the scenes. Because I don’t control the domain on which this Web Worker runs (it’s reddit.com, of course!), I’ve also had the opportunity to play with Blobs, which provided a convenient way for me to inject Worker code onto somebody else’s website from within a userscript. This has also led me to the discovery that it ought to be possible to implement userscripts that inject Service Workers onto websites, which could be used to mashup additional functionality into websites far in advance of that which is typically possible with a userscript… more on that if I get around to implementing such a thing.
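The Blob trick is pleasingly compact. Here’s a minimal sketch of the general idea (not MegaMegaMonitor’s actual source): build a Worker from a string, so no same-origin script file is needed:

var source = 'onmessage = function(e) { postMessage(e.data * 2); };';
var blobUrl = URL.createObjectURL(new Blob([source], { type: 'text/javascript' }));
var worker = new Worker(blobUrl); // no same-origin .js file required!
worker.onmessage = function(e) { console.log('From the worker:', e.data); };
worker.postMessage(21); // logs "From the worker: 42"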

Fetch

The last of the new technologies I’ve been playing with this month is the Fetch API. I’m not pulling any punches when I say that the Fetch API is exactly what XMLHttpRequests should have been from the very beginning. Understanding them properly has finally given me the confidence to stop using jQuery for the one thing I always seemed to have to depend on it for – that is, simplifying Ajax requests! I mean, look at this elegant code:

fetch('posts.json')
  .then(function(response) {
    return response.json();
  })
  .then(function(json) {
    console.log(json.something.otherThing);
  });

Whether or not you’re a fan of Javascript, you’ve got to admit that that’s infinitely more readable than XMLHttpRequest hackery (at least, without the help of a heavyweight library like jQuery).
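For contrast, here’s roughly the same exercise the old way – a sketch of library-free XMLHttpRequest code, which is exactly the sort of thing I used to reach for jQuery to avoid:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'posts.json');
xhr.onload = function() {
  var json = JSON.parse(xhr.responseText);
  console.log(json.something.otherThing);
};
xhr.send();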

Other things I’ve been up to include Laser Duck Hunt, but that’s another story.

So that’s some of the stuff I’ve been playing with lately: Intersection Observers, Web Workers, Blobs, and the Fetch API. And I feel all full of optimism on behalf of the Web.


History of the browser user-agent string

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In the beginning there was NCSA Mosaic, and Mosaic called itself NCSA_Mosaic/2.0 (Windows 3.1), and Mosaic displayed pictures along with text, and there was much rejoicing…

Have you ever wondered why every major web browser identifies itself as “Mozilla”? Wonder no longer…

Craziest Internet Explorer Bug Ever?

As web developers, we’re used to working around the bugs in Microsoft Internet Explorer. The older versions are the worst, and I’m certainly glad not to have to write code that works in Internet Explorer 6 (or, increasingly, Internet Explorer 7) any more: even Microsoft are glad to see Internet Explorer 6 dying out, but even IE8 is pretty ropey too. And despite what Microsoft claim, I’m afraid IE9 isn’t really a “modern” browser either (although it is a huge step forwards over its predecessors).

But imagine my surprise when, this week, I found what I suspect might be a previously-undiscovered bug in Internet Explorer 8 and below. Surely they’ve all been found (and some of them even fixed) by now? But no. It takes a very specific set of circumstances for the bug to manifest itself, but it’s not completely unbelievable – I ran into it by accident while refactoring parts of Three Rings.

A completely useless Internet Explorer error message. Thanks, IE.

Here’s the crux of it: if you’re –

  • Using Internet Explorer 8 or lower, and
  • You’re on a HTTPS (secure) website, and
  • You’re downloading one of a specific set of file types: Bitmap files (Content-Type: image/bmp), for example, are a problem, but JPEG files aren’t, and
  • The web server indicates that the file you’re downloading should be treated as something to be “saved”, rather than something to be viewed in your browser (Content-Disposition: attachment), and
  • The web server passes a particular header to ask that Internet Explorer does not cache a copy of the file (Cache-Control: no-cache),

Then you’ll see a dialog box like the one shown above. Switching any of the prerequisites in that list out makes the problem go away: even switching the header from a strict “no-cache” to a more-permissive “private” makes all the difference.
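Expressed as raw response headers, the offending combination looks something like this (the filename is illustrative):

Content-Type: image/bmp
Content-Disposition: attachment; filename="example.bmp"
Cache-Control: no-cache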

I’ve set up a test environment where you can see this for yourself: HTTP version; HTTPS version. The source code of my experiment (PHP) is also available. Of course, if you try it in a functional, normal web browser, it’ll all work fine. But if you’ve got access to a copy of Internet Explorer 8 on some old Windows XP box somewhere (IE8 is the last version of the browser made available for XP), then try it in that and see for yourself what a strange error you get.


KeePass for Opera

Opera 12 has been released, and brought with it a handful of new features. But there’s also been a feature removed – a little-known feature that allowed power users to have the web address appear in the title bar of the browser. I guess that the development team decided that, because the title bar is rarely seen nowadays (the space that a title once occupied has for a long while now been used as a tab strip, in the style that Google Chrome and Mozilla Firefox eventually copied), this feature wasn’t needed.

But for users of the KeePass Password Safe, this has the knock-on effect of crippling the ability for this security tool to automatically type passwords and other form data into web pages, forcing users to take the long-winded route of manually copy-pasting them each time.

KeePass for Opera Plugin

To fix this problem, I’ve released the KeePass for Opera browser extension. It’s ludicrously simple: it injects a bit of Javascript (originally by Jean François) into every page you visit, which then appends the URL of the page to the title bar. This allows KeePass to detect what site you’re on, so the usual Global Auto-Type command (typically Left Ctrl + Alt + A) will work as normal.
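The gist of the injected script is a one-liner – a sketch of the idea, not the extension’s exact source:

// append the page's URL to the window title, so KeePass can match against it
document.title = document.title + ' - ' + document.location.href;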


Download KeePass for Opera (browser extension)

Open in Opera to install.

Further reading:

I’ve mentioned KeePass a few times before. See:

A New Look

Well, it’s been over a year since I last updated the look-and-feel of my blog, so it felt like it was time for a redesign. The last theme was made during a period when I was just recovering from a gloomy patch, and that was reflected in the design: full of heavy, dark reds, blacks, and greys. It was well overdue a new look!

The old Scatmania design: very serious-looking, and with dark, moody colours.

I was also keen to bring the site in line with the ideas and technologies that are becoming more commonplace in web design nowadays… as well as using it as a playground for some of the more-interesting CSS3 features!

This new design has elements in common with the theme before last: a big blue header, an off-white background, and sans-serif faces.

Key features of the new look include:

  • A theme that uses strong colours in the footer and header, to “frame” the rest of the page content.
  • A responsive design that rescales dynamically all the way from a mobile phone screen through tablets, small 4:3 monitors, and widescreen ratios (try resizing your browser window!).
  • CSS transitions to produce Javascript-less dynamic effects: hover your cursor over the picture of me in the header to make me “hide”.
  • CSS “spriting” to reduce the number of concurrent downloads your browser has to make in order to see the content. All of the social media icons, for example, are one file, split back up again using background positioning (see the sketch after this list). They’re like image maps, but a million times less 1990s.
  • Front page “feature” blocks to direct people to particular (tagged) areas of the site, dynamically-generated (from pre-made templates) based on what’s popular at any given time.
  • A re-arrangement of the controls and sections based on the most-popular use-cases of the site, according to visitor usage trends. For example, search has been made more-prominent, especially on the front page, the “next post”/”previous post” controls have been removed, and the “AddToAny” sharing tool has been tucked away at the very bottom.
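As promised above, here’s a sketch of how the sprite trick works (the file name, sizes, and class names are illustrative):

.social-icon {
  background-image: url('social-sprite.png'); /* one download covers every icon */
  width: 32px;
  height: 32px;
}
.social-icon.twitter  { background-position: 0 0; }
.social-icon.facebook { background-position: -32px 0; }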

Note that some of these features will only work in modern browsers, so Internet Explorer users might be out of luck!

As always, I’m keen to hear your feedback (yes, even from those of you who subscribe by RSS). So let me know what you think!
