The “Backendification” of Frontend Development

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

https://hackernoon.com/the-backendification-of-frontend-development-62f218a773d4 (hackernoon.com)

Asynchronous JavaScript in the form of Single Page Applications (SPAs) offers an incredible opportunity for improving the user experience of your web applications. CSS frameworks like Bootstrap enable developers to quickly contribute styling as they’re working on the structure and behaviour of things.

Unfortunately, SPAs and CSS frameworks tend to result in relatively complex solutions where traditionally separated concerns – HTML-structure, CSS-style, and JS-behaviour – are blended together as a matter of course, counter to the lessons learned by previous generations.

This blending of concerns can prevent entry-level developers and valued specialists (e.g. in visual design, accessibility, search engine optimization, and internationalization) from making meaningful contributions to a project.

Beyond the increasing cost of the few developers capable of juggling all of these concerns, this complexity can also have other real-world business implications.

What is a front-end developer? Does anybody know, any more? And more-importantly, how did we get to the point where we’re actively encouraging young developers into habits like writing (cough React cough) files containing a bloaty, icky mixture of content, HTML (markup), CSS (style), and Javascript (behaviour)? Yes, I get that the idea is that individual components should be packaged together (if you’re thinking in a React-like worldview), but that alone doesn’t justify this kind of bullshit antipattern.

It seems like the Web used to just have developers. Then it got complex, so we started differentiating back-end from front-end developers and describing those who, like me, spanned the divide, as full-stack developers. We full-stackers gradually became a minority as more and more new developers, deprived of the opportunity to learn each new facet organically in this newly-complicated landscape, specialised in one end or the other, and that’s fine. But then… we started treating the front-end as the only end, and introducing all kinds of problems as a result… and most people don’t seem to have noticed, yet, exactly how much damage we’re doing to Web applications’ security, maintainability, future-proofability, archivability, addressability…

Modern CSS on DanQ.me

The current iteration of my blog diverges from an architectural principle common to most of its previous versions over the last 20 years. While each previous change in design and layout was intended to provide a single monolithic upgrade, this version tries to provide me with a platform for continuous, ongoing experimentation and change.

Debug console on DanQ.me showing Dan's head and a speech bubble.
Earlier this year I added experimental console art, for example. Click through for more details.

I’ve been trying to make better use of my blog as a vehicle for experimenting with web technologies, as I used to with personal sites back in the 1990s and early 2000s; to see a vanity site like this one as a living playground rather than something whose design – like that of most of the sites I’m paid to work on – is, for the most part, static for long periods of time.

"Blog" dropdown menu on DanQ.me.
The “popular” flag and associated background colour in the “Blog” top-level menu became permanent after a period of A/B testing. Thanks, unwitting testers!

Among the things I’ve added prior to the initial launch of this version of the design are gracefully-degrading grids, reduced-motion support, and dark-mode support – three CSS features of increasing levels of “cutting edge”-ness but each of which is capable of being implemented in a way that does not break the site’s compatibility. This site’s pages are readable using (simulations of) ancient rendering engines or even in completely text-based browsers, and that’s just great.

Here’s how I’ve implemented those three features:

Gracefully-degrading grids

Grid of recent notes and shares on DanQ.me
I’m not entirely happy with the design of these boxes, but that’s a job for another day.

The grid of recent notes, shares, checkins and videos on my homepage is powered by the display: grid; CSS directive. The number of columns varies by screen width from six on the widest screens down to three or just one on increasingly small screens. Crucially, grid-auto-flow: dense; is used to ensure an even left-to-right filling of the available space even if one of the “larger” blocks (with grid-column: span 2; grid-row: span 2;) is forced for space reasons to run onto the next line. This means that content might occasionally be displayed in a different order from that in which it is written in the HTML (which is reverse order of publication), but in exchange the items are flush with both sides.

Grid sample showing impact of dense flow.
The large “5 Feb” item in this illustration should, reverse-chronologically, appear before the “3 Feb” item, but there isn’t room for it on the previous line. grid-auto-flow: dense; means that the “3 Feb” item is allowed to bubble-up and fill the gap, appearing out-of-order but flush with the edge.
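For anyone wanting to reproduce the effect, a minimal sketch of the approach looks something like this. The class names, column counts and breakpoints here are illustrative placeholders rather than the site’s actual stylesheet:

    /* Sketch only: selectors and breakpoints are illustrative, not DanQ.me's real ones. */
    .recent-grid {
      display: grid;
      grid-template-columns: repeat(6, 1fr); /* six columns on the widest screens */
      grid-auto-flow: dense;                 /* let later items bubble up to fill gaps */
      gap: 1em;
    }

    .recent-grid .item-large {
      grid-column: span 2; /* "larger" blocks occupy a 2x2 area... */
      grid-row: span 2;    /* ...and may be bumped to the next line when space runs out */
    }

    @media (max-width: 60em) {
      .recent-grid { grid-template-columns: repeat(3, 1fr); } /* fewer columns on smaller screens */
    }

    @media (max-width: 30em) {
      .recent-grid { grid-template-columns: 1fr; } /* a single column on the smallest */
    }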

Not all web browsers support display: grid; and while that’s often a problem only of design and not of readability – because these browsers will fall back to usually-very-safe default display modes like block and inline, as appropriate – sometimes there are bigger problems. In Internet Explorer 11, for example, I found (with thanks to @_ignatg) a problem with my directives specifying the size of these cells (which are actually <li> elements because, well, semantics matter). Because it understood the directives that ought to impact the sizing of the list items but not the one that redeclared their display type, IE made… a bit of a mess of things…

Internet Explorer scrambles a list/grid combination.
Thanks, Internet Explorer. That’s totally what I was looking for.

Do websites need to look the same in every browser? No. But the content should be readable regardless, and here my CSS was rendering my content unreadable. Given that Internet Explorer users represent a little under 0.1% of visitors to my site I don’t feel the need to hack it to have the same look-and-feel: I just need it to have the same content readability. CSS Feature Queries to the rescue!

CSS Feature Queries – the @supports at-rule – make it possible to apply parts of your stylesheet if and only if the browser supports specific CSS features, for example grids. Better yet, using it in a positive manner (i.e. “apply these rules only if the browser supports this feature”) is progressive enhancement, because browsers that don’t understand the @supports at-rule act in the same way as those that understand it but don’t support the specified feature. Fencing off the relevant parts of my stylesheet in a @supports (display: grid) { ... } block instructed IE to fall back to displaying that content as a boring old list: exactly what I needed.
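The fencing-off itself is tiny. A sketch (again with illustrative selectors) looks like this, with a safe list baseline outside the block and the grid rules applied only where they’re understood:

    /* Baseline that every browser, IE included, renders as a plain list. */
    .recent-grid { list-style: none; margin: 0; padding: 0; }

    /* Only browsers that understand @supports *and* support grids apply these rules. */
    @supports (display: grid) {
      .recent-grid {
        display: grid;
        grid-template-columns: repeat(6, 1fr);
        grid-auto-flow: dense;
      }
    }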

Internet Explorer's view of the "grid" on the DanQ.me homepage.
It isn’t pretty, but it’s pretty usable!

Reduced-motion support

I like to put a few “fun” features into each design for my blog, and while it’s nowhere near as quirky as having my head play peek-a-boo when you hover your cursor over it, the current header’s animations are in the same ballpark: hover over or click on some of the items in the header menu to see for yourself.

Main menu with "Dan Q" title in its "bounced" position.
I’m most-pleased with the playful “bounce” of the letter Q when you hover over my name.

These kinds of animations are fun, but they can also be problematic. People with inner ear disorders (as well as people who’re just trying to maximise the battery life on their portable devices!) might prefer not to see them, and web designers ought to respect that choice where possible. Luckily, there’s an emerging standard to acknowledge that: prefers-reduced-motion. Alongside its cousins inverted-colors, prefers-reduced-transparency, prefers-contrast and prefers-color-scheme (see below for that last one!), these new CSS tools allow developers to optimise based on the accessibility features activated by the user within their operating system.

Motion-reducing controls in Windows 10 and MacOS X.
In Windows you turn off animations while in MacOS you turn on not-having animations, but the principle’s the same.

If you’ve tweaked your accessibility settings to reduce the amount of animation your operating system shows you, this website will respect that choice as well by not animating the contents of the title, menu, or the homepage “tiles” any more than is absolutely necessary… so long as you’re using a supported browser, which right now means Safari or Firefox (or the “next” version of Chrome). Making the change itself is pretty simple: I just added a @media screen and (prefers-reduced-motion: reduce) { ... } block to disable or otherwise cut-down on the relevant animations.
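The shape of that block is roughly as follows; the selectors and the specific properties being toned down are illustrative rather than copied from my stylesheet:

    /* Sketch only: the real stylesheet targets its own header, menu and tile selectors. */
    @media screen and (prefers-reduced-motion: reduce) {
      .site-title,
      .menu-item,
      .homepage-tile {
        animation: none;            /* drop the decorative keyframe animations entirely */
        transition-duration: 0.01s; /* keep state changes, but without the movement */
      }
    }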

Dark-mode support

DanQ.me in dark mode.

Similarly, operating systems are beginning to support “dark mode”, designed for people trying to avoid eyestrain when using their computer at night. It’s possible for your browser to respect this and try to “fix” web pages for you, of course, but it’s better still if the developer of those pages has anticipated your need and designed them to acknowledge your choice for you. It’s only supported in Firefox and Safari so far and only on recent versions of Windows and MacOS, but it’s a start and a helpful touch for those nocturnal websurfers out there.

Enabling Dark Mode on Windows 10 and MacOS X
Come to the dark side, Luke. Or just get f.lux, I suppose.

It’s pretty simple to implement. In my case, I just stacked some overrides into a @media (prefers-color-scheme: dark) { ... } block, inverting the background and primary foreground colours, softening the contrast, removing a few “bright” borders, and darkening rather than lightening the background images used on homepage tiles. And again, it’s an example of progressive enhancement: the majority (!) of users whose operating systems and/or browsers don’t yet support this feature won’t be impacted by its inclusion in my stylesheet, but those who can make use of it can appreciate its benefits.
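A cut-down sketch of the kind of overrides involved might look like this; the colours and selectors are placeholders rather than the site’s actual palette:

    /* Sketch only: illustrative colours; the real overrides also soften contrast
       and darken the homepage tiles' background images. */
    @media (prefers-color-scheme: dark) {
      body {
        background-color: #1a1a1a;
        color: #d9d9d9;              /* softened, rather than stark white-on-black */
      }
      .homepage-tile {
        background-color: #262626;
        border-color: transparent;   /* drop the "bright" borders */
      }
    }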

This isn’t the end of the story of CSS experimentation on my blog, but it’s a part of it that I hope you’ve enjoyed.


What Does AJAX Even Stand For?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

…Jesse James Garrett of Adaptive Path finally gave this use of Javascript a name. He called it Asynchronous Javascript and XML (thankfully shortened to Ajax). Ajax wasn’t a single method or technology, but a group of principles based on the work being done at Google that described how to handle JavaScript in more demanding web applications. The short version is actually fairly simple. Use XMLHttpRequest to make a request to the server, take the XML data returned, and lay it out on the page using JavaScript and semantic HTML.

Ajax took off after that. It underpins major parts of modern web development, and has spawned a number of frameworks and methodologies. The technology itself has been swapped out over time. XML was replaced by JSON, and XMLHttpRequest by fetch. That doesn’t matter though. Ajax showed a lot of developers that the web wasn’t just about documents. The web could be used to build applications. That might seem like a small notion, but trust me when I say, it was momentous.

The History of The Web is great, and you should read it, but this piece is particularly interesting. Ajax and its spiritual successors laid the groundwork for rich Internet applications, shell apps, and the real-time Web, but they also paved the way for a handful of the inevitable practices that went alongside: Javascript-required websites, API-driven web and mobile applications with virtually no HTML content whatsoever… and these things have begun to become problematic for the health of the Web as a whole.

I love Ajax, but it absolutely must be employed as progressive enhancement where possible.

Debugging WorldWideWeb

Earlier this week, I mentioned the exciting hackathon that produced a moderately-faithful reimagining of the world’s first Web browser. I was sufficiently excited about it that I not only blogged here but I also posted about it to MetaFilter. Of course, the very first thing that everybody there did was try to load MetaFilter in it, which… didn’t work.

MetaFilter failing to load on the reimagined WorldWideWeb.
500? Really?

People were quick to point this out and assume that it was something to do with the modernity of MetaFilter:

honestly, the disheartening thing is that many metafilter pages don’t seem to work. Oh, the modern web.

Some even went so far as to speculate that the reason related to MetaFilter’s use of CSS and JS:

CSS and JS. They do things. Important things.

This is, of course, complete baloney, and it’s easy to prove to oneself. Firstly, simply using the View Source tool in your browser on a MetaFilter page reveals source code that’s quite comprehensible, even human-readable, without going anywhere near any CSS or JavaScript.

MetaFilter in Lynx: perfectly usable browing experience
As late as the early 2000s I’d occasionally use Lynx for serious browsing, but any time I’ve used it since it’s been by necessity.

Secondly, it’s pretty simple to try browsing MetaFilter without CSS or JavaScript enabled! I tried in two ways: first, by using Lynx, a text-based browser that’s never supported either of those technologies. I also tried by using Firefox but with them disabled (honestly, I slightly miss when the Web used to look like this):

MetaFilter in Firefox (with CSS and JS disabled)
It only took me three clicks to disable stylesheets and JavaScript in my copy of Firefox… but I’ll be the first to admit that I don’t keep my browser configured like “normal people” probably do.

And thirdly: the error code being returned by the simulated WorldWideWeb browser is a HTTP code 500. Even if you don’t know your HTTP codes (I mean, what kind of weirdo would take the time to memorise them all anyway <ahem>), it’s worth learning this: the first digit of a HTTP response code tells you what happened:

  • 1xx means “everything’s fine, keep going”;
  • 2xx means “everything’s fine and we’re done”;
  • 3xx means “try over there”;
  • 4xx means “you did something wrong” (the infamous 404, for example, means you asked for a page that doesn’t exist);
  • 5xx means “the server did something wrong”.

Simple! The fact that the error code begins with a 5 strongly implies that the problem isn’t in the (client-side) reimplementation of WorldWideWeb: if this had been a CSS/JS problem, I’d expect to see a blank page, scrambled content, “filler” content, or incomplete content.

So I found myself wondering what the real problem was. This is, of course, where my geek flag becomes most-visible: what we’re talking about, let’s not forget, is a fringe problem in an incomplete simulation of an ancient computer program that nobody uses. Odds are incredibly good that nobody on Earth cares about this except, right now, for me.

Dan's proposed "Geek Flag"
I searched for a “Geek Flag” and didn’t like anything I saw, so I came up with this one based on… well, if you recognise what it’s based on, good for you, you’re certainly allowed to fly it. If not… well, you can too: there’s no geek-gatekeeping here.

Luckily, I spotted Jeremy’s note that the source code for the WorldWideWeb simulator was now available, so I downloaded a copy to take a look. Here’s what’s happening:

  1. The (simulated) copy of WorldWideWeb is asked to open a document by reference, e.g. “https://www.metafilter.com/”.
  2. To work around same-origin policy restrictions, the request is sent to an API which acts as a proxy server.
  3. The API makes a request using the Node package “request” with this line of code: request(url, (error, response, body) => { ... }).  When the first parameter to request is a (string) URL, the module uses its default settings for all of the other options, which means that it doesn’t set the User-Agent header (an optional part of a Web request where the computer making the request identifies the software that’s asking).
  4. MetaFilter, for some reason, blocks requests whose User-Agent isn’t set. This is weird! And nonstandard: while web browsers should – in RFC2119 terms – set their User-Agent: header, web servers shouldn’t require that they do so. MetaFilter returns a 403 and a message to say “Forbidden”; usually a message you only see if you’re trying to access a resource that requires session authentication and you haven’t logged-in yet.
  5. The API is programmed to handle response codes 200 (okay!) and 404 (not found), but if it gets anything else back it’s supposed to throw a 400 (bad request). Except there’s a bug: when trying to throw a 400, it requires that an error message has been set by the request module, and if one hasn’t… it instead throws a 500 with the message “Internal Server Fangle” and no clue as to what actually went wrong. So MetaFilter’s 403 gets translated by the proxy into a 400, which it then fails to render because a 403 doesn’t actually produce an error message, and so it gets translated again into the 500 that you eventually see. What a knock-on effect!
Illustration showing conversation between simulated WorldWideWeb and MetaFilter via an API that ultimately sends requests without a User-Agent, gets a 403 in response, and can't handle the 403 and so returns a confusing 500.
If you’re having difficulty visualising the process, this diagram might help you to continue your struggle with that visualisation.

The fix is simple: just change the line:

request(url, (error, response, body) => { ... })

to:

request({ url: url, headers: { 'User-Agent': 'WorldWideWeb' } }, (error, response, body) => { ... })

This then sets a User-Agent header and makes servers that require one, such as MetaFilter, respond appropriately. I don’t know whether WorldWideWeb originally set a User-Agent header (CERN’s source file archive seems to be missing the relevant C sources so I can’t check) but I suspect that it did, so this change actually improves the fidelity of the emulation as a bonus. A better fix would also add support for and appropriate handling of other HTTP response codes, but that’s a story for another day, I guess.

I know the hackathon’s over, but I wonder if they’re taking pull requests…


WorldWideWeb, 30 years on

This month, a collection of some of my favourite geeks got invited to CERN in Geneva to participate in a week-long hackathon with the aim of reimplementing WorldWideWeb – the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed to produce.

DanQ.me as displayed by the reimagined WorldWideWeb browser circa 1990
With the exception of a few character entity quirks, this site remains perfectly usable in the simulated WorldWideWeb browser. Clearly I wasn’t the only person to try this vanity-check…

This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that didn’t require a NeXT computer to run it, and so a leap forward in mainstream appeal. In some ways, you might expect reimplementing WorldWideWeb to be easier, because its functionality is more-similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM and so there are distinct processing differences that must be considered to get a truly authentic experience.

Geeks hacking on WorldWideWeb reborn
It’s just like any other hackathon, if you ignore the enormous particle collider underneath it.

Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).

WorldWideWeb screenshot by Sir Tim Berners-Lee
When Sir Tim took this screenshot, he could never have predicted the way the Web would change, technically, over the next 25-30 years. But I’m almost more-interested in how it’s stayed the same.

In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible (sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that adherence to the same solid standards gives us a chance of writing pages today that look just as good in 30 years to come. Compare that to a proprietary technology like Flash whose heyday 15 years ago is overshadowed by its imminent death (not to mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).

The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.

Update 24 February 2019: After I submitted news of the browser to MetaFilter, I (and others) spotted a bug. So I came up with a fix…


You probably don’t need a single-page application

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The meteoric rise of front-end frameworks like React, Angular, Vue.js, Elm, etc. has made single-page applications ubiquitous on the web. For many developers, these have become part of their ‘default’ toolset. When they start a new project, they grab the tools they know already: a REST API on the backend, and a React/Angular/Vue/Elm frontend.

Is there something wrong with these tools? Absolutely not. In fact, I love working with them. However, I would only choose this architecture when an actual requirement is pushing me in that direction. If there are no specific reasons to build a single-page application, I will go with a traditional server-rendered architecture every day of the week. It is simpler and allows you to move faster.

There’s been an increasing trend towards delivering web applications as SPAs backed by an API. I can see the attraction: disposing of the browser’s navigation cycle lets you develop that coveted “app-like” interaction experience, pushing only data around lets you implement multiple clients backed by the same single middleware, and it results in a development workflow that fits tightly with many of the hippest frameworks (go jamstack, backendless, Node-backed, or whatever). I love REST and all, but I feel that it works best when it’s used to deliver multiformat results (whether by content negotiation or whatever): web pages for the humans, JSON or whatever for the computers.

For an increasing number of developers, SPAs are a golden hammer. Let’s fix that.

Post-it Note Affirmations and the Amazon Dash

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The amazon dash is a pinnacle of modern web design. It’s one of the most intrusive, complex, and resource-dependent devices we’ve introduced into our homes, yet it appears as a simple oval with a single button for a single use. The use is absurdly narrow: the button will have a picture of Tide detergent, and when you press the button, Tide detergent is sent to your door.

Barely a week goes by without my discovering some horrifically over-engineered “solution” on the Internet. Amazon’s Dash buttons are terrible: disposable (plastic) single-purpose computers that could so easily have been made into something “more” – more-versatile, more-open, more-configurable, more-flexible. Indeed: people have been doing exactly that kind of thing! But the vanilla Dash button remains little more than a way of selling you convenience (and not much convenience, if we’re honest) in exchange for more and more of your feeling of digital freedom. Yet another example of what replaced the Web we lost…

By hiding the technical processes, and simplifying the onboarding and engagement of their services, Amazon can continually reinforce your depression for a profit— and you can get name-brand laundry detergent faster.

Also, can I just take a moment to point out how awesome Zach’s website is? Not only is it a perfect example of how fun and weird the Internet can be, with its mixture of fascinating and curious content, it’s also available via dat:// for those of you who’ve got some love for the datbaseiverse.

Shouldn’t We All Have Seamless Micropayments By Now?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Shouldn’t We All Have Seamless Micropayments By Now? (WIRED)

The web’s founders fully expected some form of digital payment to be integral to its functioning. But nearly three decades later, we’re still waiting.

Back in the 1990s, when Tim Berners-Lee and his team were creating the infrastructure of the World Wide Web, they made a list of the error codes that would pop up when something went wrong. You’ve surely encountered many of them: “404 Not Found,” which pops up if you click on a dead link; “401 Unauthorized” when you hit a page that needs a password; and so on.

Here’s one you probably haven’t seen—and its absence from your life speaks to why the promise of the early web seems increasingly out of reach: “402 Payment Required.”

That’s right: The web’s founders fully expected some form of digital payment to be integral to its functioning, just as integral as links, web pages, and passwords. After all, without a way to quickly and smoothly exchange money, how would a new economy be able to flourish online? Of course there ought to be a way to integrate digital cash into browsing and other activities. Of course.

Yet after almost three decades, that 402 error code is still “reserved for future use.” So I still have to ask: Where are my digital micropayments? Where are those frictionless, integrated ways of exchanging money online—cryptographically protected to allow commerce but not surveillance?

In response to this article being discussed on MetaFilter, I wrote:

The Web Payments Working Group published a specification for a standardised mechanism for the collection of card payment details online a couple of years ago. It’s not quite the same thing because it’s handled at the page/application level rather than at the HTTP(S) level, but it goes a long way towards solving a lot of the problems with our existing approach to payment processing online.

It’s already seeing adoption in browsers, but merchants and payment processors are unlikely to start rolling it out until later because (a) they want critical mass and (b) they’re wary of change. But within a few years, you’ll probably see it for the first time, and you might not even notice.

The idea is that instead of asking you to fill out an (arbitrary) form, a web page will ask your browser to collect payment details from you in a standardised format. That might mean entering your card details if that’s how you prefer to work (but even if you choose to do this, the form you fill in will look the same every time), or it might instead allow you to use a payment tool built in to your browser, operating system, or password safe to do it for you. I know that browsers and password safes will offer to try to do this today, but standardising the format means that they’ll always be able to achieve it.
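(The specification in question is, I believe, the Payment Request API. A minimal sketch of how a page might invoke it looks something like the following; the payment method, labels and amounts are made-up placeholders, and in practice you’d trigger it from a button’s click handler:)

    // Sketch only: placeholder payment method, labels and amounts.
    const request = new PaymentRequest(
      [{ supportedMethods: 'basic-card' }],                      // what the site accepts
      { total: { label: 'Order total',
                 amount: { currency: 'GBP', value: '4.99' } } }  // what's being charged
    );

    request.show()                       // the browser presents its own, familiar payment UI
      .then(response => {
        // the page then passes the collected details to its payment processor…
        return response.complete('success');
      })
      .catch(() => { /* the user dismissed the sheet, or the API isn't supported */ });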

Once this technology’s in place, there’s nothing to stop HTTP 402’s implementation being completed: all the infrastructure will exist.

The thing about the future is that when it arrives, you don’t even notice. It’s never jetpacks and flying cars: it’s a series of iterative changes, each one predictable after the completion of the last but the entire ensemble seeming innovative and surprising when taken as a whole.

Let’s bring Fan Sites and webrings back!

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Let’s bring Fan Sites and webrings back! – bryanlrobinson.com (bryanlrobinson.com)

In the days before the web was mainstream, it was a place of creation. First for education, then for every random idea that any creator had!

As the web transitioned from a network of educational institutions to the consumer force it is today, the early adopters were technologists… AKA geeks!

Promo image of various Fan Sites

A hallmark of geek culture is fandom – a deep knowledge of whatever topic interests them. This could be about a book, TV show, movie or band. With this passion comes a desire to share it with the world. Before the internet, there was no clear path. After the web started gaining traction, it was the biggest and easiest megaphone you could want.

It wasn’t always easy to be found, though. There was no search algorithm. Google was not yet synonymous with search. To be found, you needed to be listed on a site that aggregated other sites about your topic.

There was always a certain joy to a well-kept webring, back in the day. I’d love to see a return to this kind of “Indieweb dream”, but I don’t think that just wishing for it, nor even telling people to go out and do it, goes far enough on its own. Hopefully Bryan’s post will help nudge a few people in the right direction, though.

Browser diversity starts with us

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Jeffrey Zeldman (zeldman.com)

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Diversity is as good for the web as it is for society. And it starts with us.

Yet more fallout from the Microsoft announcement that Edge will switch to Chromium, which I discussed earlier. This one’s pretty inspirational, and gives a good reminder about what our responsibilities are to the Web, as its developers.

Goodbye, EdgeHTML

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

This may sound melodramatic, but it’s not. The “browser engines” — Chromium from Google and Gecko Quantum from Mozilla — are “inside baseball” pieces of software that actually determine a great deal of what each of us can do online. They determine core capabilities such as which content we as consumers can see, how secure we are when we watch content, and how much control we have over what websites and services can do to us. Microsoft’s decision gives Google more ability to single-handedly decide what possibilities are available to each one of us.

From a business point of view Microsoft’s decision may well make sense. Google is so close to almost complete control of the infrastructure of our online lives that it may not be profitable to continue to fight this. The interests of Microsoft’s shareholders may well be served by giving up on the freedom and choice that the internet once offered us. Google is a fierce competitor with highly talented employees and a monopolistic hold on unique assets. Google’s dominance across search, advertising, smartphones, and data capture creates a vastly tilted playing field that works against the rest of us.

From a social, civic and individual empowerment perspective ceding control of fundamental online infrastructure to a single company is terrible. This is why Mozilla exists. We compete with Google not because it’s a good business opportunity. We compete with Google because the health of the internet and online life depend on competition and choice. They depend on consumers being able to decide we want something better and to take action.

Will Microsoft’s decision make it harder for Firefox to prosper? It could. Making Google more powerful is risky on many fronts. And a big part of the answer depends on what the web developers and businesses who create services and websites do. If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was released. And it could happen again.

If you care about what’s happening with online life today, take another look at Firefox. It’s radically better than it was 18 months ago — Firefox once again holds its own when it comes to speed and performance. Try Firefox as your default browser for a week and then decide. Making Firefox stronger won’t solve all the problems of online life — browsers are only one part of the equation. But if you find Firefox is a good product for you, then your use makes Firefox stronger. Your use helps web developers and businesses think beyond Chrome. And this helps Firefox and Mozilla make overall life on the internet better — more choice, more security options, more competition.

Scathing but well-deserved dig at Microsoft by Mozilla, following on from the Edge-switch-to-Chromium I’ve been going on about. Chris is right: more people should try Firefox (it’s been my general-purpose browser on desktop and mobile ever since Opera threw in the towel and joined the Chromium hivemind in 2013, and on-and-off plenty before then) – not just because it’s a great browser (and it is!) but also now because it’s important for the diversity and health of the Web.

(Reprinted in full under a creative commons license.)

While we Blink, we loose [sic] the Web

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

We used to have much more diversity in terms of browser engines years ago than we do today. This is easy to understand as the Web in 2018 is far more complex than it was in the early noughties. It is very costly to develop and maintain a Web engine and few companies have the necessary talent and cash to do it. Microsoft is one of those companies but the fact that it might be throwing in the towel on its engine signals a bad development for all of us.

Further evaluation of the dangers of the disappearing diversity on the Web, following in the theme of my thoughts the other day about Microsoft’s adoption of Chromium instead of EdgeHTML in its browser.

Andre raises a real point: how will we fight for a private and decentralised Web when it becomes “the Google Web”?

Why You Should Never, Ever Use Quora

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Yesterday, Quora announced that 100 million user accounts were compromised, including private activity like downvotes and direct messages, by a “malicious third party.”

Data breaches are a frustrating part of the lifecycle of every online service — as they grow in popularity, they become a bigger and bigger target. Nearly every major online service has had a security breach: Facebook, Google, Twitter, Yahoo, Tumblr, Uber, Evernote, eBay, Adobe, Target, and Sony all extensively leaked user data in the last few years.

Security breaches like these are a strong argument for using a password manager, but not a compelling reason to avoid a service you love, unless you plan to quit the internet entirely.

But this does seem like a good time to remind you of all the other reasons why you should never, ever use Quora.

Short summary of why you shouldn’t use Quora (even ignoring the recent security scare), for those who can’t be bothered clicking-through:

  • They claim to want to share knowledge, but they hoard and restrict access to knowledge
  • They’re actively hostile to the free exchange of data, both technically and politically
  • They directly oppose the archiving and backup of the knowledge they hoard
  • They won’t last (even remotely) forever

Just don’t use Quora.

Bodleian Advent Calendar

Hot on the tail of Pong, I wanted to share another mini-project I’ve developed for the Bodleian: this year’s digital advent calendar:

Bodleian 2018 digital advent calendar
If you look closely, you’ll find I’ve shown you a sneak-peek at some of what’s behind tomorrow’s door. Shh. Don’t tell our social media officer.

As each door is opened, a different part of a (distinctly-Bodleian/Oxford) winter scene unfolds, complete with an array of fascinating characters connected to the history, tradition, mythology and literature of the area. It’s pretty cool, and you should give it a go.

If you want to make one of your own – for next year, presumably, unless you’ve an inclination to count-down in this fashion to something else that you’re celebrating 25 days hence – I’ve shared a version of the code that you can adapt for yourself.

Sample advent calendar
The open-source version doesn’t include the beautiful picture that the Bodleian’s does, so you’ll have to supply your own.

Features that make this implementation a good starting point if you want to make your own digital advent calendar include:

  • Secure: your server’s clock dictates which doors are eligible to be opened, and only content legitimately visible on a given date can be obtained (no path-traversal, URL-guessing, or traffic inspection holes).
  • Responsive: calendar adapts all the way down to tiny mobiles and all the way up to 4K fullscreen along with optimised images for key resolutions.
  • Friendly: accepts clicks and touches, uses cookies to remember the current state so you don’t have to re-open doors you opened yesterday (unless you forgot to open one yesterday), “just works”.
  • Debuggable: a password-protected debug mode makes it easy for you to test, even on a production server, without exposing the secret messages behind each door.
  • Expandable: there’s lots of scope for the future; for example, a progressive web app version that you can keep “on you” and which notifies you when a new door’s ready to be opened was one of the things I’d hoped to add in time for this year but didn’t quite get around to.

I’ve no idea if it’s any use to anybody, but it’s available on GitHub if you want it.
