The problem with single page apps

Single-page apps (or SPAs as they’re sometimes called) serve all of the code for an entire multi-UI app from a single index.html file.

They use JavaScript to handle URL routing with real URLs. For this to work (there’s a rough code sketch after the list below), you need to:

  1. Configure the server to point all paths on a domain back to the root index.html file. For example, todolist.com and todolist.com/lists should both point to the same file.
  2. Suppress the default behavior when someone clicks a link that points to another page in the app.
  3. Use more JavaScript—history.pushState()—to update the URL without triggering a page reload.
  4. Match the URL against a map of routes, and serve the right content based on it.
  5. If your URL has variable information in it (like a todolist ID, for example), parse that data out of the URL.
  6. Detect when someone clicks the browser’s back button/forward button, and update the URL and UI.
  7. Update the title element on the page.
  8. Use even more JavaScript to dynamically focus the content area when the content changes (for screen-reader users).

(Shoutout to Ashley Bischoff for those last two!)
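
To make steps 2–6 concrete, here’s a minimal sketch of what that client-side routing code tends to look like. It’s an illustration rather than anything from the article, and the route map and render functions (renderHome(), renderList(), render404()) are hypothetical:

// A hypothetical map of URL patterns to view-rendering functions (step 4).
const routes = [
  { pattern: /^\/$/,             render: () => renderHome() },
  { pattern: /^\/lists\/(\d+)$/, render: (id) => renderList(id) }, // step 5: parse the ID out of the URL
];

// Steps 4-5: match the current path against the route map and render.
// (Steps 7-8 - updating the title and moving focus - would live in here too.)
function route(path) {
  for (const { pattern, render } of routes) {
    const match = path.match(pattern);
    if (match) { render(...match.slice(1)); return; }
  }
  render404();
}

// Step 3: update the URL without triggering a page reload.
function navigate(path) {
  history.pushState({}, '', path);
  route(path);
}

// Step 2: suppress the default behaviour for links within the app.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="/"]');
  if (!link) return;
  event.preventDefault();
  navigate(link.getAttribute('href'));
});

// Step 6: react to the browser's back/forward buttons.
window.addEventListener('popstate', () => route(location.pathname));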

You end up recreating with JavaScript a lot of the features the browser gives you out-of-the-box.

This becomes more code to maintain, more complexity to manage, and more things to break. It makes the whole app more fragile and bug-prone than it has to be.

I’m going to share some alternatives that I prefer.

Like – it seems – Chris Ferdinandi, I’ve got nothing against Single Page Applications in their place.

My biggest concern with SPAs is that they’re routinely seen as an inevitable progression of web development: that is, that an increasing number of web developers have been brainwashed into thinking that they’re intrinsically superior to traditional multi-page websites. As Adam Silver observed the other year, using your heavyweight Javascript framework to Ajaxify your page loads does make the application feel faster… but only because the download and processing time of the heavyweight Javascript framework made it feel slow in the first place! The net result: web bloat, penalising of mobile users, and brittle applications with many failure points.

Whenever I see a new front-end framework sing the praises of its routing engine I wonder how we got to this point. After all: the Web’s had a routing engine since 1990, and most efforts to reinvent it invariably make it worse: less-accessible, less-archivable, less-sharable, less-discoverable, less-reliable, or several of these.

Going Critical

If you’ve spent any time thinking about complex systems, you surely understand the importance of networks.
Networks rule our world. From the chemical reaction pathways inside a cell, to the web of relationships in an ecosystem, to the trade and political networks that shape the course of history.
Or consider this very post you’re reading. You probably found it on a social network, downloaded it from a computer network, and are currently deciphering it with your neural network.
But as much as I’ve thought about networks over the years, I didn’t appreciate (until very recently) the importance of simple diffusion.
This is our topic for today: the way things move and spread, somewhat chaotically, across a network. Some examples to whet the appetite:
  • Infectious diseases jumping from host to host within a population
  • Memes spreading across a follower graph on social media
  • A wildfire breaking out across a landscape
  • Ideas and practices diffusing through a culture
  • Neutrons cascading through a hunk of enriched uranium
A quick note about form.
Unlike all my previous work, this essay is interactive. There will be sliders to pull, buttons to push, and things that dance around on the screen. I’m pretty excited about this, and I hope you are too.
So let’s get to it. Our first order of business is to develop a visual vocabulary for diffusion across networks.

A simple model

I’m sure you all know the basics of a network, i.e., nodes + edges.
To study diffusion, the only thing we need to add is labeling certain nodes as active. Or, as the epidemiologists like to say, infected:
This activation or infection is what will be diffusing across the network. It spreads from node to node according to rules we’ll develop below.
Now, real-world networks are typically far bigger than this simple 7-node network. They’re also far messier. But in order to simplify — we’re building a toy model here — we’re going to look at grid or lattice networks throughout this post.
(What a grid lacks in realism, it makes up for in being easy to draw ;)
Except where otherwise specified, the nodes in our grid will have 4 neighbors, like so:
And we should imagine that these grids extend out infinitely in all directions. In other words, we’re not interested in behavior that happens only at the edges of the network, or as a result of small populations.
Given that grid networks are so regular, we can simplify by drawing them as pixel grids. These two images represent the same network, for example:
Alright, let’s get interactive.

Fabulous (interactive! – click through for the full thing to see for yourself) exploration of network interactions with applications for understanding epidemics, memes, science, fashion, and much more. Plus Kevin’s made the whole thing CC0 so everybody can share and make use of his work. Treat as a longread with some opportunities to play as you go along.
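
If you’d like a feel for the underlying model without the essay’s sliders, here’s a rough, non-interactive Javascript approximation of the simple grid model (my sketch, not Kevin’s code): each step, every active node tries to activate each of its four neighbours with some transmission probability.

// Toy diffusion on an N x N grid (the essay's grids are conceptually
// infinite; a bounded grid is close enough for a quick play).
const N = 20;  // grid size
const p = 0.3; // per-step, per-neighbour transmission probability
let active = Array.from({ length: N }, () => Array(N).fill(false));
active[N >> 1][N >> 1] = true; // seed one "infected" node in the middle

function step() {
  const next = active.map((row) => row.slice());
  for (let y = 0; y < N; y++) {
    for (let x = 0; x < N; x++) {
      if (!active[y][x]) continue;
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < N && ny >= 0 && ny < N && Math.random() < p) {
          next[ny][nx] = true;
        }
      }
    }
  }
  active = next;
}

// Draw the grid as "pixels", much like the essay does.
function draw() {
  console.log(active.map((row) => row.map((a) => (a ? '█' : '·')).join('')).join('\n'));
}

for (let i = 0; i < 10; i++) step();
draw();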

JavaScript and the mobile-only user

It’s 2019. No one disables JavaScript.

Actually (cue Adam Conover!)… some people do. They represent about a tenth of a percent (0.1%) of the people who don’t get your JavaScript file, though, and they’ve chosen to browse the web that way, so let’s ignore them for a second.

I see this argument—that no one disables JS in 2019—as an argument for not bothering to care or worry about progressive enhancement. But it’s wrong!

There are plenty of other reasons why people don’t get your JavaScript.

  • Your CDN fails
  • An ad blocker or firewall got a little overly aggressive
  • A JS error (either in your code or something third-party) stops all of the JS from running
  • The file times out because it’s too big or too slow to parse

Today, I want to focus on that last one.

So very much this. Web bloat is becoming a huge issue (incidentally, I was pleased to see that DanQ.me’s homepage Web Bloat Score is in the region of a nice, clean 0.1, but I’m confident that there’s still plenty I could do to improve it); it’s easy to see how developers on their powerful desktops and laptops and with their WiFi-connected high-end smartphones might overlook people on older, less-powerful devices and slower, lower-bandwidth connections.

I’m not saying that Javascript is bad: it’s not! I’m saying that where functionality exists in simpler, more-lightweight technologies (like good old-fashioned links and regular <input> elements, both of which are routinely reimplemented in the front-end), then those technologies should be used in the first instance. If you want to build on top of that with Javascript, that’s great! But starting from the ground floor when building for the web is the surest way to support the widest diversity of potential users (and it makes it easier to achieve your accessibility goals, too!)
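
By way of illustration, here’s a hedged sketch of what that “building on top” can look like (the form ID and endpoint are made up): a perfectly ordinary <form> that works fine without Javascript, upgraded with fetch() when Javascript happens to arrive.

// The markup works on its own via a normal POST:
// <form id="signup" action="/subscribe" method="post"> ... </form>
// This script merely upgrades it when (and if) it runs.
const form = document.querySelector('#signup'); // hypothetical form ID
if (form && window.fetch) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault(); // only intercept once we know we can handle it
    try {
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      if (!response.ok) throw new Error(response.statusText);
      form.replaceWith(document.createTextNode('Thanks for subscribing!'));
    } catch (err) {
      form.submit(); // fall back to the regular, Javascript-free submission
    }
  });
}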

Why do people choose frameworks over vanilla JS?

This week on Twitter, Maxime Euzière asked why people choose large frameworks over vanilla JS. There are quite a few reasons. Some of them are really valid. Many of them aren’t. Here are the ones I see most often (with commentary). Vanilla JS is harder. No, it’s often not. Modern vanilla JS has taken many…

Like many people who were already developing for the Web when Javascript first reared its (ugly) head, I would later be delighted when libraries like Prototype and later jQuery would arrive and start doing the “heavy lifting” for me. Not having to do DOM parsing or (especially) Ajax the “long way” (which was particularly long given the workarounds that needed to be done for cross-compatibility) was a huge boon and made it possible for me to write applications that I wouldn’t otherwise have been able to.

But in recent years, I’ve really been enjoying “vanilla” JS. As a language, JavaScript has really grown up lately, and with modern (and evergreen) browsers dominating the landscape, everybody benefits from these new features relatively soon after they become available. Of course, it’s still important to see any JavaScript as a progressive enhancement that not everybody will experience, but it’s still true, now, that the traditional barriers to writing excellent code in the language are rapidly evaporating.

I no longer add jQuery to a project as a matter of course (and in fact I think it’s been over a year since I deliberately added it to a new project), and that’s great.
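
For the curious, the sort of thing that used to need jQuery now looks like this in modern vanilla JS (my own illustrative pairings; handler() and render() are hypothetical functions):

// Then: $('.menu a').addClass('highlight');
document.querySelectorAll('.menu a').forEach((link) => link.classList.add('highlight'));

// Then: $(document).on('click', '.close', handler);
document.addEventListener('click', (event) => {
  if (event.target.closest('.close')) handler(event);
});

// Then: $.getJSON('/api/todos', render);
fetch('/api/todos')
  .then((response) => response.json())
  .then(render);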

Yet Another JavaScript Framework

It is impossible to answer all of these questions simply. They can, however, be framed by the ideological project of the web itself. The web was built to be open, both technologically as a decentralized network, and philosophically as a democratizing medium. These questions are tricky because the web belongs to no one, yet was built for everyone. Maintaining that spirit takes a lot of work, and requires sometimes slow, but always deliberate decisions about the trajectory of web technologies. We should be careful to consider the mountains of legacy code and libraries that will likely remain on the web for its entire existence. Not just because they are often built with the best of intentions, but because many have been woven into the fabric of the web. If we pull on any one thread too hard, we risk unraveling the whole thing.

A great story about how Firefox nearly broke tens of thousands of websites by following standards, and then didn’t. tl;dr: Javascript has a messy history.

How many people are missing out on JavaScript enhancement?

A few weeks back, we were chatting about the architecture of the Individual Electoral Registration web service. We started discussing the pros and cons of an approach that would provide a significantly different interaction for any people not running JavaScript.

“What proportion of people is that?” an inquisitive mind asked.

Silence.

We didn’t really have any idea how many people are experiencing UK government web services without the enhancement of JavaScript. That’s a bad thing for a team that is evangelical about data driven design, so I thought we should find out.

The answer is:

1.1% of people aren’t getting Javascript enhancements (1 in 93)

This article by the GDS is six years old now, but its fundamental point is still as valid as ever: a small proportion (probably in the region of 1%) of your users won’t experience some or all of the whizzy Javascript stuff on your website, and it’s not because they’re a power user who disables Javascript.

There are so many reasons a user won’t run your Javascript, including:

  • They’re using a browser that doesn’t support Javascript (or doesn’t support the version you’re using)
  • They, or somebody they share their device with, has consciously turned off Javascript, either wholesale or selectively, in order to (for example) save bandwidth, improve speed, reinforce security, or improve compatibility with their accessibility technologies
  • They’re viewing a locally-saved, backed-up, or archived version of your page (possibly in the far future long after your site is gone)
  • Their virus scanner mis-classified your Javascript as potentially malicious
  • One or more of your Javascript files contains a bug which, on their environment, stops execution
  • One or more of your Javascript files failed to be delivered, for example owing to routing errors, CDN downtime, censorship, cryptographic handshake failures, shaky connections, cross-domain issues, stale caches…
  • On their device, your Javascript takes too long to execute or consumes too many resources and is stopped by the browser

Fundamentally, you can’t depend on Javascript and so you shouldn’t depend on it being there, 100% of the time, when it’s possible not to. Luckily, the Web already gives us all the tools we need to develop the vast, vast majority of web content in a way that doesn’t depend on Javascript. Back in the 1990s we just called it “web development”, but nowadays Javascript (and other optional/under-continuous-development web technologies like your favourite so-very-2019 CSS hack) is so ubiquitous that we give it the special name “progressive enhancement” and make a whole practice out of it.

The Web was designed for forwards- and backwards-compatibility. When you break that, you betray your users and you make work for yourself.

(by the way: I know I plugged the unpoly framework already, the other day, but you should really give it a look if you’re just learning how to pull off progressive enhancement)

Minimal Google Analytics Snippet

<script>
(function(a,b,c){var d=a.history,e=document,f=navigator||{},g=localStorage,
h=encodeURIComponent,i=d.pushState,k=function(){return Math.random().toString(36)},
l=function(){return g.cid||(g.cid=k()),g.cid},m=function(r){var s=[];for(var t in r)
r.hasOwnProperty(t)&&void 0!==r[t]&&s.push(h(t)+"="+h(r[t]));return s.join("&")},
n=function(r,s,t,u,v,w,x){var z="https://www.google-analytics.com/collect",
A=m({v:"1",ds:"web",aip:c.anonymizeIp?1:void 0,tid:b,cid:l(),t:r||"pageview",
sd:c.colorDepth&&screen.colorDepth?screen.colorDepth+"-bits":void 0,dr:e.referrer||
void 0,dt:e.title,dl:e.location.origin+e.location.pathname+e.location.search,ul:c.language?
(f.language||"").toLowerCase():void 0,de:c.characterSet?e.characterSet:void 0,
sr:c.screenSize?(a.screen||{}).width+"x"+(a.screen||{}).height:void 0,vp:c.screenSize&&
a.visualViewport?(a.visualViewport||{}).width+"x"+(a.visualViewport||{}).height:void 0,
ec:s||void 0,ea:t||void 0,el:u||void 0,ev:v||void 0,exd:w||void 0,exf:"undefined"!=typeof x&&
!1==!!x?0:void 0});if(f.sendBeacon)f.sendBeacon(z,A);else{var y=new XMLHttpRequest;
y.open("POST",z,!0),y.send(A)}};d.pushState=function(r){return"function"==typeof d.onpushstate&&
d.onpushstate({state:r}),setTimeout(n,c.delay||10),i.apply(d,arguments)},n(),
a.ma={trackEvent:function o(r,s,t,u){return n("event",r,s,t,u)},
trackException:function q(r,s){return n("exception",null,null,null,null,r,s)}}})
(window,"XX-XXXXXXXXX-X",{anonymizeIp:true,colorDepth:true,characterSet:true,screenSize:true,language:true});
</script>

This is cute: a Google Analytics code snippet that results in a payload about a fiftieth of the size of the one provided by Google but still provides most of the important features.
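
Once it’s loaded (with the placeholder tracking ID swapped for your own), the snippet exposes a tiny API on window.ma; reading the code, usage looks something like this:

// Track a custom event: (category, action, label, value).
ma.trackEvent('video', 'play', 'homepage-hero', 1);

// Track a handled exception: (description, isFatal).
ma.trackException('payment widget failed to load', false);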

Bassoontracker – Release 0.3.0 is out!

Steffest Digitale Pulptuur (steffest.com)

BassoonTracker

During the past months I have been tinkering along on Bassoontracker – My browser based Retro Music Tool.
Today, it’s ready for its next big release: Version 0.3.0 is out!

This is just about the coolest thing in the world. I played with Scream Tracker and a handful of other trackers in the 1990s and I loved the scene, even though I was never talented enough to participate as more than an observer and fan. This is a reimagining of a classic tracker… written entirely in Javascript! So cool!

CSS-driven console graphics

If you’re reading this post via my blog and using a desktop computer, try opening your browser’s debug console (don’t worry; I’ll wait). If you don’t know how, here are instructions for Firefox and instructions for Chrome. Other browsers may vary. You ought to see something like this in your debugger:

Debug console on DanQ.me showing Dan's head and a speech bubble.
I’m in your console, eating your commands!

What sorcery is this?

The debug console is designed to be used by web developers so that they can write Javascript code right in their browser as well as to investigate any problems with the code run by a web page. The web page itself can also output to the console, which is usually used for what I call “hello-based debugging”: printing out messages throughout a process so that the flow and progress can be monitored by the developer without having to do “proper” debugging. And it gets used by some web pages to deliver secret messages to any of the site users who open their debugger.

Facebook console messaging advising against the use of the console.
Facebook writes a “stop” message to the console, advising against using it unless you know what you’re doing, in an attempt to stop people from making themselves victims of console-based social engineering attacks.

Principally, though, the console is designed for textual content and nothing else. That said, both Firefox and Chrome’s consoles permit the use of CSS to style blocks of debug output by using the %c escape sequence. For example, I could style some of a message with italic text:

>> console.log('I have some %citalic %ctext', 'font-style: italic;', '');
   I have some italic text

Using CSS directives like background, then, it’s easy to see how one could embed an image into the console, and that’s been done before. Instead, though, I wanted to use the lessons I’d learned developing PicInHTML 8¾ years ago to render a colour picture to the console using text and CSS (only). First, I created my template image – a hackergotchi of me and an accompanying speech bubble, shrunk to a tiny size and posterised to reduce the number of colours used and saved as a PNG.

Hackergotchi of Dan with a speech bubble, "squashed".
The image appears “squashed” to compensate for console monospace letters not being “square”.

Next, I wrote a quick Ruby program, consolepic.rb, to do the hard work. It analyses each pixel of the image and for each distinct colour assigns to a variable the CSS code used to set the background colour to that colour. It looks for “strings” of like pixels and combines them into one, and then outputs the Javascript necessary to write out all of the above. Finally, I made a few hand-tweaks to insert the text into the speech bubble.
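
I won’t reproduce consolepic.rb here, but the Javascript it generates presumably looks something like this (an illustrative reconstruction, not the real output): one CSS string per distinct colour, and one %c-styled run of spaces per “string” of like pixels:

// Illustrative reconstruction: each colour's CSS lives in a short variable,
// and each run of like-coloured "pixels" becomes a %c-styled run of spaces.
const a = 'background: #1a1a1a;'; // hypothetical palette
const b = 'background: #e0b89a;';
const c = 'background: #ffffff;';
console.log(
  '%c    %c  %c      \n' +
  '%c  %c      %c    ',
  a, b, c,
  b, a, c
);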

The resulting output weighs in at 31.6kB – about a quarter of the size of the custom Javascript on the frontend of my site, and so quite a bit larger than I’d have liked and significantly less-efficient than the image itself, even base64-encoded for embedding directly into the code. But that really wasn’t the point of the exercise, was it? (I’m pretty sure there’s significant room for improvement from a performance perspective…)

Scatmania.org in 2012
I’ll be the first to admit it’s not as cool as the “pop-up Dan” in the corner of my 2012 design. You might enjoy my blog post about my 20 years of blogging or the one about how “pop-up Dan” worked.

What it achieved was an interesting experiment into what can be done with Javascript, CSS, the browser console, and a little imagination. An experiment that can live here on my site, for anybody who looks in the direction of their debugger, for the foreseeable future (or until I get bored of it). Anybody with any more-exotic/silly ideas about what this technique could be used for is welcome to let me know!

Update: 17 April 2019 – fun though this was, it wasn’t worth continuing to deliver an additional 25% Javascript payload to every visitor just for this, so I’ve stopped it for now. You can still read the source code (and even manually run it in the console) if you like. And I have other ideas for fun things to do with the console, so keep an eye out for that…

Pong

I’ve recently been reimplementing retro arcade classic Pong to show off during a celebration of World Digital Preservation Day 2018 yesterday at the Bodleian Libraries. Here’s how that went down.

Frak on the BBC Micro, amongst the rest of a pile of computing nostalgia
The Bodleian has a specific remit for digital archiving… but sometimes they just like collecting stuff, too, I’m sure.

The team responsible for digital archiving had plans to spend World Digital Preservation Day running a stand in Blackwell Hall for some time before I got involved. They’d asked my department about using the Heritage Window – the Bodleian’s 15-screen video wall – to show a carousel of slides with relevant content over the course of the day. Or, they added, half-jokingly, “perhaps we could have Pong up there as it’ll be its 46th birthday?”

Parts of the Digital Archiving display table
Free rein to play about with the Heritage Window while smarter people talk to the public about digital archives? Sure, sign me up.

But I didn’t take it as a joke. I took it as a challenge.

Emulating Pong is pretty easy. Emulating Pong perfectly is pretty hard. Indeed, a lot of the challenge in the preservation of (especially digital) archives in general is in finding the best possible compromise in situations where perfect preservation is not possible. If these 8″ disks are degrading, is it acceptable to copy them onto a different medium? If this video file is unreadable on modern devices, is it acceptable to re-encode it in a contemporary format? These are the kinds of questions that digital preservation specialists have to ask themselves all the damn time.

Pong prototype with a SNES controller on my work PC
The JS Gamepad API lets your web browser talk to controller devices.

Emulating Pong in a way that would work on the Heritage Window but be true to the original raised all kinds of complications. (Original) Pong’s aspect ratio doesn’t fit nicely on a 16:9 widescreen, much less on a 27:80 ultrawide. Like most games of its era, the speed is tied to the clock rate of the processor. And of course, it should be controlled using a “dial”.

By the time I realised that there was no way I could thoroughly replicate the experience of the original game, I decided to take a different tack: instead, I opted to reimplement Pong. A reimplementation could stay true to the idea of Pong while serving as a jumping-off point for discussion about how an experience can be superficially “like Pong” and yet still not be an example of digital preservation.

Two participants play Pong on the Heritage Window
Bip… boop… boop… bip… boop… bip…

Here’s the skinny:

  • A web page, displayed full-screen, contains both a <canvas> (for the game, sized appropriately for a 3 × 3 section of the video wall) and a <div> full of “slides” of static content to carousel alongside (filling a 2 × 3 section).
  • Javascript writes to the canvas, simulates the movement of the ball and paddles, and accepts input from the JS Gamepad API (which is awesome, by the way – see the sketch below). If there’s only one player, a (tough! – only three people managed to beat it over the course of the day!) AI plays the other paddle.
  • A pair of SNES controllers, adapted for use as USB controllers, which I happened to own already.
My Javascript-powered web applications dominate the screens in Blackwell Hall.
Increasingly, the Bodleian’s spaces seem to be full of screens running Javascript applications I’ve written.
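
The Gamepad API part is pleasingly simple. A minimal sketch (mine, not the actual Pong source; movePaddle() is a hypothetical game function) polls the connected pads once per animation frame:

// Poll connected gamepads each frame and use the vertical axis to move
// each player's paddle.
function pollGamepads() {
  const pads = navigator.getGamepads ? navigator.getGamepads() : [];
  for (const pad of pads) {
    if (!pad) continue;
    const vertical = pad.axes[1] || 0; // -1 = up, +1 = down on most pads
    movePaddle(pad.index, vertical);
  }
  requestAnimationFrame(pollGamepads);
}

window.addEventListener('gamepadconnected', (event) => {
  console.log(`Controller ${event.gamepad.index} connected: ${event.gamepad.id}`);
});

requestAnimationFrame(pollGamepads);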

I felt that the day, event, and game were a success. A few dozen people played Pong and explored the other technology on display. Some got nostalgic about punch tape, huge floppy disks, and even mechanical calculators. Many more talked to the digital archives folks and me about the challenges and importance of digital archiving. And a good time was had by all.

I’ve open-sourced the entire thing with a super-permissive license so you can deploy it yourself (you know, on your ultrawide video wall) or adapt it as you see fit. Or if you’d just like to see it for yourself on your own computer, you can (but unless you’re using a 4K monitor you’ll probably need to use your browser’s mobile/responsive design simulator set to 3200 × 1080 to make it fit your screen). If you don’t have controllers attached, use W/S to control player 1 and the cursor keys for player 2 in a 2-player game.

Happy 46th birthday, Pong.

When to use CSS vs. JavaScript

CSS before JS

My general rule of thumb is…

If something I want to do with JavaScript can be done with CSS instead, use CSS.

CSS parses and renders faster.

For things like animations, it more easily hooks into the browser’s refresh rate cycle to provide silky smooth animations (this can be done in JS, too, but CSS just makes it so damn easy).

And it fails gracefully.

A JavaScript error can bring all of the JS on a page to a screeching halt. Mistype a CSS property or miss a semicolon? The browser just skips the property and moves on. Use an unsupported feature? Same thing.

This exactly! If you want progressive enhancement (and you should), performance, and the cleanest separation of behaviour and presentation, the pages you deliver to your users (regardless of what technology you use on your server) should consist of:

  • HTML, written in such a way that they’re complete and comprehensible alone – from an information science perspective, your pages shouldn’t “need” any more than this (although it’s okay if they’re pretty ugly without any more)
  • CSS, adding design, theme, look-and-feel to your web page
  • Javascript, using progressive enhancement to add functionality in-the-browser (e.g. validation on the client side in addition to the server-side validation, for speed and ease of user experience) and, where absolutely necessary, to add functionality not possible any other way (e.g. if you’re looking to tap into the geolocation API, you’re going to need Javascript… but it’s still desirable to provide as much of the experience as possible without – see the sketch below)
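
For instance (a hypothetical sketch – the helper and field ID are made up), geolocation support can be layered on in exactly this spirit:

// The page already has a working "enter your postcode" <input> for everyone;
// geolocation, where available, merely adds a shortcut on top of it.
const field = document.querySelector('#postcode-field'); // hypothetical field ID
if (field && 'geolocation' in navigator) {
  const button = document.createElement('button');
  button.type = 'button';
  button.textContent = 'Use my current location';
  button.addEventListener('click', () => {
    navigator.geolocation.getCurrentPosition(
      (position) => fillInLocation(position.coords), // hypothetical helper
      () => button.remove()                          // on failure, quietly fall back
    );
  });
  field.after(button);
}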

When developers fail to follow this principle, they make the Web more fragile and harder to archive. It’s not hard to do things “right”: we just need to make sure that developers learn what “right” is and why it’s important.

Incidentally, I just made some enhancements to the header of this site, including some CSS animations on the logo and menu (none of them necessary, but all useful) and some Javascript to help ensure that users of touch-capable devices have an easier time. Note that neither Javascript nor CSS is required to use this site; they just add value… just the way the Web ought to be (where possible).

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

Yes, yes, yes. The Push API’s got incredible potential for precaching, or even re-caching existing content. How about if you could always instantly open my web site, whether you were on- or off-line, and know that you’d always be able to read the front page and most-recent articles? You should be able to opt in to “hot” push notifications if that’s what you really want, but there should be no requirement to do so.

By the time you’re using the Push API for things like this, why not go a step further? How about PWA feed readers or email clients that use web-pushes to keep your Inbox full? What about social network clients that always load instantly with the latest content? Or even analytics packages to push your latest stats to your device? Or turn-based online games that push the latest game state, ready for you to make your next move (which can be cached offline and pushed back when online)?

There are so many potential uses for “quiet” pushing, and now I’m itching for an opportunity to have a play with them.
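
Today’s browsers won’t quite allow it yet – push subscriptions generally have to promise a user-visible notification – but if they ever do, the service worker side of “quiet” pushing might look something like this sketch (the cache name and payload shape are mine):

// Speculative: on receiving a push, quietly fetch and cache fresh content
// instead of showing a notification. (Treat this as a wish, not a recipe.)
self.addEventListener('push', (event) => {
  const payload = event.data ? event.data.json() : { urls: ['/'] };
  event.waitUntil(
    caches.open('fresh-content').then((cache) =>
      Promise.all(
        payload.urls.map((url) =>
          fetch(url).then((response) => cache.put(url, response))
        )
      )
    )
  );
});

// Later, a fetch handler can serve those cached copies when offline:
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(() => caches.match(event.request))
  );
});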

The thing about ad-blocker popups

I’ve been, in the past, a firm distruster of ad blocking software. I still am, to a large extent. I don’t trust any company whose finance model is based on inserting exceptions for advertisers they like. But I installed Ghostery, whose model is to use the stats of what gets blocked to offer consultancy to companies to make their adverts less horrific. I like this idea, so I support it. My Ghostery install is fairly open, blocking only sites that offer page-takeover, popups, autoplaying videos, and other stuff that annoys me a lot. So I get a bit annoyed when I’m scrolling through a Wired article and get something like this:

Fine. I don’t disagree with the sentiment, but I don’t read Wired often enough to care about being a member, so yeah, ad supported isn’t unreasonable. Do you know what’s unreasonable, Wired? This is what happens when I whitelist your site:

I’ve gone full-nuclear these last few years and I just keep Javascript disabled for most domains, most of the time (I’m using uMatrix). The Web is a lot faster for it, and I can just enable it for domains that “need” it as-and-when. I also keep a userscript to-hand that I can tweak as-and-when to block anti-ad-blocker scripts, so that enabling Javascript on your domain (but not the domains of your dozen trackers/advertisers) doesn’t mean that I see your anti-ad-blocker popups either.

If your site nags gently (e.g. by mentioning where ads would be if they weren’t blocked, perhaps with a sad face emoticon), I’ll consider allowing the ads, if your site has value. But more likely, if your site’s good, I’ll be looking for the donate link. You can make more money out of me with donations than you ever would by showing me ads: I’m more than happy to pay for the Web… but I’m not happy for 75% of the work my computer does when I’m reading your content to be about your advertising partners tracking me, or about trying to “block” me from seeing your content.

The full article helps show how bad the Web’s gotten. When it starts to get better again, perhaps I’ll stop blocking ads and trackers so aggressively.