As I’ve previously mentioned (sadly), Microsoft Edge is to drop its own rendering engine EdgeHTML and replace it with Blink, Google’s engine (more of my and others’ related sadness here, here, here, and here). Earlier this month, Microsoft made available the first prerelease versions of the browser, and I gave it a go.
All of the Chrome-like features you’d expect are there, including support for Chrome plugins, but Microsoft have also clearly worked to integrate as many as possible of the features that they felt were distinctive to Edge, too. For example, Edge Blink supports SmartScreen filtering and uses Microsoft accounts for sync, and Incognito is of course rebranded as InPrivate.
But what really interested me was the approach that Edge Dev has taken with Progressive Web Apps.
Edge Dev may go further than any other mainstream browser in its efforts to make Progressive Web Apps visible to the user, putting a plus sign (and sometimes an extended
install prompt) right in the address bar, rather than burying it deep in a menu. Once installed, Edge PWAs “just work” in
exactly the way that PWAs ought to, providing a simple and powerful user experience. Unlike some browsers, which
make installing PWAs on mobile devices far easier than on desktops, presumably in a misguided belief in the importance of
mobile “app culture”, it doesn’t discriminate against desktop users. It’s a slick and simple user experience all over.
Feature support is stronger than it is for Progressive Web Apps delivered as standalone apps via the Windows Store, too, with the engine not falling over at the first sign of a modal
dialog for example. Hopefully (as I support one of these hybrid apps!) these too will begin to be handled properly when Edge Dev eventually achieves mainstream availability.
But perhaps most-impressive is Edge Dev’s respect for the importance of URLs. If, having installed the progressive “app” version of a site, you subsequently revisit any address within its scope, you can switch to the app version via a link in the menu. I’d rather have seen a nudge in the address bar, where the user might expect to see such things (based on that being where the original install icon was), but this is still a great feature… especially given that cookies and other state maintainers are shared between the browser and the installed app, meaning that performing such a switch in a properly-made application will result in the user carrying on from almost exactly where they left off.
Similarly, and also uncommonly forward-thinking, Progressive Web Apps installed as standalone applications from Edge Dev enjoy a “copy URL” option in their menu, even if the app runs without an address bar (e.g. as a result of a "display": "standalone" directive
in the manifest.json). This is a huge boost to sharability and is enormously (and unusually) respectful of the fact that addresses are the
Web’s killer feature! Furthermore, it respects the users’ choice to operate their “apps” in whatever way suits them best: in a browser (even a competing browser!), on their
mobile device, or wherever. Well done, Microsoft!
I’m still very sad overall that Edge is becoming part of the Chromium family of browsers. But if the silver lining is that we get a pioneering and powerful new Progressive Web App
engine then it can’t be all bad, can it?
It is impossible to answer all of these questions simply. They can, however, be framed by the ideological project of the web itself. The web was built to be open, both technologically
as a decentralized network, and philosophically as a democratizing medium. These questions are tricky because the web belongs to no one, yet was built for everyone. Maintaining that
spirit takes a lot of work, and requires sometimes slow, but always deliberate decisions about the trajectory of web technologies. We should be careful to consider the mountains of
legacy code and libraries that will likely remain on the web for its entire existence. Not just because they are often built with the best of intentions, but because many have been
woven into the fabric of the web. If we pull on any one thread too hard, we risk unraveling the whole thing.
…
A great story about how Firefox nearly broke tens of thousands of websites by following standards, and then didn’t. tl;dr: Javascript has a messy history.
We here at unlike kinds decided that we had to implement Google AMP. We have to be in the Top Stories section because otherwise we’re punted down the page and away from potential
readers. We didn’t really want to; our site is already fast because we made it fast, largely with a combination of clever caching and minimal code. But hey, maybe AMP would speed
things up. Maybe Google’s new future is bright.
It isn’t. According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87.
The non-AMP versions? 95. (Note: I updated these numbers recently with an average after running the test 6 times per version.)
…
I’ve complained plenty about AMP before – starting here, for example – but it’s even harder to see the alleged “good sides” of the technology when it doesn’t even deliver the one thing it was supposed to. The Internet should be boycotting this shit, not drinking the Kool-Aid.
A few weeks back, we were chatting about the architecture of the Individual Electoral Registration web service. We started discussing the pros and cons of an approach that would provide a significantly different
interaction for any people not running JavaScript.
“What proportion of people is that?” an inquisitive mind asked.
Silence.
We didn’t really have any idea how many people are experiencing UK government web services without the enhancement of JavaScript. That’s a bad thing for a team that is evangelical
about data driven design, so I thought we should find out.
The answer is:
1.1% of people aren’t getting Javascript enhancements (1 in 93)
…
This article by the GDS is six years old now, but its fundamental point is still as valid as ever: a small proportion
(probably in the region of 1%) of your users won’t experience some or all of the whizzy Javascript stuff on your website, and it’s not because they’re power users who disable Javascript.
There are so many reasons a user won’t run your Javascript, including:
They’re using a browser that doesn’t support Javascript (or doesn’t support the version you’re using)
They, or somebody they share their device with, has consciously turned off Javascript, either wholesale or selectively, in order to (for example) save bandwidth, improve speed, reinforce security, or improve compatibility with their accessibility technologies
They’re viewing a locally-saved, backed-up, or archived version of your page (possibly in the far future long after your site is gone)
Their virus scanner mis-classified your Javascript as potentially malicious
One or more of your Javascript files contains a bug which, on their environment, stops execution
One or more of your Javascript files failed to be delivered, for example owing to routing errors, CDN downtime,
censorship, cryptographic handshake failures, shaky connections, cross-domain issues, stale caches…
On their device, your Javascript takes too long to execute or consumes too many resources and is stopped by the browser
Fundamentally, you can’t depend on Javascript being there 100% of the time, and so you shouldn’t depend on it when it’s possible not to. Luckily, the Web already gives us all the tools we need to develop the vast, vast majority of web content in a way that doesn’t depend on Javascript. Back in the 1990s we just called it “web development”, but nowadays Javascript (and other optional/under-continuous-development web technologies like your favourite so-very-2019 CSS hack) is so ubiquitous that we give the practice a special name, “progressive enhancement”, and make a whole discipline out of it.
The Web was designed for forwards- and backwards-compatibility. When you break that, you betray your users and you make work for yourself.
(by the way: I know I plugged the unpoly framework already, the other day, but you should
really give it a look if you’re just learning how to pull off progressive enhancement)
But as much as we developers hope for it to go away, it just. Won’t. Die. IE8 continues to show up in browser stats, especially outside of the bubble of the Western
world.
…
Sure, you aren’t developing for IE8 any more. But you should be developing with progressive enhancement, and
if you do that right, you get all kinds of compatibility, accessibility, future- and past-proofing built-in. This isn’t just about supporting the (many) African countries where
IE8 usage remains at over 1%… it’s about supporting the Web’s openness and archivability and following best-practice in
your support of new technologies.
Google can’t be trusted to maintain the services of theirs that you depend upon (relevant XKCD?). That’s not a phenomenon that’s unique to Google, of course: it’s perhaps just that they produce so many new and often-experimental services that they inevitably end up discontinuing more of them than most of the other providers who’ve killed off the silos that people depended upon.
How could things be better? For a start, Google could make a better commitment to open-source and developing standards rather than platforms. But if you don’t think you can trust them
to do that – and you can’t – then the only solution for individuals is to use fewer Google products to break the Google-monoculture. Encourage the competition to weaken their
position, and break free from silos in general where it’s possible to do so.
148+ projects and services dead. But hey, we’re getting Stadia so everything’s okay, right? <sigh>
One of the central themes of my talk on The Lean Web is that we as developers repeatedly take all of the
great things the web and browsers give us out-of-the-box, break them, and then re-implement them poorly with JavaScript.
Pages containing popular JavaScript frameworks were more likely to have accessibility errors than those that didn’t use those frameworks.
…
JavaScript routing has always perplexed me.
You take something the browser just gives you for free, break it with JavaScript, then reimplement it with more JavaScript, often poorly. You have to account for on-page
clicks, on-site clicks, off-site clicks, forward and back button usage, and so on.
…
JavaScript routing has always perplexed me, too. Back when SPA-centric front-end frameworks started taking off I thought that
there must be something wrong with me, as a developer. Why was I unable to see why this “new hotness” was so popular, so immediately ubiquitous? I taught myself a
couple of different frameworks in the hope that in learning to use them in anger I’d “click” and understand why this approach to routing made any sense, but I still couldn’t get it.
That’s when I remembered, later than I ought to have, that just because something is popular doesn’t mean that it’s a good idea.
Front-end routing isn’t necessarily poisonous. By building on top of what you already have in a progressive-enhancement kind of way (like unpoly does, for example!) you can potentially provide some minor performance or look-and-feel improvements to people in ideal circumstances (right browser(s), right compatibility, no bugs, no blocks, no accessibility needs, no “power users” who like to open-in-new-tab and the like, speedy connection, etc.) without damaging the fundamentals of what makes your web application work… but you’ve got to appreciate that doing this is going to be more work. For some applications, that’s worthwhile.
But when you do it at the expense of the underlying fundamentals… when you say “we’re moving everything to the front-end so we’re not going to bother with real URLs any more”…
that’s when you break the web. And in doing so, you break a lot of other things too:
You break your user experience for people who don’t fit into your perfect vision of what your users look like in terms of technology, connection, or able-bodiedness
You break the sustainability and archivability of your site, making it into another piece of trash that’ll be lost to the coming digital dark age
You break the usability of the site for anyone and anything outside your narrow view of what’s right
You break a lot of the technology that’s made the web as great as it is already: caching, manipulatable URLs, widespread
compatibility… and many other things become harder when you have to re-invent the wheel to get basic features like preloading, sharability/bookmarking, page saving, the back button,
stateful refreshes, SEO, hyperlinks…
With 20+ years of kottke.org archives, I’ve been thinking about this issue [continuing to host old content that no longer reflects its author’s views] as well. There are many
posts in the archive that I am not proud of. I’ve changed my mind in some cases and no longer hold the views attributed to me in my own words. I was too frequently a young
and impatient asshole, full of himself and knowing it all. I was unaware of my privilege and too frequently assumed things of other people and groups that were incorrect and
insensitive. I’ve amplified people and ideas in the past that I wouldn’t today.
…
Very much this! As another blogger with a 20+ year archive, I often find myself wondering how much of an impression of me is made upon my readers by some of my older posts, and what it
means to retain them versus the possibility – never yet exercised – of deleting them. I certainly have my fair share of posts that don’t represent me well or that are frankly
embarrassing, in hindsight!
I was thinking about this recently while following a thread on BoardGameGeek in which a poster
advocated for the deletion of a controversial article from the site because, as they said:
…people who stumble on our site and see this game listed could get a very (!!!) bad impression of the hobby…
This is a similar concern: a member of an online community is concerned that a particular piece of archived content does not reflect well on them. They don’t see any way in which the
content can be “fixed”, and so they propose that it is removed from the community. Parallels can be drawn to the deletionist faction within Wikipedia (if you didn’t know that Wikipedia had large-scale philosophical
disputes before now, you’re welcome: now go down the meta-wiki rabbit hole).
As for my own blog, I fall on the side of retention: it’s impossible to completely “hide” my past by self-censorship anyway as there’s sufficient archives and metadata to reconstruct
it, and moreover it feels dishonest to try. Instead, though, I do occasionally append rebuttals to older content – where I’ve time! – to help contextualise them and show that they’re
outdated. I’ve even considered partially automating this by e.g. adding a “tag” that I can rapidly apply to older posts that haven’t aged well which would in turn add a disclaimer to
the top of them.
Cool URIs don’t change. But the content behind them can. The fundamental message ought to be preserved, where possible, and so
appending and retaining history seems to be a more-valid approach than wholesale deletion.
Asynchronous JavaScript in the form of Single Page Applications (SPA) offers an incredible opportunity for improving the user experience of your web applications. CSS frameworks like Bootstrap enable developers to quickly contribute styling as they’re working on the structure and behaviour of things.
Unfortunately, SPA and CSS frameworks tend to result in relatively complex solutions where traditionally separated concerns – HTML-structure, CSS-style, and JS-behaviour – are blended together as a matter of course, counter to the lessons learned by previous generations.
This blending of concerns can prevent entry-level developers and valued specialists (e.g. visual design, accessibility, search engine optimization, and internationalization) from making meaningful contributions to a project.
In addition to the increasing cost of the few developers somewhat capable of juggling all of these concerns, it can also result in other real-world business implications.
…
What is a front-end developer? Does anybody know, any more? And more-importantly, how did we get to the point where we’re actively encouraging young developers into habits like
writing (cough React cough) files containing a bloaty, icky mixture of content, HTML (markup), CSS (style), and Javascript (behaviour)? Yes, I get that the idea is that individual components should be packaged
together (if you’re thinking in a React-like worldview), but that alone doesn’t justify this kind of bullshit antipattern.
It seems like the Web used to have developers. Then it got complex, so we started differentiating back-end from front-end developers and described those who, like me, spanned the divide as full-stack developers. We full-stackers gradually became a minority as more and more new developers, deprived of the opportunity to learn each new facet organically in this newly-complicated landscape, specialised in one end or the other, but that’s fine. But then… we started treating the front-end as the only end, and introducing all kinds of problems as a result… and most people don’t seem to have noticed, yet, exactly how much damage we’re doing to Web applications’ security, maintainability, future-proofability, archivability, addressability…
The current iteration of my blog diverges from an architectural principle common to most of its previous versions over the last 20 years. While each previous change in design and layout was intended to provide a single monolithic upgrade, this version tries to provide me with a platform for continuous ongoing experimentation and change.
I’ve been trying to make better use of my blog as a vehicle for experimenting with web technologies, as I used to with personal sites back in the 1990s and early 2000s; to see a vanity site like this one as a living playground rather than something that – like most of the sites I’m paid to work on – has a design that is, for the most part, static for long periods of time.
The grid of recent notes, shares, checkins and videos on my
homepage is powered by the display: grid; CSS directive. The number of columns varies by screen width from six
on the widest screens down to three or just one on increasingly small screens. Crucially, grid-auto-flow: dense; is used to ensure an even left-to-right filling of the
available space even if one of the “larger” blocks (with grid-column: span 2; grid-row: span 2;) is forced for space reasons to run onto the next line. This means that
content might occasionally be displayed in a different order from that in which it is written in the HTML (which is reverse
order of publication), but in exchange the items are flush with both sides.
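By way of illustration, a minimal sketch of that kind of grid declaration follows; the selector names, breakpoints and gap are illustrative placeholders rather than the stylesheet’s real rules:

```css
/* Illustrative sketch only: selectors and breakpoints are placeholders. */
.recent-grid {
  display: grid;
  grid-template-columns: repeat(6, 1fr); /* six columns on the widest screens */
  grid-auto-flow: dense;                 /* back-fill gaps left by larger tiles */
  gap: 1em;
}

.recent-grid .tile-large {
  grid-column: span 2;                   /* "larger" blocks occupy a 2x2 area... */
  grid-row: span 2;                      /* ...and may be repositioned by the dense packing */
}

@media (max-width: 60em) {
  .recent-grid { grid-template-columns: repeat(3, 1fr); }  /* three columns */
}

@media (max-width: 30em) {
  .recent-grid { grid-template-columns: 1fr; }             /* a single column */
}
```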
Not all web browsers support display: grid; and while that’s often only a problem of design and not of readability – because these browsers will fall back to usually-very-safe default display modes like block and inline, as appropriate – sometimes there are bigger problems. In Internet Explorer 11, for example, I found (with thanks to
@_ignatg) a problem with my directives specifying the size of these cells (which are actually <li> elements because, well,
semantics matter). Because it understood the directives that ought to impact the sizing of the list items but not
the one that redeclared its display type, IE made… a bit of a mess of things…
Do websites need to look the same in every browser? No. But the content should be readable
regardless, and here my CSS was rendering my content unreadable. Given that Internet Explorer users represent a little
under 0.1% of visitors to my site I don’t feel the need to hack it to have the same look-and-feel: I just need it to have the same content readability. CSS Feature Queries to the rescue!
CSS Feature Queries – the @supports selector – make it possible to apply parts of your stylesheet if and only if
the browser supports specific CSS features, for example grids. Better yet, using it in a positive manner (i.e. “apply these
rules only if the browser supports this feature”) is progressive enhancement, because browsers that don’t understand the @supports selector act in
the same way as those that understand it but don’t support the specified feature. Fencing off the relevant parts of my stylesheet in a @supports (display: grid) { ... }
block instructed IE to fall back to displaying that content as a boring old list: exactly what I needed.
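As a sketch (with illustrative selectors, since the real cells are <li> elements inside the homepage grid), the fenced-off section looks something like this:

```css
/* Only browsers that both understand @supports and can do grids apply any of this. */
@supports (display: grid) {
  .recent-grid {
    display: grid;
    grid-template-columns: repeat(6, 1fr);
    grid-auto-flow: dense;
  }
  /* The cell-sizing rules live inside the same fence, so IE11 never sees them... */
  .recent-grid > li.tile-large {
    grid-column: span 2;
    grid-row: span 2;
  }
}
/* ...and everything else falls back to plain block/inline list rendering. */
```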
Reduced-motion support
I like to put a few “fun” features into each design for my blog, and while it’s nowhere near as quirky as having my head play peek-a-boo when you
hover your cursor over it, the current header’s animations are in the same ballpark: hover over or click on some of the items in the header menu to see for yourself.
These kinds of animations are fun, but they can also be problematic. People with inner ear disorders (as well as people who’re just trying to maximise the battery life on their portable
devices!) might prefer not to see them, and web designers ought to respect that choice where possible. Luckily, there’s an emerging standard to acknowledge that: prefers-reduced-motion. Alongside its cousins inverted-colors, prefers-reduced-transparency, prefers-contrast and
prefers-color-scheme (see below for that last one!), these new CSS tools allow developers to optimise based on the accessibility
features activated by the user within their operating system.
If you’ve tweaked your accessibility settings to reduce the amount of animation your operating system shows you, this website will respect that choice as well by not animating the
contents of the title, menu, or the homepage “tiles” any more than is absolutely necessary… so long as you’re using a supported browser, which right now means Safari or Firefox (or the
“next” version of Chrome). Making the change itself is pretty simple: I just added a @media screen and (prefers-reduced-motion: reduce) { ... } block to disable or
otherwise cut-down on the relevant animations.
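A minimal sketch of that kind of override – the selectors, and exactly which animation properties get reset, are assumptions for illustration:

```css
/* Respect the operating system's reduced-motion preference. */
@media screen and (prefers-reduced-motion: reduce) {
  .site-header,
  .site-header .menu-item,
  .homepage-tile {
    animation: none !important;   /* stop decorative keyframe animations */
    transition: none !important;  /* and hover/click transitions */
  }
}
```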
Dark-mode support
Similarly, operating systems are beginning to
support “dark mode”, designed for people trying to avoid eyestrain when using their computer at night. It’s possible for your browser to respect this and try to “fix” web pages for
you, of course, but it’s better still if the developer of those pages has anticipated your need and designed them to acknowledge your choice for you. It’s only supported in Firefox and
Safari so far and only on recent versions of Windows and MacOS, but it’s a start and a helpful touch for those nocturnal websurfers out there.
It’s pretty simple to implement. In my case, I just stacked some overrides into a @media (prefers-color-scheme: dark) { ... } block, inverting the background and primary
foreground colours, softening the contrast, removing a few “bright” borders, and darkening rather than lightening background images used on homepage tiles. And again, it’s an example of
progressive enhancement: the majority (!) of users whose operating systems and/or browsers don’t yet support this feature won’t be impacted by its inclusion in my stylesheet, but those
who can make use of it can appreciate its benefits.
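Something like the following – where the custom properties and colour values are illustrative stand-ins rather than my actual palette – is enough to get the effect:

```css
/* Respect the operating system's dark-mode preference. */
@media (prefers-color-scheme: dark) {
  :root {
    --background: #1b1b1b;   /* inverted background...                  */
    --foreground: #d9d9d9;   /* ...and foreground, with softer contrast */
  }
  body {
    background-color: var(--background);
    color: var(--foreground);
  }
  .homepage-tile {
    border-color: transparent;            /* lose the "bright" borders        */
    background-color: rgba(0, 0, 0, 0.6); /* darken (rather than lighten)...  */
    background-blend-mode: multiply;      /* ...any background image on the tile */
  }
}
```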
This isn’t the end of the story of CSS experimentation on my blog, but it’s a part of it that I hope you’ve enjoyed.
…Jesse James Garrett of Adaptive Path finally gave this use of Javascript a name. He called it Asynchronous Javascript and XML (thankfully shortened to Ajax). Ajax wasn’t a single method or technology, but a group of principles based on the work being done at Google that described how to handle JavaScript in more demanding web applications. The short version is actually fairly simple. Use XMLHttpRequest to make a request to the server, take the XML data returned, and lay it out on the page using JavaScript and semantic HTML.
Ajax took off after that. It underpins major parts of modern web development, and has spawned a number of frameworks and methodologies. The technology itself has been swapped out over
time. XML was replaced by JSON, and XMLHttpRequest by fetch. That doesn’t matter though. Ajax showed a lot of developers that the web wasn’t just about documents.
The web could be used to build applications. That might seem like a small notion, but trust me when I say, it was momentous.
The History of The Web is great, and you should read it, but this piece is particularly interesting. Ajax and its spiritual successors laid the groundwork for rich Internet
applications, shell apps, and the real-time Web, but they also paved the way for a handful of the inevitable practices that went alongside: Javascript-required websites, API-driven web
and mobile applications with virtually no HTML content whatsoever… and these things have begun to become problematic for the health of the Web as a whole.
I love Ajax, but it absolutely must be employed as progressive enhancement where possible.
People were quick to point this out and assume that it was something to do with the modernity of MetaFilter:
honestly, the disheartening thing is that many metafilter pages don’t seem to work. Oh, the modern web.
Some even went so far as to speculate that the reason related to MetaFilter’s use of CSS and JS:
CSS and JS. They do things. Important things.
This is, of course, complete baloney, and it’s easy to prove to oneself. Firstly, simply using the View Source tool in your browser on a MetaFilter page reveals source code that’s quite
comprehensible, even human-readable, without going anywhere near any CSS or JavaScript.
Secondly, it’s pretty simple to try browsing MetaFilter without CSS or JavaScript enabled! I tried in two ways: first,
by using Lynx, a text-based browser that’s never supported either of those technologies. I also tried by using
Firefox but with them disabled (honestly, I slightly miss when the Web used to look like this):
And thirdly: the error code being returned by the simulated WorldWideWeb browser is a HTTP code 500. Even if you don’t
know your HTTP codes (I mean, what kind of weirdo would take the time to memorise them all anyway <ahem>),
it’s worth learning this: the first digit of a HTTP response code tells you what happened:
1xx means “everything’s fine, keep going”;
2xx means “everything’s fine and we’re done”;
3xx means “try over there”;
4xx means “you did something wrong” (the infamous 404, for example, means you asked for a page that doesn’t exist);
5xx means “the server did something wrong”.
Simple! The fact that the error code begins with a 5 strongly implies that the problem isn’t in the (client-side) reimplementation of WorldWideWeb: if this had been a
CSS/JS problem, I’d expect to see a blank page, scrambled content, “filler”
content, or incomplete content.
So I found myself wondering what the real problem was. This is, of course, where my geek flag becomes most-visible: what we’re talking about, let’s not forget, is a fringe
problem in an incomplete simulation of an ancient computer program that nobody uses. Odds are incredibly good that nobody on Earth cares about this except, right now, for me.
The (simulated) copy of WorldWideWeb is asked to open a document by reference, e.g. “https://www.metafilter.com/”.
To work around same-origin policy restrictions, the request is sent to an API which acts as a proxy server.
The API makes a request using the Node package “request” with this line of code: request(url, (error, response, body) =>
{ ... }). When the first parameter to request is a (string) URL, the module uses its default settings for all of
the other options, which means that it doesn’t set the User-Agent header (an optional part of a Web request where the computer making the request identifies the software
that’s asking).
MetaFilter, for some reason, blocks requests whose User-Agent isn’t set. This is weird! And nonstandard: while web browsers should – in RFC2119 terms – set their User-Agent: header, web servers shouldn’t require
that they do so. MetaFilter returns a 403 and a message to say “Forbidden”; usually a message you only see if you’re trying to access a resource that requires session authentication and
you haven’t logged-in yet.
The API is programmed to handle response codes 200 (okay!) and 404 (not found), but if it gets anything else back
it’s supposed to throw a 400 (bad request). Except there’s a bug: when trying to throw a 400, it requires that an error message has been set by the request module and if there
hasn’t… it instead throws a 500 with the message “Internal Server Fangle” and no clue what actually went wrong. So MetaFilter’s 403 gets translated by the proxy into a 400 which
it fails to render because a 403 doesn’t actually produce an error message and so it gets translated again into the 500 that you eventually see. What a knock-on effect!
The fix, then, is to have the proxy explicitly set a User-Agent header, which makes servers that require one, such as MetaFilter, respond appropriately. I don’t know whether WorldWideWeb originally set a User-Agent header (CERN’s source file archive seems to be missing the relevant C sources so I can’t check) but I suspect that it did, so this change actually improves the fidelity of the emulation as a bonus. A better fix would also add support for and appropriate handling of other HTTP response codes, but that’s a story for another day, I guess.
I know the hackathon’s over, but I wonder if they’re taking pull requests…
This month, a collection of some of my favourite geeks got invited to CERN in Geneva to
participate in a week-long hackathon with the aim of reimplementing WorldWideWeb –
the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed
to produce.
This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that
didn’t require a NeXT computer to run it and so a leap forward in mainstream appeal. In some ways, you might expect
reimplementing WorldWideWeb to be easier, because its functionality is more-similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM and so there are distinct
processing differences that must be considered to get a truly authentic experience.
Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).
In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible
(sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that
adherence to the same solid standards gives us a chance of writing pages today that look just as good in 30 years to come. Compare that to a proprietary technology like Flash whose heyday 15 years ago is overshadowed by its imminent death (not to
mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work
in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).
The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.
The meteoric rise of front-end frameworks like React, Angular, Vue.js, Elm, etc. has made single-page applications ubiquitous on the web. For many developers, these have become part
of their ‘default’ toolset. When they start a new project, they grab the tools they know already: a REST API on the backend, and a React/Angular/Vue/Elm frontend.
Is there something wrong with these tools? Absolutely not. In fact, I love working with them. However, I would only choose this architecture when an actual requirement is pushing me
in that direction. If there are no specific reasons to build a single-page application, I will go with a traditional server-rendered architecture every day of the week. It is simpler
and allows you to move faster.
…
There’s been an increasing trend towards delivering web applications as SPAs backed by an API. I can see the attraction: disposing of the browser’s navigation cycle lets you develop that coveted “app-like” interaction experience,
pushing only data around lets you implement multiple clients backed by the same single middleware, and it results in a development workflow that fits tightly with many of the hippest
frameworks (go jamstack, backendless, Node-backed, or whatever). I love REST and all, but I feel that it works
best when it’s used to deliver multiformat results (whether by content negotiation or whatever): web pages for the humans, JSON or whatever for the computers.
For an increasing number of developers, SPAs are a golden hammer.
Let’s fix that.