…Jesse James Garrett of Adaptive Path finally gave this use of JavaScript a name. He called it Asynchronous JavaScript and XML (thankfully shortened to Ajax). Ajax wasn’t a single
method or technology, but a group of principles based on the work being done at Google that described how to handle JavaScript in more demanding web applications. The short version is
actually fairly simple. Use XMLHttpRequest to make a request to the server, take the XML data returned, and lay it out on the page using JavaScript and semantic HTML.
Ajax took off after that. It underpins major parts of modern web development, and has spawned a number of frameworks and methodologies. The technology itself has been swapped out over
time. XML was replaced by JSON, and XMLHttpRequest by fetch. That doesn’t matter though. Ajax showed a lot of developers that the web wasn’t just about documents.
The web could be used to build applications. That might seem like a small notion, but trust me when I say, it was momentous.
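The pattern the excerpt describes translates into only a few lines of code. A minimal sketch of the classic approach – the endpoint and element names here are my own invention, not from the article:

// Classic Ajax: XMLHttpRequest fetches XML, then JavaScript lays it out on the page.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/headlines.xml'); // hypothetical endpoint returning XML
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var titles = xhr.responseXML.getElementsByTagName('title');
    var list = document.getElementById('headlines'); // hypothetical <ul> element
    for (var i = 0; i < titles.length; i++) {
      var item = document.createElement('li');
      item.textContent = titles[i].textContent;
      list.appendChild(item);
    }
  }
};
xhr.send();

And the modern equivalent the excerpt alludes to, with JSON and fetch swapped in (same hypothetical endpoint):

// The same idea today: fetch retrieves JSON and the page is updated in place.
fetch('/headlines.json')
  .then((response) => response.json())
  .then((headlines) => {
    const list = document.getElementById('headlines');
    for (const headline of headlines) {
      const item = document.createElement('li');
      item.textContent = headline.title; // hypothetical field name
      list.appendChild(item);
    }
  });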
The History of The Web is great, and you should read it, but this piece is particularly interesting. Ajax and its spiritual successors laid the groundwork for rich Internet
applications, shell apps, and the real-time Web, but they also paved the way for a handful of inevitable practices that came along with them: JavaScript-required websites, API-driven web and mobile applications with virtually no HTML content whatsoever… and these things have become problematic for the health of the Web as a whole.
I love Ajax, but it absolutely must be employed as progressive enhancement where possible.
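What that means in practice, as a minimal sketch (the form, endpoint, and element names are hypothetical): the page works with a plain HTML form submission first, and Ajax is layered on top only where JavaScript is available:

// Progressive enhancement: the plain HTML <form> POSTs and reloads perfectly well
// on its own; this script merely upgrades submission to happen in-page when it can.
const form = document.querySelector('#comment-form'); // hypothetical form
if (form) {
  form.addEventListener('submit', (event) => {
    event.preventDefault(); // only intercepted if this script actually loaded
    fetch(form.action, { method: 'POST', body: new FormData(form) })
      .then((response) => response.text())
      .then((html) => {
        document.querySelector('#comments').innerHTML = html; // hypothetical target
      });
  });
}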
Human beings leave physical impressions upon the things they love and use just as much as they do upon the lives of people and the planet they live upon. For every action, there’s a
reaction. For every pressure, there’s an effect on mass and volume. And in the impressions left by that combination, particularly if you’re lucky enough to see the sides of a rare,
unrestored vintage Pac-Man cabinet, lies the never before told story of how we really played the game.
Until now, I don’t believe anyone has ever written about it.
…
Interesting exploration of the history of the cabinets housing Pac-Man, observing the ergonomic impact of the controls on the way that people would hold the side of the machine and, in
turn, how that would affect where and how the paint would wear off.
People were quick to point this out and assume that it was something to do with the modernity of MetaFilter:
honestly, the disheartening thing is that many metafilter pages don’t seem to work. Oh, the modern web.
Some even went so far as to speculate that the reason related to MetaFilter’s use of CSS and JS:
CSS and JS. They do things. Important things.
This is, of course, complete baloney, and it’s easy to prove to oneself. Firstly, simply using the View Source tool in your browser on a MetaFilter page reveals source code that’s quite
comprehensible, even human-readable, without going anywhere near any CSS or JavaScript.
As late as the early 2000s I’d occasionally use Lynx for serious browsing, but any time I’ve used it since, it’s been out of necessity.
Secondly, it’s pretty simple to try browsing MetaFilter without CSS or JavaScript enabled! I tried in two ways: first,
by using Lynx, a text-based browser that’s never supported either of those technologies. I also tried by using
Firefox but with them disabled (honestly, I slightly miss when the Web used to look like this):
It only took me three clicks to disable stylesheets and JavaScript in my copy of Firefox… but I’ll be the first to admit that I don’t keep my browser configured like “normal people”
probably do.
And thirdly: the error code being returned by the simulated WorldWideWeb browser is an HTTP code 500. Even if you don’t
know your HTTP codes (I mean, what kind of weirdo would take the time to memorise them all anyway <ahem>),
it’s worth learning this: the first digit of an HTTP response code tells you what happened:
1xx means “everything’s fine, keep going”;
2xx means “everything’s fine and we’re done”;
3xx means “try over there”;
4xx means “you did something wrong” (the infamous 404, for example, means you asked for a page that doesn’t exist);
5xx means “the server did something wrong”.
Simple! The fact that the error code begins with a 5 strongly implies that the problem isn’t in the (client-side) reimplementation of WorldWideWeb: if this had been a
CSS/JS problem, I’d expect to see a blank page, scrambled content, “filler”
content, or incomplete content.
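Put another way, the class of a response is just its first digit, so a quick check like this one (a throwaway sketch, not anybody’s production code) tells you which side of the wire to blame:

// The first digit of an HTTP status code identifies the class of the response.
function describeStatus(status) {
  const classes = {
    1: "everything's fine, keep going",
    2: "everything's fine and we're done",
    3: "try over there",
    4: "you did something wrong",
    5: "the server did something wrong",
  };
  return classes[Math.floor(status / 100)] || "not a valid HTTP status code";
}

console.log(describeStatus(404)); // "you did something wrong"
console.log(describeStatus(500)); // "the server did something wrong"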
So I found myself wondering what the real problem was. This is, of course, where my geek flag becomes most-visible: what we’re talking about, let’s not forget, is a fringe
problem in an incomplete simulation of an ancient computer program that nobody uses. Odds are incredibly good that nobody on Earth cares about this except, right now, for me.
I searched for a “Geek Flag” and didn’t like anything I saw, so I came up with this one based on… well, if you recognise what it’s based on, good for you, you’re certainly allowed to
fly it. If not… well, you can too: there’s no geek-gatekeeping here.
The (simulated) copy of WorldWideWeb is asked to open a document by reference, e.g. “https://www.metafilter.com/”.
To work around same-origin policy restrictions, the request is sent to an API which acts as a proxy server.
The API makes a request using the Node package “request” with this line of code: request(url, (error, response, body) =>
{ ... }). When the first parameter to request is a (string) URL, the module uses its default settings for all of
the other options, which means that it doesn’t set the User-Agent header (an optional part of a Web request where the computer making the request identifies the software
that’s asking).
MetaFilter, for some reason, blocks requests whose User-Agent isn’t set. This is weird! And nonstandard: while web browsers should – in RFC2119 terms – set their User-Agent: header, web servers shouldn’t require
that they do so. MetaFilter returns a 403 and a message to say “Forbidden”; usually a message you only see if you’re trying to access a resource that requires session authentication and
you haven’t logged-in yet.
The API is programmed to handle response codes 200 (okay!) and 404 (not found), but if it gets anything else back
it’s supposed to throw a 400 (bad request). Except there’s a bug: when trying to throw a 400, it requires that an error message has been set by the request module, and if one
hasn’t been… it instead throws a 500 with the message “Internal Server Fangle” and no clue what actually went wrong. So MetaFilter’s 403 gets translated by the proxy into a 400, which
it fails to render because a 403 doesn’t actually produce an error message and so it gets translated again into the 500 that you eventually see. What a knock-on effect!
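To make the knock-on effect concrete, here’s a hypothetical reconstruction of the proxy’s handler – the shape of the bug as described above, not the hackathon’s actual code (respond is an invented helper):

const request = require('request'); // the Node "request" package, default options

request(url, (error, response, body) => {
  if (response && response.statusCode === 200) {
    respond(200, body);                    // handled
  } else if (response && response.statusCode === 404) {
    respond(404, 'Not Found');             // handled
  } else if (error && error.message) {
    respond(400, error.message);           // transport-level failures land here...
  } else {
    // ...but MetaFilter's 403 isn't a transport error: `error` is null, there's no
    // message to attach to the 400, so we fall through to an unhelpful 500 instead.
    respond(500, 'Internal Server Fangle');
  }
});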
If you’re having difficulty visualising the process, this diagram might help you to continue your struggle with that visualisation.
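The fix only needs the proxy to identify itself: pass request an options object instead of a bare URL, so that a User-Agent header gets sent. A sketch, assuming the request module’s options-object form (the header’s value below is my own invention):

// Passing an options object lets us set headers that the bare-URL form omits.
request({
  url: url,
  headers: { 'User-Agent': 'WorldWideWeb-rebuild' } // hypothetical UA string
}, (error, response, body) => {
  // ...handle the response as before...
});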
This then sets a User-Agent header and makes servers that require one, such as MetaFilter, respond appropriately. I don’t know whether WorldWideWeb originally set a User-Agent header
(CERN’s source file archive seems to be missing the relevant C sources so I can’t check) but I
suspect that it did, so this change actually improves the fidelity of the emulation as a bonus. A better fix would also add support for and appropriate handling of other HTTP response
codes, but that’s a story for another day, I guess.
I know the hackathon’s over, but I wonder if they’re taking pull requests…
This month, a collection of some of my favourite geeks got invited to CERN in Geneva to
participate in a week-long hackathon with the aim of reimplementing WorldWideWeb –
the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed
to produce.
This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that
didn’t require a NeXT computer to run it and so a leap forward in mainstream appeal. In some ways, you might expect
reimplementing WorldWideWeb to be easier, because its functionality is more-similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM and so there are distinct
processing differences that must be considered to get a truly authentic experience.
It’s just like any other hackathon, if you ignore the enormous particle collider underneath it.
Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).
When Sir Tim took this screenshot, he could never have predicted the way the Web would change, technically, over the next
25-30 years. But I’m almost more-interested in how it’s stayed the same.
In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible
(sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that
adherence to the same solid standards gives us a chance of writing pages today that look just as good in 30 years to come. Compare that to a proprietary technology like Flash whose heyday 15 years ago is overshadowed by its imminent death (not to
mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work
in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).
The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.
Every morning, Lena Forsen wakes up beneath a brass-trimmed wooden mantel clock dedicated to “The First Lady of the Internet.”
It was presented to her more than two decades ago by the Society for Imaging Science and Technology, in recognition of the pivotal—and altogether
unexpected—role she played in shaping the digital world as we know it.
Among some computer engineers, Lena is a mythic figure, a mononym on par with Woz or Zuck. Whether or not you know her face, you’ve used the technology it helped create; practically
every photo you’ve ever taken, every website you’ve ever visited, every meme you’ve ever shared owes some small debt to Lena. Yet today, as a 67-year-old retiree living in her native
Sweden, she remains a little mystified by her own fame. “I’m just surprised that it never ends,” she told me recently.
…
I’m not sure that it’s fair to say that Lena “remained a mystery” until now – the article itself identifies several events she’s attended in her capacity as “first lady of the
Internet” – but this is still a great article about a picture that you might have seen without ever understanding its significance, or knowing the person in front of the lens. Oh, and it’s
pronounced “lee-na”; did you know?
The choice of this encoding has made ASCII-compatible standards the language that computers use to communicate to this day.
Even casual internet users have probably encountered a URL with “%20” in it where there logically ought to be a space character. If we look at this RFC we see this:
Column/Row   Symbol   Name
2/0          SP       Space (Normally Non-Printing)
Hey would you look at that! Column 2, row 0 (2,0; 20!) is what stands for “space”. When you see that “%20”, it’s because of this RFC, which exists because of some bureaucratic
decisions made in the 1950s and 1960s.
…
Darius Kazemi is reading a single RFC every day throughout 2019 and writing up his understanding as to the content and importance of each. It’s good reading if you’re “into” RFCs and it’s probably pretty interesting if you’re just a casual Internet historian.
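The %20 lineage is easy to see for yourself from a browser console (a quick illustration of the point above, not from the linked post):

// The space character sits at column 2, row 0 of the ASCII table: hexadecimal 0x20.
console.log(' '.charCodeAt(0).toString(16));    // "20"
console.log(encodeURIComponent('hello world')); // "hello%20world"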
Microsoft engineers have been spotted committing code to Chromium, the open-source project that underpins Google Chrome
and many other web browsers. This, among other things, has led to speculation that Microsoft’s browser, Edge, might be about to switch from its
current rendering engine (EdgeHTML) to Blink (Chromium’s). This is bad news.
This post, as it would appear if you were looking at it in Edge. Which you might be, I suppose.
The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or
something similar (and certainly not Edge) for development and debugging anyway, etc. The problem comes perhaps because they’re too young to remember the First Browser War and its aftermath. Let me summarise:
Once upon a time – let’s call it the mid-1990s – there were several web browsers: Netscape Navigator, Internet Explorer, Opera, etc. They all used different rendering
engines and so development was sometimes a bit of a pain, but only if you wanted to use the latest most cutting-edge features: if you were happy with the standard, established
features of the Web then your site would work anywhere, as has always been the case.
Then, everybody started using just one browser: following some shady dealings and monopoly abuse, 90%+ of Web users started using just one web browser, Internet Explorer. By the time anybody took
notice, their rivals had been economically crippled beyond any reasonable chance of recovery, but the worst had yet to come…
Developers started targeting only that one browser: instead of making websites, developers started making “Internet Explorer sites” which were only tested in that one
browser or, worse yet, only worked at all in that browser, actively undermining the Web’s position as an open platform. As the grip of the monopoly grew tighter,
technological innovation was centred around this single platform, leading to decade-long knock-on
effects.
The Web ceased to grow new features: from the release of Internet Explorer 6 there were no significant developments in the technology of the Web for many years.
The lack of competition pushed us into a period of stagnation. A
decade and a half later, we’re only just (finally) finishing shaking off this unpleasant bit of our history.
History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too.
Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites.
More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the
switch from Chrome to e.g. Firefox (and you absolutely should) you might find that
YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.
So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.
If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.
And it’s not likely to be replaced.
Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that
could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even
ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today.
So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?
We have already lost one.
We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.
Three left. Spend them wisely.
As much as I don’t like having to work-around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web circa
most of the first decade of this century. Please help keep browsers diverse: nobody wants to start seeing this shit again.
The history of the organisation known as OS is not merely that of a group of earnest blokes with a penchant for triangulation and an ever-present soundtrack of rustling cagoules.
From its roots in military strategy to its current incarnation as producer of the rambler’s navigational aid, the government-owned company has been checking and rechecking all 243,241 sq km (93,916 sq miles) of Great Britain for
227 years. Here are some of the more peculiar elements in the past of the famous map-makers.
Now, it’s Saturday morning and you’re eager to try out what
you’ve learned. One of the first things the manual teaches you how to do is change the colors on the display. You follow the instructions, pressing CTRL-9 to enter reverse type mode and then holding down the space bar to create long lines. You swap between colors using CTRL-1 through CTRL-8, reveling in your sudden new power over the TV screen.
As cool as this is, you realize it doesn’t count as programming. In order to program the computer, you learned last night, you have to speak to it in a language called BASIC. To you,
BASIC seems like something out of Star Wars, but BASIC is, by 1983, almost two decades old. It was invented by two Dartmouth professors, John Kemeny and Tom Kurtz, who wanted
to make computing accessible to undergraduates in the social sciences and humanities. It was widely available on minicomputers and popular in college math classes. It then became
standard on microcomputers after Bill Gates and Paul Allen wrote the MicroSoft BASIC interpreter for the Altair. But the manual doesn’t explain any of this and you won’t learn it for
many years.
One of the first BASIC commands the manual suggests you try is the PRINT command. You type in PRINT "COMMODORE
64", slowly, since it takes you a while to find the quotation mark symbol above the 2 key. You hit RETURN and this time, instead of complaining, the computer does exactly what you told it to do and displays “COMMODORE 64” on the next line.
Now you try using the PRINT command on all sorts of different things: two numbers added together, two numbers multiplied together, even several
decimal numbers. You stop typing out PRINT and instead use ?, since the manual has advised you that
? is an abbreviation for PRINT often used by expert programmers. You feel like an expert already, but
then you remember that you haven’t even made it to chapter three, “Beginning BASIC Programming.”
…
I had an Amstrad CPC, myself, but I had friends with C64s and ZX Spectrums and – being slightly older than the author – I got the
opportunity to experiment with BASIC programming on all of them (and went on to write all manner of tools on the CPC 464, 664, and 6128 models). I’m fortunate to have been able to get
started in programming in an era when your first experience of writing code didn’t have to start with an
examination of the different language choices nor downloading and installing some kind of interpreter or compiler: microcomputers used to just drop you at a prompt which
was your interpreter! I think it’s a really valuable experience for a child to have.
Fundamentally, people haven’t changed much in tens of thousands of years. If ancient Egyptians had smartphones, you know full well that they’d have been posting cat pictures too. What
can we learn from this and how should we look at our role when developing front-end Web experiences?
A century ago, one of the world’s first hackers used Morse code insults to disrupt a public demo of Marconi’s wireless telegraph
Late one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John
Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the
Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles
away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.
Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it
changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked – and this was more than 100 years before the mischief playing out on
the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?
…
An early example of hacking and a great metaphor for what would later become hacker-culture, found in the history of the wireless telegraph.
Underappreciated video presentation on how only a few small changes to the timeline of the Internet and early Web result in a completely different set of technologies and companies
becoming dominant today. Thought-provoking.
Today, just about all monitors and screens are digital (typically using LCD or plasma technology), but a decade or two ago, computer displays were based on the analog technology
inherited from TV sets.
These analog displays were constructed around Cathode Ray Tubes (commonly referred to as CRTs).
Analog TV has a fascinating history from when broadcasts first started (in black and white), through to the adoption of color TV (using a system totally backwards-compatible with
the earlier monochrome standard), through to cable, and now digital.
Analog TV transmissions and their display technology really were clever inventions (and the addition of colour is another inspiring innovation). It’s worth taking a look at how
these devices work, and how they were designed, using the technology of the day.
After a couple of false starts, an analog colour TV system that was backwards-compatible with black and white became standard in 1953, and remained unchanged until the takeover by
digital TV broadcasts in the early 2000s.
Nowadays, fraudulent online stock-trading schemes are common. But even before the first electric telegraph, two bankers committed the equivalent of modern-day Internet stock fraud.
…
Fabulous article from 1999 about how two bankers in 1837 hacked additional data into the fledgling telegraph system to surreptitiously (and illicitly) send messages to give
them an edge at the stock exchange. Their innovative approach is similar to modern steganographic systems that hide information in headers, metadata, or within the encoding of invisible characters.