With IndieWebCamp Oxford 2019 scheduled to take place during the
Summer of Hacks, I drew a diagram (click to embiggen) of the current ecosystem that powers and propagates the
content on DanQ.me. It’s mostly for my own benefit – to be able to get a big-picture view of the ways my website talks to the world and plan for what improvements I
might be able to make in the future… but it also works as a vehicle to explain what my personal corner of the IndieWeb does and how it does it.
Here’s a summary:
DanQ.me
For fifteen years – as of today! – DanQ.me has been powered by a self-hosted WordPress installation. I
know that WordPress isn’t “hip” on the IndieWeb this week and that if you’re not on the JAMstack you’re yesterday’s news, but at 15 years and counting my
love affair with WordPress has lasted longer than any romantic relationship I’ve ever had with another human being, so I’m sticking with it. What’s cool in Web technologies comes and
goes, but what’s important is solid, dependable tools that do what you need them to, and between WordPress, half a dozen off-the-shelf plugins and about a dozen homemade ones I’ve got
everything I need right here.
I write articles (long posts like this) and notes (short, “tweet-like” updates) directly into the site, and just occasionally
other kinds of content. But for the most part, different kinds of content come from different parts of the ecosystem, as described below.
RSS reader
DanQ.me sits at the centre of the diagram, but it’s worth remembering that the diagram is deliberately incomplete: it only contains information flows directly relevant to my blog (and
it doesn’t even contain all of those!). The last time I tried to draw a diagram like this that described my online life in general, my RSS reader found its way to the centre. Which figures: my RSS reader is usually the first
and often the last place I visit on the Internet, and I’ve worked hard to funnel everything through it.
Right now I’m using FreshRSS – plus a handful of plugins, including some homemade ones – as my RSS reader: I switched from Tiny Tiny RSS about a year ago to take advantage of FreshRSS’s excellent responsive
themes, among other features. Because some websites don’t have RSS feeds, even where they ought to, I use my own tool
RSSey to retroactively “fix” people’s websites for them, dynamically adding feeds for my
consumption. It’s also a nice reminder that open source and remixability were cornerstones of the original Web. My RSS reader
collates information from a variety of sources and additionally gives me a one-click mechanism to push content I enjoy to my blog as a repost.
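The RSSey idea – generating a feed for a page that doesn’t provide one by scraping its links – can be sketched in a few lines. This is illustrative only: the markup, selectors and URLs are invented, and RSSey itself does considerably more.

```python
# Toy sketch of the RSSey idea: scrape a feedless page's article links
# and wrap them in a minimal RSS 2.0 document. Selectors are invented.
from html.parser import HTMLParser
from xml.sax.saxutils import escape

class ArticleLinkParser(HTMLParser):
    """Collect (href, title) pairs from <a class="article"> links."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._pending_href = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "article" in attrs.get("class", "").split():
            self._pending_href = attrs.get("href", "")

    def handle_data(self, data):
        if self._pending_href is not None and data.strip():
            self.links.append((self._pending_href, data.strip()))
            self._pending_href = None

def links_to_rss(page_title, links):
    """Wrap scraped links in a minimal RSS 2.0 document."""
    items = "".join(
        f"<item><title>{escape(title)}</title><link>{escape(href)}</link></item>"
        for href, title in links
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        f"<title>{escape(page_title)}</title>{items}</channel></rss>"
    )

# Invented sample markup standing in for a feedless website:
sample_html = """
<ul>
  <li><a class="article" href="https://example.com/post-1">First post</a></li>
  <li><a class="article" href="https://example.com/post-2">Second post</a></li>
</ul>
"""
parser = ArticleLinkParser()
parser.feed(sample_html)
feed = links_to_rss("Example Site (scraped)", parser.links)
```

Point the reader at the generated feed’s URL and, as far as it’s concerned, the site has had RSS all along.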
QTube
QTube is my video hosting platform; it’s a PeerTube node. If you haven’t seen it, that’s fine: most content
on it is consumed indirectly either through my YouTube channel or directly on my blog as posts of the “video” kind. Also, I don’t actually vlog very often. When I do publish videos onto QTube, their republication onto YouTube or DanQ.me is optional: sometimes I plan to
use a video inside an article post, for example, and so don’t need to republish it by itself.
I’m gradually exporting or re-uploading my backlog of YouTube videos from my current and previous channels to QTube in an effort to
recentralise and regain control over their hosting, but I’m in no real hurry. PeerTube certainly makes it easy, though!
Link Shortener
I operate a private link shortener which I mostly use for the expected purpose: to make links shorter and so easier to read out and memorise or else to make them take up less space in a
chat window. But soon after I set it up, many years ago, I realised that it could also act as a mechanism to push content to my RSS reader to “read later”. And by the time I’m using it for that, I figured, I might as well also be using it to repost content to my blog
from sources that aren’t things my RSS reader subscribes to. This leads to a process that’s perhaps unnecessarily
complex: if I want to share a link with you as a repost, I’ll push it into my link shortener and mark it as going “to me”, then I’ll tell my RSS reader to push it to my blog and there it’ll be published to the world! But it works and it’s fast enough: I’m not in the habit
of reposting things that are time-critical anyway.
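As a data-flow sketch, here’s a toy version of that shortener-as-repost-inbox arrangement: links get short slugs, some are tagged “to me”, and only those are exposed for the RSS reader to pick up and push onward. The class, slug scheme, and base URL are all hypothetical; my real shortener is a separate web application.

```python
# Toy model of the "link shortener as repost inbox" flow. All names
# and the base URL are invented for illustration.
import string

ALPHABET = string.ascii_lowercase + string.digits  # base-36 slug alphabet

def encode_id(n):
    """Base-36 encode a numeric row id into a short slug."""
    slug = ""
    while True:
        n, remainder = divmod(n, 36)
        slug = ALPHABET[remainder] + slug
        if n == 0:
            return slug

class Shortener:
    """Stores links with an 'audience' tag alongside the short slug."""
    def __init__(self, base_url="https://s.example/"):
        self.base_url = base_url
        self.links = []  # (slug, url, audience) tuples

    def shorten(self, url, audience="public"):
        slug = encode_id(len(self.links) + 1)
        self.links.append((slug, url, audience))
        return self.base_url + slug

    def to_me_feed(self):
        """Only the links marked 'to me': what the RSS reader reposts."""
        return [url for _, url, audience in self.links if audience == "to me"]

s = Shortener()
repost = s.shorten("https://example.com/interesting-article", audience="to me")
s.shorten("https://example.com/just-a-short-link")
```

The “to me” feed is what my RSS reader subscribes to; everything else behaves like an ordinary short link.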
Checkins
I’ve been involved in brainstorming ways in which the act of finding (or failing to find, etc.) a geocache or reaching (or failing to
reach) a geohashpoint could best be represented as a “checkin”, and last year I open-sourced my plugin for pulling logs (with as much automation as is permitted by the terms of service of some of the
silos involved) from geocaching websites and posting them to WordPress blogs: effectively PESOS-for-geocaching. I’d prefer to be publishing on my own blog in the first instance, but syndicating my adventures from various
silos into my blog is “good enough”.
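The PESOS flow boils down to: take a log pulled from a geocaching silo and shape it into a payload for WordPress’s REST API (`POST /wp-json/wp/v2/posts`). The field names in `log` below are invented for illustration; the real plugin does much more, and much more carefully.

```python
# PESOS-for-geocaching as a data-flow sketch: silo log in, WordPress
# REST API post payload out. The `log` fields are hypothetical.
import json

# A found-it log as it might come back from a silo:
log = {
    "cache_name": "GC8XXXX Example Cache",
    "result": "Found it",
    "date": "2019-08-01",
    "text": "Quick find on the way past. TFTC!",
}

def log_to_post(log):
    """Shape a silo log into a WordPress REST API post payload."""
    return {
        "title": f"{log['result']}: {log['cache_name']}",
        "content": log["text"],
        "date": log["date"] + "T00:00:00",
        "status": "publish",
    }

# In real use this would be POSTed (authenticated) to
# /wp-json/wp/v2/posts; here we just build the body.
payload = json.dumps(log_to_post(log))
```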
Syndication
New notes get pushed out to my Twitter account, for the benefit of my Twitter-using friends. Articles get advertised on Facebook, Twitter and LiveJournal (yes, really) in teaser form, for the benefit of friends
who prefer to get notifications via those platforms. Facebook have been fucking around with their APIs and terms of
service lately and this is now less-automatic than it used to be, which is a bit of an annoyance. My RSS feeds carry copies
of content out to people who prefer to subscribe via that medium, and I’ve also been using this to power an experimental MailChimp “daily digest” mailing list of “what Dan’s been up to”
to a small number of friends, right in their email inboxes: I’ve not made it available to everybody yet, but if you’re happy to help test it then give me a shout
and I’ll hook you up.
Finally, a couple of IFTTT recipes push my articles and my reposts to Reddit communities: I don’t
really use Reddit myself, any more, but I’ve got friends in a few places there who prefer to keep up-to-date with what I’m up to via that medium. For historical reasons, my reposts to
Reddit don’t go directly via my blog’s RSS feeds but “shortcut” directly from my RSS reader: this is suboptimal because I don’t get to tweak post titles for Reddit but it’s not a big deal.
I used to syndicate content to Google+ (before it joined the long list of Things Google Have Killed) and to Ello
(but it never got much traction there). I’ve probably historically syndicated to other places too: I’ve certainly manually-republished content to other blogs, from time to time, too.
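The outbound half of all this syndication can be sketched as a loop over the blog’s own RSS feed, building a teaser for each not-yet-syndicated item – roughly what the IFTTT recipes do on my behalf. A toy version, run against a sample feed rather than the live one:

```python
# Sketch of teaser-based syndication: walk the blog's feed, build a
# short teaser for anything not yet syndicated. Sample data only.
import xml.etree.ElementTree as ET

# A stand-in for the blog's real RSS feed:
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>DanQ.me</title>
<item><title>New article</title><link>https://danq.me/new-article</link></item>
<item><title>Old article</title><link>https://danq.me/old-article</link></item>
</channel></rss>"""

already_syndicated = {"https://danq.me/old-article"}

def teasers(feed_xml, seen, limit=280):
    """Build teaser posts for feed items we haven't syndicated yet."""
    root = ET.fromstring(feed_xml)
    out = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        if link and link not in seen:
            out.append(f"New post: {title} {link}"[:limit])
            seen.add(link)
    return out

posts = teasers(SAMPLE_FEED, already_syndicated)
```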
I use Ryan Barrett‘s excellent Brid.gy to convert Twitter replies and likes back into Webmentions for publication as comments on my blog. This used to work for Facebook, too, but again: Facebook
fucked it over. I’ve occasionally manually backfed significant Facebook comments, but it’s not ideal: I might like to look at using similar technologies to RSSey to subvert
Facebook’s limitations.
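For the curious, the Webmention mechanics that Brid.gy handles are pleasantly simple: discover the target page’s webmention endpoint (advertised via a `rel="webmention"` link) and POST it a form-encoded `source` and `target`. Here’s the discovery step sketched against a sample page, with hypothetical URLs standing in for a real reply and post:

```python
# Webmention in miniature: find the endpoint, build the notification.
# The sample page and both URLs in the payload are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlencode

class EndpointFinder(HTMLParser):
    """Find the first rel="webmention" endpoint in a page."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rels = attrs.get("rel", "").split()
        if tag in ("link", "a") and "webmention" in rels and self.endpoint is None:
            self.endpoint = attrs.get("href")

# Sample page standing in for a real fetch of the target post:
sample_page = (
    '<html><head>'
    '<link rel="webmention" href="https://danq.me/webmention">'
    '</head></html>'
)
finder = EndpointFinder()
finder.feed(sample_page)

# The notification itself is just a form-encoded POST to the endpoint:
payload = urlencode({
    "source": "https://twitter.com/example/status/123",  # hypothetical reply
    "target": "https://danq.me/some-post",               # hypothetical post
})
```

The receiving blog then fetches `source`, verifies it really links to `target`, and publishes the comment.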
Reintegration
I’ve routinely retroactively reintegrated content that I’ve produced elsewhere on the Web. This includes my previous blogs (which is why you can browse my archives, right here on this
site, all the way back to some of the cringeworthy angsty-teenager posts I made in the 1990s) but also some Reddit posts,
some replies originally posted directly to other people’s blogs, all my old del.icio.us bookmarks, long-form forum
posts, posts I made to mailing lists and newsgroups, and more. As a result, there’s a lot of backdated content on this site, nowadays: almost a million words, and significantly
more than the 600,000 or so I counted a few years ago, before my biggest push for reintegration!
Why do I do this? Because I really, really like owning my identity online! I’ve tried the “big” silo alternatives like Facebook, Twitter, Medium, Instagram etc., and they’ve eventually
always led to disappointment, either because they get shut down or otherwise made-unusable, because
of inappropriately-applied “real names” policies, because they give too much power to
untrustworthy companies, because they impose arbitrary limitations on my content, because they manipulate output
promotion (and exacerbate filter bubbles), or because they make the walls of their walled gardens taller and stop you integrating with them how you used to.
A handful of silos have shown themselves to be more-trustworthy than the average – in particular, eschewing techniques that promote “lock-in” – and I’d love to tell you more about them
and what I think you should look for in a silo, another time. But for now: suffice to say that just like I don’t use YouTube like most people do, I
elect not to use Facebook or Twitter in the conventional ways either. And it’s awesome, thanks.
There are plenty of reasons that people choose to take control of their own Web presence – and everybody who puts content online ought to consider
it – but I imagine that few individuals have such a complicated publishing ecosystem as I do! Now that you’ve got a picture of how my digital content production workflow works, perhaps
you’ll start owning your online identity, too.
We might never have been very good at keeping track of the exact date our relationship began in Edinburgh twelve years ago, but that doesn’t
stop Ruth and I from celebrating it, often with a trip away very-approximately in the summer. This year, we marked the occasion with a return to Scotland, cycling our way around and between Glasgow and Edinburgh.
Even sharing a lightweight conventional bike and a powerful e-bike, travelling under your own steam makes you pack lightly. We were able to get everything we needed – including packing
for the diversity of weather we’d been told to expect – in a couple of pannier bags and a backpack, and pedalled our way down to Oxford Parkway station to start our journey.
In anticipation of our trip and as a gift to me, Ruth had arranged for tickets on the Caledonian Sleeper train from London
to Glasgow and returning from Edinburgh to London to bookend our adventure. A previous sleeper train ticket she’d purchased, for Robin as part of
Challenge Robin II, had led to enormous difficulties when the train got cancelled… but how often can sleeper trains get cancelled, anyway?
Turns out… more-often than you’d think. We cycled across London and got to Euston Station just in time to order dinner and pour a glass of wine before we received an email to let
us know that our train had been cancelled.
Station staff advised us that instead of a nice fast train full of beds they’d arranged for a grotty slow bus full of disappointment. It took quite a bit of standing-around and waiting
to speak to the right people before anybody could even confirm that we’d be able to stow our bikes on the bus, without which our plans would have been completely scuppered. Not a great
start!
Eight uncomfortable hours of tedious motorway (and the opportunity to wave at Oxford as we went back past it) and two service stations later, we finally reached Glasgow.
Despite being tired and in spite of the threatening stormclouds gathering above, we pushed on with our plans to explore Glasgow. We opted to put our trust into random exploration –
aided by responses to weirdly-phrased questions to Google Assistant about what we should see or do – to deliver us serendipitous discoveries, and this plan worked well for us. Glasgow’s
network of cycle paths and routes seems to be effectively-managed and sprawls across the city, and getting around was incredibly easy (although it’s hilly enough that I found plenty of
opportunities to require the lowest gears my bike could offer).
We kicked off by marvelling at the extravagance of the memorials at Glasgow Necropolis, a sprawling 19th-century cemetery covering an
entire hill near the city’s cathedral. Especially towards the top of the hill the crypts and monuments give the impression that the dead were competing as to who could leave the
most-conspicuous marker behind, but there are gems of subtler and more-attractive Gothic architecture to be seen, too. Finding a convenient nearby geocache completed the experience.
Pushing on, we headed downriver in search of further adventure… and breakfast. The latter was provided by the delightful Meat Up Deli, who make a spectacularly-good omelette. There, in
the shadow of Partick Station, Ruth expressed surprise at the prevalence of railway stations in Glasgow; she, like many folks, hadn’t known that Glasgow is served by an underground train network. But I too would get to learn things I hadn’t known about the subway at our next destination.
We visited the Riverside Museum, whose exhibitions are dedicated to the history of transport and industry,
with a strong local focus. It’s a terrifically-engaging museum which does a better-than-usual job of bringing history to life through carefully-constructed experiences. We spent much of
the time remarking on how much the kids would love it… but then remembering that the fact that we were able to enjoy stopping and reading the interpretative signage and not just have to
sprint around after the tiny terrors was mostly thanks to their absence! It’s worth visiting twice, if we find ourselves up here in future with the little tykes.
It’s also where I learned something new about the Glasgow Subway: its original implementation – in effect until 1935 – was cable-driven! A steam engine on the South side of the circular
network drove a pair of cables – one clockwise, one anticlockwise, each 6½ miles long – around the loop, between the tracks. To start the train, a driver would pull a lever which would
cause a clamp to “grab” the continuously-running cable (gently, to prevent jerking forwards!); to stop, he’d release the clamp and apply the brakes. This solution resulted in
mechanically-simple subway trains: the system’s similar to that used for some of the surviving parts of San Francisco’s original tram network.
Equally impressive as the Riverside Museum is The Tall Ship accompanying it, comprising the barque Glenlee converted into a floating museum about
itself and about the maritime history of its age.
This, again, was an incredibly well-managed bit of culture, with virtually the entire ship accessible to visitors, right down into the hold and engine room, and with a great amount of
effort put into producing an engaging experience supported by a mixture of interactive replicas (Ruth particularly enjoyed loading cargo into a hoist, which I’m pretty sure was designed
for children), video, audio, historical sets, contemporary accounts, and all the workings of a real, functional sailing vessel.
After lunch at the museum’s cafe, we doubled-back along the dockside to a distillery we’d spotted on the way past. The Clydeside Distillery
is a relative newcomer to the world of whisky – starting in 2017, their first casks are still several years’ aging away from being ready for consumption, but that’s not stopping them
from performing tours covering the history of their building (it’s an old pumphouse that used to operate the swingbridge over the now-filled-in Queen’s Dock) and distillery, culminating
in a whisky tasting session (although not yet including their own single malt, of course).
This was the first time Ruth and I had attended a professionally-organised whisky-tasting together since 2012, when we did so not once
but twice in the same week. Fortunately, it turns out that we hadn’t forgotten how to drink whisky; we’d both kept our hand in in the meantime.
<hic> Oh, and we got to keep our tasting-glasses as souvenirs, which was a nice touch.
Thus far we’d been lucky that the rain had mostly held-off, at least while we’d been outdoors. But as we wrapped up in Glasgow and began our cycle ride down the towpath of the Forth & Clyde Canal, the weather turned quickly through bleak to ugly to downright atrocious. The amber flood warning we’d been given gave way to what forecasters and the media called a “weather bomb”: an hours-long torrential downpour that limited visibility and soaked everything
left out in it.
You know: things like us.
Our bags held up against the storm, thankfully, but despite an allegedly-waterproof covering Ruth and I both got thoroughly drenched. By the time we reached our destination of Kincaid House Hotel we were both exhausted (not helped by a lack of sleep the previous night during our rail-replacement-bus journey) and soaking wet
right through to our skin. My boots squelched with every step as we shuffled uncomfortably like drowned rats into a hotel foyer way too-fancy for bedraggled waifs like us.
We didn’t even have the energy to make it down to dinner, instead having room service delivered to the room while we took turns at warming up with the help of a piping hot bath. If I
can sing the praises of Kincaid House in just one way, though, it’s that the food provided by room service was absolutely on-par with what I’d expect from their restaurant: none of the
half-hearted approach I’ve experienced elsewhere to guests who happen to be too knackered (and in my case: lacking appropriate footwear that’s not filled with water) to drag themselves
to a meal.
Our second day of cycling was to be our longest, covering the 87½ km (54½ mile) stretch of riverside and towpath between Milton of Campsie and our next night’s accommodation on the
South side of Edinburgh. We were wonderfully relieved to discover that the previous day’s epic dump of rain had used-up the clouds’ supply in a single day and the forecast was far more
agreeable: cycling 55 miles during a downpour did not sound like a fun idea for either of us!
Kicking off by following the Strathkelvin Railway Path, Ruth and I were able to enjoy verdant
countryside alongside a beautiful brook. The signs of the area’s industrial past are increasingly well-concealed – a rotting fence made of old railway sleepers here; the remains of a
long-dead stone bridge there – and nature has reclaimed the land dividing this former-railway-now-cycleway from the farmland surrounding it. Stopping briefly for another geocache we made good progress down to Barleybank where we were able to rejoin the canal towpath.
This is where we began to appreciate the real beauty of the Scottish lowlands. I’m a big fan of a mountain, but there’s also a real charm to the rolling wet countryside of the
Lanarkshire valleys. The Forth & Clyde towpath is wonderfully maintained – perhaps even better than the canal itself, which is suffering in patches from a bloom of spring reeds – and
makes for easy cycling.
Outside of moorings at the odd village we’d pass, we saw no boats along most of the inland parts of the Forth & Clyde canal. We didn’t see many joggers, or dog-walkers, or indeed
anybody for long stretches.
The canal was also teeming with wildlife. We had to circumnavigate a swarm of frogs, spotted varied waterfowl including a heron who’d decided that atop a footbridge was the perfect
place to stand and a siskin that made itself scarce as soon as it spotted us, and saw evidence of water voles in the vicinity. The rushes and woodland all around but especially on the
non-towpath side of the canal seemed especially popular with the local fauna as a place broadly left alone by humans.
The canal meanders peacefully, flat and lock-free, around the contours of the Kelvin valley all the way up to the end of the river. There, it drops through Wyndford Lock into the valley
of Bonny Water, from which the rivers flow into the Forth. From a hydrogeological perspective, this is the half-way point between Edinburgh and Glasgow.
Seven years ago, I got the chance to visit the Falkirk Wheel, but Ruth had never
been so we took the opportunity to visit again. The Wheel is a very unusual design of boat lift: a pair of counterbalanced rotating arms swap places to move entire sections of the canal
from the lower to upper level, and vice-versa. It’s significantly faster to navigate than a flight of locks (indeed, there used to be a massive flight of eleven locks a little
way to the East, until they were filled in and replaced with parts of the Wester Hailes estate of Falkirk), wastes no water, and – because it’s always in a state of balance – uses next
to no energy to operate: the hydraulics which push it oppose only air resistance and friction.
So naturally, we took a boat ride up and down the wheel, recharged our batteries (metaphorically; the e-bike’s battery would get a top-up later in the day) at the visitor centre cafe,
and enjoyed listening-in to conversations to hear the “oh, I get it” moments of people – mostly from parts of the world without a significant operating canal network, in their defence –
learning how a pound lock works for the first time. It’s a “lucky 10,000” thing.
Pressing on, we cycled up the hill. We felt a bit cheated, given that we’d just been up and down pedal-free on the boat tour, and this back-and-forth manoeuvre confused my GPSr – which was already having difficulty with our insistence on sticking to the towpath despite all the road-based
“shortcuts” it was suggesting – no end!
From the top of the Wheel we passed through Rough Castle Tunnel and up onto the towpath of the Union Canal. This took us right underneath the remains of the Antonine Wall, the lesser-known sibling of Hadrian’s Wall and the absolute furthest extent, albeit short-lived, of the Roman Empire on
this island. (It only took the Romans eight years to realise that holding back the Caledonian Confederacy was a lot harder work than their replacement plan: giving most of what is now
Southern Scotland to the Brythonic Celts and making the defence of the Northern border into their problem.)
A particular joy of this section of waterway was the Falkirk Tunnel, a very long tunnel broad enough that the towpath follows through it, composed of a mixture of hewn rock and masonry
arches and very variable in height (during construction, unstable parts of what would have been the ceiling had to be dug away, making it far roomier than most narrowboat canal
tunnels).
Wet, cold, slippery, narrow, and cobblestoned for the benefit of the horses that no-longer pull boats through this passage, we needed to dismount and push our bikes through. This proved
especially challenging when we met other cyclists coming in the other direction, especially as our e-bike (as the designated “cargo bike”) was configured in what we came to lovingly
call “fat ass” configuration: with pannier bags sticking out widely and awkwardly on both sides.
This is probably the oldest tunnel in Scotland, known with certainty to predate any of the nation’s railway tunnels. The handrail was added far later (obviously, as it would interfere
with the reins of a horse), as were the mounted electric lights. As such, this must have been a genuinely challenging navigation hazard for the horse-drawn narrowboats it was built to
accommodate!
On the other side the canal passes over mighty aqueducts spanning a series of wooded valleys, and also providing us with yet another geocaching opportunity. We were very selective about our geocache stops on this trip; there
were so many candidates but we needed to make progress to ensure that we made it to Edinburgh in good time.
We took lunch and shandy at Bridge 49 where we also bought a painting depicting one of the bridges on the Union Canal and negotiated with the
proprietor an arrangement to post it to us (as we certainly didn’t have space for it in our bags!), continuing a family tradition of us buying art from and of places we take holidays
to. They let us recharge our batteries (literal this time: we plugged the e-bike in to ensure it’d have enough charge to make it the rest of the way without excessive rationing of
power). Eventually, our bodies and bikes refuelled, we pressed on into the afternoon.
For all that we might scoff at the overly-ornate, sometimes gaudy architecture of the Victorian era – like the often-ostentatious monuments of the Necropolis we visited early in our
adventure – it’s still awe-inspiring to see their engineering ingenuity. When you stand on a 200-year-old aqueduct that’s still standing, still functional, and still objectively
beautiful, it’s easy to draw unflattering comparisons to the things we build today in our short-term-thinking, “throwaway” culture. Even the design of the Falkirk Wheel, whose fate is
directly linked to these duocentenarian marvels, only called for a 120-year lifespan. How old is your house? How long can your car be kept functioning? Long-term thinking has given way
to short-term solutions, and I’m not convinced that it’s for the better.
Eventually, and one further (especially sneaky) geocache later, a total of around 66 “canal miles”, one monsoon, and one sleep
from the Glasgow station where we dismounted our bus, we reached the end of the Union Canal in Edinburgh.
There we checked in to the highly-recommendable 94DR guest house where our host Paul and his dog Molly demonstrated their ability to instantly-befriend
just-about anybody.
We went out for food and drinks at a local gastropub, and took a brief amble part-way up Arthur’s Seat (but not too far… we had just cycled fifty-something miles), of which our
hotel room enjoyed a wonderful view, and went to bed.
The following morning we cycled out to Craigmillar Castle: Edinburgh’s other castle,
and a fantastic (and surprisingly-intact) example of late medieval castle-building.
This place is a sprawling warren of chambers and dungeons with a wonderful and complicated history. I feel almost ashamed to not have even known that it existed before now:
I’ve been to Edinburgh enough times that I feel like I ought to have visited, and I’m glad that I’ve finally had the chance to discover and explore it.
Edinburgh’s a remarkable city: it feels like it gives way swiftly, but not abruptly, to the surrounding countryside, and – thanks to the hills and forests – once you’re outside of
suburbia you could easily forget how close you are to Scotland’s capital.
In addition to a wonderful touch with history and a virtual geocache, Craigmillar Castle also provided us with a
delightful route back to the city centre. “The Innocent Railway” – an 1830s stretch
of the Edinburgh and Dalkeith Railway which retained a tradition of horse-drawn carriages long after they’d gone out of fashion elsewhere – once connected Craigmillar to Holyrood Park
Road along the edge of what is now Bawsinch and Duddington Nature Reserve, and has long since been converted into a cycleway.
Making the most of our time in the city, we hit up a spa (that Ruth had secretly booked as a surprise for me) in the afternoon followed by an escape room – The Tesla Cube – in the evening. The former involved a relaxing soak, a stress-busting massage, and a chill lounge in a
rooftop pool. The latter undid all of the good of this by having us run around frantically barking updates at one another and eventually rocking the week’s highscore for the
game. Turns out we make a pretty good pair at escape rooms.
After a light dinner at the excellent vegan cafe Holy Cow (who somehow sell a banana bread that is vegan, gluten-free, and sugar-free: by the
time you add no eggs, dairy, flour or sugar, isn’t banana bread just a mashed banana?) and a quick trip to buy some supplies, we rode to Waverley Station to find out if we’d at least be
able to get a sleeper train home and hoping for not-another-bus.
We got a train this time, at least, but the journey wasn’t without its (unnecessary) stresses. We were allowed past the check-in gates and to queue to load our bikes into their
designated storage space but only after waiting for this to become available (for some reason it wasn’t immediately, even though the door was open and crew were standing there) were we
told that our tickets needed to be taken back to the check-in gates (which had now developed a queue of their own) and something done to them before they could be accepted. Then they
reprogrammed the train’s digital displays incorrectly, so we boarded coach B but then it turned into coach E once we were inside, leading to confused passengers trying to take one
another’s rooms… it later turned back into coach B, which apparently reset the digital locks on everybody’s doors so some passengers who’d already put their luggage into a room
now found that they weren’t allowed into that room…
…all of which tied-up the crew and prevented them from dealing with deeper issues like the fact that the room we’d been allocated (a room with twin bunks) wasn’t what we’d paid for (a
double room). And so once their seemingly-skeleton crew had solved all of their initial technical problems they still needed to go back and rearrange us and several other customers in a
sliding-puzzle-game into one another’s rooms in order to give everybody what they’d actually booked in the first place.
In conclusion: a combination of bad signage, technical troubles, and understaffing made our train journey South only slightly less stressful than our bus journey North had been. I’ve
sort-of been put off sleeper trains.
After a reasonable night’s sleep – certainly better than a bus! – we arrived in London, ate some breakfast, took a brief cycle around Regent’s Park, and then found our way to Marylebone
to catch a train home.
All in all it was a spectacular and highly-memorable adventure, illustrative of the joy of leaving planning to good-luck, the perseverance of wet cyclists, the ingenuity of Victorian
engineers, the beauty of the Scottish lowlands, the cycle-friendliness of Glasgow, and – sadly – the sheer incompetence of the operators of sleeper trains.
I was watching a recent YouTube video by Derek Muller (Veritasium),
My Video Went Viral. Here’s Why, and I came to a realisation: I don’t watch YouTube like most people
– probably including you! – watch YouTube. And as a result, my perspective on what YouTube is and does is fundamentally biased from the way that others probably think
about it.
The magic moment came for me when his video explained that the “subscribe” button doesn’t do what I’d assumed it does. I’m probably not alone in my assumptions: I’ll
bet that people who use the “subscribe” button as YouTube intends don’t all realise that it works the way that it does.
Like many, I’d assumed the “subscribe” button says “I want to know about everything this creator publishes”. But that’s not what actually happens. YouTube wrangles your subscription
list and (especially) your recommendations based on their own metrics using an opaque algorithm. I knew, of course, that they used such a thing to manage the list of recommended
next-watches… but I didn’t realise how big an influence it was having on the way that most YouTube users choose what they’ll watch!
YouTube’s metric for “what to show to you” is, of course, biased by your subscriptions. But it’s also biased by what’s “trending” (which in turn is based on watch time and
click-through-rate), what people-who-watch-the-things-you-watch watch, subscription commonalities, regional trends, what your contacts are interested in, and… who knows what else! AAA
YouTubers try to “game” it, but the goalposts are moving. And the struggle to stay on-top, especially after a fluke viral hit, leads to the application of increasingly desperate and
clickbaity measures.
This is a battle to which I’ve been mostly oblivious, until now, because I don’t watch YouTube like you watch YouTube.
Tom Scott produced an underappreciated sci-fi short last year describing a
theoretical AI which, in 2028, caused problems as a result of its single-minded focus. What we’re seeing in YouTube right
now is a simpler example, but illustrates the problem well: optimising YouTube’s algorithm for any factor or combination of factors other than a user’s stated preference (subscriptions)
will necessarily result in the promotion of videos to a user other than, and at the expense of, the ones by creators that they’ve subscribed to. And there are so many things
that YouTube could use as influencing factors. Off the top of my head, there’s:
Number of views
Number of likes
Ratio of likes to dislikes
Number of tracked shares
Number of saves
Length of view
Click-through rate on advertisements
Recency
Subscriber count
Subscriber engagement
Popularity amongst your friends
Popularity amongst your demographic
Click-through-ratio
Etc., etc., etc.
But this is all alien to me. Why? Well: here’s how I use YouTube:
Subscription: I subscribe to creators via RSS. My RSS reader doesn’t implement YouTube’s algorithm, of course, so it just gives me exactly what I subscribe to – no more, no less. It’s not perfect
(for example, it pisses me off every time it tells me about an upcoming “premiere”, a YouTube feature I don’t care about even a little), but apart from that it’s great! If I’m
on-the-move and can’t watch something as long and involved as TheraminTrees‘ latest deep-thinker, my RSS reader remembers so I can watch it later at my convenience. I can have National Geographic‘s videos “expire” if I don’t watch them within a week but Dr. Doe‘s wait for me forever. And I can implement my own filters if a feed isn’t showing exactly what I’m looking for (like I did to
strip the sport section from BBC News’ RSS feed). I’m in control.
Discovery: I don’t rely on YouTube’s algorithm to find me new content. I don’t mind being a day or two behind on what’s trending: I’m not sure I care at all? I’m far
more-interested in recommendations curated by a human. If I discover and subscribe to a channel on YouTube, it was usually (a) mentioned by another YouTuber or (b)
linked from a blog or community blog. I’m receiving recommendations from people I already respect, and they have a way higher hit-rate than YouTube’s recommendations.(I also sometimes
discover content because it’s exactly what I searched for, e.g. I’m looking for that tutorial on how to install a fiddly damn kiddy seat into the car, but this is unlikely to result
in a subscription.)
This isn’t yet-another-argument that you should use RSS because it’s awesome. (Okay, so it is. RSS isn’t dead, and its killer feature is that its users get to choose
how it works. But there’s more I want to say.)
What I wanted to share was this reminder, for me, that the way you use a technology can totally blind you to the way others use it. I had no idea that many YouTube
creators and some YouTube subscribers felt increasingly like they were fighting YouTube’s algorithms, whose goals are different from their own, to get what they want. Now I can see it
everywhere! Why do schmoyoho always encourage me to press the notification bell and not just the subscribe button? Because for a
typical YouTube user, that’s the only way that they can be sure that their latest content will be seen!
Of course, the business needs of YouTube mean that we’re not likely to see any change from them. So until either we have mainstream content-curating AIs that answer to their human owners rather than to commercial networks (robot butler, anybody?) or else the video fediverse catches on – and I don’t know which of those two is less-likely! – I guess I’ll stick to my algorithm-lite
subscription model for YouTube.
But at least now I’ll have a better understanding of why some of the channels I follow are changing the way they produce and market their content…
I love RSS, but it’s a minor niggle for me that if I subscribe to any of the
BBC News RSS feeds I invariably get all the sports
news, too. Which’d be fine if I gave even the slightest care about the world of sports, but I don’t.
Filters it to remove all entries whose GUID matches a particular regular expression (in practice, everything from the “sport” section of the
site)
Outputs the resulting feed into a temporary file
Uploads the temporary file to a bucket in Backblaze‘s “B2” repository (think: a better-value competitor to S3); the bucket I’m using is
publicly-accessible so anybody’s RSS reader can subscribe to the feed
I like the versatility of the approach I’ve used here and its ability to perform arbitrary mutations on the feed. And I’m a big fan of Nokogiri. In some ways, this could be considered a
lower-impact, less real-time version of my tool RSSey. Aside from the fact that it won’t (easily) handle websites that require JavaScript, this
approach could probably be used in exactly the same ways as RSSey, and with significantly less set-up: I might look into whether its functionality can be made more-generic so I can
start using it in more places.
When I write a blog post, it generally becomes a static thing: its content always
usually stays the same for the rest of its life (which is, in my case, pretty much forever). But sometimes, I go back and make an
amendment. When I make minor changes that don’t affect the overall meaning of the work, like fixing spelling mistakes and repointing broken links, I just edit the page, but for
more-significant changes I try to make it clear what’s changed and how.
Historically, I’d usually marked up deletions with the HTML <strike>/<s> elements (or
other visually-similar approaches) and insertions by clearly stating that a change had been made (usually accompanied by the date and/or time of the change), but this isn’t a good
example of semantic code. It also introduces an ambiguity when it clashes with the times I use <s> for comedic effect in the Web equivalent of the old caret-notation joke:
Be nice to this fool^H^H^H^Hgentleman, he's visiting from corporate HQ.
Better, then, to use the <ins> and <del> elements, which were designed for exactly this purpose and even accept attributes to specify the date/time
of the modification and to cite a resource that explains the change, e.g. <ins datetime="2019-05-03T09:00:00+00:00"
cite="https://alices-blog.example.com/2019/05/03/speaking.html">The last speaker slot has now been filled; thanks Alice</ins>. I’ve worked to retroactively add such
semantic markup to my historical posts where possible, but it’ll be an easier task going forwards.
Of course, no browser I’m aware of supports these attributes, which is a pity because the metadata they hold may well have value to a reader. In order to expose them I’ve added a little
bit of CSS that looks a little like this, which makes their details (where available) visible as a sort-of tooltip when hovering
over or tapping on an affected area. Give it a go with the edits at the top of this post!
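The gist of that CSS is to surface the datetime attribute via attr(). Here’s a sketch of the kind of rule involved – the selectors and cosmetic values are illustrative rather than my exact stylesheet:

```css
/* Show an <ins>/<del> element's modification date as a hover "tooltip".
   Selectors and cosmetic values here are illustrative only. */
ins, del { position: relative; }

ins[datetime]:hover::after,
del[datetime]:hover::after {
  content: "Edited: " attr(datetime);
  position: absolute;
  bottom: 100%;
  left: 0;
  background: #222;
  color: #fff;
  padding: 0.25em 0.5em;
  font-size: 0.75em;
  white-space: nowrap;
}
```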
I’m aware that the intended use-case of <ins>/<del> is change management, and that the expectation is that the “final” version of a
document wouldn’t be expected to show all of the changes that had been made to it. Such a thing could be simulated, I suppose, by appropriately hiding and styling the
<ins>/<del> blocks on the client-side, and that’s something I might look into in future, but in practice my edits are typically small and rare
enough that nobody would feel inconvenienced by their inclusion/highlighting: after all, nobody’s complained so far and I’ve been doing exactly that, albeit in a non-semantic way, for
many years!
I’m also slightly conscious that my approach to the “tooltip” might cause it to obstruct interactivity with something directly above an insertion or deletion: e.g. making a hyperlink
inaccessible. I’ve tested with a variety of browsers and devices and it doesn’t seem to happen (my line height works in my favour) but it’s something I’ll need to be mindful of if I
change my typographic design significantly in the future.
A final observation: I love the CSS attr() function, and I’ve been using it (and counter()) for all
kinds of interesting things lately, but it annoys me that I can only use it in a content: statement. It’d be amazingly valuable to be able to treat integer-like attribute
values as integers and combine it with a calc() in order to facilitate more-dynamic styling of arbitrary sets of HTML elements. Maybe one day…
For the time being, I’m happy enough with my new insertion/deletion markers. If you’d like to see them in use in their natural environment, see the final paragraph of my 2012 review of The Signal and The Noise.
I’m increasingly convinced that Friedemann Friese‘s 2009 board game Power Grid: Factory Manager (BoardGameGeek) presents gamers with a highly-digestible model of the energy economy in a capitalist society.
In Factory Manager, players aim to financially-optimise a factory over time, growing production and delivery capacity through upgrades in workflow, space, energy, and staff
efficiency. An essential driving factor in the game is that energy costs will rise sharply throughout. Although it’s not always clear in advance when or by how much, this increase in
the cost of energy is always at the forefront of the savvy player’s mind as it’s one of the biggest factors that will ultimately impact their profit.
Given that players aim to optimise for turnover towards the end of the game (and as a secondary goal, for the tie-breaker: at a specific point five rounds after the game begins) and not
for business sustainability, the game perhaps-accidentally reasonably-well represents the idea of “flipping” a business for a profit. Like many business-themed games, it favours
capitalism… which makes sense – money is an obvious and quantifiable way to keep score in a board game! – but it still bears repeating.
There’s one further mechanic in Factory Manager that needs to be understood: a player’s ability to control the order in which they take their turn and their capacity to
participate in the equipment auctions that take place at the start of each round is determined by their manpower-efficiency in the previous round. That is: a player who
operates a highly-automated factory running on a skeleton staff benefits from being in the strongest position for determining turn order and auctions in their next turn.
The combination of these rules leads to an interesting twist: in the final turn – when energy costs are at their highest and there’s no benefit to holding-back staff to
monopolise the auction phase in the nonexistent subsequent turn – it often makes most sense strategically to play what I call the “sweatshop strategy”. The player switches off
the automated production lines to save on the electricity bill, drags in all the seasonal workers they can muster, dusts off the old manpower-inefficient machines mouldering in the
basement, and gets their army of workers cranking out widgets!
With indefinitely-increasing energy prices and functionally-flat staff costs, the rules of the game will always eventually reach the point at which it is most cost-effective
to switch to cheap labour rather than robots. But Factory Manager‘s fixed duration means that this moment often comes for all players, in many games, at the same
predictable point: a tipping point at which the free market backslides from automation to human labour to keep itself alive.
There are parallels in the real world. Earlier this month, Tim Watkins wrote:
The demise of the automated car wash may seem trivial next to these former triumphs of homo technologicus but it sits on the same continuum. It is just one of a gathering
list of technologies that we used to be able to use, but can no longer express (through market or state spending) a purpose for. More worrying, however, is the direction in which we
are willingly going in our collective decision to move from complexity to simplicity. The demise of the automated car wash has not followed a return to the practice of people
washing their own cars (or paying the neighbours’ kid to do it). Instead we have more or less happily accepted serfdom (the use of debt and blackmail to force people to work) and
slavery (the use of physical harm) as a reasonable means of keeping the cost of cleaning cars to a minimum (similar practices are also keeping the cost of food down in the UK).
This, too, is precisely what is expected when the surplus energy available to us declines.
I love Factory Manager, but after reading Watkins’ article, it’ll probably feel a little different to play it, now. It’s like that moment when, while reading the rules, I first
poured out the pieces of Puerto Rico. Looking through them, I thought for a moment about what the “colonist”
pieces – little brown wooden circles brought to players’ plantations on ships in a volume commensurate with the commercial demand for manpower – represented. And that realisation adds
an extra message to the game.
Beneath its (fabulous) gameplay, Factory Manager carries a deeper meaning encouraging the possibility of a discussion about capitalism, environmentalism, energy, and
sustainability. And as our society falters in its ability to fulfil the techno-utopian dream, that’s perhaps a discussion we need to be having.
But for now, go watch Sorry to Bother You, where you’ll find further parallels… and at least you’ll get to laugh as you do
so.
As I’ve previously mentioned (sadly), Microsoft Edge is to drop its own rendering engine EdgeHTML and replace it with Blink, Google’s one (more
of my and others’ related sadness here, here, here, and here). Earlier this month, Microsoft made available the first prerelease versions of the browser, and I gave it a go.
All of the Chrome-like features you’d expect are there, including support for Chrome plugins, but Microsoft have also clearly worked to integrate as many as possible of the
features that they felt were distinctive to Edge. For example, Edge Blink supports SmartScreen filtering and uses Microsoft accounts for sync, and Incognito is of
course rebranded InPrivate.
But what really interested me was the approach that Edge Dev has taken with Progressive Web Apps.
Edge Dev may go further than any other mainstream browser in its efforts to make Progressive Web Apps visible to the user, putting a plus sign (and sometimes an extended
install prompt) right in the address bar, rather than burying it deep in a menu. Once installed, Edge PWAs “just work” in
exactly the way that PWAs ought to, providing a simple and powerful user experience. Unlike some browsers, which
make installing PWAs on mobile devices far easier than on desktops, presumably in a misguided belief in the importance of
mobile “app culture”, it doesn’t discriminate against desktop users. It’s a slick and simple user experience all over.
Feature support is stronger than it is for Progressive Web Apps delivered as standalone apps via the Windows Store, too, with the engine not falling over at the first sign of a modal
dialog, for example. Hopefully (as I support one of these hybrid apps!) these too will begin to be handled properly when Edge Dev eventually achieves mainstream availability.
But perhaps most-impressive is Edge Dev’s respect for the importance of URLs. If, having installed the progressive “app”
version of a site, you subsequently revisit any address within its scope, you can switch to the app version via a link in the menu. I’d rather have seen a nudge in the address bar, where
the user might expect to see such things (based on that being where the original install icon was), but this is still a great feature… especially given that cookies and other
state maintainers are shared between the browser and the app, meaning that performing such a switch in a properly-made application will result in the user carrying on from almost exactly where they
left off.
Similarly, and also uncommonly forward-thinking, Progressive Web Apps installed as standalone applications from Edge Dev enjoy a “copy URL” option in their menu, even if the app runs without an address bar (e.g. as a result of a "display": "standalone" directive
in the manifest.json). This is a huge boost to sharability and is enormously (and unusually) respectful of the fact that addresses are the
Web’s killer feature! Furthermore, it respects the users’ choice to operate their “apps” in whatever way suits them best: in a browser (even a competing browser!), on their
mobile device, or wherever. Well done, Microsoft!
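For reference, the manifest.json directive being described is just a top-level member of the Web App Manifest; the other values in this fragment are illustrative, not DanQ.me’s actual manifest:

```json
{
  "name": "Example Site",
  "start_url": "/",
  "display": "standalone",
  "scope": "/"
}
```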
I’m still very sad overall that Edge is becoming part of the Chromium family of browsers. But if the silver lining is that we get a pioneering and powerful new Progressive Web App
engine then it can’t be all bad, can it?
You’ve probably seen the news about people taking a technological look at the issue of consent, lately. One thing that’s been
getting a lot of attention is the Tulipán Placer Consentido, an Argentinian condom which comes in a packet that requires the cooperation of two pairs of hands to open it.
One fundamental flaw with the concept that nobody seems to have pointed out (unless perhaps in Spanish), is that – even assuming the clever packaging works perfectly – all that you can
actually consent to with such a device is the use of a condom. Given that rape can be and often is committed coercively rather than physically – e.g. through fear, blackmail,
or obligation rather than by force – consent to use of a condom by one of the parties shouldn’t be conflated with consent to a sexual act: it may just be preferable to it
without, if that seems to be the alternative.
Indeed, all of these technical “solutions” to rape seem to focus on the wrong part of the process. Making sure that an agreement is established isn’t a hard problem,
algorithmically-speaking (digital signatures with split-key cryptography have given us perhaps the strongest possible solution to the problem for forty years now)! The hard problem here
is in getting people to think about what rape is and to act appropriately to one another. Y’know: it’s a people problem, not a technology problem! (Unshocker.)
But even though they’re perhaps functionally-useless, I’m still glad that people are making these product prototypes. As the news coverage kicked off by the #MeToo movement wanes, it’s valuable to keep that wave of news going: the issues faced by the victims of sexual assault and rape
haven’t gone away! Products like these may well be pointless in the real world, but they’re a vehicle to
keep talking about consent and its importance. Keeping the issue in the limelight is helpful, because it forces people to continually re-evaluate their position on sex and
consent, which makes for a healthy and progressive society.
So I’m looking forward to whatever stupid thing we come up with next. Bring it on, innovators! Just don’t take your invention too seriously: you’re not going to “fix” rape with
it, but at least you can keep us talking about it.
I’m not sure that I process death in the same way that “normal” people do. I blame my family.
When my grandmother died in 2006 I was just in the process of packing up the car with Claire to
try to get up to visit her before the inevitable happened. I received the phone call to advise me that she’d passed, and – ten emotional
minutes later – Claire told me that she’d “never seen anybody go through the five stages of grief as fast as that
before”. Apparently I was a textbook example of the Kübler-Ross model, only at speed. Perhaps I should volunteer to stand in front of introductory psychology classes and feel things, or
something.
Since my dad’s death seven years ago, I’ve marked Dead Dad Day every 19 February in a way that’s definitely “mine”: with a pint or three of Guinness
(which my dad enjoyed… except if there were a cheaper Irish stout on draught because he never quite shook off his working-class roots) and some outdoors and ideally a hill, although
Oxfordshire makes the latter a little difficult. On the second anniversary of my dad’s death, I commemorated his love of setting out and checking the map later by making
my first geohashing expedition: it seemed appropriate that even without him, I could make a journey without either of us
being sure of either the route… or the destination.
As I implied at his funeral, I’ve always been far more-interested in celebrating life than
mourning death (that might be why I’m not always the best at supporting those in grief). I’m not saying that it isn’t sad
that he went before his time: it is. What’s worst, I think, is when I remember how close-but-not-quite he came to getting to meet his grandchildren… who’d have doubtless called him
“Grandpeter”.
We all get to live, and we’re all going to die, and I’d honestly be delighted if I thought that people might remember me with the same kind of smile (and just occasionally tear) that
finds my face every Dead Dad Day.
I’m a big believer in the idea that the hardware I lay my hands on all day, every day, needs to be the best for its purpose. On my primary desktop, I type on a Das Keyboard 4 Professional (with Cherry MX brown switches) because it looks, feels, and sounds spectacular. I use
the Mac edition of the keyboard because it, coupled with a few tweaks, gives me the best combination of features and compatibility across all of the Windows, MacOS, and Linux (and
occasionally other operating systems) I control with it. These things matter.
I also care about the mouse I use. Mice are, for the most part, for the Web and for gaming and not for use in most other applications (that’s what keyboard shortcuts are for!) but
nonetheless I spend plenty of time holding one and so I want the tool that feels right to me. That’s why I was delighted when, in replacing my four year-old Logitech MX1000 in 2010 with my first Logitech Performance MX, I felt
able to declare it the best mouse in the world. My Performance MX lived for about four years, too – that seems to be how long a mouse can stand the kind of use that I give it –
before it started to fail and I opted to replace it with an identical make and model. I’d found “my” mouse, and I was sticking with it. It’s a great shape (if you’ve got larger hands),
is full of features including highly-configurable buttons, vertical and horizontal scrolling (or whatever you want to map them to), and a cool “flywheel” mouse wheel that can
be locked to regular operation or unlocked for controlled high-speed scrolling at the touch of a button: with practice, you can even use it as a speed control by gently depressing the
switch like it was a brake pedal. Couple all of that with incredible accuracy on virtually any surface, long battery life, and charging “while you use” and you’ve a recipe for success,
in my mind.
My second Performance MX stopped properly charging its battery this week, and it turns out that they don’t make them any more, so I bought its successor, the Logitech MX Master 2S.
The MX Master 2S is… different… from its predecessor. Mostly in good ways, sometimes less-good. Here’s the important differences:
Matte coating: only the buttons are made of smooth plastic; the body of the mouse is now a slightly coarser plastic: you’ll see in the photo above how much less light
it reflects. It feels like it would dissipate heat less-well.
Horizontal wheel replaces rocker wheel: instead of the Performance MX’s “rocker” scroll wheel that can be pushed sideways for horizontal scroll, the MX Master 2S adds
a dedicated horizontal scroll (or whatever you reconfigure it to) wheel in the thumb well. This is a welcome change: the rocker wheel in both my Performance MXes became less-effective
over time and in older mice could even “jam on”, blocking the middle-click function. This seems like a far more-logical design.
New back/forward button shape: to accommodate the horizontal wheel, the “back” and “forward” buttons in the thumb well have been made smaller and pushed closer
together. This is the single biggest failing of the MX Master 2S: it’s clearly a mouse designed for larger hands, and yet these new buttons are slightly, but noticeably, harder to
accurately trigger with a large thumb! It’s tolerable, but slightly annoying.
Bluetooth support: one of my biggest gripes about the Performance MX was its dependence on Unifying, Logitech’s proprietary wireless protocol. The MX Master 2S
supports Unifying but also supports Bluetooth, giving you the best of both worlds.
Digital flywheel: the most-noticeable change when using the mouse is the new flywheel and braking mechanism, which is comparable to the change in contemporary cars
from a mechanical to a digital handbrake. The flywheel “lock” switch is now digital, turning on or off the brake in a single stroke and so depriving you of the satisfaction of using
it to gradually “slow down” a long spin-scroll through an enormous log or source code file. But in exchange comes an awesome feature called SmartShift, which dynamically
turns on or off the brake (y’know, like an automatic handbrake!) depending on the speed with which you throw the wheel. That’s clever and intuitive and “just works” far better than
I’d have imagined: I can choose to scroll slowly or quickly, with or without the traditional ratchet “clicks” of a wheel mouse, with nothing more than the way I flick my finger (and
all fully-configurable, of course). And I’ve still got the button to manually “toggle” the brake if I need it. It took some getting used to, but this change is actually really cool!
(I’m yet to get used to the sound of the digital brake kicking in automatically, but that’s true of my car too).
Basic KVM/multi-computing capability: with a button on the underside to toggle between different paired Unifying/Bluetooth transceivers and software support for
seamless edge-of-desktop multi-computer operation, Logitech are clearly trying to target folks who, like me, routinely run multiple computers simultaneously from a single keyboard and
mouse. But it’s a pointless addition in my case because I’ve been quite happy using Synergy to do this for
the last 7+ years, which does it better. Still, it’s a harmless “bonus” feature and it might be of value to others, I suppose.
All in all, the MX Master 2S isn’t such an innovative leap forward over the Performance MX as the Performance MX was over the MX1000, but it’s still great that this spectacular series
of heavyweight workhorse feature-rich mice continues to innovate and, for the most part, improve upon the formula. This mouse isn’t cheap, and it isn’t for everybody, but if you’re a
big-handed power user with a need to fine-tune all your hands-on hardware to get it just right, it’s definitely worth a look.
The current iteration of my blog diverges from an architectural principle common to most of its previous versions over the last 20 years. While
each previous change in design and layout was intended to provide a single monolithic upgrade, this version tries to provide me with a platform for continuous ongoing
experimentation and change.
I’ve been trying to make better use of my blog as a vehicle for experimenting with web technologies, as I used to with personal sites back in the 1990s and early 2000s; to see a vanity
site like this one as a living playground rather than something that – like most of the sites I’m paid to work on – has a design that stays, for the most part, static for
long periods of time.
The grid of recent notes, shares, checkins and videos on my
homepage is powered by the display: grid; CSS directive. The number of columns varies by screen width from six
on the widest screens down to three or just one on increasingly small screens. Crucially, grid-auto-flow: dense; is used to ensure an even left-to-right filling of the
available space even if one of the “larger” blocks (with grid-column: span 2; grid-row: span 2;) is forced for space reasons to run onto the next line. This means that
content might occasionally be displayed in a different order from that in which it is written in the HTML (which is reverse
order of publication), but in exchange the items are flush with both sides.
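In sketch form, the rules described above look something like this; the class names and the breakpoint are illustrative (the real items are list elements in my markup):

```css
/* Responsive tile grid: dense packing back-fills the gaps that the
   "larger" 2×2 tiles would otherwise leave behind. */
.tiles {
  display: grid;
  grid-template-columns: repeat(6, 1fr); /* six columns on wide screens   */
  grid-auto-flow: dense;                 /* fill holes, may reorder tiles */
}

.tiles .big {
  grid-column: span 2;
  grid-row: span 2;
}

@media (max-width: 50em) {
  .tiles { grid-template-columns: repeat(3, 1fr); } /* …down to 3 (or 1) */
}
```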
Not all web browsers support display: grid; and while that’s often only a problem of design and not of readability, because these browsers will fall back to usually-very-safe
default display modes like block and inline, as appropriate, sometimes there are bigger problems. In Internet Explorer 11, for example, I found (with thanks to
@_ignatg) a problem with my directives specifying the size of these cells (which are actually <li> elements because, well,
semantics matter). Because it understood the directives that ought to impact the sizing of the list items but not
the one that redeclared its display type, IE made… a bit of a mess of things…
Do websites need to look the same in every browser? No. But the content should be readable
regardless, and here my CSS was rendering my content unreadable. Given that Internet Explorer users represent a little
under 0.1% of visitors to my site I don’t feel the need to hack it to have the same look-and-feel: I just need it to have the same content readability. CSS Feature Queries to the rescue!
CSS Feature Queries – the @supports selector – make it possible to apply parts of your stylesheet if and only if
the browser supports specific CSS features, for example grids. Better yet, using it in a positive manner (i.e. “apply these
rules only if the browser supports this feature”) is progressive enhancement, because browsers that don’t understand the @supports selector act in
the same way as those that understand it but don’t support the specified feature. Fencing off the relevant parts of my stylesheet in a @supports (display: grid) { ... }
block instructed IE to fall back to displaying that content as a boring old list: exactly what I needed.
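That pattern – safe defaults declared unconditionally, the grid layered on top only behind a feature query – can be sketched like so (selector illustrative):

```css
/* Base styles: every browser, including IE 11, gets a plain list. */
.tiles {
  list-style: none;
  padding: 0;
}

/* Grid layout only where the browser both understands @supports and
   implements display: grid; IE 11 fails the first test and so never
   sees the rules that confused it. */
@supports (display: grid) {
  .tiles {
    display: grid;
    grid-template-columns: repeat(6, 1fr);
    grid-auto-flow: dense;
  }
}
```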
Reduced-motion support
I like to put a few “fun” features into each design for my blog, and while it’s nowhere near as quirky as having my head play peek-a-boo when you
hover your cursor over it, the current header’s animations are in the same ballpark: hover over or click on some of the items in the header menu to see for yourself.
These kinds of animations are fun, but they can also be problematic. People with inner ear disorders (as well as people who’re just trying to maximise the battery life on their portable
devices!) might prefer not to see them, and web designers ought to respect that choice where possible. Luckily, there’s an emerging standard to acknowledge that: prefers-reduced-motion. Alongside its cousins inverted-colors, prefers-reduced-transparency, prefers-contrast and
prefers-color-scheme (see below for that last one!), these new CSS tools allow developers to optimise based on the accessibility
features activated by the user within their operating system.
If you’ve tweaked your accessibility settings to reduce the amount of animation your operating system shows you, this website will respect that choice as well by not animating the
contents of the title, menu, or the homepage “tiles” any more than is absolutely necessary… so long as you’re using a supported browser, which right now means Safari or Firefox (or the
“next” version of Chrome). Making the change itself is pretty simple: I just added a @media screen and (prefers-reduced-motion: reduce) { ... } block to disable or
otherwise cut-down on the relevant animations.
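A minimal sketch of such a block – the selectors here are illustrative, not my exact ones:

```css
/* Respect the OS-level "reduce motion" accessibility preference. */
@media screen and (prefers-reduced-motion: reduce) {
  .site-title,
  .site-menu a,
  .tile {
    animation: none;
    transition: none;
  }
}
```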
Dark-mode support
Similarly, operating systems are beginning to
support “dark mode”, designed for people trying to avoid eyestrain when using their computer at night. It’s possible for your browser to respect this and try to “fix” web pages for
you, of course, but it’s better still if the developer of those pages has anticipated your need and designed them to acknowledge your choice for you. It’s only supported in Firefox and
Safari so far and only on recent versions of Windows and MacOS, but it’s a start and a helpful touch for those nocturnal websurfers out there.
It’s pretty simple to implement. In my case, I just stacked some overrides into a @media (prefers-color-scheme: dark) { ... } block, inverting the background and primary
foreground colours, softening the contrast, removing a few “bright” borders, and darkening rather than lightening background images used on homepage tiles. And again, it’s an example of
progressive enhancement: the (majority!) of users whose operating systems and/or browsers don’t yet support this feature won’t be impacted by its inclusion in my stylesheet, but those
who can make use of it can appreciate its benefits.
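A cut-down sketch of the kind of overrides involved (the colour values and selectors are illustrative; the real stylesheet touches many more rules):

```css
/* Respect the OS-level "dark mode" preference: invert the palette,
   soften the contrast, and tone down the "bright" borders. */
@media (prefers-color-scheme: dark) {
  body {
    background: #111;
    color: #ccc;
  }

  a { color: #9cf; }

  .tile { border-color: #333; }
}
```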
This isn’t the end of the story of CSS experimentation on my blog, but it’s a part of it that I hope you’ve enjoyed.
Apparently the NCSF (US) are trying to make 28 February into Metamour Day: a
celebration of one’s lover’s lovers. While I’m not convinced that’ll ever get Hallmark’s interest, I thought it provided a good opportunity to sing the praises of my metamour, JTA.
I first met JTA 15 years ago at Troma Night XX, when his girlfriend Ruth – an attendee of Troma Night since its earliest days the previous year – brought him along and we all mocked his three-letter initialism.
Contrary to our previous experience (thanks to Liz) of people bringing boyfriends once but never again (we always assumed that we scared them off), JTA became a regular, even getting to relive some of the early nights that he’d missed in our nostalgic 50th event. Before long, I felt glad to count him among my friends.
Almost 13 years ago I described JTA thusly, and I stand by it:
You have a fantastic temper which you keep carefully bottled away and of which you draw out only a little at a time and only where it is genuinely justly deserved. Conversely, your
devotion to the things you love and care about is equally inspiring.
We’d be friends anyway, but having a partner-in-common has given us the opportunity for a closer relationship still. I love you, man: y’know, in the Greek way. Happy metamour
appreciation day.
People were quick to point this out and assume that it was something to do with the modernity of MetaFilter:
honestly, the disheartening thing is that many metafilter pages don’t seem to work. Oh, the modern web.
Some even went so far as to speculate that the reason related to MetaFilter’s use of CSS and JS:
CSS and JS. They do things. Important things.
This is, of course, complete baloney, and it’s easy to prove to oneself. Firstly, simply using the View Source tool in your browser on a MetaFilter page reveals source code that’s quite
comprehensible, even human-readable, without going anywhere near any CSS or JavaScript.
Secondly, it’s pretty simple to try browsing MetaFilter without CSS or JavaScript enabled! I tried in two ways: first,
by using Lynx, a text-based browser that’s never supported either of those technologies. I also tried by using
Firefox but with them disabled (honestly, I slightly miss when the Web used to look like this):
And thirdly: the error code being returned by the simulated WorldWideWeb browser is a HTTP code 500. Even if you don’t
know your HTTP codes (I mean, what kind of weirdo would take the time to memorise them all anyway <ahem>),
it’s worth learning this: the first digit of a HTTP response code tells you what happened:
1xx means “everything’s fine, keep going”;
2xx means “everything’s fine and we’re done”;
3xx means “try over there”;
4xx means “you did something wrong” (the infamous 404, for example, means you asked for a page that doesn’t exist);
5xx means “the server did something wrong”.
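That first-digit rule is easy to express in code; here’s a tiny illustrative helper (mine, not anything from the proxy in question):

```javascript
// Classify an HTTP status code by its first digit, per the rule above.
function statusClass(code) {
  const classes = {
    1: 'informational - keep going',
    2: 'success - done',
    3: 'redirection - try over there',
    4: 'client error - you did something wrong',
    5: 'server error - the server did something wrong'
  };
  return classes[Math.floor(code / 100)] || 'unknown';
}
```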
Simple! The fact that the error code begins with a 5 strongly implies that the problem isn’t in the (client-side) reimplementation of WorldWideWeb: if this had been a CSS/JS problem, I’d expect to see a blank page, scrambled content, “filler” content, or incomplete content.
So I found myself wondering what the real problem was. This is, of course, where my geek flag becomes most-visible: what we’re talking about, let’s not forget, is a fringe
problem in an incomplete simulation of an ancient computer program that nobody uses. Odds are incredibly good that nobody on Earth cares about this except, right now, for me.
The (simulated) copy of WorldWideWeb is asked to open a document by reference, e.g. “https://www.metafilter.com/”.
To work around same-origin policy restrictions, the request is sent to an API which acts as a proxy server.
The API makes a request using the Node package “request” with this line of code: request(url, (error, response, body) =>
{ ... }). When the first parameter to request is a (string) URL, the module uses its default settings for all of
the other options, which means that it doesn’t set the User-Agent header (an optional part of a Web request where the computer making the request identifies the software
that’s asking).
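A hedged sketch of what avoiding that default looks like: the “request” module only omits the User-Agent when handed a bare URL string, so passing an options object with an explicit header sidesteps the problem. The header value here is hypothetical, not the one any real proxy uses.

```javascript
// Build a "request" options object that carries an explicit User-Agent,
// instead of passing a bare URL string (which leaves the header unset).
function withUserAgent(url) {
  return {
    url: url,
    headers: {
      // Hypothetical identifier - any honest value would do.
      'User-Agent': 'WorldWideWeb-proxy/1.0 (hackathon rebuild)'
    }
  };
}

// Then, instead of request(url, callback), the proxy would call:
// request(withUserAgent(url), (error, response, body) => { ... });
```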
MetaFilter, for some reason, blocks requests whose User-Agent isn’t set. This is weird! And nonstandard: while web browsers should – in RFC2119 terms – set their User-Agent: header, web servers shouldn’t require
that they do so. MetaFilter returns a 403 and a message to say “Forbidden”; usually a message you only see if you’re trying to access a resource that requires session authentication and
you haven’t logged-in yet.
The API is programmed to handle response codes 200 (okay!) and 404 (not found), but if it gets anything else back
it’s supposed to throw a 400 (bad request). Except there’s a bug: when trying to throw a 400, it requires that an error message has been set by the request module, and if there hasn’t… it instead throws a 500 with the message “Internal Server Fangle” and no clue as to what actually went wrong. So MetaFilter’s 403 should be translated by the proxy into a 400, but because a 403 doesn’t come with an error message the proxy can’t construct that response and falls back to the 500 that you eventually see. What a knock-on effect!
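That chain of translations can be reconstructed as a sketch; the function and variable names here are mine, not the API’s actual code, but the control flow matches the behaviour described above.

```javascript
// Hedged reconstruction of the proxy's status handling as described:
// only 200 and 404 are handled explicitly; anything else is meant to
// become a 400, but without an upstream error message it falls through
// to a generic 500.
function translateStatus(upstreamStatus, upstreamErrorMessage) {
  if (upstreamStatus === 200 || upstreamStatus === 404) {
    return { status: upstreamStatus };  // the only codes handled explicitly
  }
  if (upstreamErrorMessage) {
    // the intended path: anything else becomes a 400 (bad request)
    return { status: 400, message: upstreamErrorMessage };
  }
  // the bug: a 403 sets no error message, so we fall through to a
  // 500 with no clue what actually went wrong
  return { status: 500, message: 'Internal Server Fangle' };
}
```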
My fix sets a User-Agent header, which makes servers that require one, such as MetaFilter, respond appropriately. I don’t know whether WorldWideWeb originally set a User-Agent header
(CERN’s source file archive seems to be missing the relevant C sources so I can’t check) but I
suspect that it did, so this change actually improves the fidelity of the emulation as a bonus. A better fix would also add support for and appropriate handling of other HTTP response
codes, but that’s a story for another day, I guess.
I know the hackathon’s over, but I wonder if they’re taking pull requests…
This month, a collection of some of my favourite geeks got invited to CERN in Geneva to
participate in a week-long hackathon with the aim of reimplementing WorldWideWeb –
the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed
to produce.
This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that
didn’t require a NeXT computer to run it, and so represented a leap forward in mainstream appeal. In some ways, you might expect reimplementing WorldWideWeb to be easier, because its functionality is more similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM, and so there are distinct processing differences that must be considered to get a truly authentic experience.
Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).
In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible
(sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that
adherence to the same solid standards gives us a chance of writing pages today that will look just as good 30 years from now. Compare that to a proprietary technology like Flash, whose heyday 15 years ago is overshadowed by its imminent death (not to
mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work
in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).
The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.