Now that travel for work is back on the menu, I’ve been trying to upgrade my “pack light” game.
I’ve been inspired in part by Beau, who I first met during my trip to South Africa in 2019 during my Automattic onboarding. Beau travelled from the US for a two-week jaunt with nothing but
hand luggage, and it blew my mind.
For my trip to Vienna earlier this year for a divisional meetup, I got by with just a backpack and a laptop bag. Right now, I’m waiting to fly to Rome for a week, and I’ve ditched the
laptop bag in favour of just a single carry-on backpack. About 7kg of luggage, and well within the overhead locker size limit.
I’m absolutely sold on this approach. I get to:
walk past the queues for luggage drop (having checked-in online),
keep the entirety of my luggage with me at all times (which ensures it goes where I do),
breeze through security1,
thanks to smart packing2
walk right out of the airport at the other end without having to wait for the flingers to finish smashing everybody’s luggage into the carousels.
As somebody who’s travelled “heavy” for most of my life – and especially since the children came along – it’s liberating to migrate to a “pick up a bag and go” mindset. To begin with,
the nagging thought that I must’ve forgotten something essential was challenging, but I think I’ve gotten past that stage now.
Travelling light feels carefree: like being a kid again, when all you needed was the bag on your back and you were ready for an adventure. Once again, I’ve got a bag on my
back3 and I know that everything I need for an adventure
is right here with me4.
Footnotes
1 If you’ve travelled with me before, you might have noticed that I sometimes have trouble
at borders on account of my damn stupid name, as predicted by the Passport Office. I’ve since learned all the requisite tricks to sidestep these problems, but that’s probably worthy
of a post in its own right.
2 A little smart packing goes a long way. In the photo above, you might see my pre-prepared liquids bag in a side pocket, my
laptop slides right out for separate scanning, my wallet and phone just dump out of my pockets, and I’m done.
3 I don’t really have a bag on my back right now. I’m sat in a departure lounge at Gatwick
Airport. But you get the idea.
4 Do I really have everything I need? I’ve not brought a waterproof coat and,
looking at the weather forecast at my destination, this might have been a mistake. But worst case I can buy a cheap poncho at the other end. That’s the kind of freedom that being an
adult gets you, replacing the childlike freedom to get soaked and not care.
Lacking a basis for comparison, children accept their particular upbringing as normal and representative.
Kit was telling me about how his daughter considers it absolutely normal to live in a house full of
insectivorous plants1, and it got
me thinking about our kids, and then about myself:
I remember once overhearing our eldest, then at nursery, talking to her friend. Our kid had mentioned doing something with her “mummy, daddy, and Uncle Dan” and was incredulous that her friend didn’t have an Uncle Dan that they lived with! Isn’t having three parents…
just what a family looks like?
By the time she was at primary school, she’d learned that her family wasn’t the same shape as most other families, and she could code-switch with incredible ease. While picking her up
from school, I overheard her talking to a friend about a fair that was coming to town. She told the friend that she’d “ask her dad if she could go”, then turned to me and said
“Uncle Dan: can we go to the fair?”; when I replied in the affirmative, she turned back and said “my dad says it’s okay”. By the age of 5 she was perfectly capable of
translating on-the-fly2 in order to
simultaneously carry out intelligible conversations with her family and with her friends. Magical.
When I started driving, and in particular my first few times on multi-lane
carriageways, something felt “off” and it took me a little while to work out what it was. It turns out that I’d internalised a particular part of the motorway journey experience from
years of riding in cars driven by my father, who was an unrepentant3
and perpetual breaker of speed limits.4
I’d come to associate motorway driving with overtaking others but almost never being overtaken, which wasn’t what I saw when I drove for myself.5
It took a little thinking before I realised the cause of this false picture of “what driving looks like”.
The thing is: you only ever notice the “this is normal” definitions that you’ve internalised… when they’re challenged!
It follows that there are things you learned from the quirks of your upbringing that you still think of as normal. There might even be things you’ll never un-learn. And you’ll
never know how many false-normals you still carry around with you, or whether you’ve ever found them all, except to say that you probably haven’t yet.
It’s amazing and weird to think that there might be objective truths you’re perpetually unable to see as a result of how, or where, or by whom you were brought up, or by what your
school or community was like, or by the things you’ve witnessed or experienced over your life. I guess all we can do is keep questioning everything, and work to help
the next generation see what’s unusual and uncommon in their own lives.
Footnotes
1 It’s a whole thing. If you know Kit, you’re probably completely unsurprised, but spare a
thought for the poor randoms who sometimes turn up and read my blog.
2 Fully bilingual children who typically speak a different language at home than they do
at school do this too, and it’s even-more amazing to watch.
3 I can’t recall whether his licence was confiscated on two or three separate occasions,
in the end, but it was definitely more than one. Having a six month period where you and your siblings have to help collect the weekly shop from the supermarket by loading up your
bikes with shopping bags is a totally normal part of everybody’s upbringing, isn’t it?
4 Virtually all of my experience as a car passenger other than with my dad was in Wales,
where narrow windy roads mean that once you get stuck behind something, that’s how you’re going to be spending your day.
5 Unlike my father, I virtually never break the speed limit, to such an extent that when I
got a speeding ticket the other year (I’d gone from a 70 into a 50 zone and re-set the speed limiter accordingly, but didn’t bother to apply the brakes and just coasted down to the
new speed… when the police snapped their photo!), Ruth and JTA both independently reacted to the news with great skepticism.
Last night I had a nightmare about Dungeons & Dragons. Specifically, about the group I DM for on alternate Fridays.
In their last session the
party – somewhat uncharacteristically – latched onto a new primary plot hook right away. Instead of rushing off onto some random side quest, they threw themselves directly into this new
mission.
This effectively kicked off a new chapter of their story, so I’ve been doing some prep-work this last week or so. Y’know: making battlemaps, stocking treasure chests with mysterious and
powerful magical artefacts, and inventing a plethora of characters for the party to either befriend or kill (or, knowing this party: both).
Anyway: in the dream, I sat down to complete the prep-work I want to get done before this week’s play session. I re-checked my notes about what the adventurers had gotten up to
last time around, and… panicked! I was wrong, they hadn’t thrown themselves off the side of a city floating above the first layer of Hell at all! I’d mis-remembered completely
and they’d actually just ventured into a haunted dungeon. I’d been preparing all the wrong things and now there wasn’t time to correct my mistakes!
This is, of course, an example of the “didn’t prepare for the test” trope of dreams. Clearly I’m still feeling underprepared for this week’s game! But probably a bigger reason for the
dream, and remembering it, was that I’ve had a cold and kept waking up to cough.
This is an alternate history of the Web. The premise is true, but the story diverges from our timeline and looks at an alternative “Web that might have been”.
Prehistory
This is the story of P3P, one of the greatest Web standards whose history has been forgotten1, and how the abject failure of its first versions paved the
way for its bright future decades later. But I’m getting ahead of myself…
Drafted in 2002 in the wake of growing concern about the death of privacy on the Internet, P3P 1.0 aimed to make the collection of personally-identifiable data online transparent. Hurrah, right?
Developers are lazy3 and soon converged on the simplest possible solution: add a garbage HTTP header like P3P: CP="See our website for our privacy policy." and your cookies work just fine! Ignore the problem, ignore the
proposed solution, just do what gets the project shipped.
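A conscientious site would instead have assembled its compact policy from the tokens defined by the specification, each one summarising an aspect of the full XML policy; something along these lines (the particular selection of tokens here is purely illustrative):

P3P: CP="NOI DSP COR NID CURa ADMa OUR NOR"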
Without any meaningful enforcement it was also perfectly feasible to, y’know, just lie about how well you treat user data. Seeing the way the wind was blowing, Mozilla dropped
support for P3P, and Microsoft’s support – which had always been half-baked and lacked even the most basic user-facing
controls or customisation options – languished in obscurity.
For a while, it seemed like P3P was dying. Maybe, in some alternate timeline, it did die: vanishing into
nothing like VRML, WAP, and XBAP.
But fortunately for us, we don’t live in that timeline.
Revival
In 2009, the European Union revisited the Privacy and Electronic Communications
Directive. The initial regulations, published in 2002, required that Web users be able to opt-out of tracking cookies, but the amendment required that sites ensure that
users opted-in.
As-written, this confusing new regulation posed an
immediate problem: if a user clicked the button to say “no, I don’t want cookies”, and you didn’t want to ask for their consent again on every page load… you had to give them a cookie
(or use some other technique
legally-indistinguishable from cookies). Now you’re stuck in an endless cookie-circle.4
This, and other factors of informed consent, quickly introduced a new pattern among those websites that were fastest to react to the legislative change:
Web users rebelled. These ugly overlays felt like a regression to a time when popup ads and splash pages were commonplace. “If only,” people cried out, “There were a better way to do
this!”
It was Professor Lorrie Cranor, one of the original authors of the underloved P3P specification and a respected champion of usable privacy and security, whose rallying cry gave us hope. Her CNET article, “Why
the EU Cookie Directive is a solved problem”5, inspired a new generation of development on what would become known as P3P 2.0.
While maintaining backwards compatibility, this new standard:
deprecated those horrible XML documents in favour of HTTP
headers and <link> tags alone,
removed support for Set-Cookie2: headers, which nobody used anyway, and
added features by which the provenance and purpose of cookies could be stated in a way that dramatically simplified adoption in browsers.
Internet Explorer at this point was still used by a majority of Web users. It still supported the older
version of the standard, and – as perhaps the greatest gift that the much-maligned browser ever gave us – provided a reference implementation as well as a stepping-stone to wider
adoption.
Opera, then Firefox, then “new kid” Chrome each adopted P3P 2.0; Microsoft finally got on board with IE 8 SP 1. Now the latest versions of all the mainstream browsers had a solid
implementation6
well before the European data protection regulators began fining companies that misused tracking cookies.
But where the story of P3P‘s successes shine brightest came in 2016, with the passing of the GDPR. The W3C realised that P3P could simplify both the expression and understanding of privacy policies for users, and formed a group to work on version 2.1. And that’s
the version you use today.
When you launch a new service, you probably use one of the many free wizard-driven tools to express your privacy policy and the bases for your data processing, and it spits out a
template privacy policy. You need the human-readable version, of course, since the 2020 German court ruling that you cannot rely on a machine-readable privacy policy alone, but
the real gem is the P3P: 2.1 header version.
Assuming you don’t have any unusual quirks in your data processing (ask your lawyer!), you can just paste the relevant code into your server configuration and you’re good to go. Site
users get a warning if their personal data preferences conflict with your data policies, and can choose how to act: not using your service, choosing which of your
features to opt-in or out- of, or – hopefully! – granting an exception to your site (possibly with caveats, such as sandboxing your cookies or clearing them immediately after closing
the browser tab).
Sure, what we’ve got isn’t perfect. Sometimes companies outright lie about their use of information or use illicit methods to track user behaviour. There’ll always be bad guys out there. That’s what laws are there to deal with.
But what we’ve got today is so seamless, it’s hard to imagine a world in which we somehow all… collectively decided that the correct solution to the privacy problem might have been to
throw endless popovers into users’ faces, bury consent-based choices under dark patterns, and make humans do the work that should from the outset have been done by machines. What a
strange and terrible timeline that would have been.
Footnotes
1 If you know P3P‘s
history, regardless of what timeline you’re in: congratulations! You win One Internet Point.
2 Techbros have been trying to solve political problems using technology since long before
the word “techbro” was used in its current context. See also: (a) there aren’t enough mental health professionals, let’s make an AI app? (b) we don’t have enough ventilators for this
pandemic, let’s 3D print air pumps? (c) banks keep failing, let’s make a cryptocurrency? (d) we need less carbon in the atmosphere or we’re going to go extinct, better hope direct
carbon capture tech pans out eh? (e) we have any problem at all, let’s somehow shoehorn blockchain into some far-fetched idea about how to solve it without me having to get out of my
chair why not?
3 Note to self: find a citation for this when you can be bothered.
4 I can’t decide whether “endless cookie circle” is the name of the New Wave band I want
to form, or a description of the way I want to eventually die. Perhaps both.
6 Implementation details varied, but that’s part of the joy of the Web. Firefox favoured
“conservative” defaults; Chrome and IE had “permissive” ones; and Opera provided an ultra-configurable matrix of options by which a user could specify exactly which kinds of cookies to
accept, linked to which kinds of personal data, from which sites, all somehow backed by an extended regular expression parser that was only truly understood by three people, two of
whom were Opera developers.
Much has been said about how ChatGPT and her friends will hallucinate and mislead. Let’s take an example.
Remember that ChatGPT has almost-certainly read basically everything I’ve ever written online – it might well be better-informed about me than you are – as
you read this:
When I asked ChatGPT about me, it came up with a mixture of truths and believable lies2,
along with a smattering of complete bollocks.
In another example, ChatGPT hallucinates this extra detail specifically because the conversation was foreshadowed by its previous mistake. At this point, it digs its heels in and
commits to its claim, like the stubborn guy in the corner of the pub who doubles-down on his bullshit.
If you were to ask at the outset who wrote Notpron, ChatGPT would have gotten it right, but because it already mis-spoke, it’s now trapped itself in a lie, incapable of reconsidering
what it said previously as having been anything but the truth:
Simon Willison says that we should call this behaviour “lying”. In response to this, several people told him that the word “lying” excessively
anthropomorphises these chatbots, implying that they’re deliberately attempting to mislead their users. Simon retorts:
I completely agree that anthropomorphism is bad: these models are fancy matrix arithmetic, not entities with intent and opinions.
But in this case, I think the visceral clarity of being able to say “ChatGPT will lie to you” is a worthwhile trade.
I agree with Simon. ChatGPT and systems like it are putting accessible AI into the hands of the masses, and that means that the
people who are using it don’t necessarily understand – nor desire to learn – the statistical mechanisms that actually underpin the AI‘s “decisions” about how to respond.
Trying to explain how and why their new toy will get things horribly wrong is hard, and it takes a critical eye, time, and practice to begin to discover how to use these tools
effectively and safely.3
It’s simpler just to say “Here’s a tool; by the way, it’s a really convincing liar and you can’t trust it even a little.”
Giving people tools that will lie to them. What an interesting time to be alive!
Footnotes
1 I’m tempted to blog about my experience of using Stable Diffusion and GPT-3 as
assistants while DMing my regular Dungeons & Dragons game, but haven’t worked out exactly what I’m saying yet.
2 That ChatGPT lies won’t be a surprise to anybody who’s used the system nor anybody who
understands the fundamentals of how it works, but as AIs get integrated into more and more things, we’re going to need to teach a level of technical literacy about what that means,
just like we should about, say, Wikipedia.
3 For many of the tasks people talk about outsourcing to LLMs, it’s the case that it would take less effort for a human to learn how to do the task than it would for them to learn how to supervise an
AI performing the task! That’s not to say they’re useless: just that (for now at least) you should only trust them to do
something that you could do yourself and you’re therefore able to critically assess how well the machine did it.
Wait, there’s new Far Side content? Yup: it turns out Gary Larson’s dusted off his pen
and started drawing again. That’s awesome! But the last thing I want is to have to go to the website once every few… what: days? weeks? months? He’s not syndicated any more so
he’s not got a deadline to work to! If only there were some way to have my feed reader, y’know, do it for me and let me know whenever he draws something new.
Here’s my setup for getting Larson’s new funnies right where I want them:
Feed URL: https://www.thefarside.com/new-stuff/1
This isn’t a valid address for any of the new stuff, but always seems to redirect to somewhere that is, so that’s nice.
XPath for finding news items: //div[@class="swiper-slide"]
Turns out all the “recent” new stuff gets loaded in the HTML and then JavaScript turns it into a slider etc.; some of the
CSS classes change when the JavaScript runs so I needed to View Source rather than use my browser’s inspector to find
everything.
Item title: concat("Far Side #", descendant::button[@aria-label="Share"]/@data-shareable-item)
Ugh. The easiest place I could find a “clean” comic ID number was in a data- attribute of the “share” button, where it’s presumably used for engagement tracking. Still,
whatever works, right?
Item content: descendant::figcaption
When Larson captions a comic, the caption is important.
Item link (URL) and item unique ID: concat("https://www.thefarside.com",
./@data-path)
The URLs work as direct links to the content, and because they’re unique, they make a reasonable unique ID too (so long as
their numbering scheme is internally-consistent, this should stop a re-run of new content popping up in your feed reader if the same comic comes around again).
Item thumbnail: concat("https://fox.q-t-a.uk/referer-faker.php?pw=YOUR-SECRET-PASSWORD-GOES-HERE&referer=https://www.thefarside.com/&url=",
descendant::img[@data-src]/@data-src)
The Far Side uses Referer: headers as an anti-hotlinking measure, which prevents us easily loading the images directly in an RSS reader. I use this tiny PHP script as a proxy to mitigate that. If
you don’t have such a proxy set up, you could simply omit the “Item thumbnail” and “Item content” fields and click the link to go to the original page.
Item date: normalize-space(descendant::div[@class="tfs-comic-new__meta"]/*[1])
The date is spread through two separate text nodes, so we get the content of their wrapper and use normalize-space to tidy the whitespace up. The date format then looks
like “Wednesday, March 29, 2023”, which we can parse using a custom date/time format string:
Custom date/time format: l, F j, Y
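For reference, those rules assume markup roughly along these lines. This is a simplified sketch reconstructed from the selectors above rather than copied from the site, so the element names, nesting, IDs and URLs are placeholders; only the classes and attributes are the ones the XPath actually relies on:

<div class="swiper-slide" data-path="/new-stuff/1234567890">
  <div class="tfs-comic-new__meta">
    <div>Wednesday, March 29, 2023</div>
  </div>
  <img data-src="https://.../the-comic.png">
  <figcaption>The comic’s caption, if it has one.</figcaption>
  <button aria-label="Share" data-shareable-item="1234567890">…</button>
</div>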
I promise I’ll stop writing about how awesome FreshRSS + XPath is someday. Today isn’t that day.
Meanwhile: if you used to use a feed reader but gave up when the Web started to become hostile to them and big social media systems started to wall you in, you should really consider
picking one up again. The stuff I write about is complex edge-cases that most folks don’t need to think about in order to benefit from RSS… but it’s super convenient to have the things you care about online (news, blogs, social media, videos, newsletters, comics, search trends…)
collated and sorted for you… without interference from algorithms that want to push “sticky” content, without invasive tracking or advertisements (or cookie banners or privacy popups),
without something “disappearing” simply because you put off reading it for a few days.
XPath for finding news items: //a[starts-with(@href,'archive.php')]
Item title: .
Item link (URL): ./@href
Item date: ./following-sibling::text()[1]
Custom date/time format: - Y.m.d
I continue to love this “killer feature” of FreshRSS, but I’m beginning to see how it could go further – I wish I had the free time to contribute to its development!
I’d love to see a mechanism for exporting/importing feed configurations like this so that I could share them more-easily, for example. I’d also be delighted if I could expand on my
XPath rules to load pages referenced by the results and get data from them, too, e.g. so I could use an image found by XPath on the “item link” page as the thumbnail
image! These are things RSSey could do for me, but FreshRSS can’t… yet!
A year and a half ago I came up with a technique for intercepting the “shuffle” operation
on jigsaw website Jigidi, allowing players to force the pieces to appear in a consecutive “stack” for ludicrously easy solving. I did this
partially because I was annoyed that a collection of geocaches near me used Jigidi puzzles as a barrier to their coordinates1…
but also because I enjoy hacking my way around artificially-imposed constraints on the Web (see, for example, my efforts last week to circumvent region-blocking on radio.garden).
My solver didn’t work for long: code changes at Jigidi’s end first made it harder, then made it impossible, to use the approach I suggested. That’s fine by me – I’d already got what I
wanted – but the comments thread on that post suggests that there’s
a lot of people who wish it still worked!2
And so I ignored the pleas of people who wanted me to re-develop a “Jigidi solver”. Until recently, when I once again needed to solve a jigsaw puzzle in order to find a geocache’s
coordinates.
Making A Jigidi Helper
Rather than interfere with the code provided by Jigidi, I decided to take a more-abstract approach: swapping out the jigsaw’s image for one that would be easier.
This approach benefits from (a) having multiple mechanisms of application: query interception, DNS hijacking, etc., meaning that if one stops working then another one can be easily
rolled-out, and (b) not relying so-heavily on the structure of Jigidi’s code (and therefore not being likely to “break” as a result of future upgrades to Jigidi’s platform).
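To give a flavour of the query-interception route, here’s a stripped-down sketch. The URL pattern it matches and the address of the replacement picture are placeholders rather than Jigidi’s real endpoints, but the principle is the same:

// Wrap the native fetch so that when the page asks for the jigsaw's picture,
// it gets our "easier" image instead. Adjust the URL test to match however
// the site actually requests its puzzle images.
const realFetch = window.fetch;
window.fetch = (resource, options) => {
  const url = typeof resource === 'string' ? resource : resource.url;
  if (/\.(jpe?g|png)(\?|$)/i.test(url)) {
    // Serve a pre-generated "easy" image in place of the real one:
    return realFetch('https://example.com/easy-grid.png', options);
  }
  return realFetch(resource, options);
};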
It’s not as powerful as my previous technique – more a “helper” than a “solver” – but it’s good enough to shave at least half the time off that I’d otherwise spend solving a Jigidi
jigsaw, which means I get to spend more time out in the rain looking for lost tupperware. (If only geocaching were even the weirdest of my hobbies…)
How To Use The Jigidi Helper
To do this yourself and simplify your efforts to solve those annoying “all one colour” or otherwise super-frustrating jigsaw puzzles, here’s what you do:
Visit a Jigidi jigsaw. Do not be logged-in to a Jigidi account.
Open your browser’s debug tools (usually F12). In the Console tab, paste the helper code and press enter. You can close your debug tools again (F12) if you like.
Press Jigidi’s “restart” button, next to the timer. The jigsaw will restart, but the picture will be replaced with one that’s easier-to-solve than most, as described below.
Once you solve the jigsaw, the image will revert to normal (turn your screen around and show off your success to a friend!).
What makes it easier to solve?
The replacement image has the following characteristics that make it easier to solve than it might otherwise be:
Every piece has written on it the row and column it belongs in.
Every “column” is striped in a different colour.
Striped “bands” run along entire rows and columns.
To solve the jigsaw, start by grouping colours together, then start combining those that belong in the same column (based on the second digit on the piece). Join whole or partial
columns together as you go.
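If you’re curious how an image like that might be produced, here’s a rough sketch of one way to draw it in the browser. It isn’t lifted from my helper: the sizes, font and palette are arbitrary, and it only stripes the columns rather than both rows and columns, but it captures the idea:

// Draw a grid in which each cell is labelled "row-column" and each column
// gets its own coloured stripe, then export the result as a data: URL.
function makeEasyImage(rows, cols, cellSize = 80) {
  const canvas = document.createElement('canvas');
  canvas.width = cols * cellSize;
  canvas.height = rows * cellSize;
  const ctx = canvas.getContext('2d');
  for (let c = 0; c < cols; c++) {
    ctx.fillStyle = `hsl(${(c * 360) / cols}, 80%, 70%)`; // one hue per column
    ctx.fillRect(c * cellSize, 0, cellSize, canvas.height);
  }
  ctx.fillStyle = '#000';
  ctx.font = `${Math.floor(cellSize / 3)}px sans-serif`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      // Label each cell so the second number gives the column:
      ctx.fillText(`${r + 1}-${c + 1}`, (c + 0.5) * cellSize, (r + 0.5) * cellSize);
    }
  }
  return canvas.toDataURL('image/png');
}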
I’ve been using this technique or related ones for over six months now and no code changes on Jigidi’s side have impacted upon it at all, so it’s probably got better longevity than the
previous approach. I’m not entirely happy with it, and you might not be either, so feel free to fork my code and improve it: the legibility of the numbers is sometimes suboptimal, and
the colour banding repeats on larger jigsaws which I’d rather avoid. There’s probably also potential to improve colour-recognition by making the colour bands span the gaps
between rows or columns of pieces, too, but more experiments are needed and, frankly, I’m not the right person for the job. For the second time, I’m going to abandon a tool
that streamlines Jigidi solving because I’ve already gotten what I needed out of it, and I’ll leave it up to you if you want to come up with an improvement and share it with the
community.
Footnotes
1 As I’ve mentioned before, and still nobody believes me: I’m not a fan of jigsaws! If you
enjoy them, that’s great: grab a bucket of popcorn and a jigsaw and go wild… but don’t feel compelled to share either with me.
2 The comments also include a super-helpful person called Rich who’s been manually
solving people’s puzzles for them, and somebody called Perdita
who “could be my grandmother” (except: no) with whom I enjoyed a
conversation on- and off-line about the ethics of my technique. It’s one of the most-popular comment threads my blog has ever seen.
I wanted to play about with Listmonk and it’s available as a Docker image, so I figured I’d just install it on my Unraid box. It doesn’t have a recipe in Community Apps but it’s not usually hard to reverse-engineer an official installation guide into something that “just works” on Unraid. After a
first attempt failed, I looked around for a quick how-to guide online and mostly found… a mixture of people similarly failing to get it working or else having a kindly stranger offer to help… but not on the open Web where the rest of us can
benefit from their knowledge. Sigh.
So I resolved that when I figured it out, I’d document the steps so that the next person after me can have an easier job of it.
Installing Listmonk on Unraid
Install Postgres if you don’t have it already. I used the postgresql15 image from Community Apps.
Set up a role and database. To do this, log in to your Postgres database using your favourite Postgres client and run, for example:
CREATE USER listmonk WITH LOGIN PASSWORD 'my-listmonk-db-password';
CREATE DATABASE listmonk OWNER listmonk;
Create a Listmonk configuration file. I created a listmonk share and put it in there, calling it /listmonk/config.toml, but anywhere on your
Unraid server will do. There’s a sample configuration
in the repository. You’ll probably want to change:
[app] address: change to 0.0.0.0:9000 to listen on all interfaces so you can access it from elsewhere on your network (might not be needed if you
intend to proxy with a host-networked reverse proxy server)
[app] admin_username / admin_password: obviously change these – this is how you’ll log in to your Listmonk system
[db] host: if your Postgres container and/or Listmonk container is running in bridged networking mode rather than host networking mode, you’ll need to change this
to the name or IP address of your Postgres server
[db] password: set to the password you chose for the listmonk user on your Postgres server
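Put together, the relevant parts of your config.toml end up looking something like this (the values are examples only; any other keys can stay as they are in the sample file):

[app]
address = "0.0.0.0:9000"
admin_username = "listmonk"
admin_password = "pick-something-better-than-this"

[db]
host = "192.168.1.10"   # or "localhost" if you're using host networking throughout
port = 5432
user = "listmonk"
password = "my-listmonk-db-password"
database = "listmonk"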
Add a Listmonk container. In Unraid, on the Docker tab, click the Add Container button. A minimal configuration might look like this:
Name: Listmonk
Repository:listmonk/listmonk:latest
Network Type: consider using Host to simplify your [db] setup, above.
Add a Port with Name: HTTP and Host Port: 9000. Then fill in 9000 as the value (or
whatever port you want to run Listmonk on)
Add a Path with Name: Config and Container Path: /listmonk/config.toml. Set the Host
Path to wherever you put the Listmonk configuration file, e.g. /mnt/user/listmonk/config.toml.
Start the Listmonk container and watch it stop. When you click “Apply” the container will start, run for a few seconds, and then stop. If you want, look at the logs
and you’ll see what the problem is: it needs to be started in a different way in order to set up the database. Instead, what we’ll do is spin up a new Listmonk container just
for that purpose (and then throw it away).
Start Listmonk in “install” mode. SSH into your Unraid server itself and run, e.g.
docker run --rm -ti --net='host' -e TZ="UTC" -v '/mnt/user/listmonk/config.toml':'/listmonk/config.toml':'rw' listmonk/listmonk:latest ./listmonk -- --install
Substitute /mnt/user/listmonk/config.toml for whatever path your configuration file is at, if applicable. You’ll be prompted with the messages “** first time
installation **”, “** IMPORTANT: This will wipe existing listmonk tables and types in the DB ‘listmonk’ **”, and then asked “continue (y/N)?”. Press “y” and the installation will
complete.
Start the Listmonk container again. This time it’ll stay running and you’ll be able to access the Web interface via e.g. https://your-unraid-server:9000/
I must be the last person on Earth to have heard about radio.garden (thanks
Pepsilora!), a website that uses a “globe” interface to let you tune in to radio stations around the world. But I’d only used it for a couple of minutes before I discovered that
there are region restrictions in place. Here in the UK, and perhaps elsewhere, you can’t listen to stations in other countries without
using a VPN or similar tool… which might introduce a different region’s restrictions!
Install this userscript;
it’s hacky – I threw it together in under half an hour – but it seems to work!
How does this work and how did I develop it?
For those looking to get into userscripting, here’s a quick tutorial on what I did to develop this bypass.
First, I played around with radio.garden for a bit to get a feel for what it was doing. I guessed that it must be tuning into a streaming URL when you select a radio station, so I opened my browser’s debugger on the Network tab and looked at what happened when I clicked on a “working”
radio station, and how that differed when I clicked on a “blocked” one:
When connecting to a station, a request is made for some JSON that contains station metadata. Then, for a working
station, a request is made for an address like /api/ara/content/listen/[ID]/channel.mp3. For a blocked station, this request isn’t made.
I figured that the first thing I’d try would be to get the [ID] of a station that I’m not permitted to listen to and manually try the URL to see if it was actually blocked, or merely not-being-loaded. Looking at a working station, I first found the ID in the
JSON response and I was about to extract it when I noticed that it also appeared in the request for the
JSON: that’s pretty convenient!
My hypothesis was
that the “blocking” is entirely implemented in the front-end: that the JavaScript code that makes the pretty bits work is looking at the “country” data that’s returned and using that to
decide whether or not to load the audio stream. That provides many different ways to bypass it, from manipulating the JavaScript to remove that functionality, to altering the
JSON response so that every station appears to be in the user’s country, to writing some extra code that intercepts the
request for the metadata and injects an extra audio player that doesn’t comply with the regional restrictions.
But first I needed to be sure that there wasn’t some actual e.g. IP-based blocking on the streams. To do this, first I took the
/api/ara/content/listen/[ID]/channel.mp3 address of a known-working station and opened it in VLC using Media
> Open Network Stream…. That worked. Then I did the same thing again, but substituted the [ID] part of the address with the ID of a “blocked” station.
VLC happily started spouting French to me: the bypass would, in theory, work!
Next, I needed to get that to work from within the site itself. It’s implemented in React, which is a pig to inject code into because it uses horrible identifiers for
DOM elements. But of course I knew that there’d be this tell-tale fetch request for the station metadata that I
could tap into, so I used this technique to override the native fetch method and
replace it with my own “wrapper” that logged the stream address for any radio station I clicked on. I tested the addresses this produced using my browser.
That all worked nicely, so all I needed to do now was to use those addresses rather than simply logging them. Rather than get into the weeds reverse-engineering the built-in
player, I simply injected a new <audio> element after it and pointed it at the correct address, and applied a couple of CSS tweaks to make it fit in nicely.
The only problem was that on UK-based radio stations I’d now hear a slight echo, because the original player was still working. I
could’ve come up with an elegant solution to this, I’m sure, but I went for a quick-and-dirty hack: I used res.json() to obtain the body of the metadata response… which
meant that the actual code that requested it would no longer be able to get it (you can only decode the body of a fetch response once!). radio.garden’s own player treats this as an
error and doesn’t play that radio station, but my new <audio> element still plays it perfectly well.
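Boiled right down, the interesting part of the script looks something like the sketch below. It’s simplified from the real thing: the pattern I use here to spot the metadata request is an assumption for illustration, while the /listen/[ID]/channel.mp3 address is the one described above:

// Wrap the native fetch: when the station-metadata request goes past, grab the
// channel ID, consume the response body (so the built-in player errors out and
// stays silent), and play the stream ourselves via an injected <audio> element.
const nativeFetch = window.fetch;
window.fetch = async (...args) => {
  const response = await nativeFetch(...args);
  const url = typeof args[0] === 'string' ? args[0] : args[0].url;
  const match = url.match(/\/api\/ara\/content\/channel\/([^/?]+)/); // metadata URL pattern: an assumption
  if (match) {
    await response.json(); // deliberately "use up" the body
    const audio = document.createElement('audio');
    audio.controls = true;
    audio.autoplay = true;
    audio.src = `https://radio.garden/api/ara/content/listen/${match[1]}/channel.mp3`;
    document.body.appendChild(audio); // the real script positions and styles it to fit in
  }
  return response;
};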
It’s not pretty, but it’s functional. You can read the finished source code on Github. I don’t anticipate
that I’ll be maintaining this script so if it stops working you’ll have to fix it yourself, and I have no intention of “finishing” it by making it nicer or prettier. I just wanted to
share in case you can learn anything from my approach.
My work colleague Simon was looking for a way to add all of the
upcoming UK strike action to their calendar, presumably so they know when not to try to catch a bus or require an ambulance or maybe
just so they’d know to whom they should be giving support on any particular day. Thom was able to suggest a
few places to see lists of strikes, such as this BBC News page and the comprehensive strikecalendar.co.uk, but neither provided a
handy machine-readable feed.
If only they knew somebody who loves an excuse to throw a screen-scraper together. Oh wait, that’s me!
I threw together a 36-line Ruby program that extracts all the data from strikecalendar.co.uk and outputs an
.ics file. I guess if you wanted you could set it up to automatically update the file a couple of times a day and host it at a URL that people can subscribe to; that’s an exercise left for the reader.
If you just want a one-off import based on the state-of-play right now, though, you can save this .ics file to your computer
and import it to your calendar. Simple.
I swear I’m onto something with this idea: Scottish-Mexican fusion cookery. Hear me out.
It started on the last day of our trip to the Edinburgh
Fringe Festival in 2012 when, in an effort to use up our self-catering supplies, JTA suggested (he later claimed this should have
been taken as a joke) haggis tacos. Ruth and I ate a whole bunch of them and they were great.
In Scotland last week (while I wasn’t climbing mountains and thinking of my father), Ruth and I came up with our second bit of Scottish-Mexican fusion food: tattie scone
quesadillas. Just sandwich some cheese and anything else you like between tattie scones and gently fry in butter.
We’re definitely onto something. But what to try next? How about…
Bean chilli stovies?
Arroz con pollo on oatcakes?
Carnitas and refried beans in a bridie?
Huevos rancheros with lorne sausage sandwiched between the tortilla and the eggs?
Kedgeree fajitas? (I’m not entirely convinced by this one)
Rumbledethumps con carne?
Caldo de leekie: cock-a-leekie soup but with Mexican rice dumped in after cooking, caldo-de-pollo-style?
Something like a chimichanga but battered before it’s fried? (my god, that sounds like an instant heart attack)
Is there a name for that experience when you forget for a moment that somebody’s dead?
For a year or so after my dad’s death 11 years ago I’d routinely have that moment:
when I’d go “I should tell my dad about this!”, followed immediately by an “Oh… no, I can’t, can I?”. Then, of course, it got rarer. It happened in 2017, but I don’t know if it happened again after
that – maybe once? – until last week.
I wonder if subconsciously I was aware that the anniversary of his death – “Dead Dad Day”, as my sisters and I call
it – was coming up? In any case, when I found myself on Cairn Gorm on a family trip and snapped a photo from near the summit, I had a moment where I thought “I should send this
picture to my dad”, before once again remembering that nope, that wasn’t possible.
Strange that this can still happen, over a decade on. If there’s a name for the phenomenon, I’d love to know it.
The two most important things you can do to protect your online accounts remain to (a) use a different password, ideally a randomly-generated one, for every service, and (b) enable
two-factor authentication (2FA) where it’s available.
If you’re not already doing that, go do that. A password manager like 1Password, Bitwarden, or LastPass will help (although be aware that the latter’s had some security issues lately, as I’ve mentioned).
I promised back in 2018 to talk about what
this kind of authentication usually1
looks like for me, because my approach is a little different:
I simply press my magic key combination, (re-)authenticate with my password safe if necessary, and then it does the rest. Including, thanks to some light scripting/hackery, many
authentication flows that span multiple pages and even ones that ask for randomly-selected characters from a secret word or similar2.
My approach isn’t without its controversies. The argument against it broadly comes down to this:
Storing the username, password, and the means to provide an authentication code in the same place means that you’re no-longer providing a second factor. It’s no longer e.g.
“something you have” and “something you know”, but just “something you have”. Therefore, this is equivalent to using only a username and password and not enabling 2FA at all.
I disagree with this argument. I provide two counter-arguments:
1. For most people, they’re already simplifying down to “something you have” by running the authenticator software on the same device, protected in the same way, as their
password safe: it’s their mobile phone! If your phone can be snatched while-unlocked, or if your password safe and authenticator are protected by the same biometrics3,
an attacker with access to your mobile phone already has everything.
2. Even if we do accept that this is fewer factors, it doesn’t completely undermine the value of time-based second factor codes4.
Time-based codes have an important role in protecting you from authentication replay!
For instance: if you use a device for which the Internet connection is insecure, or where there’s a keylogger installed, or where somebody’s shoulder-surfing and can see what you type…
the most they can get is your username, password, and a code that will stop working in 30 seconds5. That’s
still a huge improvement on a basic username/password-based system.6
Note that I wouldn’t use this approach if I were using a cloud-based password safe like those I linked in the first paragraph! For me personally: storing usernames, passwords, and
2FA authentication keys together on somebody else’s hardware feels like too much of a risk.
But my password manager of choice is KeePassXC/KeePassDX, to which I migrated after I realised that the
plugins I was using in vanilla KeePass were provided as standard functionality in those forks. I keep the master copy of my password database
encrypted on a pendrive that attaches to my wallet, and I use Syncthing to push
secondary copies to a couple of other bits of hardware I control, such as my phone. Cloud-based password safes have their place and they’re extremely accessible to people new to
password managers or who need organisational “sharing” features, but they’re not the right tool for me.
As always: do your own risk assessment and decide what’s right for you. But from my experience I can say this: seamless, secure logins feel magical, and don’t have to require an
unacceptable security trade-off.
Footnotes
1 Not all authentication looks like this, for me, because some kinds of 2FA can’t be provided by my password safe. Some service providers “push” verification checks to an app, for example. Others use proprietary
TOTP-based second factor systems (I’m looking at you, banks!). And some, of course, insist on proven-to-be-terrible
solutions like email and SMS-based 2FA.
2 Note: asking for a username, password, and something that’s basically another-password
is not true multifactor authentication (I’m looking at you again, banks!), but it’s still potentially useful for organisations that need to authenticate you by multiple media
(e.g. online and by telephone), because it can be used to help restrict access to secrets by staff members. Important, but not the same thing: you should still demand 2FA.
3 Biometric security uses your body, not your mind, and so is still usable even if you’re
asleep, dead, uncooperative, or if an attacker simply removes and retains the body part that is to be scanned. Eww.
4 TOTP is a very popular
mechanism: you’ve probably used it. You get a QR code to scan into the authenticator app on your device (or multiple devices,
for redundancy), and it comes up with a different 6-digit code every 30 seconds or so.
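For the curious, a code like that is derived roughly as follows (per RFC 6238/4226). This sketch assumes Node.js and a raw binary secret, whereas a real authenticator app starts from the Base32-encoded secret in the QR code:

// HMAC the current 30-second time-step with the shared secret, then
// "dynamically truncate" the result down to a 6-digit code.
const crypto = require('crypto');
function totp(secret, now = Date.now()) {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(now / 1000 / 30)));
  const hmac = crypto.createHmac('sha1', secret).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // last nibble picks a starting byte
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1000000;
  return String(code).padStart(6, '0');
}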
5 Strictly, a TOTP code is
likely to work for a few minutes, on account of servers allowing for drift between your clock and theirs. But it’s still a short window.
6 It doesn’t protect you if an attacker manages to acquire a dump of the usernames,
inadequately-hashed passwords, and 2FA configuration from the server itself, of course, where other forms of 2FA (e.g. certificate-based) might, but protecting servers from bad actors is a whole separate essay.
Two years after our last murder mystery party, almost
three years since the one before, and much, much longer since our last in-person one, we finally managed to have another
get-the-guests-in-one-place murder mystery party, just like old times. And it was great!
D’Avekki’s murder mystery sets use an unusual mechanic that I’ve discussed before online with other murder mystery party authorship enthusiasts1 but never tried in practice: a way of determining at
random who the murderer is when play begins. This approach has a huge benefit in that it means that you can assign characters to players using a subset of those available (rather
than the usual challenges that often come up when, for example, somebody needs to play somebody of a different gender than their own) and, more-importantly, it protects you from the
eventuality that a player drops-out at short notice. This latter feature proved incredibly useful as we had a total of three of our guests pull out unexpectedly!
The challenge of writing a murder mystery with such a mechanic is to ensure that the script and evidence adapt to the various possible murderers. When I first examined the set
that was delivered to us, I was highly skeptical: the approach is broadly as follows2:
At the start of the party, the players secretly draw lots to determine who is the murderer: the player who receives the slip marked with an X is the murderer.
Each character “script” consists of (a) an initial introduction, (b) for each of three acts, a further introduction which sets up two follow-up questions, (c) the answers to those
two follow-up questions, (d) a final statement of innocence, and (e) a final statement of guilt, for use by the murderer.
In addition, each script has a handful of underlined sections, which are to be used only if you are the murderer. This
means that the only perceivable difference between one person and another being the murderer is that only the one who is the murderer will present a small amount of additional information.
The writing is designed such that this additional piece of evidence will be enough to make the case against them be compelling (e.g. because their story becomes
internally-inconsistent).
The writing was good overall: I especially appreciated the use of a true crime podcast as a framing device (expertly delivered thanks to Rory‘s
radio voice). It was also pleasing to see, in hindsight, how the story had been assembled such that any character could be the murderer, but only one would give away a
crucial clue. The downside of the format is pretty obvious, though: knowing what the mechanic is, a detective only needs to look at each piece of evidence that appears and look for a
connection with each statement given by every other player, ruling out any “red herring” pairings that connect to every other player (as is common with just about the entire
genre, all of the suspects had viable motives: only means and opportunity may vary).
It worked very well, but I wonder if – now the formula’s understood by us – a second set in a similar style wouldn’t be as successful.
That said, nobody correctly fingered the murderer this time around. Maybe we’re out of practice? Or maybe the quality of the hints in such a wide-open and dynamic murderer-selection
mechanic is less-solid than we’re used to? It’s hard to say: I’d certainly give another D’Avekki a go to find out.