My past self, receiving a copy of Transport Tycoon for his 14th birthday, would have his mind blown if he could see the kind of insanely-complex super-stations that are
possible in (the open-source successor to) the game 30 years on.
Of course, this kind of thing – multiple simultaneously shared in-and-out routes on a bidirectional station – wasn’t (sensibly) possible before the introduction of path-based signalling in OpenTTD 0.7.0. And modern path-based signals in the game are even smarter.
But still, 14-year-old me had a dream. And nowadays that dream is real.
The dog and I came out to Ducklington today for a spot of geohashing, in search of the 2024-11-14 51 -1 geohashpoint.
After a walk around the fields to the East we had to give up on that expedition (for reasons that’ll be described in my geohashing log) so we decided to console ourselves with a hunt
for this nearby geocache, instead.
Solving the first part was made harder when I failed to read the description properly and started counting letters in the sign, rather than the plaque, but once we’d corrected that
mistake we were on our way.
At the GZ there was a clear trail that looked likely, but the dog took some coaxing to join us. As soon as I was at the coordinates (feeling like I was hiding in a bush!) and followed
the hint instructions the cache was an easy find. TFTC!
The dog and I drove out to Ducklington, parking near the church, and walked out to these fields. Unfortunately the hashpoint turned out to be 33+ metres into a field full of sheep. That _might’ve_ been the kind of trespassing I’d have been willing to consider, were it not for the combination of the amount of pedestrian traffic (a whole platoon of birdwatchers, armed with extra-long camera lenses, and every dog walker under the sun!) and the fact that I had the dog with me (who’d have had to wait unhappily outside the field: I wasn’t taking her _into_ a field of sheep, even by only 33 metres).
Instead, then, we took a pleasant walk around Ducklington and found the GC656RM “Church Micro 8564…Ducklington”
geocache, so it wasn’t entirely a wasted trip. The dog’s come home and zonked out in her basket after a decent walk, anyway!
Maybe it’s just that my sabbatical is making me pay more attention than usual, but it feels like I’m getting very lucky with nearby geohashpoints lately. Tomorrow’s hashpoint in my
graticule might be achievable!
This is a good omen, perhaps, for next week. Next week my mother and I are going to hop over to the West coast of Ireland where there are several contiguous mostly-land graticules that
have never seen a successful expedition. We could be the first! 🤞
Kids’ ability to pick up new words from context is amazing.
Kids’ confidence even when they’ve misunderstood how a word is used is hilarious. 😊
This evening, our 7-year-old was boasting about how well-behaved his class was while their regular teacher had to attend an all-day meeting, vs how much it impressed the temporary
teacher they had.
His words: “Today we had a supply teacher and we totally DOMINATED her!”
Waiting patiently at the school gates on a distinctly Autumnal morning, our pupper’s squat stature means she’s about knee-deep in the season’s golden leaves.
theimprobable.blog, which I look after on behalf of my partner’s brother after using it to GPS-track his adventures
I think that’s all of them, but it’s hard to be sure…
Footnotes
1 Maybe I’ve finally shaken off my habit of buying a domain name for everything.
Or maybe it’s just that I’ve embraced subdomains for more stuff. Probably the latter.
After brunch, I reckon I can get to and from this hashpoint… Tron-style!
Expedition
I planned a slightly circuitous route to this hashpoint in order to make a Tron achievement possible. I got my bike lightcycle out of
the garage, checked the brakes and tyres, and set off in the opposite direction to the hashpoint! My thinking was that I could cut up Tar Lakes Road to Cogges Farm, join the A40 cyclepath at Witney, follow it all the way to Barnard Gate, and – after passing through the hamlet and hopefully the hashpoint – turn _back_ along the opposite side of the A40 (for the section that
doesn’t have a cyclepath) and then cut through South Leigh to get back home.
My first hazard came just three minutes out of my door, where a motorist failed to give way to me at Stanton Harcourt Roundabout, entering the junction even though I was already
half-way across it from the other direction. They had to slam on their brakes to avoid smashing into the side of me, and I’ll admit I may have sworn at them at least a little as they
pulled guiltily away.
The Tar Lakes road remains a delightful route from Stanton Harcourt to Witney, which I’ve enjoyed cycling many times. It was a little busier than usual, perhaps because it was Sunday and
folks were off to and from the fishing lakes along its path to do some angling or to walk their dogs, but it was still a fast and easy journey. Reaching Cogges, I turned back towards
the hashpoint and joined the A40 cyclepath which, I hoped, would bring me right through it.
Approaching the hashpoint, I was concerned to see that the road was closed ahead, but a sign reassured me that it was still open to pedestrians, so I dismounted my bike. This also
provided an excuse for me to slow down and pay attention to my GPSr as I counted down the metres. I got within the circle of uncertainty at ~3m away, as I leaned over the dyke that
separates Pear Tree Cottage’s garden from the byway.
I snapped the regulation silly grin selfie at 14:44.
Photo taken, I then had to continue to push my bike all the way through the roadworks: the fastest way home at this point would have been to turn around, but I didn’t want to be robbed
of my shot at the Tron achievement, so I pressed on.
At the far end of Barnard Gate I determined that cycling back along the A40 without the benefit of a cyclepath was perhaps a little too dangerous (especially after my scare earlier), so
I adapted my route to instead head East towards Eynsham, crossing the main road at the Evenlode pub to get onto Old Witney Road, through Eynsham, and back onto the road home.
Returning home, I made sure to cut the corner short as I turned into my driveway so I didn’t cross the path I’d taken as I’d initially exited, an hour earlier. A successful trip, and a
fresh achievement!
Setting up and debugging your FreshRSS XPath Scraper
Okay, so here’s Adam’s blog. I’ve checked, and there’s no RSS feed1, so it’s time to start planning my XPath Scraper. The first thing I want to do is to find some way of identifying the “posts” on the page. Sometimes people use
solid, logical id="..." and class="..." attributes, but I’m going to need to use my browser’s “Inspect Element” tool to check:
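For illustration, the kind of markup I’m hoping to find looks something like this (a simplified sketch of the sort of structure involved, not Adam’s actual code):

<section id="posts">
  <a href="/tags/example-tag">example-tag</a>       <!-- “tag” links live in here too -->
  <a href="/posts/an-example-post/">
    <h3>An example post title</h3>
    …
  </a>
  <!-- …more posts… -->
</section>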
The next thing that’s worth checking is that the content you’re inspecting is delivered with the page, and not loaded later using JavaScript. FreshRSS’s XPath Scraper works with the raw
HTML/XML that’s delivered to it; it doesn’t execute any JavaScript2,
so I use “View Source”3 and quickly search to see that the content I’m looking for is there, too.
Now it’s time to try and write some XPath queries. Luckily, your browser is here to help! If you pop up your debug console, you’ll discover that you’ve probably got a predefined function, $x(...), to which you can pass a string containing an XPath query and get back a NodeList of the matching elements.
First, I’ll try getting all of the links inside the #posts section by running $x( '//*[@id="posts"]//a' ) –
In my first attempt, I discovered that I got not only all the posts… but also the “tags” at the top. That’s no good. Inspecting the URLs of each, I noticed that the post URLs all
contained /posts/, so I filtered my query down to $x( '//*[@id="posts"]//a[contains(@href, "/posts/")]' ) which gave me the
expected number of results. That gives me //*[@id="posts"]//a[contains(@href, "/posts/")]
as the XPath query for “news items”:
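If you’d like to replay that narrowing-down process yourself, the whole console session amounts to something like this (a sketch; the $x(...) helper is available in the consoles of Firefox- and Chromium-based browsers):

// First attempt: every link inside the #posts container…
$x( '//*[@id="posts"]//a' ).length;
// …which turns out to count the “tag” links as well as the posts.

// Refined attempt: only links whose URL contains /posts/
const posts = $x( '//*[@id="posts"]//a[contains(@href, "/posts/")]' );
posts.length;  // now matches the number of posts visible on the page

// Eyeball the hrefs to double-check:
posts.forEach( post => console.log( post.getAttribute( 'href' ) ) );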
Obviously, this link points to the full post, so that tells me I can put ./@href as the “item link” attribute in FreshRSS.
Next, it’s time to see what other metadata I can extract from each post to help FreshRSS along:
Inspecting the post titles shows that they’re <h3>s. Running $x( '//*[@id="posts"]//a[contains(@href, "/posts/")]//h3' ) gets them.
Within FreshRSS, everything “within” a post is referenced relative to the post, so I convert this to descendant::h3 for my “XPath (relative to item) for Item
Title:” attribute.
Inspecting within the post summary content, it’s… not great for scraping. The elements’ class names don’t correspond to what the content is4: it looks like Adam’s using a utility class library5.
Everything within the <a> that we’ve found is wrapped in a <div class="flex-grow">. But within that, I can see that the date is
directly inside a <p>, whereas the summary content is inside a <p> within a <div class="mb-2">. I don’t want my code to
be too fragile, and I think it’s more-likely that Adam will change the class names than the structure, so I’ll tie my queries to the structure. That gives me
descendant::div/p for the date and descendant::div/div/p for the “content”. All that remains is to tell FreshRSS that Adam’s using F j, Y as his
date format (long month name, space, short day number, comma, space, long year number) so it knows how to parse those dates, and the feed’s good.
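Putting that all together, the feed’s configuration ends up looking something like this (field labels paraphrased from FreshRSS’s “HTML + XPath (Web scraping)” settings screen; check them against the actual form):

XPath for finding news items:               //*[@id="posts"]//a[contains(@href, "/posts/")]
XPath (relative to item) for Item Title:    descendant::h3
XPath (relative to item) for Item Link:     ./@href
XPath (relative to item) for Item Date:     descendant::div/p
XPath (relative to item) for Item Content:  descendant::div/div/p
Custom date/time format:                    F j, Y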
If it’s wrong and I need to change anything in FreshRSS, the “Reload Articles” button can be used to force it to re-load the most-recent X posts. Useful if you need to tweak things. In
my case, I’ve also set the “Article CSS selector on original website” field to article so that the full post text can be pulled into my reader rather than having to visit
the actual site. Then I’m done!
Takeaways
Use Inspect Element to find the elements you want to scrape for.
Use $x( ... ) to test your XPath expressions.
Remember that most of FreshRSS’s fields ask for expressions relative to the news item and adapt accordingly.
If you make a mistake, use “Reload Articles” to pull them again.
2 If you need a scraper that executes JavaScript, you need something more-sophisticated. I
used to use my very own RSSey for this purpose, but nowadays XPath Scraping is sufficient so I don’t bother any more. Still, RSSey might be a
good starting point for you if you really need that kind of power!
3 If you’ve not had the chance to think about it before: View Source shows you the actual
HTML code that was delivered from the web server to your browser. This then gets interpreted by the browser to generate the DOM, which might result in changes to it: for example,
invalid elements might be removed, ambiguous markup will have an interpretation applied, and so on. The DOM might further change as a result of JavaScript code, browser plugins, and
whatever else. When you Inspect Element, you’re looking at the DOM (represented “as if” it were HTML), not the actual underlying HTML.
4 The date isn’t in a <time> element nor does it have a class like
.post--date or similar.
5 I’ll spare you my thoughts on utility class libraries for now, but they’re… not
positive. I can see why people use them, and I’ve even used them myself before… but I don’t think they’re a good thing.
After a morning of optimising a nonprofit’s reverse proxy configuration, I feel like I’ve earned my lunch! Four cheese, mushroom and jalapeño quesapizzas, mmm…
The feature here is that you can take a color you already have and manipulate its components. Which things you can change vary by the color space you choose, so for an RGB color you
can change the red, green, blue, and alpha channels, for an HSL color you can change hue, saturation, lightness, and alpha, and for
my beloved OKLCH you can change lightness, chroma, hue, and yes, opacity.
The syntax if you wanted to use this and not change anything about the color is:
oklch(from var(--color) l c h / 1)
But of course you can change each component, either swapping them entirely as with this which sets the lightness to 20%:
oklch(from var(--color) 20% c h / 1)
…
This is really something. I was aware that new colour functions were coming to CSS but kinda dropped the ball and didn’t notice that oklch(...) is, for the most
part, usable in any modern browser. That’s a huge deal!
The OKLCH colour model makes more sense than RGB, covers a wider spectrum than HSL, and – on screens that support it – describes a (much) larger gamut, providing access to a wider array of colours (with sensible fallbacks where they’re not supported). But more than that, the oklch(...) function provides good colour adaptation.
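Those fallbacks, incidentally, can be as simple as the cascade itself: a browser that doesn’t understand oklch(...) ignores that declaration and keeps the one before it. A minimal sketch, with hand-picked rather than precisely-converted colour values:

.cta {
  background: rgb(98 0 234);         /* fallback for browsers without OKLCH support */
  background: oklch(48% 0.27 295);   /* a more-vivid violet on wide-gamut displays */
}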
If you’ve ever used e.g. Sass’s darken(...) function and been disappointed when it seems to have a bigger impact on some colours than others… that’s because simple
mathematical colour models don’t accurately reflect the complexities of human vision: some colours just look brighter, to us, thanks to quirks of biochemistry, psychology, and
evolution!
This colour vision curve feels to me a little like how pianos aren’t always tuned to equal temperament – i.e. how the maths of harmonics says they should be – but are instead tuned so that the lowest notes are slightly flat and the highest notes slightly sharp, to compensate for
inharmonicity resulting from the varying stiffness of the strings. This means that their taut length alone doesn’t dictate what note humans think they hear: my understanding is
that at these extremes, the difference in the way the wave propagates within the string results in an inharmonic overtone that makes these notes sound out-of-tune with the rest
of the instrument unless compensated for with careful off-tuning! Humans experience something other than what the simple maths predicts, and so we compensate for it! (The quirk isn’t
unique to the piano, but it’s most-obvious in plucked or struck strings, rather than in bowed strings, and for instruments with a wide range, of which a piano is of course both!)
OKLCH is like that. And with it as a model (and a quick calc(...) function), you can tell your
CSS “make this colour 20% lighter” and get something that, for most humans, will actually look “20% lighter”, regardless of the initial hue. That’s cool.
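In case it’s useful, here’s a minimal sketch of what that looks like, combining the relative colour syntax quoted above with calc(...) (the --color property and the .lighter class are mine, for illustration; lightness in oklch() runs from 0 to 1):

:root {
  --color: oklch(50% 0.12 220);   /* the base colour; any colour syntax works here */
}

.lighter {
  /* “20% lighter”: nudge the L channel up by 0.2 (20% of its 0–1 range),
     leaving chroma and hue untouched */
  background: oklch(from var(--color) calc(l + 0.2) c h);
}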
I spent way too long playing with this colour picker while getting my head around this concept. And now I want to use it
everywhere!