WSL9x runs a modern Linux kernel (6.19 at time of writing) cooperatively inside the Windows 9x kernel, enabling users to take advantage of the full suite of capabilities of both
operating systems at the same time, including paging, memory protection, and pre-emptive scheduling. Run all your favourite applications side by side – no rebooting required!
…
Well this blew my mind.
Windows Subsystem for Linux (WSL) is one of the best things Microsoft have added to Windows in the last decade1.
But, of course, it’s for Windows 10 and 11 only. I would never have conceived that somebody could make the same trick work for, like, Windows 95!
But Hails has done so. And no, this isn’t some kind of emulation; it’s proper cooperative multitasking between the two kernels, just like regular WSL does. Somehow, in a version that
came out nine years before Windows even supported the NX bit. Mindboggling.
Footnotes
1 This ought to be a little embarrassing for them: I mean – if the most-valuable
improvement you make to your operating system is to make it… more like a different operating system… – that’s not a great sign, is it?
For instance, at the start of the weekend I received an email from somebody called Phil, who asked:
Could you possibly have an alternative ‘HQ’ version of your feeds which replaces standard/240 with standard/1200 in the URL for each article in the XML?
I am obviously pretty desperate for this feature, hence me reaching out.
Phil’s right. The BBC News RSS feeds contain thumbnail images that look like this:
You see the /240/ in that URL? If you change it to /1200/ then, as Phil observes, you get a much-higher resolution thumbnail. Naturally you ought
to correct the width and height attributes accordingly, too.
The difference is pretty significant. See:
You’d be forgiven for thinking the left-hand-side of this image was the Lego model of this car.
So I raised Phil’s request as a GitHub issue, like a good maintainer, before realising that – hang on – this would be
a really easy improvement and I should just… do it.
My BBC feeds “improver”1 leverages one of my very favourite RubyGems, Nokogiri, to perform XML parsing and modification. The code you need to tweak
these URLs is super simple:
# Iterate through each <media:thumbnail> element in the RSS feed:
rss.xpath('//media:thumbnail').each do |thumb|
  # Skip any that don't start the way we expect:
  next unless thumb['url'] =~ /^https:\/\/ichef.bbci.co.uk\/ace\/(standard|ws)\/240\//

  # Swap the 240 for 1200 in the url="..." attribute:
  thumb['url'] = thumb['url'].gsub(/\/ace\/(standard|ws)\/240\//, "/ace/\\1/1200/")

  # Set width="1200":
  thumb['width'] = "1200"

  # Set the height="..." proportionally (they're not always the same!):
  thumb['height'] = (thumb['height'].to_f / 240 * 1200).round.to_s
end
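If you'd like to see the substitution in isolation, here's a self-contained sketch using only the Ruby standard library. The article path (`cpsprodpb/example.jpg`) and the 240×135 dimensions are invented for illustration; the real values come from the feed:

```ruby
# The same URL rewrite, demonstrated on plain strings. The article path
# ("cpsprodpb/example.jpg") is made up for illustration; real BBC
# thumbnails follow the same /ace/(standard|ws)/240/ pattern.
url = "https://ichef.bbci.co.uk/ace/standard/240/cpsprodpb/example.jpg"
url = url.gsub(/\/ace\/(standard|ws)\/240\//, "/ace/\\1/1200/")
# => "https://ichef.bbci.co.uk/ace/standard/1200/cpsprodpb/example.jpg"

# The height scales proportionally: a 240x135 thumbnail becomes 1200x675.
height = (135.0 / 240 * 1200).round.to_s
# => "675"
```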
That really is all there is to it, but look at what a difference it makes in an RSS reader:
I got that merged and the GitHub action that makes the magic happen got started on its usual 20-minute schedule soon afterwards. I didn’t even have to finish waiting for my lunchtime
ramen to cool down before the change was out there and, hopefully, helping people. Phil emailed me again soon afterwards:
You managed to fix something in your lunch break that has been bugging me for well over a decade. The difference in quality is night and day.
Anyway: it pleased me to discover that my software is out there, helping people.
As with most of my open source work, I put little to no effort into tracking any kind of metrics of usage, which means I only get to find out if I’ve done good in the world when people
reach out and tell me. So I was delighted to hear from Phil (as well as to take his suggestion and improve the tool for everybody!).
Footnotes
1 Specifically, the code I’ve written makes a few improvements to the BBC News RSS feeds:
(1) removing duplicate news, (2) removing non-news content such as “nudges” towards the app or to iPlayer content, and (3) optionally removing sports news. If that sounds
like a better version of the BBC News RSS feeds, you should take a look!
This is a reply to a post published elsewhere. Its content might be duplicated as a traditional comment at the original source.
…
For a year or so after he died, I used to call his phone as it would go straight to voicemail and I’d get to hear his voice. Eventually the line was cut though. I wish I’d recorded
it, just to have something.
His mobile phone was never found. Given our relationships with our phones, it’s easy to imagine it as a piece of him still up there. It may have broken at the time of the accident or
may have failed some time later – first when its battery expired; later when it was destroyed by the elements – but that place was still the last from which it reached out to a
cellular tower: where it last made the connection that defines its purpose.
His voicemail, of course, didn’t live on his phone. That it does is an illusion for the convenience of humans, especially those of us who are old enough to remember having to replace
the Dictaphone-style microcassette tapes in physical answering machines (remember those?). But the illusion of him living on in that, too, persisted. A few times in the months that
followed, I called his mobile number – one of very few etched permanently in my memory – just to hear his voice. Sometimes I’d leave a short message; a message that nobody would hear.
It was a strange time.
Later, I learned that my dad’s partner had done the same. She regretted deleting her final received voicemail from him, and calling to hear the outbound message was
perhaps the next-closest thing.
Years later, in 2017, I wrote about the experience of calling my dad’s mobile after his death. I’d been reminded of the ritual when listening to a
new album – Robert Plant’s Carry Fire – and thinking “gosh, my dad would have loved this; what a shame that he didn’t live long enough to hear it.”
In my experience, that’s the journey you take when you lose a close family member. For a while, you miss them because of what they shared with you: love, care, upbringing, support,
company… you regret that they’re not there any longer and you wish you could have them back. But as time goes on, there’s a transition, and the moments that you miss them are about the
things that they didn’t live to see. It saddens me that my dad never got to meet our children (our eldest was born between one and two years after his death), for example (and not just
because it would have spared me playing a game of re-enacting his demise with one of them!).
Of course, like any grief, it fades and gets easier with time, even if it never goes away.
On Friday, I said goodbye to a colleague as she left us after most of a decade with the company. Then this morning, all hell broke loose on some production servers.
It turns out that the API key that connected our application to our feature flag management platform was associated with her account, and hadn’t shown up in the exit audit.
Let this be your reminder to go check where, if anywhere, your applications are using person-specific keys where they should be using generic ones!
This afternoon I’m acting as backup driver for my partner Ruth, who’s walking the
length of the Thames Path by (very gradual) instalments. Having parked at Culham Lock I began to walk back towards Abingdon to meet the walking team coming the other way, when I
noticed that a new cache had been published nearby and diverted to find it.
A delightful tree climb later and I had this great cache container in hand. TFTC, FP awarded!
A little sad to DNF this cache, which is probably just well-hidden, but I’ve only got limited time to hunt as I’m the backup driver for this segment of a walk down the entire length of the Thames Path.
On the 6th of January 1995, viewers of BBC Two were treated to a new series of ~~Waiting for Godot~~ Bottom. Stuck at the top of a Ferris wheel, ~~Vyvyan and the People’s
Poet~~ Eddie and Ritchie wait to see what the cruel hand of fate has dealt them in this week’s episode “Hole”.
At one point, ~~Captain Edrison Peavey~~ Edward Elizabeth Hitler pulls out a newspaper to read.
It may surprise you to know that the “Hammersmith Bugle” is not a real paper and they never ran a headline “No News Shocker”. At which point, it is time to rip off Dirty Feed’s shtick and find out what that paper really is.
…
This is exactly the kind of rabbitholey deep-dive I know and love (and have experienced ever so frequently myself). Join Terence on a long (and not-entirely-satisfying!)
quest to find the actual newspaper that was adapted by the Bottom production team for this particular episode.
Anyway, it’s an amusing journey that I enjoyed going along with, this morning, and maybe you will too.
I took the dog out for a walk from the Chicory House yesterday. At one point, we found ourselves on a familiar-looking footpath, though I couldn’t place
exactly why I’d been there before. Geocaching, possibly: I couldn’t see any on the map, but perhaps they’d since been archived?
Personal location tracking continues to be awesome. Being able to both forwards-search (“where was I on this date?”) and reverse-search (“when was I last within this area?”) unlocks a
wealth of aides-mémoire that are otherwise hard to come by.
It’s hard to sell people on the idea, probably because it’s a slow-burner – you need lots of data before it starts to pay off! – but I still recommend it.
With its few columns and large hit-areas, the game’s well-optimised for mobile play.
The premise is simple enough:
5-column solitaire game with 1-5 suits.
23 cards dealt out into those columns; only the topmost ones face-up.
2 “reserve” cards retained at the bottom.
Stacks can be formed atop any suit based on value-adjacency (in either order, even mixing the ordering within a stack).
Individual cards can always be moved, but stacks can only be moved if they share a value-adjacency chain and are all the same suit.
Aim is to get each suit stacked in order at the top.
Well this looks like a suboptimal position…
One of the things that stands out to me is that the game comes with over five thousand pre-shuffled decks to play, all of which are guaranteed to be “winnable”.
Playing through these is very satisfying because it means that if you get stuck, you know that it’s because of a choice that you made2,
and not (just) because you got unlucky with the deal.
After giving us 5,105 pregenerated ‘decks’, author Zach Gage probably thinks we’ll never run out of playable games. Some day, I might prove him wrong.
Every deck is “winnable”?
When I first heard that every one of FlipFlop’s pregenerated decks was winnable, I misinterpreted it as claiming that every conceivable shuffle for a game
of FlipFlop was winnable. But that’s clearly not the case, and it doesn’t take significant effort to come up with a deal that’s clearly not-winnable. It only takes a
single example to disprove a statement!
If you think you’ve found a solution to this deal – for example, by (necessarily) dealing out all of the cards, then putting both reserve kings out and stacking everything else on top
of them in order to dig down to the useful cards – bear in mind that (a) the maximum stack depth of 20 means you can get to a 6, or a 5, but not both, and (b) you can’t then move any
of those stacks in aggregate because – although it’s not clear in my monochrome sketch – the suits cycle in a pattern to disrupt such efforts.
That it’s possible for a fairly-shuffled deck of cards to lead to an “unwinnable” game of FlipFlop Solitaire means the author must necessarily have had some
mechanism to differentiate between “winnable” deals (which are probably the majority) and “unwinnable” ones. And therein lies an interesting problem.
If the only way to conclusively prove that a particular deal is “winnable” is to win it, then the developer must have had an algorithm that they were using to test that a given
deal was “winnable”: that is – a brute-force solver.
So I had a go at making one3.
The code is pretty hacky (don’t judge me) and, well… it takes a long, long time.
This isn’t an animation, but it might as well be! By the time you’ve permuted all possible states of the first ten moves of this starting game, you’re talking about having somewhere
in the region of three million possible states. Solving a game that needs a minimum of 80 moves takes… a while.
Partially that’s because the underlying state engine I used, BfsBruteForce, is a breadth-first optimising algorithm. It aims to
find the absolute fewest-moves solution, which isn’t necessarily the fastest one to find because it means that it has to try all of the “probably stupid” moves it
finds4
with the same priority as the “probably smart” moves5.
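For the curious, the shape of such a breadth-first solver can be sketched like this. The “game” below is a toy (counting from 0 up to a goal in steps of 1 or 2, standing in for real FlipFlop positions and card moves; the actual move rules would slot into the `[1, 2].each` loop), but the search structure is the one a breadth-first brute-forcer embodies:

```ruby
require 'set'

# A toy breadth-first search in the spirit of the solver described above.
# States here are just integers and "moves" are +1/+2 steps; a real
# FlipFlop solver would enumerate legal card moves instead. Because BFS
# explores every state at depth n before any at depth n+1, the first
# solution found is guaranteed to use the fewest moves... which is
# exactly why it wades through so many "probably stupid" ones first.
def bfs_solve(start, goal)
  queue   = [[start, []]]     # frontier of [state, moves-so-far]
  visited = Set.new([start])  # never re-expand a repeated state
  until queue.empty?
    state, path = queue.shift
    return path if state == goal
    [1, 2].each do |step|     # the "valid moves" from this state
      nxt = state + step
      next if nxt > goal || visited.include?(nxt)
      visited << nxt
      queue << [nxt, path + [step]]
    end
  end
  nil                         # state space exhausted: no solution exists
end

bfs_solve(0, 5) # => [1, 2, 2]
```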
If you pull off a genuinely random shuffle, then – statistically-speaking – you’ve probably managed to put that deck into an order that no deck of cards has ever been in
before!6
And sure: the rules of the game reduce the number of possibilities quite considerably… but there’s still a lot of them.
So how are “guaranteed winnable” decks generated?
I think I’ve worked out the answer to this question: it came to me in a dream!
Show this puzzle to any smarter-than-average child and they’ll quickly realise that the fastest way to get the solution is not to start from each programmer and trace
their path… but to start from the laptop and work backwards!
The trick to generating “guaranteed winnable” decks for FlipFlop Solitaire (and, probably, any similar game) is to work backwards.
Instead of starting with a random deck and checking if it’s solvable by performing every permutation of valid moves… start with a “solved” deck (with all the cards stacked
up neatly) and perform a randomly-selected series of valid reverse-moves! E.g.:
The first move is obvious: take one of the kings off the “finished” piles and put it into a column.
For the next move, you’ll either take a different king and do the same thing, or take the queen that was exposed from under the first king and place it either in an empty
column or atop the first king (optionally, but probably not, flipping the king face down).
With each subsequent move, you determine what the valid next-reverse-moves are, choose one at random (possibly with some kind of weighting), and move on!
In computational complexity theory, you just transformed an NP-Hard problem7
into a P problem.
Once you eliminate repeat states and weight the randomiser to gently favour moving “towards” a solution that leaves the cards set-up and ready to begin the game, you’ve created a
problem that may take an indeterminate amount of time… but it’ll be finite and its complexity will scale linearly. And that’s a big improvement.
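As a sketch of how that reverse-move generator might look: the code below scrambles a “solved” array via random adjacent swaps. The swaps are a stand-in for real reverse-moves (I’m not claiming these are FlipFlop’s actual rules), but the structure – start solved, repeatedly pick a random valid reverse-move, eliminate repeat states – is the one described above:

```ruby
# Generate a "guaranteed solvable" position by working backwards from a
# solved state. Here the state is an array and a reverse-move is an
# adjacent swap; in a real FlipFlop generator the reverse-moves would be
# un-stackings and un-dealings of cards, chosen by the same loop.
def scramble(solved, reverse_move_count)
  state = solved.dup
  seen  = [state.dup]                 # eliminate repeat states
  reverse_move_count.times do
    # Enumerate every candidate reverse-move (every adjacent swap)...
    candidates = (0...state.length - 1).map do |i|
      nxt = state.dup
      nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
      nxt
    end.reject { |nxt| seen.include?(nxt) }
    break if candidates.empty?        # no unvisited state reachable
    state = candidates.sample         # ...and pick one at random
    seen << state.dup
  end
  state                               # solvable by reversing our swaps
end
```

Every position this emits is reachable from the solved state by construction, so replaying the chosen swaps backwards is always a solution: no brute-force check required.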
I started implementing a puzzle-creator that works in this manner, but the task wasn’t as interesting as the near-impossible brute-force solver so I gave up, got distracted,
and wrote some even more-pointless code instead.
If you go ahead and make an open source FlipFlop deck generator, let me know: I’d be interested to play with it!
Footnotes
1 I don’t get much time to play videogames, nowadays, but I sometimes find that I’ve got
time for a round or two of a simple “droppable” puzzle game while I’m waiting for a child to come out of school or similar. FlipFlop Solitaire is one of only three games I
have installed on my phone for this purpose, the other two – both much less frequently-played – being Battle of Polytopia and the
buggy-but-enjoyable digital version of Twilight Struggle.
2 Okay, it feels slightly frustrating when you make a series of choices that are
perfectly logical and the most-rational decision under the circumstances. But the game has an “undo” button, so it’s not that bad.
4 An example of a “probably stupid” move would be splitting a same-suit stack in order to
sit it atop a card of a different suit, when this doesn’t immediately expose any new moves. Sometimes – just sometimes – this is an optimal strategy, but normally it’s a pretty bad
idea.
5 Moving a card that can go into the completed stacks at the top is usually a good idea…
although just sometimes, and especially in complex mid-game multi-suit scenarios, it can be beneficial to keep a card in play so that you can use it as an anchor for something else,
thereby unblocking more flexible play down the line.
6 Fun fact: shuffling a deck of cards is a sufficient source of entropy that you can use
it to generate cryptographic keystreams, as Bruce Schneier demonstrated in 1999.
7 I’ve not thought deeply about it, but determining if a given deck of cards will result
in a winnable game probably lies somewhere between the travelling salesman and the halting problem, in terms of complexity, right? And probably not something a right-thinking person
would ask their desktop computer to do for fun!
As if I hadn’t suffered enough “flood damage” this year, I started my first workday since rebuilding my home office setup – for the first time in months! – in our rental… by pouring a
cup of coffee into my keyboard. 😱
Now that we’ve finished our move into the Chicory House, I
have for the first time in over two months been able to set up my preferred coding environment… with a proper monitor on a proper desk with a proper office chair. Bliss!
Jeremy Keith posted his salary history last week. I absolutely agree with him that employers exploit
the information gap created by opaque salary advertisement, and I think that our industry of software engineering is especially troublesome for this.
So I’m joining him (and others) in choosing to share my salary history. I’ve set up a new page for that purpose, but here’s the summary of its
initial state:
Understandings
A few understandings and caveats:
For most of my career I’ve described myself as a “Full-Stack Web Applications Developer”, but I’ve worked outside of every one of those words and my job titles have often been more
like “CMS Developer” or “Senior Engineer (Security)”.
My specialisms and “hot areas” are security engineering, web standards, performance, and accessibility.
When I worked multiple roles in a year, I’ve tried to capture that, but there’ll be some fuzziness around the edges.
The salaries are rounded slightly to make nice readable numbers.
I’ve not always worked full-time; all salaries are translated into “full-time equivalent”1.
I’ve only included jobs that fit into my software engineering career2.
If the table below looks out-of-date then I’ve probably just forgotten to update it. Let me know!
[Salary history table not reproduced here; two of its entries carry the notes “Ad-hoc and hard to estimate” and “Alongside full-time study”.]
What does that look like?
I drew a graph, but I don’t like it. Mostly because I don’t see my salary as a “goal” to aim for or some kind of “score”.
It’s gone up; it’s gone down; but I’ve always been more-motivated by what I’m working on, with whom, and for what purpose than I have been on how much I get paid for it3.
But if you want to see:
I’m not sure to what degree my career looks typical. But I guess I also don’t care! My motivations are probably different from most people’s (a little more idealistic, a little less
capitalistic).
Footnotes
1 i.e. what I’d have earned if I had worked full-time
2 That summer back in college that I worked in a factory building striplight fittings
doesn’t appear, for example!
3 Pro-tip if you’re looking at my CV and pitching me an opportunity:
mention what you expect to pay, sure, but if you’re trying to win me over then tell me about the problems I’ll be solving and how that’ll make the world a better
place. That’s how you motivate me to accept your offer!