Review for ProtonMail Encryption Status by Morgan Larosa

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

⭐⭐⭐⭐⭐
Does what it says on the tin! Short and sweet codebase that’s easy enough to verify personally, and doesn’t ask for any crazy permissions.

I probably needn’t care about this validation: when I wrote a Thunderbird plugin to enhance integration with ProtonMail, I wrote it principally for myself: scratching my own itch. It was nice to see that (at time of writing) a few hundred other people have made use of the extension too, but it wasn’t essential. I’d be maintaining it regardless because I use it every day.

But it still warmed my heart to see a five-star review come in alongside a clearly-expressed justification.

Standing up for developers: youtube-dl is back

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Today we reinstated youtube-dl, a popular project on GitHub, after we received additional information about the project that enabled us to reverse a Digital Millennium Copyright Act (DMCA) takedown.

This is a Big Deal. For two reasons:

Firstly, youtube-dl is a spectacularly useful project. I’ve used it for many years to help me archive my own content, to improve my access to content that’s freely available on the platform, and to help centralise (freely available) metadata to keep track of my subscriptions on video-sharing sites. Others have even more-important uses for the tool. I love youtube-dl, and I’d never considered the possibility that it could be used to circumvent digital restrictions (apparently it’s got some kind of geofence-evading features you can optionally enable, for people who don’t have a multi-endpoint VPN I guess?… I note that it definitely doesn’t break DRM…) until its GitHub repo got taken down the other week.

Which was a bleeding stupid thing to use a DMCA request on, because, y’know: the Streisand Effect. Lampshading that a free, open-source tool could be used for people’s convenience is likely to increase awareness and adoption, not decrease it! Huge thanks to the EFF for stepping up and telling GitHub that they’d got it wrong (this letter is great reading, by the way).

But secondly, GitHub’s response is admirable and – assuming they honour their new stance – effective. They acknowledge their mistake, then go on to set out a new process by which they’ll review takedown requests. That new process includes technical and legal review, erring on the side of the developer rather than the claimant (i.e. “innocent until proven guilty”), multiparty negotiation, and limiting the scope of takedowns by allowing violators to export their non-infringing content after the fact.

I was concerned that the youtube-dl takedown might create a FOSS “chilling effect” on GitHub. It still might: in the light of it, I for one have started backing up my repositories, and those of projects I care about, to a different Git server! But with this response, I’d still be confident hosting the main copy of an open-source project on GitHub, even if that project was one at risk of being mistaken for copyright violation.

Note that the original claim came not from Google/YouTube as you might have expected (if you’ve just tuned in) but from the RIAA, based on the fact that youtube-dl could be used to download copyrighted music videos for enjoyment offline. If you’re reminded of Sony v. Universal City Studios (1984) – the case behind the “Betamax standard” – you’re not alone.

Displaying ProtonMail Encryption Status in Thunderbird

In a hurry? Get the Thunderbird plugin here.

I scratched an itch of mine this week and wanted to share the results with you, in case you happen to be one of the few dozen other people on Earth who will cry “finally!” to discover that this is now a thing.

Encrypted email identified in Thunderbird having gone through ProtonMail Bridge
In the top right corner of this email, you can see that it was sent with end-to-end encryption from another ProtonMail user.

I’ve used ProtonMail as my primary personal email provider for about four years, and I love it. Seamless PGP/GPG for proper end-to-end encryption, privacy as standard, etc. At first, I used their web and mobile app interfaces, but over time I’ve come to rediscover my love affair with “proper” email clients, and I’ve been mostly using Thunderbird for my desktop mail. It’s been great: lightning-fast search, offline capabilities, and thanks to IMAP (provided by ProtonMail Bridge) my mail’s still just as accessible when I fall back on the web or mobile clients because I’m out and about.

But the one thing this set-up lacked was the ability to easily see which emails had been delivered encrypted versus those which had merely been delivered “in the clear” (like most emails) and then encrypted for storage on ProtonMail’s servers. So I fixed it.

Four types of email: E2E encrypted internal mail from other ProtonMail users, PGP-encrypted email from non-ProtonMail users, mail delivered in the clear but stored encrypted by ProtonMail, and completely unencrypted mail such as that stored locally in your Sent or Drafts folder
There are fundamentally four states a Thunderbird+ProtonMail Bridge email can be in, and here’s how I represent them.

I’ve just released my first ever Thunderbird plugin. If you’re using ProtonMail Bridge, it adds a notification to the corner of every email to say whether it was encrypted in transit or not. That’s all.
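
For the curious: the detection boils down to reading the X-Pm-Origin and X-Pm-Content-Encryption headers that ProtonMail Bridge attaches to the messages it serves over IMAP. Here’s a rough sketch of the idea as a Thunderbird MailExtension – simplified from the real add-on, with the badge-rendering reduced to a console.log:

// Simplified sketch (not the add-on's literal source): classify each
// displayed message by the X-Pm-* headers ProtonMail Bridge adds.
// Requires the "messagesRead" permission in manifest.json.
browser.messageDisplay.onMessageDisplayed.addListener(async (tab, message) => {
  const full = await browser.messages.getFull(message.id);
  const origin = (full.headers['x-pm-origin'] || [''])[0];
  const encryption = (full.headers['x-pm-content-encryption'] || [''])[0];

  let status;
  if (origin === 'internal' && encryption === 'end-to-end') {
    status = 'End-to-end encrypted by another ProtonMail user';
  } else if (encryption === 'end-to-end') {
    status = 'PGP-encrypted by a non-ProtonMail sender';
  } else if (encryption === 'on-delivery') {
    status = 'Delivered in the clear, encrypted for storage';
  } else {
    status = 'Unencrypted (e.g. a local Sent/Drafts copy)';
  }
  console.log(status); // the real plugin renders this as a corner notification
});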

And of course it’s open source with a permissive license (and a doddle to compile using your standard operating system tools, if you want to build it yourself). If you’re using Thunderbird and ProtonMail Bridge you should give it a whirl. And if you’re not then… maybe you should consider it?


Syncthing

This last month or so, my digital life has been dramatically improved by Syncthing. So much so that I want to tell you about it.

Syncthing interface via Synctrayzor on Windows, showing Dan's syncs.
1.25TiB of data is automatically kept in sync between (depending on the data in question) a desktop PC, NAS, media centre, and phone. This computer’s using the Synctrayzor system tray app.

I started using it last month. Basically, what it does is keep a directory on one system “in sync” with a directory on another. So far, that’s just like your favourite cloud storage service, albeit self-hosted and much-more customisable. But it’s got a handful of killer features that make it nothing short of a dream to work with:

  • The unique identifier for a computer can be derived from its public key. Encryption comes free as part of the verification of a computer’s identity.
  • You can share any number of folders with any number of other computers, point-to-point or via an intermediate proxy, and it “just works”.
  • It’s super transparent: you can always see what it’s up to, you can tweak the configuration to match your priorities, and it’s open source so you can look at the engine if you like.

Here are some of the ways I’m using it:

Keeping my phone camera synced to my PC

Phone syncing with PC

I’ve tried a lot of different solutions for this over the years. Back in the way-back-when, like everybody else in those dark times, I used to plug my phone in using a cable to copy pictures off and sort them. Since then, I’ve tried cloud solutions from Google, Amazon, and Flickr and never found any that really “worked” for me. Their web interfaces and apps tend to be equally terrible for organising or downloading files, and I’m rarely able to simply drag-and-drop images from them into a blog post like I can from Explorer/Finder/etc.

At first, I set this up as a one-way sync, “pushing” photos and videos from my phone to my desktop PC whenever I was on an unmetered WiFi network. But then I switched it to a two-way sync, enabling me to more-easily tidy up my phone of old photos too, by just dragging them from the folder that’s synced with my phone to my regular picture storage.

Centralising my backups

Phone and desktop backups centralised through the NAS

Now I’ve got a fancy NAS device with tonnes of storage, it makes sense to use it as a central point for backups to run from. Instead of having many separate backup processes running on different computers, I can just have each of them sync to the NAS, and the NAS can back everything up. Computers don’t need to be “on” at a particular time because the NAS runs all the time, so backups can use the Internet connection when it’s quietest. And in the event of a hardware failure, there’s an up-to-date on-site backup in the first instance: the cloud backup’s only needed in the event of accidental data deletion (which could have been synced already, of course!). Plus, integrating the sync with ownCloud running on the NAS gives easy access to my files wherever in the world I am without having to fire up a VPN or otherwise remote-in to my house.

Plus: because Syncthing can share a folder between any number of devices, the same sharing mechanism that puts my phone’s photos onto my main desktop can simultaneously be pushing them to the NAS, providing redundant connections. And it was a doddle to set up.

Maintaining my media centre’s screensaver

PC photos syncing to the media centre.

Since the NAS, running Jellyfin, took on most of the media management jobs previously shared between desktop computers and the media centre computer, the household media centre’s had less to do. But one thing that it does, and that gets neglected, is showing a screensaver of family photos (when it’s not being used for anything else). Historically, we’ve maintained the photos in that collection via a shared network folder, but then you’ve got credential management and firewall issues to deal with, not to mention different file naming conventions by different people (and their devices).

But simply sharing the screensaver’s photo folder with the computer of anybody who wants to contribute photos means that it’s as easy as copying the picture to a particular place. It works on whatever device they care to use (computer, tablet, mobile) on any operating system, and it’s quick and seamless. I’m just using it myself, for now, but I’ll be offering it to the rest of the family soon. It’s a trivial use-case, but once you’ve got it installed it just makes sense.

In short: this month, I’m in love with Syncthing. And maybe you should be, too.


Identifying Post Kinds in WordPress RSS Feeds

I use the Post Kinds plugin to streamline the management of the different types of posts I make on my blog, based on the IndieWeb post types list: articles, like this one, are “conventional” blog posts, but I also publish notes (which are analogous to “tweets”), reposts (“shares” of things I’ve found online, sometimes with commentary), checkins (mostly chronicling my geocaching/geohashing), and others: I’ve extended Post Kinds to facilitate comics and reviews, for example.

But for people who subscribe (either directly or indirectly) to everything I post, I imagine it must be a little frustrating to sometimes be unable to identify the type of a post before clicking-through. So I’ve added the following code, which I’m sharing here and on GitHub in case it’s of any use to anybody else, to my theme’s functions.php:

// Make titles in RSS feed be prefixed by the Kind of the post.
function add_kind_to_rss_post_title(){
        $kinds = wp_get_post_terms( get_the_ID(), 'kind' );
        if ( is_wp_error( $kinds ) || empty( $kinds ) ) return get_the_title(); // sanity-check: no kind assigned (or the lookup failed).
        $kind = $kinds[0]->name;
        $title = get_the_title();
        return trim( "[{$kind}] {$title}" );
}
add_filter( 'the_title_rss', 'add_kind_to_rss_post_title', 4 ); // priority 4 to ensure it happens BEFORE default escaping filters.

This decorates the titles of my posts, but only in my feeds, so it’s easier for people to tell at-a-glance what’s going on:

Rendered RSS feed showing Post Kinds prefixes

Down the line I might expand this so that it doesn’t show if the subscriber is, for example, asking only for articles (e.g. via this feed); I’m coming up with a huge list of things I’d like to do at IndieWebCamp London! But for now, this feels like a nice simple improvement to a plugin I love that helps it to fit my specific needs.


City Roads

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Map of Kidlington's roads

Cute open source project that produces on-demand SVG and PNG maps, like the one above, based on the roads in OpenStreetMap data. It takes a somewhat liberal view of what a “road” is: I found it momentarily challenging to get my bearings in the map above, which includes where I live, because it also includes the towpath and cycle paths, which I hadn’t expected. Still a beautiful bit of output, and the source code could be adapted for any number of interesting cartographic projects.

Cheating Hangman

A long while ago, inspired by Nick Berry‘s analysis of optimal Hangman strategy, I worked it backwards to find the hardest words to guess when playing Hangman. This week, I showed these to my colleague Grace – who turns out to be a fan of word puzzles – and our conversation inspired me to go a little deeper. Is it possible, I thought, for me to make a Hangman game that cheats by changing the word it’s thinking of based on the guesses you make in order to make it as difficult as possible for you to win?

Play “Cheating Hangman”

The principle is this: every time the player picks a letter, but before declaring whether or not it’s found in the word –

  1. Make a list of all possible words that would fit into the boxes from the current game state.
  2. If there are still lots of them, that’s fine: let the player’s guess go ahead.
  3. But if the player’s managing to narrow down the possibilities, attempt to change the word that they’re trying to guess! The new word must be:
    • Legitimate: it must still be the same length, have correctly-guessed letters in the same places, and contain no letters that have been declared to be incorrect guesses.
    • Harder: after resolving the player’s current guess, the number of possible words must be larger than the number of possible words that would have resulted otherwise.
Gallows on a hill.
Yeah, you’re screwed now.

You might think that this strategy would just involve changing the target word so that you can say “nope” to the player’s current guess. That happens a lot, but it’s not always the case: sometimes, it’ll mean changing to a different word in which the guessed letter also appears. Occasionally, it can even involve changing from a word in which the guessed letter didn’t appear to one in which it does: that is, giving the player a “freebie”. This may seem counterintuitive as a strategy, but it sometimes makes sense: if saying “yeah, there’s an E at the end” increases the number of possible words that it might be, compared to saying “no, there are no Es”, then this is the right move for a cheating hangman.
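
In code, the cheat amounts to partitioning the remaining candidate words by the “pattern” of positions the guessed letter would reveal, then quietly adopting whichever partition is biggest. A minimal Javascript sketch of the idea (my own names and dictionary, not the game’s actual source):

// Group candidates by which positions the guess would reveal ("" = none),
// then secretly switch to the largest group: a sketch, not the real code.
function cheat(candidates, guess) {
  const partitions = new Map();
  for (const word of candidates) {
    const key = [...word].flatMap((c, i) => (c === guess ? [i] : [])).join(',');
    if (!partitions.has(key)) partitions.set(key, []);
    partitions.get(key).push(word);
  }
  let best = [];
  for (const words of partitions.values()) {
    if (words.length > best.length) best = words; // keep the most words in play
  }
  return best; // the new (maximally unhelpful) candidate list
}

console.log(cheat(['cat', 'cot', 'cut', 'dog'], 'a')); // → ['cot', 'cut', 'dog']: saying "no As" keeps three words alive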

Playing against a cheating hangman lends itself to devising new strategies as a player, too, although I haven’t yet looked deeply into this. But logically, it seems that the optimal strategy against a cheating hangman might involve making guesses that force the hangman to bisect the search space: knowing that they’re always going to adapt towards the largest set of candidate words, a perfect player might be able to make guesses to narrow down the possibilities as fast as possible, early on, only making guesses that they actually expect to be in the word later (before their guess limit runs out!).

Cheating Hangman
The game is brutally-difficult, but surprisingly fun, and you can have it tell you when and how it cheats so you can begin to understand its strategy.

I also find myself wondering how easily I could adapt this into a “helpful hangman”: a game which would always change the word that you’re trying to guess in order to try to make you win. This raises the possibility of a whole new game, “suicide hangman”, in which the player is trying to get themselves killed and so is trying to pick letters that can’t possibly be in the word and the hangman is trying to pick words in which those letters can be found, except where doing so makes it obvious which letters the player must avoid next. Maybe another day.

In the meantime, you’re welcome to go play the game (and let me know what you think, below!) and, if you’re of such an inclination, read the source code. I’ve used some seriously ugly techniques to make this work, including regular expression metaprogramming (using regular expressions to write regular expressions), but the code should broadly make sense if you want to adapt it. Have fun!

Play “Cheating Hangman”

Update 26 September 2019, 16:23: I’ve now added “helpful mode”, where the computer tries to cheat on your behalf rather than against you, but it’s not as helpful as you’d think because it assumes you’re playing optimally and have already memorised the dictionary!

Update 1 October 2019, 06:40: Now featured on MetaFilter; hi, MeFites!


Counting Down

I wasn’t sure that my whiteboard at the Bodleian, which reminds my co-workers exactly how many days I’ve got left in the office, was attracting as much attention as it needed to. If I don’t know what my colleagues don’t know about how I do my job, I can’t write it into my handover notes.

You have [20] work days left to ask Dan that awkward question.
Tick, tick, tick, tick, boom.
So I repurposed a bit of digital signage in the office, with a little Javascript, to produce a live countdown. There’s a lot of code out there to produce countdown timers, but mine had some very specific requirements that nothing else seems to “just do”. Mine needed to:
  • Only count down during days that I’m expected to be in the office.
  • Only count down during working hours.
  • Carry on seamlessly after a reboot.
Screen showing: "Dan will be gone in 153 hours, 54 minutes, 38 seconds."
[insert Countdown theme song here]
Naturally, I’ve open-sourced it in case anybody else needs one, ever. It’s pretty basic, of course, because I’ve only got a hundred and fifty-something hours to finish a lot of things, so I only wanted to throw a half hour at this while I ate my lunch! But if you want one, just put in an array of your working dates, the time you start each day, and the number of hours in your workday, and it’ll tick away.
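
If you’d rather build your own than adapt mine, the core of the approach fits in a few lines (a sketch with made-up dates, not the open-sourced code itself). Re-deriving the remaining time from the wall clock on every tick is also what makes it survive a reboot:

// Sum the working seconds left across a hand-maintained list of office days:
// only listed days count, and only their working hours count.
const workDays = ['2019-03-25', '2019-03-26', '2019-03-27']; // hypothetical dates
const dayStartHour = 9; // the time work starts each day
const hoursPerDay = 8;  // the length of each working day

function secondsRemaining(now = new Date()) {
  let seconds = 0;
  for (const day of workDays) {
    const start = new Date(`${day}T${String(dayStartHour).padStart(2, '0')}:00:00`);
    const end = new Date(start.getTime() + hoursPerDay * 60 * 60 * 1000);
    if (now >= end) continue;               // that day's already been worked
    const from = now > start ? now : start; // clamp a part-worked day to "now"
    seconds += (end - from) / 1000;
  }
  return Math.floor(seconds);
}

// Re-deriving the value from the clock on every tick means a reboot changes
// nothing: the countdown simply carries on from wherever it should be.
setInterval(() => console.log(secondsRemaining()), 1000);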

LABS Comic RSS Archive

Yesterday I recommended that you go read Aaron Uglum‘s webcomic LABS, which had just completed its final strip. I’m a big fan of “completed” webcomics – they feel binge-able in the same way that a complete Netflix series is! – but Spencer quickly pointed out that it’s annoying for us enlightened modern RSS users who hook RSS up to everything to have to binge completed comics in a different way to reading ongoing ones: what he wanted was an RSS feed covering the entire history of LABS.

LABS comic adapted to show The Robot literally "feeding" RSS
With apologies to Aaron Uglum who I hope won’t mind me adapting his comic in this way.

So naturally (after the intense heatwave woke me early this morning anyway) I made one: complete RSS feed of LABS. And, of course, I open-sourced the code I used to generate it so that others can jumpstart their projects to make static RSS feeds from completed webcomics, too.
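
If you fancy doing the same for another completed comic, the gist is surprisingly small. Something like this bit of Node (hypothetical URLs and titles; the real script builds its strip list by scraping the archive rather than hard-coding it):

// Build a single static RSS feed from a list of strips, oldest first.
const strips = [
  { title: 'LABS #1', url: 'https://example.com/labs/1', date: new Date('2016-01-04') },
  { title: 'LABS #2', url: 'https://example.com/labs/2', date: new Date('2016-01-06') },
  // ...one entry per strip
];

const escapeXml = (s) => s.replace(/[&<>]/g, (c) => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;' }[c]));

const items = strips.map((s) => `  <item>
    <title>${escapeXml(s.title)}</title>
    <link>${s.url}</link>
    <guid>${s.url}</guid>
    <pubDate>${s.date.toUTCString()}</pubDate>
  </item>`).join('\n');

require('fs').writeFileSync('labs.rss', `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>LABS (complete archive)</title>
  <link>https://example.com/labs/</link>
  <description>Every strip, in order, in one static feed.</description>
${items}
</channel>
</rss>`); // upload the result anywhere static and subscribe away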

Even if you’re not going to read it via this medium, you should go read LABS.

Non Stop Hammer Ti.me

You know how sometimes I make a thing and, in hindsight, it doesn’t make much sense? And at best, all it can be said to do is to make the Internet more fun and weird?

Hammer Logo

I give you: NonStopHammerTi.me.

Things that make it awesome:

  • Well, the obvious.
  • Vanilla Javascript.
  • CSS animations timed to every-other-beat.
  • Using an SVG stroke-dasharray as a progress bar (sketched below).
  • Progressively-enhanced; in the worst case you just get to download the audio.
  • PWA-enhanced; install it to your mobile!
  • Open source!
  • Decentralised (available via the peer-web at dat://nonstophammerti.me/ / dat://0a4a8a..00/)
  • Accessible to screen readers, keyboard navigators, partially-sighted users, just about anybody.
  • Compatible with digital signage at my workplace…
Digital signage showing NonStopHammerTi.me
My office aren’t sick of this… yet.
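
That stroke-dasharray trick deserves a word, because it’s delightfully simple: set a path’s dash pattern to its own length, then slide the dash offset backwards as the audio plays. Roughly like this (my reconstruction of the technique, with made-up element IDs, rather than the site’s actual code):

// Once the audio's metadata has loaded, drive the dash offset from playback.
const audio = document.querySelector('audio');
const bar = document.querySelector('#progress'); // hypothetical SVG <path>
const length = bar.getTotalLength();

bar.style.strokeDasharray = `${length} ${length}`;
bar.style.strokeDashoffset = length; // offset = length means "empty bar"

audio.addEventListener('timeupdate', () => {
  const played = audio.currentTime / audio.duration; // 0 → 1
  bar.style.strokeDashoffset = length * (1 - played); // 0 means "fully drawn"
});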

That is all.


Bodleian Advent Calendar

Hot on the tail of Pong, I wanted to share another mini-project I’ve developed for the Bodleian: this year’s digital advent calendar:

Bodleian 2018 digital advent calendar
If you look closely, you’ll find I’ve shown you a sneak-peek at some of what’s behind tomorrow’s door. Shh. Don’t tell our social media officer.

As each door is opened, a different part of a (distinctly-Bodleian/Oxford) winter scene unfolds, complete with an array of fascinating characters connected to the history, tradition, mythology and literature of the area. It’s pretty cool, and you should give it a go.

If you want to make one of your own – for next year, presumably, unless you’ve an inclination to count down in this fashion to something else that you’re celebrating 25 days hence – I’ve shared a version of the code that you can adapt for yourself.

Sample advent calendar
The open-source version doesn’t include the beautiful picture that the Bodleian’s does, so you’ll have to supply your own.

Features that make this implementation a good starting point if you want to make your own digital advent calendar include:

  • Secure: your server’s clock dictates which doors are eligible to be opened, and only content legitimately visible on a given date can be obtained (no path-traversal, URL-guessing, or traffic inspection holes); there’s a toy sketch of this idea after the list.
  • Responsive: calendar adapts all the way down to tiny mobiles and all the way up to 4K fullscreen along with optimised images for key resolutions.
  • Friendly: accepts clicks and touches, uses cookies to remember the current state so you don’t have to re-open doors you opened yesterday (unless you forgot to open one yesterday), “just works”.
  • Debuggable: a password-protected debug mode makes it easy for you to test, even on a production server, without exposing the secret messages behind each door.
  • Expandable: lots of scope for the future; a progressive web app version that you can keep “on you” and which notifies you when a new door’s ready to be opened was one of the things I’d hoped to add in time for this year, but didn’t quite get around to.
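
To illustrate that “secure” point: the date check has to happen server-side, so that neither URL-guessing nor fiddling with the client’s clock opens a door early. A toy version of the idea in Node (an illustration of the principle, not the calendar’s actual code):

// Door content never leaves the server before the server's clock allows it.
const http = require('http');

const DOORS = { 1: 'Secret picture #1…', 2: 'Secret picture #2…' }; // hypothetical content
const YEAR = 2018, MONTH = 11; // December: Javascript months are 0-indexed

http.createServer((req, res) => {
  const door = parseInt((req.url.match(/^\/door\/(\d+)$/) || [])[1], 10);
  const openable = DOORS[door] !== undefined &&
                   new Date() >= new Date(YEAR, MONTH, door); // the server decides
  if (!openable) {
    res.statusCode = 403;
    res.end('Patience! That door isn\'t ready to be opened yet.');
    return;
  }
  res.end(DOORS[door]);
}).listen(8080);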

I’ve no idea if it’s any use to anybody, but it’s available on GitHub if you want it.


Importing Geocaching Logs into WordPress

Background

As an occasional geocacher and geohasher, I’m encouraged to post logs describing my adventures, and each major provider wants me to post my logs into their silo (see e.g. my logs on geocaching.com, on opencache.uk, and on the geohashing wiki). But as a believer in the ideals behind the IndieWeb (since long before anybody said “IndieWeb”), I’m opposed to keeping the only copy of content that I produce in an environment controlled by somebody else (why?).

How do I reconcile this?

Wrist-mounted GPS in the snow.
Just another hundred metres to the cache, then it’s time to freeze my ass back to base.

What I’d prefer would be to be able to write my logs here, on my own blog, and for my content to be syndicated via some process into the logging systems of the various silo sites I prefer. This approach is called POSSE – Publish on Own Site, Syndicate Elsewhere. In addition to the widely-described benefits of this syndication strategy, such a system would also make it possible for me to:

  • write single posts amalgamating multiple locations (e.g. a geohashing expedition that included geocache finds) or,
  • write single posts that represent the same location published on multiple silos (e.g. a visit to a geocache published on two different listing sites [e.g. 1, 2])

Applying such a tool would require some work, as different silos have different acceptable content rules (geocaching.com, for example, effectively forbids mention of the existence of other geocache listing sites), but that’d theoretically be workable.

POSSE would involve posts being made first to my blog and then converted via some process into logs in each relevant silo
The ideal solution would be POSSE-based.

Unfortunately, content rules aren’t the only factor making PESOS – writing content into each silo and then copying it to my blog – preferable to POSSE. There’s also:

  • Not all of the silos offer suitable (published) APIs, and where they do, the APIs are all distinctly different.
  • Geocaching.com specifically forbids the use of unapproved automated robots to access the site (and almost certainly wouldn’t approve the kind of tool that would be ideal).
  • The siloed services are well-supported by official and third-party apps with medium-specific logic which make them the best existing way to produce logs.
PESOS would mean that posts were made "the usual way" to the silos and then a process duplicates them onto my blog
A PESOS-based solution is far easier to implement, in this case.

Needless to say: as much as I’d have loved to POSSE my geo* logs, PESOS will do.

Implementation

My implementation is a WordPress plugin which does two things. The first is that it provides a Javascript bookmarklet and an accompanying dynamically-generated Javascript file (the former loads the latter), served from my blog’s domain. That Javascript file contains a reference to every log already published to my blog, so that the code can deliberately omit those logs from any import. When executed on a log listing page like those linked above, it copies all of the details of that log into a form which submits them back to my blog, where they’re received by the second part of the plugin.
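
In outline, that injected Javascript does something like the following (a simplified sketch: the selectors and endpoint here are hypothetical, and the real code carries per-silo logic for scraping each site’s different markup):

// Scrape the log from the listing page and, if the blog doesn't have it
// yet, POST it home as an ordinary form submission.
const alreadyImported = ['GC12345-2019-01-05']; // injected by the dynamically-generated file

const logId = document.querySelector('.log-id').textContent.trim();
if (!alreadyImported.includes(logId)) {
  const form = document.createElement('form');
  form.method = 'POST';
  form.action = 'https://example.com/wp-admin/admin-post.php?action=import_geo_log';
  const fields = {
    log_id: logId,
    log_text: document.querySelector('.log-text').innerHTML,
    log_date: document.querySelector('.log-date').textContent,
  };
  for (const [name, value] of Object.entries(fields)) {
    const input = document.createElement('input');
    input.type = 'hidden';
    input.name = name;
    input.value = value;
    form.appendChild(input);
  }
  document.body.appendChild(form);
  form.submit(); // received by the plugin's server-side half, which drafts a post
}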

Geocache logs to WordPress importer seen running on geocaching.com
The import controls appear in a new, right-most column (GCVote is also visible running in my browser).

The second part of the plugin takes this data and creates a new draft post. My plugin is pretty opinionated on this part because it’s geared strongly towards my use-case, so if you want to use it yourself you’ll probably want to tweak the code a little (e.g. it applies specific tags and names metadata fields a particular way).

Import plugin running on OpenCache.uk
When run on OpenCache.uk, effectively the same interface is presented, even though the underlying mechanisms and data locations are different.

It’s not fully-automated and it’s not POSSE, but it’s “good enough” and it’s enabled me to synchronise all of my cache logs to my blog. I’ve plans to extend it to support other GPS game services to streamline my de-siloisation even further.

And of course, I’ve open-sourced the whole thing. If it’s any use to you (probably in an adapted form), it’s all yours.


How To Set Up an OSRM Server on Ubuntu 14.04

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

How To Set Up an OSRM Server on Ubuntu 14.04 | DigitalOcean (DigitalOcean)

The OpenStreetMap project consists of raw map data, collected and aggregated by thousands of users. However, its open access policy sparked a number of collateral projects, which collectively cover many of the features typically offered by commercial mapping services.

The most obvious advantage in using OpenStreetMap-based software over a commercial solution is economical convenience, because OpenStreetMap comes as free (both as in beer and as in speech) software. The downside is that it takes a little configuration in order to setup a working web service.

This tutorial covers the configuration and maintenance of a web service which can answer questions such as:

  • What is the closest street to a given pair of coordinates?
  • What’s the best way to get from point A to point B?
  • How long does it take to get from point A to point B with a car, or by foot?

The software that makes this possible is an open-source project called Open Source Routing Machine (OSRM), which is based on the OpenStreetMap data. Functionalities to embed OpenStreetMaps in Web pages are already provided out-of-the-box by APIs such as OpenLayers.

Though it’s slightly dated, I found this guide really valuable in my effort to set up a server that could spit out fastest walking routes around Oxford to support a PWA-driven tour of places relevant to J. R. R. Tolkien’s life, at my “day job”.
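
For a flavour of what you get at the end of the tutorial: once OSRM is running, the questions above become plain HTTP requests. These examples use the modern OSRM 5.x API with Oxford-ish coordinates (note that OSRM wants longitude,latitude); the older release this guide installs exposes slightly different endpoints:

// "What's the closest street to these coordinates?"
const base = 'http://localhost:5000'; // OSRM's default port

fetch(`${base}/nearest/v1/foot/-1.2577,51.7520`)
  .then((r) => r.json())
  .then((d) => console.log(d.waypoints[0].name));

// "What's the best way from A to B on foot, and how long does it take?"
fetch(`${base}/route/v1/foot/-1.2577,51.7520;-1.2693,51.7522?overview=false`)
  .then((r) => r.json())
  .then((d) => console.log(`${d.routes[0].distance} m, ${d.routes[0].duration} s`));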

Lunar Lander

Just want to play my game without reading this whole post? Play the game here – press a key, mouse button, or touch the screen to fire the thrusters, and try to land at less than 4 m/s with as much fuel left over as possible.

In 1969, when all the nerds were still excited by sending humans to the moon instead of flinging cars around the sun, the hottest video game was Rocket (or Lunar) for the PDP-8. Originally implemented in FOCAL by high school student Jim Storer and soon afterwards ported to BASIC (the other dominant language to come as standard with microcomputers), Rocket became the precursor to an entire genre of video games called “Lunar Lander games“.

Source code of Rocket and sample output.
Like many pieces of microcomputer software of the time, Rocket was distributed as printed source code that you’d need to carefully type in at the other end.

The aim of these games was to land a spacecraft on the moon or similar body by controlling the thrust (and in some advanced versions, the rotation) of the engine. The spacecraft begins in freefall towards the surface and will accelerate under gravity: this can be counteracted with thrust, but engaging the engine burns through the player’s limited supply of fuel. Furthermore, using fuel lowers the total mass of the vessel (a large proportion of the mass of the Apollo landers was fuel for use in the descent stage) which reduces its inertia, giving the engine more “kick” which must be compensated for during the critical final stages. It sounds dry and maths-y, but I promise that graphical versions can usually be played entirely “by eye”.
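
All of that physics compresses into a remarkably small simulation step. A miniature sketch (illustrative constants, not my game’s actual numbers):

// One tick of a straight-descent lander: gravity pulls, thrust pushes,
// and burning fuel makes the craft lighter (so the same thrust does more).
let altitude = 1000; // metres above the surface
let velocity = 0;    // m/s; positive = falling
let fuel = 100;      // remaining fuel, as a percentage

const DRY_MASS = 1000, FUEL_MASS = 500; // kg
const GRAVITY = 1.62;  // lunar surface gravity, m/s²
const THRUST = 5000;   // engine force, N
const BURN_RATE = 2;   // percentage points of fuel per second
const DT = 1 / 60;     // simulation step, in seconds

function step(thrusting) {
  const mass = DRY_MASS + FUEL_MASS * (fuel / 100);
  let accel = GRAVITY; // freefall by default
  if (thrusting && fuel > 0) {
    accel -= THRUST / mass; // less mass = more "kick" from the same engine
    fuel = Math.max(0, fuel - BURN_RATE * DT);
  }
  velocity += accel * DT;
  altitude -= velocity * DT;
  if (altitude <= 0) {
    console.log(velocity <= 4 ? `Landed with ${fuel.toFixed(0)}% fuel!` : 'Crashed!');
  }
}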

Atari's Lunar Lander (1979)
Atari’s 1979 adaptation is perhaps the classic version you’d recognise, although its release was somewhat overshadowed by their other vector-graphics space-themed release in 1979: Asteroids.

Let’s fast-forward a little. In 1997 I enrolled to do my A-levels at what was then called Preston College, where my Computing tutor was a chap called Kevin Geldard: you can see him at 49 seconds into this hilariously low-fi video which I guess must have been originally shot on VHS despite being uploaded to YouTube in 2009. He’s an interesting chap in his own right whose contributions to my career in computing deserve their own blog post, but for the time being all you need to know is that he was the kind of geek who, like me, writes software “for fun” more often than not. Kevin owned a Psion 3 palmtop – part of a series of devices with which I also have a long history and interest – and he taught himself to program in OPL by reimplementing a favourite game of his younger years on it: his take on the classic mid-70s-style graphical Lunar Lander.

Psion Series 3
I never owned a Psion Series 3 (pictured), but I bought a Series 5mx in early 2000 out of my second student loan cheque, ultimately wrote most of my undergraduate dissertation using it, and eventually sold it to a collector in about 2009 for almost as much as I originally paid for it. The 5mx was an amazing bit of kit. But I’ll blog about that another day, I guess.

My A-level computing class consisted of a competitive group of geeky lads, and we made a sort of personal extracurricular challenge of re-implementing Kevin’s take on Lunar Lander using Turbo Pascal, the primary language in which our class was taught. Many hours out-of-class were spent in the computer lab, tweaking and comparing our various implementations (with only occasional breaks to play Spacy, CivNet, or my adaptation of LORD2): later, some of us would extend our competition by going on to re-re-implement in Delphi, Visual Basic, or Java, or by adding additional levels relating to orbital rendezvous or landing on other planetary bodies. I was quite proud of mine at the time: it was highly-playable, fun, and – at least on your first few goes – moderately challenging.

Dan's Lunar Lander (2018)
I sometimes wonder what it would have looked like if I’d implemented my 1997 Lunar Lander today. So I did.

Always game to try old new things, and occasionally finding time between the many things that I do to code, I decided to expand upon my recently-discovered interest in canvas coding to bring back my extracurricular Lunar Lander game of two decades ago in a modern format. My goals were:

  • A one-button version of a classic “straight descent only” lunar lander game (unlike my 1997 version, which had 10 engine power levels, this remake has just “on” and “off”)
  • An implementation based initially on real physics (although not necessarily graphically to scale)… and then adapted as necessary to give a fun/playability balance that feels good
  • Runs in a standards-compliant browser without need for plugins: HTML5, Canvas, Javascript
  • Adapts gracefully to any device, screen resolution, and orientation with graceful degradation/progressive enhancement

You can have a go at my game right here in your web browser! The aim is to reach the ground travelling at a velocity of no more than 4 m/s with the maximum amount of fuel left over: this, if anything, is your “score”. My record is 52% of fuel remaining, but honestly anything in the 40%+ range is very good. Touch the screen (if it’s a touchscreen) or press a mouse button or any key to engage your thrusters and slow your descent.

End point of Dan's Lunar Lander (2018)
“Houston, the Eagle has landed.” Kerbal Space Program, it isn’t. Here’s a very good landing: 3 m/s with 48% of the fuel tank remaining.

And of course it’s all open-source, so you’re more than welcome to take it, rip it apart, learn from it, or make something better out of it.
