Why Did Media Players Look Like That?

You don’t really see it any more, but: if you downloaded some media player software a couple of decades ago, it’d probably appear in a weird-shaped window, and I’ve never understood why.

Composite screenshot showing Sonique, Windows Media Player and BSplayer music players, among others, in a variety of windows that are either unusually-shaped, look like conventional Hi-Fis, or both.
Mostly, these designs are… pretty ugly. And for what?

It's also worth noting that while this kind of design can be found in all kinds of applications, it was in media players that it was almost ubiquitous.

You might think that they’re an overenthusiastic kind of skeuomorphic design: people trying to make these players look like their physical analogues. But hardware players were still pretty boxy-looking at this point, not least because of the limitations of their data storage1. By the time flash memory-based portable MP3 players became commonplace, their design was copying software players, not the other way around.

Composite screenshot showing Windows Media Player, the (old) iTunes companion widget, KMPlayer, and other media players. All of them have unusually-shaped windows, often with organic corners.

So my best guess is that these players were trying to stand out as highly-visible. Like: they were things you’d want to occupy a disproportionate amount of desktop space. Maybe other people were listening to music differently than me… but for me, back when screen real estate was at such a premium2, a music player’s job was to be small, unintrusive, and out-of-the-way.

WinAmp music player in minified mode: just a sliver of a music player, small, showing just back/forward/play/pause/stop controls, play time, and a mini-equaliser. The timer shows we're 3 seconds into a track.
I used to run Winamp in its very-smallest minified size, tucked up at the top of the screen, using the default skin or one that made it even less-obtrusive.

It’s a mystery to me why anybody would (or still does) make media player software or skins for them that eat so much screen space, frequently looking ugly while they do so, only to look like a hypothetical hardware device that wouldn’t actually become commonplace until years after this kind of player design premiered!

Maybe other people listened to music on their computer differently from me: putting it front and centre, not using their computer for other tasks at the same time. And maybe for these people the choice of player and skin was an important personalisation feature; a fashion statement or a way to show off their personal identity. But me? I didn’t get it then, and I don’t get it now. I’m glad that this particular trend seems to have died and windows are, for the most part, rounded rectangles once more… even for music player software!

Footnotes

1 A walkman, minidisc player, or hard drive-based digital music device is always going to look somewhat square because of what’s inside.

2 I “only” had 1600 × 1200 (UXGA) pixels on the very biggest monitor I owned before I went widescreen, and I spent a lot of time on monitors at lower resolutions e.g. 1024 × 768 (XGA); on such screens, wasting space on a music player when you’re mostly going to be listening “in the background” while you do something else seemed frivolous.

Minification vs the GPL

A not-entirely-theoretical question about open source software licensing came up at work the other day. I thought it was interesting enough to warrant a quick dive into the philosophy of minification, and how it relates to copyleft open source licenses. Specifically: does distributing (only) minified source code violate the GPL?

If you’ve come here looking for a legally-justifiable answer to that question, you’re out of luck. But what I can give you is a (fictional) story:

TheseusJS is slow

TheseusJS is a (fictional) Javascript library designed to be run in a browser. It’s released under the GPLv3 license. This license allows you to download and use TheseusJS for any purpose you like, including making money off it, modifying it, or redistributing it to others… but it requires that if you redistribute it you have to do so under the same license and include the source code. As such, it forces you to share with others the same freedoms you enjoy for yourself, which is highly representative of some schools of open-source thinking.

Screenshot showing TheseusJS's GitHub page. The project hasn't been updated in a year, and that was just to add a license: no code has been changed in 12 years.
It’s a cool project, but it really needs some maintenance this side of 2010.

It’s a great library and it’s used on many websites, but its performance isn’t great. It’s become infamous for the impact it has on the speed of the websites it’s used on, and it’s often the butt of jokes by developers: “Man, this website’s slow. Must be running Theseus!”

The original developer has moved on to his new project, Moralia, and seems uninterested in handling the growing number of requests for improvements. So I’ve decided to fork it and make my own version, FastTheseusJS, and work on improving its speed.

FastTheseusJS is fast

I do some analysis and discover the single biggest problem with TheseusJS is that the Javascript file itself is enormous. The original developer kept all of the copious documentation in comments in the file itself, and for some reason it doesn’t even compress well. When you use TheseusJS on a website it takes a painfully long time for a browser to download it, if it’s not precached.

Screenshot showing a website for the TheseusJS API. It's pretty labyrinthine (groan).
Nobody even uses the documentation in the comments: there’s a website with a fully-documented API.

My first release of FastTheseusJS, then, removes virtually all of the comments, replacing them with a single comment at the top pointing developers to a website where the API is fully documented. While I’m in there anyway, I also fix a minor bug that’s been annoying me for a while.

v1.1.0 changes

  • Forked from TheseusJS v1.0.4
  • Fixed issue #1071 (running mazeSolver() without first connecting <String> component results in endless loop)
  • Removed all comments: improves performance considerably

I discover another interesting fact: the developer of TheseusJS used a really random mixture of tabs and spaces for indentation, sometimes in the same line! It looks… okay if you set your editor up just right, but it’s pretty hideous otherwise. That whitespace is unnecessary anyway: the codebase is sprawling but it seldom goes more than two levels deep, so indentation levels don’t add much readability. For my second release of FastTheseusJS, then, I remove this extraneous whitespace, as well as removing the in-line whitespace inside parameter lists and the components of for loops. Every little helps, right?

v1.1.1 changes

  • Standardised whitespace usage
  • Removed unnecessary whitespace

Some of the simpler functions now fit onto just a single line, and it doesn’t even inconvenience me to see them this way: I know the codebase well enough by now that it’s no disadvantage for me to edit it in this condensed format.

Screenshot of a block of Javascript code indented using semicolons rather than tabs or spaces.
Personally, I’ve given up on the tabs-vs-spaces debate and now I indent my code using semicolons. (That’s clearly a joke. Don’t flame me.)

In the next version, I shorten the names of variables and functions in the code.

For some reason, the original developer used epic rambling strings for function names, like the well-known function dedicateIslandTempleToTheImageOfAGodBeforeOrAfterMakingASacrificeWithOrWithoutDancing( boolBeforeMakingASacrifice, objectImageOfGodToDedicateIslandTempleTo, stringNameOfPersonMakingDedication, stringOrNullNameOfLocalIslanderDancedWith). That one gets called all the time internally and isn’t exposed via the external API so it might as well be shortened to d=(i,j,k,l)=>. Now all the internal workings of the library are each represented with just one or two letters.

v1.1.2 changes

  • Shortened/standardised non-API variable and function names – improves performance

I’ve shaved several kilobytes off the monstrous size of TheseusJS and I’m very proud. The original developer says nice things about my fork on social media, resulting in a torrent of downloads and attention. Within a certain archipelago of developers, I’m slightly famous.

But did I violate the license?

But then a developer says to me: you’re violating the license of the original project because you’re not making the source code available!

A man in a suit sits outdoors with a laptop and a cup of coffee. He is angry and frustrated, and a bubble shows that he is thinking:"why can't people respect the fucking license?!"
This happens every day. Probably not to this same guy every time though, but you never know. Original photo by Andrea Piacquadio.

They claim that my bugfix in the first version of FastTheseusJS represents a material change to the software, and that the changes I’ve made since then are obfuscation: efforts short of binary compilation that aim to reduce the accessibility of the source code. This fails to meet the GPL‘s definition of source code as “the preferred form of the work for making modifications to it”. I counter that this condensed view of the source code is my “preferred” way of working with it, and moreover that my output is not the result of some build step that makes the code harder to read: the code is just hard to read as a result of the optimisations I’ve made. In ambiguous cases, whose “preference” wins?

Did I violate the license? My gut feeling is that no, all of my changes were within the spirit and the letter of the GPL (they’re a terrible way to write code, but that’s not what’s in question here). Because I manually condensed the code, did so with the intention that this condensing was a feature, and continue to work directly with the code after condensing it because I prefer it that way… that feels like it’s “okay”.

But if I’d just run the code through a minification tool, my opinion changes. Suppose I’d run minify --output fasttheseus.js theseus.js and then deleted my copy of theseus.js. Then, making changes to fasttheseus.js and redistributing it feels like a violation to me… even if the resulting code is the same as I’d have gotten via the “manual” method!

I don’t know the answer (IANAL), but I’ll tell you this: I feel hypocritical for saying one piece of code would not violate the license but another identical piece of code would, based only on the process the developer followed to produce it. If I replace one piece of code at a time with less-readable versions the license remains intact, but if I replace them all at once it doesn’t? That feels neither concrete nor satisfying.

Screenshot showing highly-minified HTML code (for this page) which is still reasonably readable.
Sure, I can write a blog post in just one line of code. It’ll just be a really, really, really long line… (Still perfectly readable, though!)

This isn’t an entirely contrived example

This example might seem highly contrived, and that’s because it is. But the grey area between the extremes is where the real questions are. If you agree that redistribution of (only) minified source code violates the GPL, you’re left asking: at what point does the change occur? Code isn’t necessarily minified or not-minified: there are many intermediate steps.

If I use a correcting linter to standardise indentation and whitespace – switching multiple spaces for the appropriate number of tabs, removing excess line breaks, etc. – or do the same tasks manually, I’m sure you’d agree that’s fine. If I have it replace whole-function if-blocks with hoisted return statements, that’s probably fine too. If I replace if blocks with ternary operators, or remove or shorten comments… that might be fine, but probably depends upon context. At some point, though, some way along the process, minification goes “too far” and feels like it’s no longer within the limitations of the license. And I can’t tell you where that point is!
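
To make that continuum concrete, here’s a sketch – the function and all of its names are invented for illustration, not taken from any real codebase – of the same trivial bit of Javascript at three points along the journey from “tidied” to “minified”:

    // 1. Original: descriptive names, comments, conventional whitespace
    function countSteps(mazePath) {
        // Each comma-separated segment of the path is one step
        if (mazePath === null) {
            return 0;
        }
        return mazePath.split(",").length;
    }

    // 2. Comments and "unnecessary" whitespace removed; same name, same behaviour
    function countSteps(mazePath){if(mazePath===null){return 0}return mazePath.split(",").length}

    // 3. Internal name shortened too: arguably still "source code", arguably not
    const c=p=>p===null?0:p.split(",").length;

Each step is a small, defensible tidy-up on its own; it’s only somewhere along the way that the result stops looking like “the preferred form of the work for making modifications to it”.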

This issue’s even more-complicated with some other licenses, e.g. the AGPL, which extends the requirement to share source code to hosted applications. Suppose I implement a web application that uses an AGPL-licensed library. The person who redistributed it to me only gave me the minified version, but they gave me a web address from which to acquire the full source code, so they’re in the clear. I need to make a small patch to the library to support my service, so I edit it right into the minified version I’ve already got. A user of my hosted application asks for a copy of the source code, so I provide it, including the edited minified library… am I violating the license for not providing the full, unminified version, even though I’ve never even seen it? It seems absurd to say that I would be, but it could still be argued to be the case.

Diagram showing how permissive software licenses are generally compatible for use in LGPL or MPL licensed software, which are then compatible for use (except MPL) in GPL licensed software, which are in turn compatible for use (except GPL 2) with AGPL licensed software.
I love diagrams like this, which show license compatibility of different open source licenses. Adapted from a diagram by Carlo Daffara, in turn adapted from a diagram by David E. Wheeler, used under a CC-BY-SA license.

99% of the time, though, the answer’s clear, and the ambiguities shown above shouldn’t stop anybody from choosing to open-source their work under GPL, AGPL (or any other open source license depending on their preference and their community). Perhaps the question of whether minification violates the letter of a copyleft license is one of those Potter Stewart “I know it when I see it” things. It certainly goes against the spirit of the thing to do so deliberately or unnecessarily, though, and perhaps it’s that softer, more-altruistic goal we should be aiming for.

From Synergy to Barrier

I’ve been using Synergy for a long, long time. By the time I wrote about my admiration of its notification icon back in 2010 I’d already been using it for some years. But this long love affair ended this week when I made the switch to its competitor, Barrier.

Screenshot showing some pre-1.3 version of Synergy running on Windows Vista.
I’m not certain exactly when I took this screenshot (which I shared with Kit while praising Synergy), but it’s clearly a pre-1.4 version and those look distinctly like Windows Vista’s ugly rounded corners, so I’m thinking no later than 2009?

If you’ve not come across it before: Synergy was possibly the first multiplatform tool to provide seamless “edge-to-edge” sharing of a keyboard and mouse between multiple computers. Right now, for example, I’m sitting in front of Cornet, a Debian 11 desktop; Idiophone, a MacBook Pro docked to a desktop monitor; and Renegade, a Windows desktop. And I can move my mouse cursor from one, to the other, to the next, interacting with them all as if I were connected directly to each.

There have long been similar technologies. KVM switches can do this, as can some modern wireless mice (I own at least two such mice!). But none of them are as seamless as what Synergy does: moving from computer to computer as fast as you can move your mouse, and sharing a clipboard between multiple devices. I also love that I can configure my set-up around how I work, e.g. when I undock my MacBook it switches from Ethernet to WiFi; this gets detected, and the laptop is automatically removed from the cluster. So when I pick up my laptop, it magically stops being controlled by my Windows PC’s mouse and keyboard until I dock it again.

Illustration showing a Debian desktop called Cornet, a Mac laptop with attached monitor called Idiophone, and a Windows desktop called Renegade. All three share a single keyboard and mouse using Barrier.

Synergy’s published under a hybrid model: open-source components, with paid-for extra features. It used to provide more in the open-source offering: you could download a fully-working copy of the software and use it without limitation, losing out only on a handful of features that for many users were unnecessary. Nonetheless, early on I wanted to support the development of this tool that I used so much, and so I donated money towards funding its development. In exchange, I gained access to Synergy Premium, and then when their business model changed I got grandfathered-in to a lifetime subscription to Synergy Pro.

I continued using Synergy all the while. When their problem-stricken 2.x branch went into beta, I was among the testers: despite the stability issues and limitations, I loved the fact that I could have what was functionally multiple co-equal “host” computers, and – when it worked – I liked the slick new configuration interface it sported. I’ve been following with bated breath announcements about the next generation – Synergy 3 – and I’ve registered as an alpha tester for when the time comes.

If it sounds like I’m a fanboy… that’d probably be an accurate assessment of the situation. So why, after all these years, have I jumped ship?

Email from Symless to Dan, reading: "Thank you for contacting Synergy Support. My name is Kim and I am happy to assist you. We do not have a download option for the 32 bit version of Debian 10. We currently only have the options available in the members area. Feel free to reach out if you have any further questions or concerns."
Dear Future Dan. If you ever need a practical example of where open-source thinking provides a better user experience than arbitrarily closed-source products, please see above. Yours, Past Dan.

I’ve been aware of Barrier since the project started, as a fork of the last open-source version of the core Synergy program. Initially, I didn’t consider Barrier to be a suitable alternative for me, because it lacked features I cared about that were only available in the premium version of Synergy. As time went on and these features were implemented, I continued to stick with Synergy and didn’t bother to try out Barrier… mostly out of inertia: Synergy worked fine, and the only thing Barrier seemed to offer would be a simpler set-up (because I wouldn’t need to insert my registration details!).

This week, though, as part of a side project, I needed to add an extra computer to my cluster. For reasons that are boring and irrelevant (so I’ll spare you the details), the new computer’s running the 32-bit version of Debian 11.

I went to the Symless download pages and discovered… there isn’t a Debian 11 package. Ah well, I think: the Debian 10 one can probably be made to work. But then I discover… there’s only a 64-bit version of the Debian 10 binary. I’ll note that this isn’t a fundamental limitation – there are 32-bit versions of Synergy available for Windows and for ARMhf Raspberry Pi devices – but a decision by the developers not to support that platform. In order to protect their business model, Synergy is only available as closed-source binaries, and that means that it’s only available for the platforms for which the developers choose to make it available.

So I thought: well, I’ll try Barrier then. Now’s as good a time as any.

Screenshot showing Mac computer "Idiophone" being configured in Barrier to connect to server "Renegade".
Setting up Barrier in place of Synergy was pretty familiar and painless.

Barrier and Synergy aren’t cross-compatible, so first I had to disable Synergy on each machine in my cluster. Then I installed Barrier. Like most popular open-source software, this was trivially easy compared to Synergy: I just used an appropriate package manager by running choco install barrier, brew install barrier, and apt install barrier to install on each of the Windows, Mac, and Debian computers, respectively.

Configuring Barrier was basically identical to configuring Synergy: set up the machine names, nominate one the server, and tell the server what the relative positions are of each of the others’ screens. I usually bind the “scroll lock” key to the “lock my cursor to the current screen” function but I wasn’t permitted to do this in Barrier for some reason, so I remapped my scroll lock key to some random high unicode character and bound that instead.
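
Under the hood, Barrier – like Synergy before it – still understands the classic synergy.conf-style server configuration file, so you can write the same thing by hand instead of using the GUI. Purely for illustration (the screen layout below is a guess, not my actual arrangement), a hand-written config for the three machines above might look something like this:

    section: screens
        Renegade:
        Cornet:
        Idiophone:
    end

    section: links
        Renegade:
            right = Cornet
        Cornet:
            left = Renegade
            right = Idiophone
        Idiophone:
            left = Cornet
    end

The “screens” section names the machines and the “links” section tells the server which edge of which screen leads where; everything else (like my scroll-lock binding) goes in an optional “options” section.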

Getting Barrier to auto-run on MacOS was a little bit of a drag – in the end I had to use Automator to set up a shortcut that ran it and loaded the configuration, and set that to run on login. These little touches are mostly solved in Synergy, but given its technical audience I don’t imagine that anybody is hugely inconvenienced by them. Nonetheless, Synergy clearly retains a slightly more-polished experience.
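
If Automator isn’t your cup of tea, a launchd “LaunchAgent” is another way to get the same run-on-login behaviour. This is only a sketch – the label is arbitrary and the path to the Barrier binary is an assumption you’d need to check against your own installation – saved into ~/Library/LaunchAgents/:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Arbitrary identifier for this agent -->
        <key>Label</key>
        <string>local.barrier.autostart</string>
        <!-- Assumed install path: check where Barrier actually lives on your Mac -->
        <key>ProgramArguments</key>
        <array>
            <string>/Applications/Barrier.app/Contents/MacOS/barrier</string>
        </array>
        <!-- Start it whenever this user logs in -->
        <key>RunAtLoad</key>
        <true/>
    </dict>
    </plist>

Once it’s saved as ~/Library/LaunchAgents/local.barrier.autostart.plist, a launchctl load of that file (or simply logging out and back in) should pick it up.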

Altogether, switching from Synergy to Barrier took me under 15 minutes and has so far offered me a functionally-identical experience, except that it works on more devices, can be installed via my favourite package managers, and doesn’t ask me for registration details before it functions. Synergy 3’s going to have to be a big leap forward to beat that!

Syncthing

This last month or so, my digital life has been dramatically improved by Syncthing. So much so that I want to tell you about it.

Syncthing interface via Synctrayzor on Windows, showing Dan's syncs.
1.25TiB of data is automatically kept in sync between (depending on the data in question) a desktop PC, NAS, media centre, and phone. This computer’s using the Synctrayzor system tray app.

I started using it last month. Basically, what it does is keep a pair of directories on remote systems “in sync” with one another. So far, it’s like your favourite cloud storage service, albeit self-hosted and much-more customisable. But it’s got a handful of killer features that make it nothing short of a dream to work with:

  • The unique identifier for a computer can be derived from its public key. Encryption comes free as part of the verification of a computer’s identity.
  • You can share any number of folders with any number of other computers, point-to-point or via an intermediate proxy, and it “just works”.
  • It’s super transparent: you can always see what it’s up to, you can tweak the configuration to match your priorities, and it’s open source so you can look at the engine if you like.

Here are some of the ways I’m using it:

Keeping my phone camera synced to my PC

Phone syncing with PC

I’ve tried a lot of different solutions for this over the years. Back in the way-back-when, like everybody else in those dark times, I used to plug my phone in using a cable to copy pictures off and sort them. Since then, I’ve tried cloud solutions from Google, Amazon, and Flickr and never found any that really “worked” for me. Their web interfaces and apps tend to be equally terrible for organising or downloading files, and I’m rarely able to simply drag-and-drop images from them into a blog post like I can from Explorer/Finder/etc.

At first, I set this up as a one-way sync, “pushing” photos and videos from my phone to my desktop PC whenever I was on an unmetered WiFi network. But then I switched it to a two-way sync, enabling me to more-easily tidy up my phone of old photos too, by just dragging them from the folder that’s synced with my phone to my regular picture storage.
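
For what it’s worth, the direction of a share like this is controlled by the folder’s “type”. Just as a sketch of the idea – the IDs, label, and path below are made up, and a real config.xml carries plenty more attributes – the relevant fragment on the receiving PC looks roughly like this, first as the one-way arrangement I started with and then as the two-way one I switched to:

    <!-- One-way: this machine only receives what the phone pushes to it -->
    <folder id="phone-camera" label="Phone Camera" path="/home/dan/Pictures/Phone Camera" type="receiveonly">
        <device id="PHONE-DEVICE-ID"></device>
    </folder>

    <!-- Two-way: tidying up photos on either side propagates to the other -->
    <folder id="phone-camera" label="Phone Camera" path="/home/dan/Pictures/Phone Camera" type="sendreceive">
        <device id="PHONE-DEVICE-ID"></device>
    </folder>

In practice you’d flip the type through the web interface or the phone app rather than editing the XML, but it’s the same setting either way.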

Centralising my backups

Phone and desktop backups centralised through the NAS

Now I’ve got a fancy NAS device with tonnes of storage, it makes sense to use it as a central point for backups to run from. Instead of having many separate backup processes running on different computers, I can just have each of them sync to the NAS, and the NAS can back everything up. Computers don’t need to be “on” at a particular time because the NAS runs all the time, so backups can use the Internet connection when it’s quietest. And in the event of a hardware failure, there’s an up-to-date on-site backup in the first instance: the cloud backup’s only needed in the event of accidental data deletion (which could already have sync’ed, of course!). Plus, integrating the sync with ownCloud running on the NAS gives easy access to my files wherever in the world I am without having to fire up a VPN or otherwise remote-in to my house.

Plus: because Syncthing can share a folder between any number of devices, the same sharing mechanism that puts my phone’s photos onto my main desktop can simultaneously be pushing them to the NAS, providing redundant connections. And it was a doddle to set up.

Maintaining my media centre’s screensaver

PC photos syncing to the media centre.

Since the NAS, running Jellyfin, took on most of the media management jobs previously shared between desktop computers and the media centre computer, the household media centre’s had less to do. But one thing that it does, and that gets neglected, is showing a screensaver of family photos (when it’s not being used for anything else). Historically, we’ve maintained the photos in that collection via a shared network folder, but then you’ve got credential management and firewall issues to deal with, not to mention different file naming conventions by different people (and their devices).

But simply sharing the screensaver’s photo folder with the computer of anybody who wants to contribute photos means that it’s as easy as copying the picture to a particular place. It works on whatever device they care to use (computer, tablet, mobile), on any operating system, and it’s quick and seamless. I’m just using it myself, for now, but I’ll be offering it to the rest of the family soon. It’s a trivial use-case, but once you’ve got it installed it just makes sense.

In short: this month, I’m in love with Syncthing. And maybe you should be, too.

Rebuilding a Windows box with Chocolatey

Computers Don’t Like Moving House

As always seems to happen when I move house, a piece of computer hardware broke for me during my recent house move. It’s always exactly one piece of hardware, like it’s a symbolic recognition by the universe that being lugged around, rattling around and butting up against one another, is not the natural state of desktop computers. Nor is it a comfortable journey for the hoarder-variety of geek nervously sitting in front of them, tentatively turning their overloaded vehicle around each and every corner. UserFriendly said it right in this comic from 2003.

This time around, it was one of the hard drives in Renegade, my primary Windows-running desktop, that failed. (At least I didn’t break myself, this time.)

Western Digital Blue 6TB hard disk drive
Here’s the victim of my latest move. Rest in pieces.

Fortunately, it failed semi-gracefully: the S.M.A.R.T. alarm went off about a week before it actually started causing real problems, giving me at least a little time to prepare, and – better yet – the drive was part of a four-drive RAID 10 hot-swappable array, which means that every single byte of data on that drive was already duplicated to a second drive.

Incidentally, this configuration may have indirectly contributed to its death: before I built Fox, our new household NAS, I used Renegade for many of the same purposes, but WD Blues are not really a “server grade” hard drive and this one and its siblings will have seen more and heavier use than they might have expected over the last few years. (Fox, you’ll be glad to hear, uses much better-rated drives for her arrays.)

A single-drive failure in a RAID 10 configuration, with the duplicate data shown safely alongside.
Set up your hard drives like this and you can lose at least one, and up to half, of the drives without losing data.

So no data was lost, but my array was degraded. I could have simply repaired it and carried on by adding a replacement similarly-sized hard drive, but my needs have changed now that Fox is on the scene, so instead I decided to downgrade to a simpler two-disk RAID 1 array for important data and an “at-risk” unmirrored drive for other data. This retains the performance of the previous array at the expense of a reduction in redundancy (compared to, say, a three-disk RAID 5 array which would have retained redundancy at the expense of performance). As I said: my needs have changed.

Fixing Things… Fast!

In any case, the change in needs (plus the fact that nobody wants to watch an array rebuild in a different configuration on a drive with system software installed!) justified a reformat-and-reinstall, which leads to the point of this article: how I optimised my reformat-and-reinstall using Chocolatey.

Chocolate brownie with melted chocolate sauce.
Not this kind of chocolatey, I’m afraid. Man, I shouldn’t have written this post before breakfast.

Chocolatey is a package manager for Windows: think apt for Debian-like *nixes (you know I do!) or Homebrew for MacOS. For previous Windows system rebuilds I’ve enjoyed the simplicity of Ninite, which will build you a one-click installer for your choice of many of your favourite tools, so you can get up-and-running faster. But Chocolatey’s package database is much more expansive and includes bonus switches for specifying particular versions of applications, so it’s a clear winner in my mind.

Dan's reformat-and-reinstall checklist
If you learn only one thing about me from this post, let it be that I’m a big fan of redundancy. Here’s the printed version of my reinstallation list. Y’know, in case the copy on a pendrive failed.

So I made up a Windows installation pendrive and added to it a “script” of things to do to get Renegade back into full working order. You can read the full script here, but the essence of it was:

  1. Reconfigure the RAID array, reformat, reinstall Windows, and create an account.
  2. Install things I routinely use that aren’t available on Chocolatey (I’d pre-copied these onto the pendrive for laziness): Synergy, Beamgun, Backblaze, ManyCam, Office 365, ProtonMail Bridge, and PureVPN.
  3. Install Chocolatey by running:
    Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
  4. Install everything else (links provided in case you’re interested in what I “run”!) by running:
    choco install Firefox -y --params "/l:en-GB /RemoveDistributionDir"
    choco install virtualbox -y --params "/NoDesktopShortcut /ExtensionPack" --version 6.0.22
    choco install -y 7zip audacity autohotkey beaker curl discord everything f.lux fiddler foobar2000 foxitreader garmin-basecamp gimp git github-desktop glasswire goggalaxy googlechrome handbrake heidisql inkscape keepassxc krita mountain-duck nmap nodejs notepadplusplus obs-studio owncloud-client paint.net powertoys putty ruby sharpkeys slack steam sublimetext3 telegram teracopy thunderbird vagrant vlc whatsapp wireshark wiztree zoom
  5. Configuration (e.g. set up my unusual keyboard mappings, register software, set up remote connections and backups, etc.).

By scripting virtually all of the above I was able to rearrange hard drives in and then completely reimage a (complex) working Windows machine with well under an hour of downtime; I can thoroughly recommend Chocolatey next time you have to set up a new Windows PC (or just to expand what’s installed on your existing one). There’s a GUI if you’re not a fan of the command line, of course.

An app can be a home-cooked meal

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Have you heard about this new app called BoopSnoop?

It launched in the first week of 2020, and almost immediately, it was downloaded by four people in three different time zones. In the months since, it has remained steady at four daily active users, with zero churn: a resounding success, exceeding every one of its creator’s expectations.

:)

I made a messaging app for, and with, my family. It is ruthlessly simple; we love it; no one else will ever use it. I wanted to jot down some notes about how and why I made it, both to (a) offer a nudge to anyone else out there considering a similar project and (b) suggest something a little larger about software.

Robin Sloan (yes, this one) talks about an app that he wrote exclusively for his family. He likens the experience to making a home-cooked meal. And I totally get it.

I do this kind of thing all the time. Our new home NAS device, Fox, performs a handful of functions (and I plan to expand it to many more) based on a mixture of open-source and homegrown code, just for my immediate family. Our “family wiki” does the same thing. And the spreadsheet we use for our finances. I’ve written apps for small groups of friends before, too (e.g. 1, 2, 3, 4, 5, 6, 7, 8…). And that’s not to mention the countless “meals for one” I’ve cooked: small applications written entirely for my own benefit – I’m using one right now to pull this article from the list of “things I’ve read and enjoyed recently” into my blog.

A home-cooked meal benefits from being tailored to its audience (if the recipe calls for mustard, I might use less or omit it because it makes my nose feel funny). It benefits from being tailored to its purpose. And it benefits from the love that goes into it. My only superstition – that I’m aware of – is that I believe that food tastes better if the chef smiled during its production… I’m beginning to think that the same might be true for software, too.

First among the reasons I think that learning the basics of programming should be in the school curriculum is that it teaches people how computers work and so, by proxy, what they are (and are not) capable of. The most digitally-literate non-programmers I know are people who have the strongest understanding about how and why computers do what they do. But a close second among my reasons is that those with an inclination can go a step further and, without even necessarily pushing their skills to a level at which they could or would want to work as software developers, build their own tools to “scratch their own itches”. Solving a problem for yourself is enormously empowering, and the versatility of software lends itself to solving a huge array of relatively-tiny problems: problems that affect individuals, families, or small communities but that aren’t big enough to warrant commercial attention.

(Sometimes these projects explode into something bigger, but usually they remain just as they are: a tool for the benefit of oneself and one’s immediate tribe. And that’s just great.)

Avoid rewriting a legacy system from scratch, by strangling it

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Sometimes, code is risky to change and expensive to refactor.

In such a situation, a seemingly good idea would be to rewrite it.

From scratch.

Here’s how it goes:

  1. You discuss with management about the strategy of stopping new features for some time, while you rewrite the existing app.
  2. You estimate the rewrite will take 6 months to cover what the existing app does.
  3. A few months in, a nasty bug is discovered and ABSOLUTELY needs to be fixed in the old code. So you patch the old code and the new one too.
  4. A few months later, a new feature has been sold to the client. It HAS TO BE implemented in the old code—the new version is not ready yet! You need to go back to the old code but also add a TODO to implement this in the new version.
  5. After 5 months, you realize the project will be late. The old app was doing way more things than expected. You start hustling more.
  6. After 7 months, you start testing the new version. QA raises up a lot of things that should be fixed.
  7. After 9 months, the business can’t stand “not developing features” anymore. Leadership is not happy with the situation, you are tired. You start making changes to the old, painful code while trying to keep up with the rewrite.
  8. Eventually, you end up with the 2 systems in production. The long-term goal is to get rid of the old one, but the new one is not ready yet. Every feature needs to be implemented twice.

Sounds fictional? Or familiar?

Don’t be shamed, it’s a very common mistake.

I’ve rewritten legacy systems from scratch before. Sometimes it’s all worked out, and sometimes it hasn’t, but either way: it’s always been a lot more work than I could have possibly estimated. I’ve learned now to try to avoid doing so: at least, to avoid replacing a single monolithic (living) system in a monolithic way. Nicholas gives an even-better description of the true horror of legacy reimplementation, and promotes progressive strangulation as a candidate solution.
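
In case the “strangler” metaphor is unfamiliar, here’s a minimal sketch of the idea (every name below is invented for illustration; it’s nothing from Nicholas’s article): a thin facade owns every incoming request, routes the handful of already-migrated features to the new code, and lets the legacy system keep serving everything else until, one route at a time, there’s nothing left for it to do.

    // Hypothetical legacy monolith, still running in production
    const legacyApp = {
        handle: (request) => `legacy response for ${request.path}`,
    };

    // Routes that have been re-implemented in the new codebase so far
    const rewrittenRoutes = {
        "/invoices": (request) => `new response for ${request.path}`,
    };

    // The "strangler" facade: new code where it exists, old code everywhere else
    function handle(request) {
        const handler = rewrittenRoutes[request.path];
        return handler ? handler(request) : legacyApp.handle(request);
    }

    console.log(handle({ path: "/invoices" })); // already migrated: served by the rewrite
    console.log(handle({ path: "/payroll" }));  // not yet migrated: still the legacy system

The crucial difference from a big-bang rewrite is that the old system never stops being the fallback, so features only ever need to exist in one place at a time.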

SQLite Code Of Ethics (formerly Code of Conduct)

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

  1. Attribute to God, and not to self, whatever good you see in yourself.
  2. Recognize always that evil is your own doing, and to impute it to yourself.
  3. Fear the Day of Judgment.
  4. Be in dread of hell.

In an age when more and more open-source projects are adopting codes of conduct that reflect the values of a tolerant, modern, liberal society, SQLite – probably the most widely-used database system in the world, appearing in everything from web browsers to games consoles – went… in a different direction. Interesting to see that, briefly, you could be in violation of their code of conduct by failing to love everything else in the world less than you love Jesus. (!)

After the Internet collectively went “WTF?”, they’ve changed their tune and said that this guidance, which is based upon the Rule of St. Benedict, is now their Code of Ethics, and their Code of Conduct is a little more… conventional.

The elephant in the diversity room

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The elephant in the diversity room – QuirksBlog (quirksmode.org)


Although there’s a lot of heated discussion around diversity, I feel many of us ignore the elephant in the web development diversity room. We tend to forget about users of older or non-standard devices and browsers, instead focusing on people with modern browsers, which nowadays means the latest versions of Chrome and Safari.

This is nothing new — see “works only in IE” ten years ago, or “works only in Chrome” right now — but as long as we’re addressing other diversity issues in web development we should address this one as well.

Ignoring users of older browsers springs from the same causes as ignoring women, or non-whites, or any other disadvantaged group. Average web developer does not know any non-whites, so he ignores them. Average web developer doesn’t know any people with older devices, so he ignores them. Not ignoring them would be more work, and we’re on a tight deadline with a tight budget, the boss didn’t say we have to pay attention to them, etc. etc. The usual excuses.

The Right To Read

[this post was lost during a server failure on Sunday 11th July 2004; it was partially recovered on 21st March 2012]

If you haven’t already read it, take a look at The Right To Read, a very short story written in 1997 and updated in 2002 – it’ll only take you a few minutes to read; it’s not ‘techie’ (anybody would understand it!), and it is relevant. The kind of things that are expressed in the story – while futuristic (and fascist) sounding now – are being put into effect… slowly, quietly… by companies such as Sony, Philips, Apple, and Microsoft: not to mention the manufacturers of CDs and DVDs.

It’s been circulating the ‘net for years, but recent events – such as InterTrust’s Universal Digital Rights Management System (report: The Register), which they claim will be ready within 6 months, and Microsoft’s ongoing work on the ‘Palladium’ project (report: BBC News) – mark the beginning of what could be the most important thing ever to happen in the history of copyright law, computing, and freedom of information.

So, go on – go read… [the remainder of this post, and three comments, have been lost]

WildCat BBS

This image was shared here in hindsight, on 22 March 2021. I didn’t actually start blogging until around August 1998; the message and text below were published on a bulletin board system to which I contributed.

The picture above illustrates the software I have just ordered