Please Fix This Site? Okay!

This week, digital agency Humidity Studios launched PleaseFixThisSite.com, a website with a deliberately awful design aesthetic.

Screenshot of PleaseFixThisSite.com, featuring all of the design sins described below.
Honestly, I’ve seen worse.

Inspired by XKCD 3113 “Fix This Sign”, the site features marquee animations, poor font choices, wonky rotation and alignment, and more.

Like the comic, it aims to “extort” people offended by the design choices by allowing them to pay to fix them. Once fixed, a change is fixed for everybody… at least, until somebody pays to “reset” the site back to its catastrophic mode.

XKCD comic. Transcript: [A single panel containing a large, elevated sign with Ponytail standing in front of it.] [Title, slightly off horizontal, more to the right than central and the character spacing is not entirely consistent/aesthetic:] Doanate[sic] to fix this sign! [To the left of the lower part of the sign there is a 'QR code', tilted slightly with a plaintext link beneath it:] https://[illegible].com [To the right are several dollar values, in one column, and 'fixes', in a second, some of which have their own self-demonstrating quirks.] [The letters "R" and "N" may be too close together:] $10 fix kerning [Both dollar value and fix text are shifted left of their respective columns:] $20 align columns [This line is in a smaller font:] $20 fix text size $50 fix typo $50 fix centering $100 fix rotation [Ponytail stands looking at the sign, apparently in the process of using a smartphone:] Grrr... [Caption below panel:] My new company's business model is based on extorting graphic designers.
I can’t criticise Humidity Studios for taking a stupid idea from XKCD and taking it way too far, because, well, there’s this site that I run…

That’s cute and all, but the difference between a billboard and a web page is, of course, that a web page is under the viewer’s control. Once it’s left the server and reached your computer, there’s nothing the designer can do to stop you editing a page in any way you like. That’s just how the Web works!

A great way to do this is with userscripts: JavaScript that your browser injects into particular pages when you visit them. Mostly by way of demonstration, I gave it a go. And now you can, too! All you need is a userscript manager plugin in your browser (my favourite is Violentmonkey!) and to install my (open source) script.
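
If you’ve never written one before: a userscript is just a JavaScript file with a metadata block at the top telling your userscript manager which pages it applies to. A minimal one might look something like this – an illustrative sketch only, not my actual script, and the selectors here are made up:

// ==UserScript==
// @name     Fix PleaseFixThisSite.com
// @match    https://pleasefixthissite.com/*
// @run-at   document-end
// ==/UserScript==

// Stop any scrolling <marquee> elements in their tracks:
for (const marquee of document.querySelectorAll('marquee')) marquee.stop();

// Straighten out anything that's been wonkily rotated (hypothetical selector):
for (const el of document.querySelectorAll('.wonky')) el.style.transform = 'none';

The real script does rather more than that, of course, but the principle’s the same: once the page is on your machine, your code gets the last word.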

PleaseFixThisSite.com but with all of the problems fixed.
Much better! (I mean, still not a pinnacle of design… but at least my eyes aren’t bleeding any more!)

I enjoyed the art of the joke that is PleaseFixThisSite.com. But probably more than that, I enjoyed the excuse to remind you that by the time you’re viewing a Web page, it’s running on your computer, and you can change it any way you damn well please.

Don’t like the latest design of your favourite social network? Want to reinstate a feature of a popular video playing site? Need a dark mode to spare your eyes on a particular news publication? Annoyed by artificial wait times before you’re “allowed” to click a download button? There’s probably a userscript for all of those. And if there isn’t, you can have a go at writing one. Userscripts are great, and you should be using them.


I Wrote the Same Code Six Times!

I was updating my CV earlier this week in anticipation of applying for a handful of interesting-looking roles1 and I was considering quite how many different tech stacks I claim significant experience in, nowadays.

There are languages I’ve been writing in every single week for the last 15+ years, of course, like PHP, Ruby, and JavaScript. And my underlying fundamentals are solid.

But is it really fair for me to be able to claim that I can code in Java, Go, or Python: languages that I’ve not used commercially within the last 5-10 years?

Animated GIF showing Sublime Text editor flicking through six different languages, with equivalent code showing in each.
What kind of developer writes the same program six times… for a tech test they haven’t even been asked to do? If you guessed “Dan”, you’d be correct!

Obviously, I couldn’t just let that question lie2. Let’s find out!

The Test

I fished around on Glassdoor for a bit to find a medium-sized single-sitting tech test, and found a couple of different briefs that I mashed together to create this:

In an object-oriented manner, implement an LRU (Least-Recently Used) cache:

  • The size of the cache is specified at instantiation.
  • Arbitrary objects can be put into the cache, along with a retrieval key in the form of a string. Using the same string, you can get the objects back.
  • If a put operation would increase the number of objects in the cache beyond the size limit, the cached object that was least-recently accessed (by either a put or get operation) is removed to make room for it.
  • putting a duplicate key into the cache should update the associated object (and make this item most-recently accessed).
  • Both the get and put operations should resolve within constant (O(1)) time.
  • Add automated tests to support the functionality.

My plan was to implement a solution to this challenge, in as many of the languages mentioned on my CV as possible in a single sitting.

But first, a little Data Structures & Algorithms theory:

The Theory

Simple case with O(n) complexity

The simplest way to implement such a cache might be as follows:

  • Use a linear data structure like an array or linked list to store cached items.
  • On get, iterate through the list to try to find the matching item.
    • If found: move it to the head of the list, then return it.
  • On put, first check if it already exists in the list as with get:
    • If it already exists, update it and move it to the head of the list.
    • Otherwise, insert it as a new item at the head of the list.
      • If this would increase the size of the list beyond the permitted limit, pop and discard the item at the tail of the list.
Illustration of a simple Cache class with a linked list data structure containing three items, referenced from the class by its head item. It has 'get' and 'put' methods.
It’s simple, elegant and totally the kind of thing I’d accept if I were recruiting for a junior or graduate developer. But we can do better.
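
Here’s a rough sketch of that approach in JavaScript, just to make the shape of it concrete (illustrative only – it’s not one of the implementations from the repo I talk about below, and it’s not what I’d ship):

class SimpleLRUCache {
  constructor(size) {
    this.size = size;
    this.items = []; // index 0 is the "head" of the list
  }

  get(key) {
    const index = this.items.findIndex(item => item.key === key); // O(n) search
    if (index === -1) return undefined;
    const [item] = this.items.splice(index, 1);
    this.items.unshift(item); // move the just-accessed item to the head
    return item.value;
  }

  put(key, value) {
    const index = this.items.findIndex(item => item.key === key); // O(n) again
    if (index !== -1) this.items.splice(index, 1);
    this.items.unshift({ key, value });
    if (this.items.length > this.size) this.items.pop(); // discard the least-recently-used item
  }
}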

The problem with this approach is that it fails the requirement that the methods “should resolve within constant (O(1)) time”3.

Of particular concern is the fact that any operation might need to re-sort the list to put the just-accessed item at the top4. Let’s try another design:

Achieving O(1) time complexity

Here’s another way to implement the cache:

  • Retain cache items in a doubly-linked list, with a pointer to both the head and tail
  • Add a hash map (or similar language-specific structure) for fast lookups by cache key
  • On get, check the hash map to see if the item exists.
    • If so, return it and promote it to the head (as described below).
  • On put, check the hash map to see if the item exists.
    • If so, promote it to the head (as described below).
    • If not, insert it at the head by:
      • Updating the prev of the current head item and then pointing the head to the new item (which will have the old head item as its next), and
      • Adding it to the hash map.
      • If the number of items in the hash map would exceed the limit, remove the tail item from the hash map, point the tail at the tail item’s prev, and unlink the expired tail item from the new tail item’s next.
  • To promote an item to the head of the list:
    1. Follow the item’s prev and next to find its siblings and link them to one another (removes the item from the list).
    2. Point the promoted item’s next to the current head, and the current head‘s prev to the promoted item.
    3. Point the head of the list at the promoted item.
Illustration of a more-sophisticated Cache class with a doubly-linked list data structure containing three items, referenced from the class by its head and tail item. It has 'get' and 'put' methods. It also has a separate hash map indexing each cached item by its key.
Looking at a plate of pointer-spaghetti makes me strangely hungry.
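
Sketched in JavaScript, just to make the pointer-juggling concrete (again: an illustration, not one of the six implementations I talk about below), it might look something like this:

class LRUCache {
  constructor(size) {
    this.size = size;
    this.lookup = new Map(); // key → node, for O(1) lookups
    this.head = null;        // most-recently accessed
    this.tail = null;        // least-recently accessed
  }

  get(key) {
    const node = this.lookup.get(key);
    if (!node) return undefined;
    this.promote(node);
    return node.value;
  }

  put(key, value) {
    const existing = this.lookup.get(key);
    if (existing) {
      existing.value = value;
      this.promote(existing);
      return;
    }
    const node = { key, value, prev: null, next: this.head };
    if (this.head) this.head.prev = node;
    this.head = node;
    if (!this.tail) this.tail = node;
    this.lookup.set(key, node);
    if (this.lookup.size > this.size) {
      // Evict the least-recently-accessed item from the tail:
      this.lookup.delete(this.tail.key);
      this.tail = this.tail.prev;
      if (this.tail) this.tail.next = null;
    }
  }

  // Unlink a node from wherever it is and re-insert it at the head:
  promote(node) {
    if (node === this.head) return;
    if (node.prev) node.prev.next = node.next;
    if (node.next) node.next.prev = node.prev;
    if (node === this.tail) this.tail = node.prev;
    node.prev = null;
    node.next = this.head;
    if (this.head) this.head.prev = node;
    this.head = node;
  }
}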

It’s important to realise that this alternative implementation isn’t better. It’s just different: the “right” solution depends on the use-case5.

The Implementation

That’s enough analysis and design. Time to write some code.

GitHub repo 'Languages' panel, showing a project that includes Java (17.2%), Go (16.8%), Python (16.4%), Ruby (15.7%), TypeScript (13.7%), PHP (11.5%), and 'Other' (8.7%).
Turns out that if you use enough different languages in your project, GitHub begins to look like it wants to draw a rainbow.

Picking a handful of the more-useful languages on my CV6, I opted to implement in:

  • Ruby (with RSpec for testing and Rubocop for linting)
  • PHP (with PHPUnit for testing)
  • TypeScript (running on Node, with Jest for testing)
  • Java (with JUnit for testing)
  • Go (which isn’t really an object-oriented language but acts a bit like one, amirite?)
  • Python (probably my weakest language in this set, but which actually ended up with quite a tidy solution)

Naturally, I open-sourced everything if you’d like to see for yourself. It all works, although if you’re actually in need of such a cache for your project you’ll probably find an alternative that’s at least as good (and more-likely to be maintained!) in a third-party library somewhere!

What did I learn?

This was actually pretty fun! I might continue to expand my repo by doing the same challenge with a few of the other languages I’ve used professionally at some point or another7.

And there are a few takeaways I got from this experience:

Lesson #1: programming more languages can make you better at all of them

As I went along, one language at a time, I ended up realising improvements that I could make to earlier iterations.

For example, when I came to the TypeScript implementation, I decided to use generics so that the developer can specify what kind of objects they want to store in the cache, rather than just a generic Object, and benefit from better type-safety. That’s when I remembered that Java supports generics, too, so I went back and used them there as well.

In the same way as speaking multiple (human) languages or studying linguistics can help unlock new ways of thinking about your communication, being able to think in terms of multiple different programming languages helps you spot new opportunities. When in 2020 PHP 8 added nullsafe operators, union types, and named arguments, I remember feeling confident using them from day one because those features were already familiar to me from Ruby8, TypeScript9, and Python10, respectively.

Lesson #2: even when I’m rusty, I can rely on my fundamentals

I’ve applied for a handful of jobs now, but if one of them had invited me to a pairing session on a language I’m rusty on (like Java!) I might’ve felt intimidated.

But it turns out I shouldn’t need to be! With my solid fundamentals and a handful of other languages under my belt, I understand when I need to step away from the code editor and hit the API documentation. Turns out, I’m in a good position to demo any of my language skills.

I remember when I was first learning Go, I wanted to make use of a particular language feature that I didn’t know whether it had. But because I’d used that feature in Ruby, I knew what to search for in Go’s documentation to see if it was supported (it wasn’t) and if so, what the syntax was11.

Lesson #3: structural rules are harder to gearshift than syntactic ones

Switching between six different languages while writing the same application was occasionally challenging, but not in the ways I expected.

I’ve had plenty of experience switching programming languages mid-train-of-thought before. Sometimes you just have to flit between the frontend and backend of your application!

But this time around I discovered: changes in structure are apparently harder for my brain than changes in syntax. E.g.:

  • Switching in and out of Python’s indentation caught me out at least once (might’ve been better if I took the time to install the language’s tools into my text editor first!).
  • Switching from a language without enforced semicolon line ends (e.g. Ruby, Go) to one with them (e.g. Java, PHP) had me make the compiler sad several times.
  • This gets even tougher when not writing the language but writing about the language: my first pass at the documentation for the Go version somehow ended up with Ruby/Python-style #-comments instead of Go/Java/TypeScript-style //-comments; whoops!

I’m guessing that the part of my memory that looks after a language’s keywords, how a method header is structured, and which equals sign to use for assignment versus comparison… is stored in a different part of my brain than the bit that keeps track of how a language is laid out?12

Okay, time for a new job

I reckon it’s time I got back into work, so I’m going to have a look around and see if there’s any roles out there that look exciting to me.

If you know anybody who’s looking for a UK-based, remote-first, senior+, full-stack web developer with 25+ years experience and more languages than you can shake a stick at… point them at my CV, would you?

Footnotes

1 I suspect that when most software engineers look for a new job, they filter to the languages and frameworks they feel they’re strongest at. I do a little of that, I suppose, but I’m far more-motivated by culture, sector, product and environment than I am by the shape of your stack, and I’m versatile enough that technology specifics can almost come second. So long as you’re not asking me to write VB.NET.

2 It’s sort-of a parallel to how I decided to check the other week that my Gutenberg experience was sufficiently strong that I could write standard ReactJS, too.

3 I was pleased to find a tech test that actually called for an understanding of algorithm growth/scaling rates, so I could steal this requirement for my own experiment! I fear that sometimes, in their drive to be pragmatic and representative of “real work”, the value of a comprehension of computer science fundamentals is overlooked by recruiters.

4 Even if an algorithm takes the approach of creating a new list with the inserted/modified item at the top, that’s still just a very-specific case of insertion sort when you think about it, right?

5 The second design will be slower at writing but faster at reading, and will scale better as the cache gets larger. That sounds great for a read-often/write-rarely cache, but your situation may differ.

6 Okay, my language selection was pretty arbitrary. But if I’d have also come up with implementations in Perl, and C#, and Elixir, and whatever else… I’d have been writing code all day!

7 So long as I’m willing to be flexible about the “object-oriented” requirement, there are even more options available to me. Probably the language that I last wrote longest ago would be Pascal: I wonder how much of that I remember?

8 Ruby’s safe navigation/”lonely” operator has done the same thing as PHP’s nullsafe operator since 2015.

9 TypeScript got union types back in 2015, and apart from them being more-strictly-enforced they’re basically identical to PHP’s.

10 Did you know that Python has had keyword arguments since its very first public release way back in 1994? Why did it take so many other interpreted languages so long to catch up?

11 The feature was the three-way comparison or “spaceship operator”, in case you were wondering.

12 I wonder if anybody’s ever laid a programmer in an MRI machine while they code? I’d be really interested to see if different bits of the brain light up when coding in functional programming languages than in procedural ones, for example!


Variable-aspect adaptive-bitrate video… in vanilla HTML?

The video below is presented in portrait orientation, because your screen is taller than it is wide.

The video below is presented in landscape orientation, because your screen is wider than it is tall.

The video below is presented in square orientation (the Secret Bonus Square Video!), because your screen has approximately the same width as its height. Cool!

This is possible (with a single <video> element, and without any Javascript!) thanks to some cool HTML features you might not be aware of, which I’ll briefly explain in the video. Or scroll down for the full details.

Variable aspect-ratio videos in pure HTML

I saw a 2023 blog post by Scott Jehl about how he helped Firefox 120 (re)gain support for the <source media="..."> attribute. Chrome added support later that year, and Safari already had it. This means that it’s pretty safe to do something like this:

<video controls>
  <source src="squareish.mp4"
        media="(min-aspect-ratio: 0.95) and (max-aspect-ratio: 1.05)" />
  <source src="portrait.mp4"
        media="(orientation: portrait)" />
  <source src="landscape.mp4" />
</video>
This code creates a video with three sources: squareish.mp4 which is shown to people on “squareish” viewports, failing that portrait.mp4 which is shown to people whose viewports are taller than wide, and failing that landscape.mp4 which is shown to anybody else.

That’s broadly-speaking how the video above is rendered. No JavaScript needed.

Browsers only handle media queries on videos when they initially load, so you can’t just tip your phone over or resize the window: you’ll need to reload the page, too. But it works! Give it a go: take a look at the video in both portrait and landscape modes and let me know what you think1.

Adding adaptive bitrate streaming with HLS

Here’s another cool technology that you might not have realised you could “just use”: adaptive bitrate streaming with HLS!

You’ve used adaptive bitrate streaming before, though you might not have noticed it. It’s what YouTube, Netflix, etc. are doing when your network connection degrades and you quickly get dropped-down, mid-video, to a lower-resolution version2.

Turns out you can do it on your own static hosting, no problem at all. I used this guide (which has a great description of the parameters used) to help me:

ffmpeg -i landscape.mp4 \
       -filter_complex "[0:v]split=3[v1][v2][v3]; [v1]copy[v1out]; [v2]scale=w=1280:h=720[v2out]; [v3]scale=w=640:h=360[v3out]" \
       -map "[v1out]" -c:v:0 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:0 5M -maxrate:v:0 5M -minrate:v:0 5M -bufsize:v:0 10M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 \
       -map "[v2out]" -c:v:1 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:1 3M -maxrate:v:1 3M -minrate:v:1 3M -bufsize:v:1 3M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 \
       -map "[v3out]" -c:v:2 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:2 1M -maxrate:v:2 1M -minrate:v:2 1M -bufsize:v:2 1M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 \
       -map a:0 -c:a:0 aac -b:a:0 96k -ac 2 \
       -map a:0 -c:a:1 aac -b:a:1 96k -ac 2 \
       -map a:0 -c:a:2 aac -b:a:2 48k -ac 2 \
       -f hls -hls_time 2 -hls_playlist_type vod -hls_flags independent_segments -hls_segment_type mpegts \
       -hls_segment_filename landscape_%v/data%02d.ts \
       -master_pl_name landscape.m3u8 \
       -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" landscape_%v.m3u8
This command splits the H.264 video landscape.mp4 into three different resolutions: the original “v1” (1920×1080, in my case, with 96kbit audio), “v2” (1280×720, with 96kbit audio), and “v3” (640×360, with 48kbit audio), each with a resolution-appropriate maximum bitrate, and forced keyframes every 48th frame. Then it breaks each of those into HLS segments (.ts files) and references them from a .m3u8 playlist.

The output from this includes:

  • Master playlist landscape.m3u8, which references the other playlists along with their resolution and bandwidth, so that browsers can make smart choices,
  • Playlists landscape_0.m3u8 (“v1”), landscape_1.m3u8 (“v2”), etc., each of which references the “parts” of that video,
  • Directories landscape_0/, landscape_1/ etc., each of which contains
  • data00.ts, data01.ts, etc.: the actual “chunks” that contain the video segments, which can be downloaded independently by the browser as-needed

Bringing it all together

We can bring all of that together, then, to produce a variable-aspect, adaptive bitrate, HLS-streamed video player… in pure HTML and suitable for static hosting:

<video controls>
  <source src="squareish.m3u8"
         type="application/x-mpegURL"
        media="(min-aspect-ratio: 0.95) and (max-aspect-ratio: 1.05)" />

  <source src="portrait.m3u8"
         type="application/x-mpegURL"
        media="(orientation: portrait)" />

  <source src="landscape.m3u8"
         type="application/x-mpegURL" />
</video>
You could, I suppose, add alternate types, poster images, and all kinds of other fancy stuff, but this’ll do for now.

That’ll “just work” in Safari and a handful of mobile browsers… but won’t display anything for most desktop browsers. Boo!

One solution is to also provide the standard .mp4 files as an alternate <source>, and that’s fine I guess, but you lose the benefit of HLS (and you have to store yet more files). But there’s a workaround:

Polyfill full functionality for all browsers

If you’re willing to use a JavaScript polyfill, you can make the code above work on virtually any device. I gave this a go, here, by:

  1. Including the polyfill hls.js, and
  2. Adding some JavaScript code that detects affected <video> elements and applies the fix where necessary:
// Find all <video>s which have HLS sources:
for( const hlsVideo of document.querySelectorAll('video:has(source[type="application/x-mpegurl"]), video:has(source[type="application/vnd.apple.mpegurl"])') ) {
  // If the browser has native support, do nothing:
  if( hlsVideo.canPlayType('application/x-mpegurl') || hlsVideo.canPlayType('application/vnd.apple.mpegurl') ) continue;

  // If hls.js can't help fix that, do nothing:
  if ( ! Hls.isSupported() ) continue;

  // Find the best source based on which is the first one to match any applicable CSS media queries
  const bestSource = Array.from(hlsVideo.querySelectorAll('source')).find(source=>window.matchMedia(source.media).matches)

  // Use hls.js to attach the best source:
  const hls = new Hls();
  hls.loadSource(bestSource.src);
  hls.attachMedia(hlsVideo);
}
It makes me feel a little dirty to make a <video> depend on JavaScript, but if that’s the route you want to go down while we wait for HLS support to become more widespread (rather than adding different-typed sources) then that’s fine, I guess.

This was a fun dive into some technologies I’ve not had the chance to try before. A fringe benefit of being a generalist full-stack developer is that when you’re “between jobs” you get to play with all the cool things when you’re brushing up your skills before your next big challenge!

(Incidentally: if you think you might be looking to employ somebody like me, my CV is over there!)

Footnotes

1 There definitely isn’t a super-secret “square” video on this page, though. No siree. (Shh.)

2 You can tell when you get dropped to a lower-resolution version of a video because suddenly everybody looks like they’re a refugee from Legoland.

Deprecate React

I’m keeping an eye out for my next career move (want to hire me?). Off the back of that I’ve been brushing up on the kinds of skills that I might be asked to showcase in any kind of “tech test”.

Not the kind of stuff I can do with one hand tied behind my back1, but the things for which I’d enjoy feeling a little more-confident2. Stuff that’s on my CV that I’ve done and can do, but where I’d like to check before somebody asks me about it in an interview.

React? Sure, I can do that…

LinkedIn, GlassDoor, and bits of the Fediverse are a gold mine for the kinds of things that people are being asked to demonstrate in tech tests these days. Like this post:

On LinkedIn, Avantika Raj shares a coding question asked during their React Developer interview with Volkswagon Software Solutions. It reads: Create a traffic light component with green, yellow, and red lights. On clicking a button, the light should change. Initially, it should show green. After 2 minutes, it should automatically switch to red for 30 seconds, then yellow for 10 seconds, and repeat this cycle continuously.
I’d describe myself as a “stack-agnostic senior/principal full-stack/backend web developer/security engineer”3, and so this question – which feels like it’s a filter for a junior developer with a React specialisation – isn’t really my wheelhouse. Which makes it a perfect excuse for an hour of playing about with React.

My recent React experience has mostly involved Gutenberg blocks and WordPress theme components. This seemed like an excuse to check that I can wrangle a non-WordPress React stack.

Animated GIF showing traffic lights changing through their phases on-demand or on a toggleable timer.
This isn’t particularly sophisticated. I added customisable durations for each light, but otherwise it’s pretty basic.

Half an hour later, I’d proven to myself that yes, I could throw together a fresh application with React DOM and implement some React components, pass state around and whatnot.

Time to move on to the next thing, right? That’s what a normal person would do.

But that’s not the kind of person I am.

Let’s reimplement this as Web Components

What I found myself thinking was… man, this is chunky. React is… not the right tool for this job.

(Or, increasingly, any job. But I’ll get back to that.)

A minified production build of my new component and its dependencies came in at 202kB (62.3kB compressed). That feels pretty massive for something that does so-little. So as an experiment, I re-implemented my new React component as a vanilla JS Web Component using a custom element. Identical functionality, but no third-party library dependencies. Here’s what I got:

This one’s interactive. Press a button or two!

The Web Component version of this control has no dependency chain and uses no JSX, and so it has no transpilation step: the source version is production-ready. You could minify it, but modern HTTP compression makes the impact of that negligible anyway: the whole thing weighs in at 19.5kB (5.2kB compressed) without minification.
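
For a flavour of what that looks like – this is a cut-down sketch of the same idea, not the actual component you can download below – a custom element needs nothing more exotic than:

class TrafficLight extends HTMLElement {
  static observedAttributes = ['state'];
  #states = ['green', 'yellow', 'red'];

  connectedCallback() {
    if (this.shadowRoot) return; // already set up
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        .lamp { width: 2em; height: 2em; border-radius: 50%; background: #222; margin: 0.2em; }
        .lamp.on { background: var(--colour); }
      </style>
      ${this.#states.map(s => `<div class="lamp" data-state="${s}" style="--colour: ${s}"></div>`).join('')}
      <button type="button">Change</button>
    `;
    this.shadowRoot.querySelector('button').addEventListener('click', () => this.next());
    this.#render();
  }

  get state() { return this.getAttribute('state') ?? 'green'; }

  next() {
    const i = this.#states.indexOf(this.state);
    this.setAttribute('state', this.#states[(i + 1) % this.#states.length]);
  }

  attributeChangedCallback() { this.#render(); }

  #render() {
    this.shadowRoot?.querySelectorAll('.lamp').forEach(lamp =>
      lamp.classList.toggle('on', lamp.dataset.state === this.state)
    );
  }
}
customElements.define('traffic-light', TrafficLight);

Drop <traffic-light></traffic-light> into your markup (after loading the script) and the browser does the rest: no build step, no dependencies, no virtual DOM.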

And while I appreciate of course that there’s much more to JavaScript complexity and performance than file sizes… and beyond that I appreciate that there’s a lot more to making great components than the resulting bundle size… it’s hard to argue that delivering the same functionality (and less fragility) in a twelfth of the payload isn’t significant.

Composite screenshots showing the Chrome performance metrics and Network download sizes for the React and Web Components versions of my traffic lights. LCP - React 0.06s, Web Components 0.04s. INP - React 16ms, Web Components 8ms. Transferred - React 62.3kb (compressed), 202kB (uncompressed), in 37ms, Web Components 5.2kB (compressed), 19.5kB (uncompressed), in 22ms.
By any metric you like, the Web Components version outperforms the React version of my traffic light component. And while it’s a vastly-simplified example, it scales. Performance is a UX concern, and if you favour “what we’re familiar with” over “what’s best for our users”, that has to be a conscious choice.

But there’s a bigger point here:

React is the new jQuery

I’m alarmed by the fact that I’m still seeing job ads for “React developers”, with little more requirement than an ability to “implement things in React”.

From where I’m sitting, React is the new jQuery. It:

  • Was originally built to work around missing or underdeveloped JavaScript functionality
    • e.g. React’s components prior to Web Components
    • e.g. jQuery’s manipulation prior to document.querySelectorAll
  • Continued to be valuable as a polyfill and as a standard middleware while that functionality became commonplace
  • No longer provides enough value to be worth using in a new project
    • And yet somehow gets added “out of habit” for many years

If you’ve got a legacy codebase with lots of React in it, you’re still going to need React for a while. Just like how you’re likely to continue to need jQuery for a while until you can tidy up all those edge-cases where you’re using it.

(You might even be locked-in to using both React and jQuery for some time, if say you’ve got a plugin architecture that demands backwards-compatibility: I’m looking at you, WordPress!)

But just as you’re already (hopefully) working to slowly extricate your codebases from any now-unnecessary jQuery dependencies they have… you should be working on an exit plan for your React code, too. It’s done its time; it’s served its purpose: now it’s just a redundant dependency making your bundles cumbersome and harder to debug.

Everything React gives you on the client-side – components, state/hooks, routing4, etc. – is possible (and easy) in modern JavaScript supported in all major browsers. And if you still really want an abstraction layer, there are plenty of options (and they’re all a lot lighter than React!).

The bottom line is, I suppose…

You shouldn’t be hiring “React developers”!

If you’re building a brand new project, you shouldn’t be using React. It should be considered deprecated.

If you’ve got an existing product that depends on React… you should be thinking about how you’ll phase it out over time. And with that in mind, you want to be hiring versatile developers. They’ll benefit from some experience with React, sure, but unless they can also implement for the modern Web of tomorrow, they’ll just code you deeper into your dependency on React.

It’s time you started recruiting “Front-End Developers (React experience a plus)”. Show some long-term thinking! Or else the Web is going to move on without you, and in 5-10 years you’ll struggle to recruit people to maintain your crumbling stack.

You can download all my code and try it for yourself, if you like. The README has lots more information/spicy rants, and the whole thing’s under a public domain license so you can do whatever you like with it.

Footnotes

1 Exploiting or patching an injection vulnerability, optimising an SQL query, implementing a WordPress plugin, constructing a CircleCI buildchain, expanding test coverage over a Rubygem, performing an accessibility audit of a web application, extending a set of high-performance PHP-backed REST endpoints, etc. are all – I’d hope! – firmly in the “hold my beer” category of tech test skills I’d ace, for example. But no two tech stacks are exactly alike, so it’s possible that I’ll want to brush up on some of the adjacent technologies that are in the “I can do it, but I might need to hit the docs pages” category.

2 It’s actually refreshing to be learning and revising! I’ve long held that I should learn a new programming language or framework every year or two to stay fresh and to keep abreast of what’s going on in the world. I can’t keep up with every single new front-end JavaScript framework any more (and I’m not sure I’d want to!). But in the same way as being multilingual helps unlock pathways to more-creative thought and expression even if you’re only working in your native tongue, learning new programming languages gives you a more-objective appreciation of the strengths and weaknesses of what you use day-to-day. tl;dr: if you haven’t written anything in a “new” (to you) programming language for over a year, you probably should.

3 What do job titles even mean, any more? 😂 A problem I increasingly find is that I don’t know how to describe what I do, because with 25+ years of building stuff for the Web, I can use (and have used!) most of the popular stacks, and could probably learn a new one without too much difficulty. Did I mention I’m thinking about my next role? If you think we might “click”, I’d love to hear from you…

4 Though if you’re doing routing only on the client-side, I already hate you. Consider for example the SlimJS documentation which becomes completely unusable if a third-party JavaScript CDN fails: that’s pretty fragile!


AI vs The Expert

Podcast Version

This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.

The Expert

Eleven years ago, comedy sketch The Expert had software engineers (and other misunderstood specialists) laughing to tears at the relatability of Anderson’s (Orion Lee) situation: asked to do the literally-impossible by people who don’t understand why their requests can’t be fulfilled.

Decades ago, a client wanted their Web application to automatically print to the user’s printer, without prompting. I explained that it was impossible because “if a website could print to your printer without at least asking you first, everybody would be printing ads as you browsed the web”. The client’s response: “I don’t need you to let everybody print. Just my users.”1

So yeah, I was among those that sympathised with Anderson.

In the sketch, the client requires him to “draw seven red lines, all of them strictly perpendicular; some with green ink and some with transparent”. He (reasonably) states that this is impossible2.

Versus AI

Following one of the many fever dreams when I was ill, recently, I woke up wondering… how might an AI programmer tackle this task? I had an inkling of the answer, so I had to try it:

Conversation with an AI. I ask 'Write a JavaScript program that uses a `<canvas>` element to draw seven red lines, all of them strictly perpendicular; some with green ink and some with transparent. Can you do that?'. It replies: 'Certainly! To draw seven red lines on a <canvas> element, with some lines using green ink and others being transparent, you can use the HTML5 Canvas API. Here's a simple JavaScript program that accomplishes this:', followed by a snippet of HTML code.
Aside from specifying that I want to use JavaScript and a <canvas> element3, the question is the same as in the sketch.

When I asked gpt-4o to assist me, it initially completely ignored the perpendicularity requirement.

White square containing 7 equally-spaced parallel vertical lines: red, green, orange, red, green, orange, red.
Drawing all of the lines strictly parallel to one another was… well, the exact opposite of what was asked for, although it was at least possible.

Let’s see if it can do better, with a bit of a nudge:

Continued conversation with an AI. I ask: 'Those lines aren't perpendicular. Can you fix the code?' The AI responds 'Certainly! To draw seven lines that are strictly perpendicular to each other, we need to ensure that each line is at a 90-degree angle to the next. Here's how you can achieve that using the <canvas> element:', followed by another code sample.
This is basically how I’d anticipated the AI would respond: eager to please, willing to help, and with an enthusiasm that completely ignored the infeasibility of the task.

gpt-4o claimed that the task was absolutely achievable, even clarifying that the lines would all be “strictly perpendicular to each other”… before proceeding to instead make each consecutively-drawn line be perpendicular only to its predecessor:

The same diagram, but with the 7 lines joined together into a zig-zagging snake weaving its way right, then down, then left, then down, and so on across the canvas.
This is not what I asked for. But more importantly, it’s not what I wanted. (But it is pretty much what I expected.)

You might argue that this test is unfair, and it is. But there’s a point that I’ll get to.

But first, let me show you how a different model responded. I tried the same question with the newly-released Claude 3.7 Sonnet model, and got what I’d consider to be a much better answer:

Conversation with an AI. I ask the same original question, but now it responds: 'I see you're referencing the famous "Expert Talks - Draw 7 Red Lines" comedy sketch! This is a deliberately impossible task (drawing red lines with green ink, having all lines perpendicular to each other in 2D space, etc.). Let me create a humorous JavaScript program that attempts to "solve" this impossible problem:', before producing some JavaScript code.
I find myself wondering how this model would have responded if it hadn’t already been trained on the existence of the comedy sketch. The answer that (a) it’s impossible but (b) here’s a fun bit of code that attempts to solve it anyway is pretty-much perfect, but would it have come up with it on a truly novel (but impossible) puzzle?

In my mind: an ideal answer acknowledges the impossibility of the question, or at least addresses the supposed-impossibility of it. Claude 3.7 Sonnet did well here, although I can’t confirm whether it did so because it had been trained on data that recognised the existence of “The Expert” or not (it’s clearly aware of the sketch, given its answer).

Two red lines are perpendicular to one another, followed by horizontal lines in green, semitransparent red, red, green, and semitransparent green. Each are labelled with their axis in a 7-dimensional space, and with a clarifying tooltip.
The complete page that Claude 3.7 Sonnet produced also included an explanation of the task, clarifying that it’s impossible, and a link to the video of the original sketch.

What’s the point, Dan?

I remain committed to not using AI to do anything I couldn’t do myself (and can therefore check).4 And the answer I got from gpt-4o to this question goes a long way to demonstrating why.

Suppose I didn’t know that it was impossible to make seven lines perpendicular to one another in anything less than seven-dimensional space. If that were the case, it’d be tempting to accept an AI-provided answer as correct, and ship it. And while that example is trivial (and at least a little bit silly), it’s the kind of thing that, I have no doubt, actually happens in other areas.

Chatbots’ eagerness to provide a helpful answer, even if no answer is possible, is a huge liability. The other week, I experimentally asked Claude 3.5 for assistance with a PHPUnit mocking challenge and it provided a whole series of answers… that were completely invalid! It later turned out that what I was trying to achieve was impossible5.

Given that its answers clearly didn’t work, there was no risk I’d have shipped it anyway, but I’m certain that there exist developers who’ve asked a chatbot for help in a domain they didn’t understand and accepted its answer while still not understanding it, which feels to me like a quick route to introducing into your code a bug that happy-path testing won’t reveal. (Y’know, something like a security vulnerability, or an accessibility failure, or whatever.)

Code assisting AI remains really interesting and occasionally useful… but it’s also a real minefield and I see a lot of naiveté about its limitations.

Footnotes

1 My client eventually took that particular requirement out of scope and I thought the matter was settled, but I heard that they later contracted a different developer to implement just that bit of functionality into the application that we delivered. I never checked, but I think that what they delivered exploited ActiveX/Java applet vulnerabilities to achieve the goal.

2 Nerds gotta nerd, and so there’s been endless debate on the Internet about whether the task is truly impossible. For example, when I first saw the video I was struck by the observation that perpendicularity within a set of lines is limited linearly by the number of dimensions you’re working in, so it’s absolutely possible to model seven lines all perpendicular to one another… if you’re working in seven dimensions. But let’s put that all aside for a moment and assume the task is truly impossible within some framework unspecified-but-implied within the universe of the sketch, ‘k?

3 Two-dimensionality feels like a fair assumed constraint, given that in the sketch Anderson tries to demonstrate the challenges of the task by using a flip-chart.

4 I also don’t use AI to produce anything creative that I then pass off as my own, because, y’know, many of these models don’t seem to respect copyright. You won’t find any AI-written content on my blog, for example, except specifically to demonstrate AI’s capabilities (or lack thereof) when discussing AI, and this is always clearly labelled. But that’s another question.

5 In fact, I was going about the problem itself in entirely the wrong way: some minor refactoring later and I had some solid unit tests that fit the bill, and I didn’t need to do the impossible. But the AI never clocked that, and I suspect it never would have.


Reply to Vika, re: Content-Security-Policy

This is a reply to a post published elsewhere. Its content might be duplicated as a traditional comment at the original source.

Vika said:

Had a fight with the Content-Security-Policy header today. Turns out, I won, but not without sacrifices.
Apparently I can’t just insert <style> tags into my posts anymore, because otherwise I’d have to somehow either put nonces on them, or hash their content (which would be more preferrable, because that way it remains static).

I could probably do the latter by rewriting HTML at publish-time, but I’d need to hook into my Markdown parser and process HTML for that, and, well, that’s really complicated, isn’t it? (It probably is no harder than searching for Webmention links, and I’m overthinking it.)

I’ve had this exact same battle.

Obviously the intended way to use nonces in a Content-Security-Policy is to have the nonce generated, injected, and served in a single operation. So in PHP, perhaps, you might do something like this:

<?php
  $nonce = bin2hex(random_bytes(16));
  header("Content-Security-Policy: script-src 'nonce-$nonce'");
?>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>PHP CSP Nonce Test</title>
</head>
<body>
  <h1>PHP CSP Nonce Test</h1>
  <p>
    JavaScript did not run.
  </p>

  <!-- This JS has a valid nonce: -->
  <script nonce="<?php echo $nonce; ?>">
    document.querySelector('p').textContent = 'JavaScript ran successfully.';
  </script>

  <!-- This JS does not: -->
  <script nonce="wrong-nonce">
    alert('The bad guys won!');
  </script>
</body>
</html>
Viewing this page in a browser (with Javascript enabled) should show the text “JavaScript ran successfully.”, but should not show an alertbox containing the text “The bad guys won!”.

But for folks like me – and you too, Vika, from the sounds of things – who serve most of their pages, most of the time, from the cache or from static HTML files… and who add the CSP header using webserver configuration… this approach just doesn’t work.

I experimented with a few solutions:

  • A long-lived nonce that rotates.
    CSP allows you to specify multiple nonces, so I considered having a rotating nonce that was applied to pages (which were then cached for a period) and delivered by the header… and then a few hours later a new nonce would be generated and used for future page generations and appended to the header… and after the cache expiry time the oldest nonces were rotated-out of the header and became invalid.
  • Dynamic nonce injection.
    I experimented with having the webserver parse pages and add nonces: randomly generating a nonce, putting it in the header, and then basically doing a s/<script/<script nonce="..."/ to search-and-replace it in.

Both of these are terrible solutions. The first one leaves a window of, in my case, about 24 hours during which a successfully-injected script can be executed. The second one effectively allowlists all scripts, regardless of their provenance. I realised that what I was doing was security theatre: seeking to boost my A-rating to an A+-rating on SecurityHeaders.com without actually improving security at all.

But the second approach gave me an idea. I could have a server-side secret that gets search-replaced out. E.g. if I “signed” all of my legitimate scripts with something like <script nonce="dans-secret-key-goes-here" ...> then I could replace s/dans-secret-key-goes-here/actual-nonce-goes-here/ and thus have the best of both worlds: static, cacheable pages, and actual untamperable nonces. So long as I took care to ensure that the pages were never delivered to anybody with the secret key still intact, I’d be sorted!
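
As a sketch of the idea – using Node/Express purely for illustration (in reality my pages come out of a cache or static files with the header added by the webserver, so the real thing would live there), and with a hypothetical cache layout and placeholder:

import { readFile } from 'node:fs/promises';
import { randomBytes } from 'node:crypto';
import express from 'express';

const app = express();

// The placeholder "signature" baked into every legitimate <script> tag in the cached HTML:
const PLACEHOLDER = 'dans-secret-key-goes-here';

app.use(async (req, res) => {
  // Serve the pre-rendered page from the cache (no error handling or path sanitisation here, obviously):
  const cached = await readFile(`./cache${req.path}/index.html`, 'utf8');

  // Mint a fresh nonce for this response and swap it in for the placeholder:
  const nonce = randomBytes(16).toString('hex');
  const html = cached.replaceAll(PLACEHOLDER, nonce);

  res.set('Content-Security-Policy', `script-src 'nonce-${nonce}'`);
  res.type('html').send(html);
});

app.listen(8080);

Legitimate scripts in the cached pages carry the placeholder as their nonce; anything injected by an attacker doesn’t know the placeholder, so it never ends up with a valid nonce after the swap.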

Alternatively, I was looking into whether Caddy can do something like mod_asis does for Apache: that is, serve a file “as is”, with headers included in the file. That way, I could have the CSP header generated with the page and then saved into the cache, so it’s delivered with the same nonce every time… until the page changes. I’d love more webservers to have an “as is” mode, but I appreciate that might be a big ask (Apache’s mechanism, I suspect, exploits the fact that HTTP/1.0 and HTTP/1.1 literally send headers, followed by two CRLFs, then content… but that’s not what happens in HTTP/2+).

So yeah, I’ll probably do a server-side-secret approach, down the line. Maybe that’ll work for you, too.

Bloomberg’s Terms

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

While perfectly legal, it is remarkable that to read a Bloomberg article, you must first agree to binding arbitration and waive your class action rights.

A pop-up notification indicating that the terms have been updated. The message states that by accepting, users agree to the updated Terms of Service, which includes an arbitration provision and class action waiver. It also mentions the processing of user information as described in the Privacy Policy, including potential sharing with third parties about the use of Bloomberg.com. A button labeled "Accept" is provided for users to acknowledge the terms.

I don’t often see dialog boxes like this one. In fact, if I go to the URL of a Bloomberg.com article, I don’t see any popups: nothing about privacy, nothing about cookies, nothing about terms of service, nothing about only being allowed to read a limited number of articles without signing up an account. I just… get… the article.

The reason for this is, most-likely, because my web browser is configured, among other things, to:

  • Block all third-party Javascript (thanks, uBlock Origin‘s “advanced mode”), except on domains where they’re explicitly allowed (and even then with a few exceptions: thanks, Ghostery),
  • Delete all cookies 30 seconds after I navigate away from a domain, except for domains that are explicitly greylisted/allowlisted (thanks, Cookie-AutoDelete), and
  • Resist other fingerprinting methods as best I can (thanks, Enhanced Tracking Protection).

But here’s the thing I’ve always wondered: if I don’t get to see a “do you accept our terms and conditions?” popup, is it still enforceable?

Obviously, one could argue that by using my browser in a non-standard configuration that explicitly results in the non-appearance of “consent” popups that I’m deliberately turning a blind eye to the popups and accepting them by my continued use of their services1. Like: if I pour a McDonalds coffee on my lap having deliberately worn blinkers that prevent me reading the warning that it’s hot, it’s not McDonalds’ fault that I chose to ignore their helpful legally-recommended printed warning on the cup, right?2

But I’d counter that if a site chooses to rely on Javascript hosted by a third party in order to ask for consent, but doesn’t rely on that same third-party in order to provide the service upon which consent is predicated, then they’re setting themselves up to fail!

The very nature of the way the Internet works means that you simply can’t rely on the user successfully receiving content from a CDN. There are all kinds of reasons my browser might not get the Javascript required to show the consent dialog, and many of them are completely outside of the visitor’s control: maybe there was a network fault, or CDN downtime, or my browser’s JS engine was buggy, or I have a disability and the technologies I use to mitigate its impact on my Web browsing experience means that the dialog isn’t read out to me. In any of these cases, a site visitor using an unmodified, vanilla, stock web browser might visit a Bloomberg article and read it without ever being asked to agree to their terms and conditions.

Would that be enforceable? I hope you’ll agree that the answer is: no, obviously not!

It’s reasonably easy for a site to ensure that consent is obtained before providing services based on that consent. Simply do the processing server-side, ask for whatever agreement you need, and only then provide services. Bloomberg, like many others, choose not to do this because… well, it’s probably a combination of developer laziness and search engine optimisation. But my gut feeling says that if it came to court, any sensible judge would ask them to prove that the consent dialog was definitely viewed by and clicked on by the user, and from the looks of things: that’s simply not something they’d be able to do!

tl;dr: if you want to fight with Bloomberg and don’t want to go through their arbitration, simply say you never saw or never agreed to their terms and conditions – they can’t prove that you did, so they’re probably unenforceable (assuming you didn’t register for an account with them or anything, of course). This same recommendation applies to many, many other websites.

Footnotes

1 I’m confident that if it came down to it, Bloomberg’s lawyers would argue exactly this.

2 I see the plaintiff’s argument that the cups were flimsy and obviously her injuries were tragic, of course. But man, the legal fallout and those “contents are hot” warnings remain funny to this day.


kirby vs. this blog post

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A cute use of Javascript to make a fun post more-fun, helping to keep the Internet fun and weird.

I want to do more crap like this.

But meanwhile, I should show this post to my 8-year-old, who recently finished playing through a Kirby game and won’t stop talking about it. He might appreciate it, but perhaps in a different way to me.

Hackaday

Me, hacking challenging Javascript at work: “Damn, I need a holiday.”

Dan sits at a laptop in a hotel bar, a view of Barcelona out of the window behind him, a beer bottle alongside him.

Me, hacking challenging Javascript on sabbatical: “Ah, so relaxing.”


Draw Me a Comment!

Why must a blog comment be text? Why could it not be… a drawing?1

Red and black might be more traditional ladybird colours, but sometimes all you’ve got is blue.

I started hacking about and playing with a few ideas and now, on selected posts including this one, you can draw me a comment instead of typing one.

Just don’t tell the soup company what I’ve been working on, okay?

I opened the feature, experimentally (in a post available only to RSS subscribers2) the other week, but now you get a go! Also, I’ve open-sourced the whole thing, in case you want to pick it apart.

What are you waiting for: scroll down, and draw me a comment!

Footnotes

1 I totally know the reasons that a blog comment shouldn’t be a drawing; I’m not completely oblivious. Firstly, it’s less-expressive: words are versatile and you can do a lot with them. Secondly, it’s higher-bandwidth: images take up more space, take longer to transmit, and that effect compounds when – like me – you’re tracking animation data too. But the single biggest reason, and I can’t stress this enough, is… the penises. If you invite people to draw pictures on your blog, you’re gonna see a lot of penises. Short penises, long penises, fat penises, thin penises. Penises of every shape and size. Some erect and some flaccid. Some intact and some circumcised. Some with hairy balls and some shaved. Many of them urinating or ejaculating. Maybe even a few with smiley faces. And short of some kind of image-categorisation AI thing, you can’t realistically run an anti-spam tool to detect hand-drawn penises.

2 I’ve copied a few of my favourites of their drawings below. Don’t forget to subscribe if you want early access to any weird shit I make.

Good Food, Bad Authorisation

I was browsing (BBC) Good Food today when I noticed something I’d not seen before: a “premium” recipe, available on their “app only”:

Screenshot showing recipes, one of which is labelled "App only" and "Premium".

I clicked on the “premium” recipe and… it looked just like any other recipe. I guess it’s not actually restricted after all?

Just out of curiosity, I fired up a more-vanilla web browser and tried to visit the same page. Now I saw an overlay and modal attempting1 to restrict access to the content:

Overlay attempting to block content to the page beneath, saying "Try 1 year for just £9.99 and save 81%".

It turns out their entire effort to restrict access to their premium content… is implemented in client-side JavaScript. Even when I did see the overlay and not get access to the recipe, all I needed to do was open my browser’s debugger and run document.body.classList.remove('tp-modal-open'); for(el of document.querySelectorAll('.tp-modal, .tp-backdrop')) el.remove(); and all the restrictions were lifted.

What a complete joke.

Why didn’t I even have to write my JavaScript two-liner to get past the restriction in my primary browser? Because I’m running privacy-protector Ghostery, and one of the services Ghostery blocks by-default is one called Piano. Good Food uses Piano to segment their audience in your browser, but they haven’t backed that by any, y’know, actual security so all of their content, “premium” or not, is available to anybody.

I’m guessing that Immediate Media (who bought the BBC Good Food brand a while back and have only just gotten around to stripping “BBC” out of the name) have decided that an ad-supported model isn’t working and opted to monetise the site a little differently2. Unfortunately, their attempt to differentiate premium from regular content was sufficiently half-hearted that I glided straight through the paywall without even noticing; I’d never have spotted it at all had I not wondered why there was a “premium” badge on some of their recipes.

Screenshot from OpenSourceFood.com, circa 2007.
You know what website I miss? OpenSourceFood.com. It went downhill and then died around 2016, but for a while it was excellent.

Recipes probably aren’t considered a high-value target, of course. But I can tell you from experience that sometimes companies make basically this same mistake with much more-sensitive systems. The other year, for example, I discovered (and ethically disclosed) a fault in the implementation of the login forms of a major UK mobile network that meant that two-factor authentication could be bypassed entirely from the client-side.

These kinds of security mistakes are increasingly common on the Web as we train developers to think about the front-end first (and sometimes, exclusively). We need to do better.

Footnotes

1 The fact that I could literally see the original content behind the modal was a bit of a giveaway that they’d only hidden it, not actually protected it in any way.

2 I can see why they’d think that: personally, I didn’t even know there were ads on the site until I did the experiment above: turns out I was already blocking them, too, along with any anti-ad-blocking scripts that might have been running alongside.


Multi-Phase Maps in FoundryVTT

FoundryVTT is a fantastic Web-based environment for tabletop roleplaying adventures1 and something I particularly enjoy is the freedom for virtually-unlimited scripting. Following a demonstration to a fellow DM at work last week I promised to throw together a quick tutorial into scripting simple multi-phase maps using Foundry.2

Why multi-phase maps?

Animated battlemap which organically grows into a leafy flower over six stages.
For this demonstration, I’ll be using AtraxianBear’s Growing Flower Dungeon.

You might use a multi-phase map to:

  • Allow the development and expansion of a siege camp outside the fortress where the heroes are holed-up.3
  • Rotate through day and night cycles or different times of day, perhaps with different things to interact with in each.4
  • Gradually flood a sewer with rising water… increasing the range of the monster that dwells within.5
  • Re-arrange parts of the dungeon when the characters flip certain switches, opening new paths… and closing others.

I’ll use the map above to create a simple linear flow, powered by a macro in the hotbar. Obviously, more-complex scenarios are available, and combining this approach with a plugin like Monk’s Active Tile Triggers can even be used to make the map appear to dynamically change in response to the movement or actions of player characters!

Setting the scene

Create a scene, using the final state of the map as the background. Then, in reverse-order, add the previous states as tiles above it.

Not shown, but highly-recommended: lock each tile when you’re done placing it, so that you don’t accidentally interact with it when you mean to e.g. drag-select multiple actors.

Make a note of the X-position that your tiles are in when they’re where they’re supposed to be: we’ll “move” the tiles off to the side when they’re hidden, to prevent their ghostly half-hidden forms getting in your way as game master. We’ll also use this X-position to detect which tiles have already been moved/hidden.

Also make note of each tile’s ID, so your script can reference them. It’s easiest to do this as you go along. When you’re ready to write your macro, reverse the list, because we’ll be hiding each tile in the opposite order from the order you placed them.
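
If you forget to note the IDs down as you place the tiles, you can retrieve them afterwards from the browser console. This is just a convenience sketch using the same objects the macro below relies on (property names can differ slightly between Foundry versions):

// List every tile in the current scene alongside its current X-position:
console.table(
  canvas.tiles.placeables.map(t => ({ id: t.id, x: t.position.x }))
);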

Writing the script

Next, create a new script macro, e.g. by clicking an empty slot in the macro bar. When you activate this script, the map will move forward one phase (or, if it’s at the end, it’ll reset).

I find Foundry’s built-in script editor a little… small? So I write my scripts in my favourite text editor and then copy-paste.

Here’s the code you’ll need – the 👈 emoji identifies the places you’ll need to modify the code, specifically:

  1. const revealed_tiles_default_x = 250 should refer to the X-position of your tiles when they’re in the correct position.
  2. const revealed_tiles_modified_x = 2825 should refer to the X-position they’ll appear at “off to the right” of your scene. To determine this, just move one tile right until it’s sufficiently out of the way of the battlemap and then check what its X-position is! Or just take the default X-position, add the width of your map in pixels, and then add a tiny bit more.
  3. const revealed_tiles = [ ... ] is a list of the tile IDs of each tile that will be hidden, in turn. In my example there are five of them (the sixth and final image being the scene background).
const revealed_tiles_default_x = 250;   // 👈 X-position of tiles when displayed
const revealed_tiles_modified_x = 2825; // 👈 X-position of tiles when not displayed
const revealed_tiles = [
  '2xG7S8Yqk4x1eAdr',                   // 👈 list of tile IDs in order that they should be hidden
  'SjNQDBImHvrjAHWX',                   //     (top to bottom)
  'tuYg4FvLgIla1l21',
  'auX4sj64PWmkAteR',
  'yAL4YP0I4Cv4Sevt',
].map(t=>canvas.tiles.get(t));

/*************************************************************************************************/

// Get the topmost tile that is still visible:
const next_revealed_tile_to_move = revealed_tiles.find(t=>
  t.position.x == revealed_tiles_default_x
);

// If there are NO still-visible tiles, we must need to reset the map:
if( ! next_revealed_tile_to_move ) {
  // To reset the map, we go through each tile and put it back where it belongs -
  for(const tile of revealed_tiles){
    canvas.scene.updateEmbeddedDocuments("Tile", [ {
      _id: tile.id,
      x: revealed_tiles_default_x,
      hidden: false
    } ]);
  }
} else {
  // Otherwise, hide the topmost visible tile (and move it off to the side to help the GM) -
  canvas.scene.updateEmbeddedDocuments("Tile", [ {
    _id: next_revealed_tile_to_move.id,
    x: revealed_tiles_modified_x,
    hidden: true
  } ]);
}

I hope that the rest of the code is moderately self-explanatory for anybody with a little JavaScript experience, but if you’re just following this kind of simple, linear case then you don’t need to modify it anyway. But to summarise, what it does is:

  1. Finds the first listed tile that isn’t yet hidden (by comparing its X-position to the pre-set X-position).
  2. If there aren’t any such tiles, we must have hidden them all already, so perform a reset: iterate through each tile, set its X-position back to the pre-set value, and un-hide it.
  3. Otherwise, move the first not-hidden tile to the alternative X-position and hide it.

I hope you have fun with scripting your own multi-phase maps. Just don’t get so caught-up in your awesome scenes that you fail to give the players any agency!

Footnotes

1 Also, it’s on sale at 20% off this week to celebrate its fourth anniversary. Just sayin’.

2 I can neither confirm nor deny that a multi-phase map might be in the near future of The Levellers‘ adventure…

3 AtraxianBear has a great series of maps inspired by the 1683 siege of Vienna by the Ottomans that could be a great starting point for a “gradually advancing siege” map.

4 If you’re using Dungeon Alchemist as part of your mapmaking process you can just export orthographic or perspective outputs with different times of day and your party’s regular inn can be appropriately lit for any time of day, even if the party decides to just “wait at this table until nightfall”.

5 Balatro made a stunning map with rising water as a key feature: there’s a preview available.

×

Reversing Sandwiches

Fixing Sandwiches

RotatingSandwiches.com is a website showcasing animated GIF files of rotating sandwiches1. But it’s got a problem: 2 of the 51 sandwiches rotate the wrong way. So I’ve fixed it:

The Eggplant Parm Sub is one of two sandwiches whose rotation doesn’t match the rest.

My fix is available as a userscript on GreasyFork, so you can use your favourite userscript manager2 to install it and the rotation will be fixed for you too. Here’s the code (it’s pretty simple):

// ==UserScript==
// @name        Standardise sandwich rotation on rotatingsandwiches.com
// @namespace   rotatingsandwiches.com.danq.me
// @match       https://rotatingsandwiches.com/*
// @grant       GM_addStyle
// @version     1.0
// @author      Dan Q <https://danq.me/>
// @license     The Unlicense / Public Domain
// @description Some sandwiches on rotatingsandwiches.com rotate in the opposite direction to the majority. 😡 Let's fix that.
// ==/UserScript==

GM_addStyle('.q23-image-216, .q23-image-217 { transform: scaleX(-1); }');

Unless you’re especially agitated by irregular sandwich rotation, this is perhaps the most-pointless userscript ever created. So why did I go to the trouble?

Fixing Websites

Obviously, I’m telling you this as a vehicle to talk about userscripts in general and why you should be using them.

Userscripts fix and improve websites for you. Whether you want to force Reddit to stay “old Reddit” as long as possible, make Geocaching.com’s maps more-powerful, show Twitter images uncropped by default, re-add the “cached” link to Google search results, show prices on shopping sites in terms of hours-of-your-life of work they “cost”, or just automatically click Netflix’s “yes, I’m still here, keep playing” button for maximum binge-mode, there’s a script for you. They’re like tiny browser plugins.

Screenshot from Pinterest showing many kittens, not logged-in.
Want to get endless-scroll in Pinterest without getting nagged to make a Pinterest account? There’s a userscript for that.

But the real magic is being able to remix the web your way. With just a little bit of CSS or JavaScript experience, you can stop complaining that a website’s design has changed in some way you don’t like, or that some functionality you use isn’t as powerful or convenient as you’d like… and just fix it yourself.

A website I use disables scrolling until all their (tracking, advertising, etc.) JavaScript loads, and my privacy blocker blocks those files: I could cave and disable my browser’s privacy tools… but it was almost as fast to add setInterval(()=>document.body.style.overflow='', 200); to a userscript, and now it’s fixed.

Don’t want a Sports section on your BBC News homepage (not just the RSS feed!)? document.querySelector('a[href="/sport"]').closest('main > div').remove(). Sorted.
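
If you’d rather a one-liner like that ran automatically on every visit, it only takes a moment to wrap it up as a complete userscript. Here’s a rough sketch rather than a battle-tested implementation: the @match URL and the selector are assumptions based on the BBC homepage’s current markup, and may need tweaking if (when) the page changes:

// ==UserScript==
// @name        Hide the Sport section on the BBC News homepage
// @match       https://www.bbc.co.uk/news*
// @grant       none
// @version     1.0
// ==/UserScript==

// The homepage renders progressively, so poll until the Sport link exists,
// then remove the block that contains it and stop polling:
const timer = setInterval(() => {
  const sport = document.querySelector('a[href="/sport"]');
  if (!sport) return;                        // not rendered yet; try again shortly
  const block = sport.closest('main > div'); // the homepage section containing Sport
  if (block) block.remove();
  clearInterval(timer);
}, 200);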

I’m a huge fan of building your own tools to “scratch your own itch”. Userscripts are a highly accessible introduction to doing so that even beginner programmers can get on board with and start getting value from. More-advanced scripts can do immensely clever and powerful things, but even if you just use them to apply a few light CSS touches to your favourite websites, that’s still a win.

Footnotes

1 Remember when a website’s domain name used to be connected to what it was for? RotatingSandwiches.com does.

2 I favour Violentmonkey.

×

April Features!

I’m testing a handful of highly-experimental new features on my personal website using multivariate (“A/B”) testing.

Screenshot of the recent "Quickly Solving JigsawExplorer Puzzles" blog post with a new "Dark mode" switch hovering over it.
“Dark Mode” is just one of the new features I’m testing out.

If you visit within the next day or so you’re likely to be randomly-selected to try out one of them. (If you’re not selected, you can manually enable one of the experiments.)

I’d love to hear your feedback on these Very Serious New Features! Let me know which one(s) you see and whether you think they should become permanent fixtures on my site.