Known Leaders is an open-source
program that combines Wikidata with a random generator to come up with almost-invariably inaccurate but sometimes hilarious facts. Jim Kang came up with it during Recurse Center’s Never Graduate Week. Go have a play, or read more about how and why he made it.
I just spent a lightweight week in Rome with fellow members of Automattic’s Team Fire.
Among our goals for the week was an attempt to strengthen the definition of who our team are, what we work on, and how and why we do so. That’s
basically a team-level identity, mission, vision, and values, right?
We were missing two members of our team, but one was able to remote-in (the other’s on parental leave!).
The cards sat on my ‘plane tickets for a fortnight because it was just about the only way I’d remember to pack them.
Normally when you play Dixit, you select a card from your hand – each shows a unique piece of artwork – and try to describe it in a way that’s precise enough that some
of the other players will later be able to pick it out of a line-up, but ambiguous enough that not all the other players will. It’s a delicate balancing act. Even when our old
Geek Night was in full swing we didn’t use to play it often because our well-established group’s cornucopia of in-jokes and references made it trivially easy to “target”
your descriptions at specific players1, but it’s still a solid icebreaker activity.
Can you see your team’s values symbolised in any Dixit cards?
Perhaps it was the fantasy artwork that inspired us or maybe it just says something about how my team sees themselves, but what we came up with had a certain… swords-and-sorcery… even
Dungeons & Dragons… feel to it.
The projects my team are responsible for aren’t actually monsters, but they can be complex, multifaceted, and unintuitive. And have a high AC.
Our team’s new identity isn’t finalised, but I love the fact that we’ve been able to inject a bit of fun and whimsy into it. At our last draft, my team looks to be defined as comprising:
Gareth, Level 62 Pathfinder, leading the way through the wilds
Bero, Level 5 Battlesmith, currently lost in the void
Dan (me!), Level 5 Arcane Trickster, breaking locks and stealing treasure
Cem, Level 4 Dragonslayer, smashing doors and bugs alike
Lae, Level 7 Pirate, seabound rogue with eyes on the horizon
Kyle, Level 5 Apprentice Bard, master of words and magic
Simran, Level 6 Apprentice Code Witch, weaving spells from nature
I think that’s pretty awesome.
Footnotes
1 Also: I don’t own any of the expansion packs and playing with the same cards over and
over again gets a bit samey.
2 The “levels” are simply the number of years each teammate has been an Automattician,
plus one.
I’m off work sick today: it’s just a cold, but it’s had a damn good go at wrecking my lungs and I feel pretty lousy. You know that state where you’ve got too much of a brain-fog to trust
yourself with production systems but you still want to write code (or is that just me?)? Well, this morning I threw together a really,
really stupid project which you can play online here.
It’s a board game. Well, the digital edition of one. Also, it’s not very good.
It’s inspired by a toot by Mason “Tailsteak” Williams (whom I’ve mentioned before once or
twice). At first I thought I’d try to calculate the odds of winning at his proposed game, or how many times one might expect to play before winning,
but I haven’t the brainpower for that in my snot-addled state. So instead I threw together a terrible, terrible digital implementation.
Go play it if, like me, you’ve got nothing smarter that your brain can be doing today.
I swear that I used to be good at Mastermind when I was a kid. But now, when it’s my turn to break
the code that one of our kids has chosen, I fail more often than I succeed. That’s no good!
If you didn’t have me pegged as a board gamer… where the hell have you been?
Mastermind and me
Maybe it’s because I’m distracted; multitasking doesn’t help problem-solving. Or maybe it’s because we’re playing “Super” Mastermind, which differs from the version I had as a child in that
eight (not six) peg colours are available and secret codes are permitted to have duplicate peg colours. These changes increase the possible permutations from 360 (that’s 6 × 5 × 4 × 3, with no repeats allowed) to 4,096 (8⁴), but the
number of guesses allowed only goes up from 8 to 10. That’s hard.
The set I had as a kid was like this, I think. Photo courtesy ZeroOne; CC-BY-SA license.
Hey, that’s an idea. Let’s crack the code… by writing some code!
This online edition plays a lot like the version our kids play, although the peg colours are different. Next guess should be an
easy solve!
Representing a search space
The search space for Super Mastermind isn’t enormous, and it lends itself to some highly-efficient computerised storage.
There are 8 different colours of peg. We can express these colours as a number between 0 and 7, in three bits of binary, like this:
| Decimal | Binary | Colour |
|---------|--------|--------|
| 0 | 000 | Red |
| 1 | 001 | Orange |
| 2 | 010 | Yellow |
| 3 | 011 | Green |
| 4 | 100 | Blue |
| 5 | 101 | Pink |
| 6 | 110 | Purple |
| 7 | 111 | White |
There are four pegs in a row, so we can express any given combination of coloured pegs as a 12-bit binary number. E.g. 100 110 111 010 would represent the
permutation blue (100), purple (110), white (111), yellow (010). The total search space, therefore, is the range of numbers from
000000000000 through 111111111111… that is: decimal 0 through 4,095:
| Decimal | Binary | Colours |
|---------|--------|---------|
| 0 | 000000000000 | Red, red, red, red |
| 1 | 000000000001 | Red, red, red, orange |
| 2 | 000000000010 | Red, red, red, yellow |
| … | … | … |
| 4092 | 111111111100 | White, white, white, blue |
| 4093 | 111111111101 | White, white, white, pink |
| 4094 | 111111111110 | White, white, white, purple |
| 4095 | 111111111111 | White, white, white, white |
Whenever we make a guess, we get feedback in the form of two variables: each peg that is in the right place is a bull; each that represents a peg in the secret code but
isn’t in the right place is a cow (the names come from Mastermind’s precursor, Bulls & Cows). Four bulls
would be an immediate win (lucky!); any other combination of bulls and cows is still valuable information. Even a zero-score guess is valuable – potentially very valuable! – because it
tells the player that none of the pegs they’ve guessed appear in the secret code.
If one of Wordle’s parents was Scrabble, then this was the other. Just ask its Auntie Twitter.
Solving with Javascript
The latest versions of Javascript support binary literals and bitwise operations, so we can encode and decode between arrays of four coloured pegs (numbers 0-7) and the number 0-4,095
representing the guess, as shown below. Decoding uses an AND bitmask to isolate the requisite three bits, then divides by their place value to shift them down into the range 0-7. Encoding is just a reduce
function that bitshift-concatenates the numbers together.
/**
 * Decode a candidate into four peg values by using binary bitwise operations.
 */
function decodeCandidate(candidate) {
  return [
    (candidate & 0b111000000000) / 0b001000000000,
    (candidate & 0b000111000000) / 0b000001000000,
    (candidate & 0b000000111000) / 0b000000001000,
    (candidate & 0b000000000111) / 0b000000000001
  ];
}

/**
 * Given an array of four integers (0-7) to represent the pegs, in order, returns
 * a single-number candidate representation.
 */
function encodeCandidate(pegs) {
  return pegs.reduce((a, b) => (a << 3) + b);
}
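As a quick sanity-check, the example permutation from earlier – blue, purple, white, yellow – round-trips cleanly through these two functions:

encodeCandidate([4, 6, 7, 2]); // => 2490, i.e. 0b100110111010
decodeCandidate(2490);         // => [4, 6, 7, 2]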
With this, we can simply:
1. Produce a list of candidate solutions (an array containing numbers 0 through 4,095).
2. Choose one candidate, use it as a guess, and ask the code-maker how it scores.
3. Eliminate from the candidate solutions list all solutions that would not score the same number of bulls and cows for the guess that was made.
4. Repeat from step #2 until you win.
Step 3’s the most important one there. Given a function getScore( solution, guess ) which returns an array of [ bulls, cows ] a given guess would
score if faced with a specific solution, that code would look like this (I’m convinced there must be a more-performant way to eliminate candidates from the list with XOR
bitmasks, but I haven’t worked out what it is yet):
/**
 * Given a guess (array of four integers from 0-7 to represent the pegs, in order)
 * and the number of bulls (number of pegs in the guess that are in the right place)
 * and cows (number of pegs in the guess that are correct but in the wrong place),
 * eliminates from the candidates array all guesses invalidated by this result.
 * Returns true if successful, false otherwise.
 */
function eliminateCandidates(guess, bulls, cows) {
  const newCandidatesList = data.candidates.filter(candidate => {
    const score = getScore(candidate, guess);
    return (score[0] == bulls) && (score[1] == cows);
  });
  if (newCandidatesList.length == 0) {
    alert('That response would reduce the candidate list to zero.');
    return false;
  }
  data.candidates = newCandidatesList;
  chooseNextGuess();
  return true;
}
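The getScore( solution, guess ) function itself isn’t shown here (it’s in the linked source code), but a minimal sketch of how such a scorer could work – assuming, as above, a candidate solution encoded as a single number and a guess expressed as an array of four pegs – might be:

/**
 * A sketch of a bulls-and-cows scorer: not the implementation from the linked
 * source, just one way it could work. Takes a candidate solution (number 0-4095)
 * and a guess (array of four pegs, 0-7) and returns [ bulls, cows ].
 */
function getScore(solution, guess) {
  const solutionPegs = decodeCandidate(solution);
  let bulls = 0, cows = 0;
  const unmatchedSolution = [], unmatchedGuess = [];
  // First pass: pegs of the right colour in the right place are bulls.
  for (let i = 0; i < 4; i++) {
    if (solutionPegs[i] == guess[i]) {
      bulls++;
    } else {
      unmatchedSolution.push(solutionPegs[i]);
      unmatchedGuess.push(guess[i]);
    }
  }
  // Second pass: remaining guess pegs matching a remaining solution peg are cows;
  // each solution peg is consumed at most once, so duplicates aren't double-counted.
  unmatchedGuess.forEach(peg => {
    const index = unmatchedSolution.indexOf(peg);
    if (index != -1) {
      cows++;
      unmatchedSolution.splice(index, 1);
    }
  });
  return [ bulls, cows ];
}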
I continued in this fashion to write a full solution (source code). It uses ReefJS for
component rendering and state management, and you can try it for yourself right in your web browser. If you play against the online version I mentioned you’ll need to transpose the colours in your head: the physical version I play with the kids has pink and
purple pegs, but the online one replaces these with brown and black.
Testing the solution
Let’s try it out against the online version:
As expected, my code works well enough to win the game every time I’ve tried, both against computerised and in-person opponents. So – unless you’ve been actively thinking about the
specifics of the algorithm I’ve employed – it might surprise you to discover that… my solution is very-much a suboptimal one!
My code has only failed to win a single game… and that turned out to be because my opponent, playing overexcitedly, cheated in the third turn. To be fair, my code didn’t lose
either, though: it identified that a mistake must have been made, and we declared the round void once we’d found the problem.
My solution is suboptimal
A couple of games in, the suboptimality of my solution became pretty visible. Sure, it still won every game, but it was a blunt instrument, and anybody who’s seriously thought about
games like this can tell you why. You know how when you play e.g. Wordle (but not in “hard mode”) you sometimes want to type in a word that can’t possibly be the
solution because it’s the best way to rule in (or out) certain key letters? This kind of strategic search space bisection reduces the mean number of guesses you need to solve the
puzzle, and the same’s true in Mastermind. But because my solver will only propose guesses from the list of candidate solutions, it can’t make this kind of improvement.
My blog post about Break Into Us used a series of visual metaphors to show search space dissection, including this one. If you missed
it, it might be worth reading.
Search space bisection is also used in my adversarial hangman game, but in this case the aim is to split the search space in such a way that no
matter what guess a player makes, they always find themselves in the larger remaining portion of the search space, to maximise the number of guesses they have to make. Y’know, because
it’s evil.
A great first guess, assuming you’re playing against a random code and your rules permit the code to have repeated colours, is a “1122” pattern.
There are mathematically-derived heuristics to optimise Mastermind strategy. The first
of these came from none other than Donald Knuth (legend of computer science, mathematics, and pipe organs) back in 1977. His solution,
published at probably the height of the game’s popularity in the amazingly-named Journal of Recreational Mathematics, guarantees a solution to the six-colour version of the
game within five guesses. Ville [2013] derived an
optimal solution for a seven-colour variant, but demonstrated how rapidly the tree of possible moves grows and the need for early pruning – even with powerful modern computers – to
conserve memory. It’s a very enjoyable and readable paper.
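For a flavour of what that looks like in practice, here’s a rough sketch of the minimax guess-selection at the heart of Knuth’s approach, reusing the helpers from my code above (an illustration of the idea, not a line-for-line rendering of his algorithm): consider every possible guess – not just the remaining candidates – and pick the one whose worst-case answer leaves the fewest candidates standing.

/**
 * A rough, unoptimised sketch of Knuth-style minimax guess selection (not my
 * solver's strategy, which only ever guesses remaining candidates).
 */
function chooseMinimaxGuess(candidates) {
  let bestGuess = null, bestWorstCase = Infinity;
  for (let guess = 0; guess < 4096; guess++) {
    const guessPegs = decodeCandidate(guess);
    // Bucket the remaining candidates by the [ bulls, cows ] score this guess would earn.
    const buckets = {};
    candidates.forEach(candidate => {
      const key = getScore(candidate, guessPegs).join(',');
      buckets[key] = (buckets[key] || 0) + 1;
    });
    // This guess's worst case is its largest bucket: the most candidates that
    // could survive the code-maker's answer.
    const worstCase = Math.max(...Object.values(buckets));
    if (worstCase < bestWorstCase) {
      bestWorstCase = worstCase;
      bestGuess = guess;
    }
  }
  return bestGuess;
}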
But for my purposes, it’s unnecessary. My solver routinely wins within six, maybe seven guesses, and by nonchalantly glancing at my phone in-between my guesses I can now reliably guess
our children’s codes quickly and easily. In the end, that’s what this was all about.
I really love Dungeondraft, an RPG battle map generator. It’s got great compatibility with
online platforms like Foundry VTT and Roll20, but if you’re looking to make maps for tabletop play,
there are a few tips I can share:
Tabletop players can’t zoom in and will appreciate you printing with good contrast.
Planning and designing
Dungeondraft has (or can be extended with) features to support light levels and shadow-casting obstructions, openable doors and windows, line-of-sight, etc.: great to have when you’re
building for Internet-enabled tabletops, but pointless when you’re planning to print out your map! Instead:
Think about scale: I’m printing to A4 sheets and using inch-size squares, so every 11 × 8 squares equates to one sheet of paper. Knowing this, I can multiply-up to a
whole number of sheets of paper – a 22 × 16-square map, for example, fills exactly four sheets – and this informs my decisions about how to best make use of the maps (and what will and won’t fit on my dining table!).
Focus on legibility: Your printer probably won’t have the same kind of resolution as your screen, and your players can’t “zoom in” to get details. Play with the grid
styles (under Map Settings) to find what works best for you, and try not to clash with your floor patterns. If you’re printing in monochrome, use the “Printer-Friendly” camera filter
(also under Map Settings, or in the Export Options dialog) to convert to gorgeous line-art. Make sure critical elements have sufficient contrast that they’ll stand out when printed or
your players might walk right over that chest, campfire, or bookshelf.
Think about exposure: You don’t get digital “fog of war” on the tabletop! Think about how you’re going to reveal the map to your players: plan to print in multiple
sections to put together, jigsaw style, or have card to “cover” bits of the map. Think about how the tool can help you here: e.g. if you’ve got multiple buildings the players can
explore, use a higher “level” or roof layer to put roofs on your buildings, then print the relevant parts of that level separately: now you’ve got a thematic cover-up that you can
remove to show the insides of the building. Go the other way around for secret doors: print the empty wall on your main map (so players can’t infer the location of the secret door by
the inclusion of a cover-up) and the secret door/passage on the overlay, so you can stick it onto the map when they find it.
If you’re printing in black and white, line art can be a gorgeous look.
Printing it out
There’s no “print” option in Dungeondraft, so – especially if your map spans multiple “pages” – you’ll need a multi-step process to print it out. With a little practice, it’s not too
hard or time-consuming, though:
Gimp makes light work of converting a PNG into a PDF.
Export your map (level by level) from Dungeondraft as PNG files. The default settings are fine, but pay attention to the
“Overlay level” setting if you’re using smart or complex cover-ups as described above.
To easily spread your map across multiple pages, you’ll need to convert it to a PDF. I’m using Gimp to do this. Simply open the PNG in Gimp, make any post-processing/last minute changes that you couldn’t manage in Dungeondraft, then click File >
Export As… and change the filename to have a .pdf extension. You could print directly from Gimp, but in my experience PDF reader software does a much better job at multi-page printing.
Check the print preview before you click the button!
Open your PDF in an appropriate reader application with good print management. I’m using Foxit, which is… okay? Print it, selecting “tile large pages” to tell it to print across multiple sheets. Assuming you’ve produced a map an appropriate size
for your printer’s margins, your preview should be perfect. If not, you can get away with reducing the zoom level by up to a percent or two without causing trouble for your miniatures.
If you’d like the page breaks to occur at specific places (for exposure/reveal reasons), go back to Gimp and pad one side of the image by increasing the canvas size.
Check the level of “overlap” specified: I like to keep mine low and use the print margins as the overlapping part of my maps when I tape them together, but you’ll want to see how your
printer behaves and adapt accordingly.
The overlap provides stability, rigidity, and an explanation as to exactly what that character tripped over when they rolled a critical fail on a DEX check.
If you’re sticking together multiple pages to make a single large map, trim off the bottom and right margins of each page: if you printed with cut marks, this is easy enough even
without a guillotine. Then tape them together on the underside, taking care to line-up the features on the map (it’s not just your players who’ll appreciate a good, visible grid: it’s
useful when lining-up your printouts to stick, too!).
I keep my maps rolled-up in a box. If you do this too, just be ready with some paperweights to keep the edges from curling when you unfurl them across your gaming table. Or cut into
separate rooms and mount to stiff card for that “jigsaw” effect! Whatever works best for you!
Any hefty tome, e.g. the 5e Player’s Handbook, can act as a paperweight.
New rules for old games! The Board Game Remix Kit is a collection of tips, tweaks, reimaginings and completely new games that you can play with the board and
pieces from games you might already own: Monopoly, Cluedo, Scrabble and Trivial Pursuit.
The 26 rule tweaks and new games include:
Full Houses: poker, but played with Monopoly properties
Citygrid: a single-player city-building game
Use Your Words: Scrabble with storytelling
Them’s Fightin’ Words: a game of making anagrams, and arguing about which one would win in a fight
Hunt the Lead Piping: hiding and searching for the Cluedo pieces in your actual house
Guess Who Done It: A series of yes/no questions to identify the murderer (contributed by Meg Pickard)
Zombie Mansion: use the lead piping to defend the Cluedo mansion
Judy Garland on the Moon with a Bassoon: a drawing game that uses the answers to trivia questions as prompts
The Board Game Remix Kit was originally released in 2010 by the company Hide&Seek (which closed in 2014). We are releasing it here as a PDF (for
phones/computers) and an EPUB (for ereaders) under a CC-BY-SA license.
If you enjoy the Kit and can afford it, please consider a donation to the World Health Organisation’s COVID-19
Response Fund.
Confined to your house? What a great opportunity to play board games with your fellow confinees.
Only got old family classics like Monopoly, Cluedo and Scrabble? Here’s a guide to mixing them up into new, fun, and highly-playable alternatives. Monopoly certainly needs it.
A long while ago, inspired by Nick Berry’s analysis of optimal Hangman strategy, I worked it backwards to find the
hardest words to guess when playing Hangman. This week, I showed these to my colleague Grace – who turns out to be a fan of word puzzles – and our conversation inspired me to go a little deeper. Is it possible, I
thought, for me to make a Hangman game that cheats by changing the word it’s thinking of based on the guesses you make in order to make it as difficult as possible for you to
win?
The principle is this: every time the player picks a letter, but before declaring whether or not it’s found in the word –
Make a list of all possible words that would fit into the boxes from the current game state.
If there are still lots of them, that’s fine: let the player’s guess go ahead.
But if the player’s managing to narrow down the possibilities, attempt to change the word that they’re trying to guess! The new word must be:
Legitimate: it must still be the same length, have correctly-guessed letters in the same places, and contain no letters that have been declared to be incorrect
guesses.
Harder: after resolving the player’s current guess, the number of possible words must be larger than the number of possible words that would have
resulted otherwise.
Yeah, you’re screwed now.
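Put together, a minimal sketch of that cheat step – not my actual implementation (the source code is linked at the end of this post), but the same idea – might look something like this:

/**
 * Given every word still consistent with the board, decide how to answer the
 * player's guess of `letter`: bucket the candidate words by the board pattern
 * the guess would reveal, then adopt the biggest bucket.
 */
function resolveGuess(candidates, letter) {
  const buckets = {};
  candidates.forEach(word => {
    // Words without the letter all share the all-blanks pattern; words with it
    // are split up by exactly which positions the letter occupies.
    const pattern = [...word].map(c => (c == letter ? letter : '_')).join('');
    (buckets[pattern] = buckets[pattern] || []).push(word);
  });
  // Leave the player facing as many possibilities as we can - occasionally
  // that means conceding a "freebie" hit!
  return Object.values(buckets).reduce((a, b) => (b.length > a.length ? b : a));
}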
You might think that this strategy would just involve changing the target word so that you can say “nope” to the player’s current guess. That happens a lot, but it’s not always the
case: sometimes, it’ll mean changing to a different word in which the guessed letter also appears. Occasionally, it can even involve changing from a word in which the guessed
letter didn’t appear to one in which it does: that is, giving the player a “freebie”. This may seem counterintuitive as a strategy, but it sometimes makes sense: if
saying “yeah, there’s an E at the end” increases the number of possible words that it might be, compared to saying “no, there are no Es”, then this is the right move for a
cheating hangman.
Playing against a cheating hangman lends itself to devising new strategies as a player, too, although I haven’t yet looked deeply into this. But logically, it seems that the
optimal strategy against a cheating hangman might involve making guesses that force the hangman to bisect the search space: knowing that they’re always going to adapt towards the
largest set of candidate words, a perfect player might be able to make guesses to narrow down the possibilities as fast as possible, early on, only making guesses that they actually
expect to be in the word later (before their guess limit runs out!).
The game is brutally-difficult, but surprisingly fun, and you can have it tell you when and how it cheats so you can begin to understand its strategy.
I also find myself wondering how easily I could adapt this into a “helpful hangman”: a game which would always change the word that you’re trying to guess in order to try to make you
win. This raises the possibility of a whole new game, “suicide hangman”, in which the player is trying to get themselves killed and so picks letters that can’t
possibly be in the word, while the hangman tries to pick words in which those letters can be found – except where doing so would make it obvious which letters the player must avoid next.
Maybe another day.
In the meantime, you’re welcome to go play the game (and let me know what you think, below!) and, if you’re of such an inclination, read the source code. I’ve used some seriously ugly techniques to make this work, including regular expression metaprogramming (using
regular expressions to write regular expressions), but the code should broadly make sense if you want to adapt it. Have fun!
Update 26 September 2019, 16:23: I’ve now added “helpful mode”, where the computer tries to cheat on your behalf
rather than against you, but it’s not as helpful as you’d think because it assumes you’re playing optimally and have already memorised the dictionary!
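(In terms of the sketch above, I’d guess the mechanical change for helpful mode is tiny – flip the comparison so the hangman adopts whichever bucket leaves the player facing the fewest possibilities rather than the most – though that’s an assumption about the approach, not a description of the actual code.)

// Hypothetical "helpful mode" tweak to the sketch above: prefer the SMALLEST bucket.
return Object.values(buckets).reduce((a, b) => (b.length < a.length ? b : a));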
I’m increasingly convinced that Friedemann Friese’s 2009 board game Power Grid: Factory Manager (BoardGameGeek) presents gamers with a highly-digestible model of the energy economy in a capitalist society.
In Factory Manager, players aim to financially-optimise a factory over time, growing production and delivery capacity through upgrades in workflow, space, energy, and staff
efficiency. An essential driving factor in the game is that energy costs will rise sharply throughout. Although it’s not always clear in advance when or by how much, this increase in
the cost of energy is always at the forefront of the savvy player’s mind as it’s one of the biggest factors that will ultimately impact their profit.
8 $money per $unit of electricity I use? That’s a rip off! Or a great deal! I don’t know!
Given that players aim to optimise for turnover towards the end of the game (and, as a secondary goal for the tie-breaker, at a specific point five rounds after the game begins) and not
for business sustainability, the game – perhaps accidentally – represents reasonably well the idea of “flipping” a business for a profit. Like many business-themed games, it favours
capitalism… which makes sense – money is an obvious and quantifiable way to keep score in a board game! – but it still bears repeating.
There’s one further mechanic in Factory Manager that needs to be understood: a player’s ability to control the order in which they take their turn and their capacity to
participate in the equipment auctions that take place at the start of each round is determined by their manpower-efficiency in the previous round. That is: a player who
operates a highly-automated factory running on a skeleton staff benefits from being in the strongest position for determining turn order and auctions in their next turn.
My staff room is empty. How about yours?
The combination of these rules leads to an interesting twist: in the final turn – when energy costs are at their highest and there’s no benefit to holding-back staff to
monopolise the auction phase in the nonexistent subsequent turn – it often makes most sense strategically to play what I call the “sweatshop strategy”. The player switches off
the automated production lines to save on the electricity bill, drags in all the seasonal workers they can muster, dusts off the old manpower-inefficient machines mouldering in the
basement, and gets their army of workers cranking out widgets!
With indefinitely-increasing energy prices and functionally-flat staff costs, the rules of the game mean that play will always eventually reach the point at which it is most cost-effective
to switch to cheap labour rather than robots. But Factory Manager’s fixed duration means that, in many games, this point comes for all players at the same
predictable moment: a tipping point at which the free market backslides from automation to human labour to keep itself alive.
There are parallels in the real world. Earlier this month, Tim Watkins wrote:
The demise of the automated car wash may seem trivial next to these former triumphs of homo technologicus but it sits on the same continuum. It is just one of a gathering
list of technologies that we used to be able to use, but can no longer express (through market or state spending) a purpose for. More worrying, however, is the direction in which we
are willingly going in our collective decision to move from complexity to simplicity. The demise of the automated car wash has not followed a return to the practice of people
washing their own cars (or paying the neighbours’ kid to do it). Instead we have more or less happily accepted serfdom (the use of debt and blackmail to force people to work) and
slavery (the use of physical harm) as a reasonable means of keeping the cost of cleaning cars to a minimum (similar practices are also keeping the cost of food down in the UK).
This, too, is precisely what is expected when the surplus energy available to us declines.
I love Factory Manager, but after reading Watkins’ article, it’ll probably feel a little different to play it, now. It’s like that moment when, while reading the rules, I first
poured out the pieces of Puerto Rico. Looking through them, I thought for a moment about what the “colonist”
pieces – little brown wooden circles brought to players’ plantations on ships in a volume commensurate with the commercial demand for manpower – represented. And that realisation adds
an extra message to the game.
Beneath its (fabulous) gameplay, Factory Manager carries a deeper meaning, inviting a discussion about capitalism, environmentalism, energy, and
sustainability. And as our society falters in its ability to fulfil the techno-utopian dream, that’s perhaps a discussion we need to be having.
Seriously, this film is awesome.
But for now, go watch Sorry to Bother You, where you’ll find further parallels… and at least you’ll get to laugh as you do
so.
VICTORIA, BC – After data collection from thousands of parties across the country, reports are coming in that the annoying person who brings an acoustic guitar to a party is now
officially less hated than the person who expects everyone to sit down and play Cards Against Humanity.
“At least when some asshole starts playing the tune to Wonderwall you can get up and go to a different room,” says Michelle Kalleta, 22, avid partygoer. “Jerks who show up with any
sort of card game expect everyone to play, and the last thing I want to do at a party is sit in a circle with a bunch of people who think they’re hilariously edgy.”
For the last few months, I’ve been GMing a GURPS campaign (that was originally a Warhammer Fantasy Roleplay 1st-edition campaign, in turn built upon a mixture of commercially published and homegrown
modules, including, in turn, an AD&D module…) for a few friends.
So far, it’s included such gems as a player-written poem in a fictional language, another player’s drawing of the most-cinematic action sequence they’ve experienced so far… and the opportunity, during a play session that
coincided with a player’s birthday, to explain the layout of a ruined tower by presenting them with a cake baked into the shape of the terrain.
“So we’re… here?” asked a player, jabbing with his finger at the cream-filled section of the tower at which he was standing.
But mostly I wanted to make this post so that I had a point of context in case I ever get around to open-sourcing some of the digital tools I’ve been developing to help streamline our
play sessions. For example, most of our battle maps and exploration are presented on a ‘board’ comprised of a flat screen monitor stripped of its stand and laid on its back, connected
via the web to a tool that allows me to show, hide, or adapt parts of it from my laptop or mobile phone. Player stats, health, and cash – as well as the date, time, position of the sun,
and the phases of the moons – are similarly tracked and are available via any player’s mobile phone at any time.
A rare instance of us using a paper-based battle map. Despite the fact that we play in-person, we’ve used digital tools to save table space!
These kinds of tools have been popular for ‘long-distance’/Internet roleplaying for years, but I think there’s a lot of potential in locally-linked, tabletop-enhancing
(rather than replacing) tools that deliver some of the same benefit to the (superior, in my opinion) experience of ‘proper’ face-to-face adventure gaming. Now, at least,
when I tell you for example about some software I wrote to help calculate the position of the sun in the sky of a fictional world, you’ll have a clue why I would do such a thing in the
first place.
What’s the hardest word to guess, when playing hangman? I’ll come back to that.
Whatever could the missing letters be?
Last year, Nick Berry wrote a fantastic blog post about the optimal strategy for Hangman. He showed that the best guesses
to make to get your first “hit” in a game of hangman are not the most-commonly occurring letters in written English, because these aren’t the most commonly-occurring
letters in individual words. He also showed that the first guesses should be adjusted based on the length of the word (the most common letter in 5-letter words is ‘S’, but the most
common letter in 6-letter words is ‘E’). In short: hangman’s a more-complex game than you probably thought it was! I’d like to take his work a step further, and work out which word is
the hardest word: that is, assuming you’re playing an optimal strategy, which word takes the most guesses?
The rules of hangman used to be a lot more brutal. Nowadays, very few people die as a result of the game.
First, though, we need to understand how hangman is perfectly played. Based on the assumption that the “executioner” player is choosing words randomly, and that no clue is given as to
the nature of the word, we can determine the best possible move for all possible states of the game by using a data structure known as a tree. Suppose our opponent has chosen a
three-letter word, and has drawn three dashes to indicate this. We know from Nick’s article that the best letter to guess is A. And then, if our guess is wrong, the next
best letter to guess is E. But what if our first guess is right? Well, then we’ve got an “A” in one or more positions on the board, and we need to work out the next best
move: it’s unlikely to be “E” – very few three-letter words have both an “A” and an “E” – and of course what letter we should guess next depends entirely on what positions
the letters are in.
There are billions of possible states of game play, but you can narrow them down quickly with strategic guessing.
What we’re actually doing here is a filtering exercise: of all of the possible letters we could choose, we’re considering what possible results each could have. Then for
each of those results, we’re considering what guesses we could make next, and so on. At each stage, we compare all of the possible moves to a dictionary of all possible
words, and filter out all of the words it can’t be: after our first guess in the diagram above, if we guess “A” and the board now shows “_ A _”, then we know that of the
600+ three-letter words in the English language, we’re dealing with one of only about 134. We further refine our guess by playing the odds: of those words, more of them have a “C” in
than any other letter, so that’s our second guess. If it has a C in, that limits the options further, and we can plan the next guess accordingly. If it doesn’t have a C
in, that still provides us with valuable information: we’re now looking for a three-letter word with an A in the second position and no letter C: that cuts it
down to 124 words (and our next guess should be ‘T’). This tree-based mechanism for working out the best moves is comparable to that used by other game-playing computers. Hangman is
simple enough that it can be “solved” by contemporary computers (like draughts –
solved in 2007 – but unlike chess: while modern chess-playing
computers can beat humans, it’s still theoretically possible to build future computers that will beat today’s computers).
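To make that filtering step concrete, here’s a minimal sketch of the idea (my actual solver, linked below, is written in Ruby; this Javascript rendering is illustrative only):

/**
 * Filter the dictionary down to the words still possible given the board so far
 * (e.g. '_a_') and the letters already guessed, then pick the unguessed letter
 * that appears in the most of them.
 */
function bestNextGuess(dictionary, board, guessed) {
  const candidates = dictionary.filter(word =>
    word.length == board.length &&
    [...word].every((letter, i) =>
      board[i] == '_'
        ? !guessed.includes(letter) // a guessed letter would already be revealed
        : board[i] == letter        // revealed positions must match exactly
    )
  );
  // Count how many of the remaining candidate words contain each unguessed letter…
  const counts = {};
  candidates.forEach(word => {
    new Set(word).forEach(letter => {
      if (!guessed.includes(letter)) counts[letter] = (counts[letter] || 0) + 1;
    });
  });
  // …and guess whichever letter appears in the most of them.
  return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
}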
Zen Hangman asks the really important questions. If a man has one guess left and refuses to pick a letter, does he live forever, or not at all?
Now that we can simulate the way that a perfect player would play against a truly-random executioner, we can use this to simulate games of hangman for every possible word
(I’m using version 0.7 of this British-English dictionary).
In other words, we set up two computer players: the first chooses a word from the dictionary, the second plays “perfectly” to try to guess the word, and we record how many guesses it
took. So that’s what I did. Here’s the Ruby code I used. It’s heavily-commented and
probably pretty understandable/good learning material, if you’re into that kind of thing. Or if you fancy optimising it, there’s plenty of scope for that too (I knocked it out on a
lunch break; don’t expect too much!). Or you could use it as the basis to make a playable hangman game. Go wild.
The hardest three-letter hangman words. “Sly” is particularly… well, sly.
Running the program, we can see that the hardest three-letter word is “xxv”, which would take 22 guesses (20 of them wrong!) to get. But aside from the roman numeral for 25, I don’t
think that “xxv” is actually a word. Perhaps my dictionary’s not very good. “Oak”, though, is definitely a word, and at 20 guesses (17 wrong), it’s easily enough to hang your opponent
no matter how many strokes it takes to complete the gallows.
Interestingly, “oaks” is an easier word than “oak” (although it’s still very difficult): the addition of an extra letter to a word does not necessarily make it harder, especially when that letter
is a common one.
There are tougher words in the four-letter set, like the devious “quiz”, “jazz”, “zinc”, and “faux”. Pick one of those and your opponent – unless they’ve seen this blog post! – is
incredibly unlikely to guess it before they’re swinging from a rope.
“Hazing foxes, fucking cockily” is not only the title of a highly-inappropriate animated film, but also a series of very challenging Hangman words.
As we get into the 5, 6, and 7-letter words you’ll begin to notice a pattern: the hardest words of any given length get easier as the words get longer. That’s
kind of what you’d expect, I suppose: if there were a hypothetical word that contained every letter in the alphabet, then nobody would ever fail to (eventually) get it.
Some of the longer words are wonderful, like: dysprosium, semivowel, harrumph, and googolplex.
When we make a graph of each word length, showing which proportion of the words require a given number of “wrong” guesses (by an optimised player), we discover a “sweet spot” window in
which we’ll find all of the words that an optimised player will always fail to guess (assuming that we permit up to 10 incorrect guesses before they’re disqualified). The
window seems small for the number of times I remember seeing people actually lose at hangman, which implies to me that human players consistently play sub-optimally, and do not
adequately counteract that failing by applying an equal level of “smart”, intuitive play (knowing one’s opponent and their vocabulary, looking for hints in the way the game is
presented, etc.).
The “sweet spot” in the bottom right is the set of words which you would expect a perfect player to fail to guess, assuming that they’re given a limit of 10 “wrong” guesses.
In case you’re interested, then, here are the theoretically-hardest words to throw at your hangman opponent. While many of the words there feel like they would quite-rightly be
difficult, others feel like they’d be easier than their ranking would imply: this is probably because they contain unusual numbers of vowels or vowels in unusual-but-telling positions,
which humans (with their habit, inefficient under normal circumstances, of guessing an extended series of vowels to begin with) might be faster to guess than a
computer.