Enigma, the Bombe, and Typex

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

How to guides

How to encrypt/decrypt with Enigma

We’ll start with a step-by-step guide to decrypting a known message. You can see the result of these steps in CyberChef here. Let’s say that our message is as follows:

XTSYN WAEUG EZALY NRQIM AMLZX MFUOD AWXLY LZCUZ QOQBQ JLCPK NDDRW F

And that we’ve been told that a German service Enigma is in use with the following settings:

Rotors III, II, and IV, reflector B, ring settings (Ringstellung in German) KNG, plugboard (Steckerbrett) AH CO DE GZ IJ KM LQ NY PS TW, and finally the rotors are set to OPM.

Enigma settings are generally given left-to-right. Therefore, you should ensure the 3-rotor Enigma is selected in the first dropdown menu, and then use the dropdown menus to put rotor III in the 1st rotor slot, II in the 2nd, and IV in the 3rd, and pick B in the reflector slot. In the ring setting and initial value boxes for the 1st rotor, put K and O respectively, N and P in the 2nd, and G and M in the 3rd. Copy the plugboard settings AH CO DE GZ IJ KM LQ NY PS TW into the plugboard box. Finally, paste the message into the input window.

The output window will now read as follows:

HELLO CYBER CHEFU SERST HISIS ATEST MESSA GEFOR THEDO CUMEN TATIO N

The Enigma machine doesn’t support any special characters, so there’s no support for spaces, and by default unsupported characters are removed and output is put into the traditional five-character groups. (You can turn this off by disabling “strict input”.) In some messages you may see X used to represent space.
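
To make that default handling concrete, here’s a tiny Python sketch (purely illustrative, not CyberChef’s code) of stripping unsupported characters and regrouping the result into five-character blocks:

```python
import re

def normalise(text):
    """Keep only the letters A-Z, uppercased, as an Enigma keyboard would."""
    return re.sub(r"[^A-Z]", "", text.upper())

def group_in_fives(text):
    """Break a letter stream into the traditional five-character groups."""
    return " ".join(text[i:i + 5] for i in range(0, len(text), 5))

print(group_in_fives(normalise("Hello, CyberChef users!")))
# HELLO CYBER CHEFU SERS
```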

Encrypting with Enigma is exactly the same as decrypting – if you copy the decrypted message back into the input box with the same recipe, you’ll get the original ciphertext back.

Plugboard, rotor and reflector specifications

The plugboard exchanges pairs of letters, and is specified as a space-separated list of those pairs. For example, with the plugboard AB CD, A will be exchanged for B and vice versa, C for D, and so forth. Letters that aren’t specified are not exchanged, but you can also specify, for example, AA to note that A is not exchanged. A letter cannot be exchanged more than once. In standard late-war German military operating practice, ten pairs were used.
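
As a rough sketch of how a pair list like that can be parsed and applied (Python for illustration; the parse_pairs helper is hypothetical, not CyberChef’s parser):

```python
def parse_pairs(spec):
    """Parse a space-separated list of letter pairs into a symmetric mapping."""
    mapping = {}
    for pair in spec.upper().split():
        a, b = pair
        if mapping.get(a, a) != a or mapping.get(b, b) != b:
            raise ValueError(f"a letter in {pair} is already exchanged")
        if a != b:                      # 'AA' explicitly marks A as unexchanged
            mapping[a], mapping[b] = b, a
    return mapping

plugboard = parse_pairs("AH CO DE GZ IJ KM LQ NY PS TW")
print(plugboard.get("A", "A"), plugboard.get("X", "X"))   # H X
```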

You can enter your own components, rather than using the standard ones. A rotor is an arbitrary mapping between letters – the rotor specification used here is the letters the rotor maps A through Z to, so for example with the rotor ESOVPZJAYQUIRHXLNFTGKDCMWB, A maps to E, B to S, and so forth. Each letter must appear exactly once. Additionally, rotors have a defined step point (the point or points in the rotor’s rotation at which the neighbouring rotor is stepped) – these are specified using a < followed by the letters at which the step happens.
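
Here’s a similar sketch for the rotor notation just described (again hypothetical Python rather than CyberChef’s implementation; the letter after the < is only an example step point):

```python
def parse_rotor(spec):
    """Split e.g. 'ESOVPZJAYQUIRHXLNFTGKDCMWB<J' into wiring, its inverse,
    and the set of letters at which the neighbouring rotor is stepped.
    (The '<J' here is only an example step point, not a claim about rotor IV.)"""
    wiring, _, steps = spec.partition("<")
    if sorted(wiring) != [chr(ord("A") + i) for i in range(26)]:
        raise ValueError("each letter must appear exactly once")
    forward = {chr(ord("A") + i): c for i, c in enumerate(wiring)}
    backward = {c: chr(ord("A") + i) for i, c in enumerate(wiring)}
    return forward, backward, set(steps)

fwd, bwd, steps = parse_rotor("ESOVPZJAYQUIRHXLNFTGKDCMWB<J")
print(fwd["A"], bwd["E"], steps)   # E A {'J'}
```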

Reflectors, like the plugboard, exchange pairs of letters, so they are entered the same way. However, letters cannot map to themselves.

How to encrypt/decrypt with Typex

The Typex machine is very similar to Enigma. There are a few important differences from a user perspective:

  • Five rotors are used.
  • Rotor wiring cores can be inserted into the rotors backwards.
  • The input plugboard (on models which had one) is more complicated, allowing arbitrary letter mappings, which means it functions like, and is entered like, a rotor.
  • There was an additional plugboard which allowed rewiring of the reflector: this is supported by simply editing the specified reflector.

Like Enigma, Typex only supports enciphering/deciphering the letters A-Z. However, the keyboard was marked up with a standardised way of representing numbers and symbols using only the letters. You can enable emulation of these keyboard modes in the operation configuration. Note that this needs to know whether the message is being encrypted or decrypted.

How to attack Enigma using the Bombe

Let’s take the message from the first example, and try and decrypt it without knowing the settings in advance. Here’s the message again:

XTSYN WAEUG EZALY NRQIM AMLZX MFUOD AWXLY LZCUZ QOQBQ JLCPK NDDRW F

Let’s assume to start with that we know the rotors used were III, II, and IV, and reflector B, but that we know no other settings. Put the ciphertext in the input window and the Bombe operation in your recipe, and choose the correct rotors and reflector. We need one additional piece of information to attack the message: a “crib”. This is a section of known plaintext for the message. If we know something about what the message is likely to contain, we can guess possible cribs.

We can also eliminate some cribs by using the property that Enigma cannot encipher a letter as itself. For example, let’s say our first guess for a crib is that the message begins with “Hello world”. If we enter HELLO WORLD into the crib box, it will inform us that the crib is invalid, as the W in HELLO WORLD corresponds to a W in the ciphertext. (Note that spaces in the input and crib are ignored – they’re included here for readability.) You can see this in CyberChef here.
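
You can check that property yourself with a few lines of Python (an illustrative sketch, not part of CyberChef), using the ciphertext above:

```python
def crib_is_possible(ciphertext, crib, offset=0):
    """Enigma never enciphers a letter as itself, so a crib that would
    require that at any position can be ruled out immediately."""
    ct = "".join(c for c in ciphertext.upper() if c.isalpha())
    cr = "".join(c for c in crib.upper() if c.isalpha())
    return all(ct[offset + i] != c for i, c in enumerate(cr))

ct = "XTSYN WAEUG EZALY NRQIM AMLZX MFUOD AWXLY LZCUZ QOQBQ JLCPK NDDRW F"
print(crib_is_possible(ct, "HELLO WORLD"))       # False: W lines up with W
print(crib_is_possible(ct, "HELLO CYBER CHEF"))  # True
```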

Let’s try “Hello CyberChef” as a crib instead. If we enter HELLO CYBER CHEF, the operation will run and we’ll be presented with some information about the run, followed by a list of stops. You can see this here. Here you’ll notice that it says Bombe run on menu with 0 loops (2+ desirable)., and there are a large number of stops listed. The menu is built from the crib you’ve entered, and is a web linking ciphertext and plaintext letters. (If you’re maths inclined, this is a graph where letters – plain or ciphertext – are nodes and states of the Enigma machine are edges.) The machine performs better on menus which have loops in them – a letter maps to another to another and eventually returns to the first – and additionally on longer menus. However, menus that are too long risk failing because the Bombe doesn’t simulate the middle rotor stepping, and the longer the menu the more likely this is to have happened. Getting a good menu is a mixture of art and luck, and you may have to try a number of possible cribs before you get one that will produce useful results.

Bombe menu diagram

In this case, if we extend our crib by a single character to HELLO CYBER CHEFU, we get a loop in the menu (that U maps to a Y in the ciphertext, the Y in the second cipher block maps to A, the A in the third ciphertext block maps to E, and the E in the second crib block maps back to U). We immediately get a manageable number of results. You can see this here. Each result gives a set of rotor initial values and a set of identified plugboard wirings. Extending the crib further to HELLO CYBER CHEFU SER produces a single result, and it has also recovered eight of the ten plugboard wires and identified four of the six letters which are not wired. You can see this here.
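
If you’re curious, the loop counts from these two runs can be reproduced with a short Python sketch (not how CyberChef actually builds its menus): treat each crib position as an edge joining a plaintext letter to a ciphertext letter, then count the independent cycles in that graph.

```python
def count_menu_loops(ciphertext, crib):
    """Each crib position links a plaintext letter to a ciphertext letter.
    Independent loops = edges - nodes + connected components."""
    ct = "".join(c for c in ciphertext.upper() if c.isalpha())
    cr = "".join(c for c in crib.upper() if c.isalpha())
    edges = list(zip(cr, ct))
    nodes = {letter for edge in edges for letter in edge}

    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)

    components = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + components

ct = "XTSYN WAEUG EZALY NRQIM AMLZX MFUOD AWXLY LZCUZ QOQBQ JLCPK NDDRW F"
print(count_menu_loops(ct, "HELLO CYBER CHEF"))    # 0
print(count_menu_loops(ct, "HELLO CYBER CHEFU"))   # 1
```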

We now have two things left to do:

  1. Recover the remaining plugboard settings.
  2. Recover the ring settings.

This will need to be done manually.

Set up an Enigma operation with these settings. Leave the ring positions set to A for the moment, so from top to bottom we have rotor III at initial value E, rotor II at C, and rotor IV at G, reflector B, and plugboard DE AH BB CO FF GZ LQ NY PS RR TW UU.

You can see this here. You will immediately notice that the output is not the same as the decryption preview from the Bombe operation! Only the first three characters – HEL – decrypt correctly. This is because the middle rotor stepping was ignored by the Bombe. You can correct this by adjusting the ring position and initial value on the right-hand rotor in sync. They are currently A and G respectively. Advance both by one to B and H, and you’ll find that now only the first two characters decrypt correctly.

Keep trying settings until most of the message is legible. You won’t be able to get the whole message correct, but for example at F and L, which you can see here, our message now looks like:

HELLO CYBER CHEFU SERTC HVSJS QTEST KESSA GEFOR THEDO VUKEB TKMZM T

At this point we can recover the remaining plugboard settings. The only letters which are not known in the plugboard are J K V X M I, of which two will be unconnected and two pairs connected. By inspecting the ciphertext and partially decrypted plaintext and trying pairs, we find that connecting IJ and KM results, as you can see here, in:

HELLO CYBER CHEFU SERST HISIS ATEST MESSA GEFOR THEDO CUMEO TMKZK T

This is looking pretty good. We can now fine tune our ring settings. Adjusting the right-hand rotor to G and M gives, as you can see here,

HELLO CYBER CHEFU SERST HISIS ATEST MESSA GEFOR THEDO CUMEN WMKZK T

which is the best we can get with only adjustments to the first rotor. You now need to adjust the second rotor. Here, you’ll find that anything from D and F to Z and B gives the correct decryption, for example here. It’s not possible to determine the exact original settings from only this message. In practice, for the real Enigma and real Bombe, this step was achieved via methods that exploited the Enigma network operating procedures, but this is beyond the scope of this document.

What if I don’t know the rotors?

You’ll need the “Multiple Bombe” operation for this. You can define a set of rotors to choose from – the standard WW2 German military Enigma configurations are provided or you can define your own – and it’ll run the Bombe against every possible combination. This will take up to a few hours for an attack against every possible configuration of the four-rotor Naval Enigma! You should run a single Bombe first to make sure your menu is good before attempting a multi-Bombe run.

You can see an example of using the Multiple Bombe operation to attack the above example message without knowing the rotor order in advance here.

What if I get far too many stops?

Use a longer or different crib. Try to find one that produces loops in the menu.

What if I get no stops, or only incorrect stops?

Are you sure your crib is correct? Try alternative cribs.

What if I know my crib is right, but I still don’t get any stops?

The middle rotor has probably stepped during the encipherment of your crib. Try a shorter or different crib.

How things work

How Enigma works

We won’t go into the full history of Enigma and all its variants here, but as a brief overview of how the machine works:

Enigma uses a series of letter->letter conversions to produce ciphertext from plaintext. It is symmetric, such that the same series of operations on the ciphertext recovers the original plaintext.

The bulk of the conversions are implemented in “rotors”, which are just an arbitrary mapping from the letters A-Z to the same letters in a different order. Additionally, to enforce the symmetry, a reflector is used, which is a symmetric paired mapping of letters (that is, if a given reflector maps X to Y, the converse is also true). These are combined such that a letter is mapped through three different rotors, the reflector, and then back through the same three rotors in reverse.
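
Ignoring the rotation for a moment, that path can be sketched in a few lines of Python. The rotor IV wiring is the one quoted earlier; the other wirings are widely published historical values included purely as example data, and the code is an illustration rather than CyberChef’s implementation:

```python
A = ord("A")
ROTOR_III   = "BDFHJLCPRTXVZNYEIWGAKMUSQO"   # widely published historical wirings,
ROTOR_II    = "AJDKSIRUXBLHWTMCQGZNPYFVOE"   # quoted here purely as example data
ROTOR_IV    = "ESOVPZJAYQUIRHXLNFTGKDCMWB"
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def through(wiring, c):
    """Forward through a mapping: A goes to wiring[0], B to wiring[1], and so on."""
    return wiring[ord(c) - A]

def back(wiring, c):
    """Backward through the same mapping."""
    return chr(A + wiring.index(c))

def static_encipher(c, rotors=(ROTOR_IV, ROTOR_II, ROTOR_III), reflector=REFLECTOR_B):
    """One key press with every rotor frozen in place and no plugboard:
    right-to-left through the rotors, the reflector, then left-to-right back."""
    for wiring in rotors:                 # right, middle, left
        c = through(wiring, c)
    c = through(reflector, c)
    for wiring in reversed(rotors):       # left, middle, right
        c = back(wiring, c)
    return c

for letter in "AB":
    out = static_encipher(letter)
    assert static_encipher(out) == letter and out != letter   # symmetric, never maps to itself
    print(letter, "->", out)
```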

To avoid Enigma being a simple Caesar cipher, the rotors rotate (or “step”) between enciphering letters, changing the effective mappings. The right rotor steps on every letter, and additionally defines a letter (or later, letters) at which the adjacent (middle) rotor will be stepped. Likewise, the middle rotor defines a point at which the left rotor steps. (A mechanical issue known as the double-stepping anomaly means that the middle rotor actually steps twice when the left hand rotor steps.)
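
In code, those stepping rules (including the double step) might look something like this sketch, where each rotor only tracks its position and its turnover letter(s):

```python
class SteppingRotor:
    """Just the motion of a rotor: its current position and turnover letter(s)."""
    def __init__(self, position, turnovers):
        self.position = ord(position) - ord("A")   # the letter showing in the window
        self.turnovers = {ord(t) - ord("A") for t in turnovers}

    def at_turnover(self):
        return self.position in self.turnovers

    def step(self):
        self.position = (self.position + 1) % 26

def step_rotors(left, middle, right):
    """Enigma steps before the electrical path is made for each key press."""
    if middle.at_turnover():
        left.step()
        middle.step()        # double-stepping anomaly: the middle rotor moves with the left
    elif right.at_turnover():
        middle.step()
    right.step()             # the right-hand rotor steps on every key press

# Example (turnover letters chosen arbitrarily): with the middle rotor sitting on its
# turnover letter, the next key press steps all three rotors - the double step.
left, middle, right = SteppingRotor("A", "Q"), SteppingRotor("E", "E"), SteppingRotor("V", "V")
step_rotors(left, middle, right)
print(left.position, middle.position, right.position)   # 1 5 22
```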

The German military Enigma adds a plugboard, which is a configurable pair mapping of letters (similar to the reflector, but not requiring that every letter is exchanged) applied before the first rotor (and thus also after passing through all the rotors and the reflector).

It also adds a ring setting, which allows the stepping point to be adjusted.

Later in the war, the Naval Enigma added a fourth rotor. This rotor does not step during operation. (The fourth rotor is thinner than the others, and fits alongside a thin reflector, meaning this rotor is not interchangeable with the others on a real Enigma.)

There were a number of other variants and additions to Enigma which are not currently supported here, as well as different Enigma networks using the same basic hardware but different rotors (which are supported by supplying your own rotor configurations).

How Typex works

Typex is a clone of Enigma, with a few changes implemented to improve security. It uses five rotors rather than three, and the rightmost two are static. Each rotor has more stepping points. Additionally, the rotor design is slightly different: the wiring for each rotor is in a removable core, which sits in a rotor housing that has the ring setting and stepping notches. This means each rotor has the same stepping points, and the rotor cores can be inserted backwards, effectively doubling the number of rotor choices.

Later models (from the Mark 22, which is the variant we simulate here) added two plugboards: an input plugboard, which allowed arbitrary letter mappings (rather than just pair switches) and thus functioned similarly to a configurable extra static rotor, and a reflector plugboard, which allowed rewiring the reflector.

How the Bombe works

The Bombe is a mechanism for efficiently testing and discarding possible rotor positions, given some ciphertext and known plaintext. It exploits the symmetry of Enigma and the reciprocal (pairwise) nature of the plugboard to do this regardless of the plugboard settings. Effectively, the machine makes a series of guesses about the rotor positions and plugboard settings and for each guess it checks to see if there are any contradictions (e.g. if it finds that, with its guessed settings, the letter A would need to be connected to both B and C on the plugboard, that’s impossible, and these settings cannot be right). This is implemented via careful connection of electrical wires through a group of simulated Enigma machines.
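
Here is a heavily simplified sketch of that contradiction-hunting idea – nothing like the real machine’s wiring, and not CyberChef’s code. The menu and scrambler arguments are assumed inputs: the menu is the list of crib links, and scrambler(position, letter) stands for an Enigma at the guessed rotor settings with no plugboard.

```python
from collections import deque

def consistent(menu, scrambler, test_letter, guess):
    """A toy version of the Bombe's test. Assume `test_letter` is steckered to
    `guess`, push that assumption around the menu, and report whether it ever
    forces some letter to have two different plugboard partners.

    `menu` is a list of (rotor_position, letter_a, letter_b) links taken from
    the crib; `scrambler(position, letter)` stands for an Enigma at the guessed
    rotor order and positions with no plugboard. Both are assumed helpers here,
    not CyberChef functions."""
    stecker = {test_letter: guess, guess: test_letter}
    queue = deque([test_letter, guess])
    while queue:
        letter = queue.popleft()
        partner = stecker[letter]
        for position, a, b in menu:
            if letter not in (a, b):
                continue
            other = b if letter == a else a
            implied = scrambler(position, partner)   # forced partner for `other`
            for x, y in ((other, implied), (implied, other)):
                if stecker.get(x, y) != y:
                    return False                     # contradiction: guess (or rotors) wrong
                if x not in stecker:
                    stecker[x] = y
                    queue.append(x)
    return True
```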

A full explanation of the Bombe’s operation is beyond the scope of this document – you can read the source code, and the authors also recommend Graham Ellsbury’s Bombe explanation, which is very clearly diagrammed.

Implementation in CyberChef

Enigma/Typex

Enigma and Typex were implemented from documentation of their functionality.

Enigma rotor and reflector settings are from GCHQ’s documentation of known Enigma wirings. We currently simulate all basic versions of the German Service Enigma; most other versions should be possible by manually entering the rotor wirings. There are a few models of Enigma, or attachments for the Service Enigma, which we don’t currently simulate. The operation was tested against some of GCHQ’s working examples of Enigma machines. Output should be letter-for-letter identical to a real German Service Enigma. Note that some Enigma models used numbered rather than lettered rotors – we’ve chosen to stick with the easier-to-use lettered rotors.

There were a number of different Typex versions over the years. We implement the Mark 22, which is backwards compatible with some older Typex models (but not all of them, as some early variants supported case sensitivity). GCHQ also has a partially working Mark 22 Typex, which was used to test the plugboards and mechanics of the machine. Typex rotor settings were changed regularly, and none have ever been published, so a test against real rotors was not possible. An example set of rotors has been randomly generated for use in the Typex operation. Some additional information on the internal functionality was provided by the Bombe Rebuild Project.

The Bombe

The Bombe was likewise implemented on the basis of documentation of the attack and the machine. The Bombe Rebuild Project at the National Museum of Computing answered a number of technical questions about the machine and its operating procedures, and helped test our results against their working hardware Bombe, for which the authors would like to extend their thanks.

Constructing menus from cribs in a manner that most efficiently used the Bombe hardware was another difficult step of operating the real Bombes. We have chosen to generate the menu automatically from the provided crib, ignore some hardware constraints of the real Bombe (e.g. making best use of the number of available Enigmas in the Bombe hardware; we simply simulate as many as are necessary), and accept that the automatically selected menu may occasionally not be the optimal choice. This should be rare, and we felt that manual menu creation would be hard to build an interface for, and would add extra barriers to users experimenting with the Bombe.

The output of the real Bombe is optimised for manual verification using the checking machine, and additionally has some quirks (the rotor wirings are rotated by, depending on the rotor, between one and three steps compared to the Enigma rotors). Therefore, the output given is the ring position, and a correction depending on the rotor needs to be applied to the initial value, setting it to W for rotor V, X for rotor IV, and Y for all other rotors. We felt that this would require too much explanation in CyberChef, so the output of CyberChef’s Bombe operation is the initial value for each rotor, with the ring positions set to A, required to decrypt the ciphertext starting at the beginning of the crib. The actual stops are the same. This would not have caused problems at Bletchley Park, as operators working with the Bombe would never have dealt with a real or simulated Enigma, and vice versa.

By default the checking machine is run automatically and stops which fail are silently discarded. This can be disabled in the operation configuration, which will cause it to output all stops from the actual Bombe hardware instead. (In this case you only get one stecker pair, rather than the set identified by the checking machine.)

Optimisation

A three-rotor Bombe run (which tests 17,576 rotor positions and takes about 15-20 minutes on original Turing Bombe hardware) completes in about a fifth of a second in our tests. A four-rotor Bombe run takes about 5 seconds to try all 456,976 states. This also took about 20 minutes on the four-rotor US Navy Bombe (which rotates about 30 times faster than the Turing Bombe!). CyberChef operations run single-threaded in browser JavaScript.

We have tried to remain fairly faithful to the implementation of the real Bombe, rather than a from-scratch implementation of the underlying attack. There is one small deviation from “correct” behaviour: the real Bombe spins the slow rotor on a real Enigma fastest. We instead spin the fast rotor on an Enigma fastest. This means that all the other rotors in the entire Bombe are in the same state for the 26 steps of the fast rotor and then step forward: this means we can compute the 13 possible routes through the lower two/three rotors and reflector (symmetry means there are only 13 routes) once every 26 ticks and then save them. This does not affect where the machine stops, but it does affect the order in which those stops are generated.

The fast rotors repeat each others’ states: in the 26 steps of the fast rotor between steps of the middle rotor, each of the scramblers in the complete Bombe will occupy each state once. This means we can once again store each state when we hit them and reuse them when the other scramblers rotate through the same states.

Note also that it is not necessary to complete the energisation of all wires: as soon as 26 wires in the test register are lit, the state is invalid and processing can be aborted.

The above simplifications reduce the runtime of the simulation by an order of magnitude.

If you have a large attack to run on a multiprocessor system – for example, the complete M4 Naval Enigma, which features 1344 possible choices of rotor and reflector configuration, each of which takes about 5 seconds – you can open multiple CyberChef tabs and have each run a subset of the work. For example, on a system with four or more processors, open four tabs with identical Multiple Bombe recipes, and set each tab to a different combination of 4th rotor and reflector (as there are two options for each). Leave the full set of eight primary rotors in each tab. This should complete the entire run in about half an hour on a sufficiently powerful system.
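
(A quick sanity check on that estimate: 1,344 configurations × 5 seconds ≈ 6,720 seconds, or roughly 112 minutes of work in total; split evenly across four tabs, each tab handles 336 configurations, or about 28 minutes.)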

To celebrate their centenary, GCHQ have open-sourced very-faithful reimplementations of Enigma, Typex, and Bombe that you can run in your browser. That’s pretty cool, and a really interesting experimental toy for budding cryptographers and cryptanalysts!

I Used The Web For A Day On Internet Explorer 8

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Who In The World Uses IE8?

Before we start, a disclaimer: I am not about to tell you that you need to start supporting IE8.

There’s every reason to not support IE8. Microsoft officially stopped supporting IE8, IE9 and IE10 over three years ago, and Microsoft executives are even telling you to stop using Internet Explorer 11.

But as much as we developers hope for it to go away, it just. Won’t. Die. IE8 continues to show up in browser stats, especially outside of the bubble of the Western world.

Sure, you aren’t developing for IE8 any more. But you should be developing with progressive enhancement, and if you do that right, you get all kinds of compatibility, accessibility, future- and past-proofing built-in. This isn’t just about supporting the (many) African countries where IE8 usage remains at over 1%… it’s about supporting the Web’s openness and archivability and following best-practice in your support of new technologies.

Going beyond the Golden Ratio

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I show that, for the same reason that the golden ratio, ϕ = 1.6180339…, can be considered the most irrational number, 1 + √2 can be considered the 2nd most irrational number, and indeed why (9 + √221)/10 can be considered the 3rd most irrational number.

Let us imagine a game between two kids, Emily and Sam – both strong and determined in their own way – who spend their entire lives trying to outwit each other instead of doing their homework. (A real-life Generative Adversarial Network…)

Emily proudly reminds us that she simultaneously bears the same first name as Emily Davison, the most famous of British suffragettes; Emily Balch, Nobel Peace Prize laureate; Émilie du Châtelet, who wrote the first French translation of and commentary on Isaac Newton’s “Principia”; Emily Roebling, Chief Engineer of New York’s iconic Brooklyn Bridge; Emily Brontë, author of Wuthering Heights; Emily Wilson, the first female editor of New Scientist; and also Emmy Noether, who revolutionized the field of theoretical physics.

On the other side we have Sam (and all his minion friends, aptly called Sam-002, Sam-003 and Sam-004), who is part human, part robot, and plays Minecraft or watches YouTube 24/7.

They agree to play a game where Emily thinks of a number, and then Sam (with the possible help of his minions) has 60 seconds to find any fractions that are equal to Emily’s number.

And so the game begins…

Emily says “8.5”.

Sam & friends quickly reply with “85/10”, “34/4”, “17/2”, “425/50”, …

They soon realize that all these answers are equally valid because they are all equivalent fractions. Being competitive they want to pick a single winner, so they all agree that the best answer is the one with the lowest denominator. And so, 17/2 is deemed the best answer.

This time, Emily tries to make it harder by picking “0.123456”. After only a slight pause, Sam slyly says “123456/1000000”.

Emily’s annoyed with this answer. She knows that although the best answer would be the irreducible fraction 1929/15625, Sam’s answer is still a valid answer, and furthermore he will always be able to answer instantly like this if she picks any number with a terminating decimal expansion.

So this time, Emily picks “π”.
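
If you want to play along in Python, the standard fractions module does exactly the reduction Emily has in mind, and can also hunt for the best approximation below a given denominator (a small illustrative snippet, not from the original article):

```python
from fractions import Fraction
from math import pi

# Sam's instant answer, reduced to Emily's irreducible fraction.
print(Fraction(123456, 1000000))   # 1929/15625

# For pi there is no exact fraction, only ever-better approximations
# as the denominator is allowed to grow.
for limit in (10, 100, 1000):
    print(Fraction(pi).limit_denominator(limit))
```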

Delightful exploration of the idea that while all irrational numbers are irrational, some can be considered more-irrational than others if you consider the complexity of the convergent series of fractions necessary to refine the representation of it. Some of this feels to me like the intersection between meta-mathematics and magic, but it’s well-written enough that I was able to follow along all the way to the end and I think that you should give it a go, too.

Killed by Google – The Google Graveyard & Cemetery

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Fusion Tables… Fabric… Inbox… Google+… goo.gl… Goggles… Site Search… Glass… Now… Code… Bump!… Gears… Desktop Search…

Just some of the projects and services that Google has offered and then killed; this site aims to catalogue them all. Some, like Wave, were given to the community (Wave lived on for a while as an Apache project but is now basically dead), but most, like Reader, were assassinated in a misguided attempt to drive traffic to other services (ultimately, Reader was killed perhaps to try to get people onto Google+, which was then also killed).

Google can’t be trusted to maintain the services of theirs that you depend upon (relevant XKCD?). That’s not a phenomenon that’s unique to Google, of course: it’s perhaps just that they produce so many new and often-experimental services that they inevitably cease supporting more of them than some of the many other providers who’ve killed the silos that people depended upon.

How could things be better? For a start, Google could make a better commitment to open-source and developing standards rather than platforms. But if you don’t think you can trust them to do that – and you can’t – then the only solution for individuals is to use fewer Google products to break the Google-monoculture. Encourage the competition to weaken their position, and break free from silos in general where it’s possible to do so.

148+ projects and services dead. But hey, we’re getting Stadia so everything’s okay, right? <sigh>

Review of The Vine Inn

This review of The Vine Inn originally appeared on Google Maps. See more reviews by Dan.

Perfectly good pub food served by friendly staff. Prices a little higher than I’d expect for what you get, but not completely unreasonable. Play equipment in the kid-friendly garden was a hit with my little ones.

How Can Free Porn Be Feminist?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

If you search “free porn” on Google, you get 1,400,000,000 hits. That’s a lot of porn. From vanilla lovers to BBW aficionados, kink and BDSM enthusiasts, foot fetishists and golden shower fans, there’s something for everyone. All at your fingertips, and all for free.

Although free porn is an accessible way for us to explore and embrace our sexuality, it relies on a business model that exploits sex workers and filmmakers. So while viewers are getting off, creators are the ones getting screwed. We boycott fast fashion brands for exploiting factory workers, we go vegan in the name of animal rights, we ban plastic straws to save the ocean, so where’s that same energy when it comes to protecting sex workers?

Free porn sites operate on pirated and unregulated user-generated content. Users can upload clips even though they’re infringing copyright, and stolen content goes up faster than studios can issue demands for it to be taken down. Award-winning feminist adult filmmaker Erika Lust tells Refinery29 that at the time of writing her team had been fruitlessly chasing Pornhub, asking them to take down some of her XConfessions films. “[Free porn sites] steal from studios, while at the same time profit from unregulated amateur production. This adds to the capacity for exploitation towards the performers, and the illusion that porn is free leads to the assumption that sex work is not work,” says Lust. “Most of the performers involved in these videos did not give their consent for their film to be pirated and hosted on a free porn site.” And they’re not making a penny, either.

Pay for your porn, folks!

We are actively destroying the web

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

One of the central themes of my talk on The Lean Web is that we as developers repeatedly take all of the great things the web and browsers give us out-of-the-box, break them, and then re-implement them poorly with JavaScript.

This point smacked me in the face hard a few weeks ago after WebAIM released their survey of the top million websites.

As Ethan Marcotte noted in his article on the survey:

Pages containing popular JavaScript frameworks were more likely to have accessibility errors than those that didn’t use those frameworks.

JavaScript routing has always perplexed me.

You take something the browser just gives you for free, break it with JavaScript, then reimplement it with more JavaScript, often poorly. You have to account for on-page clicks, on-site clicks, off-site clicks, forward and back button usage, and so on.

JavaScript routing has always perplexed me, too. Back when SPA-centric front-end frameworks started taking off I thought that there must be something wrong with me, as a developer. Why was I unable to see why this “new hotness” was so popular, so immediately ubiquitous? I taught myself a couple of different frameworks in the hope that in learning to use them in anger I’d “click” and understand why this approach to routing made any sense, but I still couldn’t get it.

That’s when I remembered, later than I ought to have, that just because something is popular doesn’t mean that it’s a good idea.

Front-end routing isn’t necessarily poisonous. By building on top of what you already have in a progressive-enhancement kind of way (like unpoly does, for example!) you can potentially provide some minor performance or look-and-feel improvements to people in ideal circumstances (right browser(s), right compatibility, no bugs, no blocks, no accessibility needs, no “power users” who like to open-in-new-tab and the like, speedy connection, etc.) without damaging the fundamentals of what makes your web application work… but you’ve got to appreciate that doing this is going to be more work. For some applications, that’s worthwhile.

But when you do it at the expense of the underlying fundamentals… when you say “we’re moving everything to the front-end so we’re not going to bother with real URLs any more”… that’s when you break the web. And in doing so, you break a lot of other things too:

  • You break your user experience for people who don’t fit into your perfect vision of what your users look like in terms of technology, connection, or able-bodiedness
  • You break the sustainability and archivability of your site, making it into another piece of trash that’ll be lost to the coming digital dark age
  • You break the usability of the site by anything but your narrow view of what’s right
  • You break a lot of the technology that’s made the web as great as it is already: caching, manipulatable URLs, widespread compatibility… and many other things become harder when you have to re-invent the wheel to get basic features like preloading, sharability/bookmarking, page saving, the back button, stateful refreshes, SEO, hyperlinks…

UK online pornography age block triggers privacy fears

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The government will next week confirm the launch date for a UK-wide age block on online pornography as privacy campaigners continue to raise concerns about how websites and age verification companies will use the data they collect.

The plan for implementing the long-delayed age block, which has been beset by technical difficulties, is expected to be announced alongside the government’s other proposals for tackling online content harmful to children, although it could be several months before the system is fully up and running.

The age block will require commercial pornography sites to show that they are taking sufficient steps to verify their users are over 18, such as by uploading a passport or driving licence or by visiting a newsagent to buy a pass only available to adults. Websites which fail to comply risk substantial fines or having their websites banned by all British internet service providers.

It’s a good job that the government doesn’t have anything big and complicated to be working on, right now, so they have loads of free time to establish a sex-shaming, unenforceable, and inevitably-ineffective law to impinge upon the liberties of individuals. Sigh.

Fighting uphill

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

As someone with a good deal of interest in the digital accessibility space, I follow WebAIM’s work closely. Their survey results are priceless insights into how disabled people actually use the web, so when the organization speaks with authority on a subject, I listen.

WebAIM’s accessibility analysis of the top 1,000,000 homepages was released to the public on February 27, 2019. I’ve had a few days to process it, and frankly, it’s left me feeling pretty depressed. In a sea of already demoralizing findings, probably the most notable one is that pages containing ARIA—a specialized language intended to aid accessibility—are actually more likely to have accessibility issues.

I don’t think this is intentional malice on the part of authors, but it is worth saying that the road to hell is paved with good intentions. These failures via omission and ignorance actively separate people from their civil rights.

I view the issue largely as an education problem, and that education is tied into what the market demands.

Although the replies to this Twitter thread are heartwarming, I realistically understand that accessibility knowledge isn’t what employers are largely demanding. Because of this, people entering into the web design and development space simply may not be aware of accessibility as a technical concern.

Overwhelmingly, crushingly, we shove new developers towards learning JavaScript single page application frameworks (SPAs). While many of these frameworks pay lip service towards preserving accessibility, if you do your homework you find that the majority of them were built without assistive technology in mind. These considerations were bolted on later, when their creators figured out that the things they threw away to get a more app-like experience actually mattered.

My go-to examples are routing and focus management. It’s a sad, sorry state of affairs that this critical functionality oftentimes requires third party plugins to make them capable of interfacing with assistive technology. The decision to use SPAs, and all that come with them, can often come from baseless nerd navelgazing—many business owners would be livid to find out that the technology choices their teams are making are actively incurring legal liability.

Punching down

It’s too easy and too irresponsible to lay blame solely on new developers. Turning again to the WebAIM survey, we know that over 50% of all form inputs are not labeled. This is basic stuff, things that people who have been working in the industry for any significant length of time should know. How can we expect the advanced, state-driven stuff to be built robustly if we’re all failing HTML 101?

What if we’re losing?

It’s a tough question, but one I think is worth asking.

In some respects, practicing web accessibility has never been better. Firefox has an accessibility inspector now, which is straight-up amazing. We have near-magical developer tools (plural!), dedicated conferences, podcasts, meetups, and highly paid people in influential positions making grandiose declarations about the importance of empathy.

And yet, WebAIM’s report. All these incremental improvements aren’t compounding at an equal or greater pace than the things they’re trying to combat.

It’s code and design issues stemming from a market demand problem, yes. But I also think it’s a process problem. Namely, we can’t shovel all our blame on the developers—classically the go-to scapegoats for organizational failure.

We’re all to blame for the state of things—I’m no exception. A lack of understanding and wholesale adoption of antipatterns are also at fault. Just because a big name company does something doesn’t mean it’s intrinsically good.

Technology solutions to social problems

If we can’t get the majority of web practitioners to care about, much less implement accessible websites, what can be done? Browsers already describe websites the best way they know how, via the Document Object Model (DOM). Assistive technologies describe what the DOM contains the best they can, even utilizing specialized heuristics to accommodate code that isn’t quite good enough.

But “isn’t quite good enough” isn’t the same as outright bad—these specialized programs can only do so much.

Seeing machines

I’ve been paying attention to Mozilla’s efforts to create an interstitial popup blocker. For those unfamiliar with interstitials, they’re the annoying (often inaccessible) on-page modals that commonly ask you to do things like sign up for newsletters.

The trick here is that these interstitials are different from traditional popups, in that you can’t just block anything that spawns from the page you’re currently visiting. They’re a little more tricky: each one is just another “layer” of the website you’re visiting, and therefore can’t be clipped away with tidy logic.

Mozilla’s approach is to ask for examples of interstitials people find on the web, and then use that corpus of information to train a machine learning algorithm to understand what an interstitial popup “looks” like. Armed with that knowledge, it can then strip away the code of anything that qualifies as interstitial-ish.

It’s a fiendishly clever idea, and probably one of the few applications of machine learning I’ve encountered that actually has merit. It also got me wondering: if we can’t change how assistive technology generates descriptions of the DOM, can we change how it views websites instead?

If we can teach a computer to identify what all the various bits that make up a website look like, maybe we can attack the problem of inaccessible experiences from a slightly different direction. Once the computer “views” a page and reports on what it sees, it can then read out the text contained in those identified areas. Screen readers sort of already do this, and even have specialized functionality for when the text isn’t actually text.

We’re starting to see hints of this kind of thinking already. Examples that come to mind are Sarah Drasner’s brilliant CodePen that uses Azure’s Computer Vision API to automatically generate alt descriptions for images. Airbnb’s sketching interfaces project is also a tiny, powerful glimpse into this sort of future.

However

It’s really easy to say someone should do something, but it’s far more difficult to actually do it. Debating the merits of hypotheticals only takes you so far.

I’m not naïve enough to think this sort of idea could be created without a non-trivial amount of engineering. The field of digital accessibility is small and commonly viewed as unglamorous work, so I’m not holding my breath for venture capital firms to line up for the chance to give me funding for this half-baked concept.

There’s also the uncomfortable truth that this sort of automation is only as good as the data it’s trained on, and the field of machine learning is rife with algorithmic bias. When you start to use data at scale to make decisions, you also perpetuate the biases inherent in that data.

Furthermore, when you rely on this approach to navigate the web, you start to get into a very uncomfortable problem in delivering equivalent experiences; namely editorializing the experience for someone instead of presenting it to them the way someone who wasn’t relying on that technology would.

A practical example of this is automatically generated alt descriptions. If a system is built to reject certain kinds of information—say nudity—it won’t generate the information a person who doesn’t rely on the description will be privy to. It also may not be the nudity the system thinks it is.

A classical Greek statue meets all the criteria for a naked person, yet it is not. There have been, however, situations where it is flagged as pornography and a description is not generated. If you need an example of how this sort of thing falls apart at scale, just look at tumblr.

Another way of saying it: implicitly defining the parameters of what is acceptable for expression via automation can have the effect of reducing individual autonomy. This is unconscionable.

Finally, not every disabled user is a screen reader user. The machine learning approach doesn’t work for many different kinds of disability situations, notably cognitive concerns.

Social solutions to technology problems

I’m a big nerd, so of course I led with an idea for software. But all too often we conflate creating something with creating good.

As touched on earlier, it seems like the pace of inaccessible digital experiences is moving far faster than our attempts to fix them. I’m skeptical of technology’s ability to solve the problem on its own.

It’s also far easier to destroy than it is to repair. If you don’t believe me, spend some time conducting a manual website accessibility audit. It oftentimes feels like a tedious, frustrating, thankless experience that firmly paints you as the enemy for people who just want to move fast and break things. However, it is a vital thing to do.

So, what can we do about this state of affairs?

Learn from history

Digital accessibility is a niche practice. That’s not a value judgement, it’s just the way things are. Again, it’s hard to fault someone for creating an inaccessible experience if they simply haven’t learned the concept exists.

And yet, seventy percent of websites are non-compliant. It’s a shocking statistic. What if I told you that seventy percent of all bridges were structurally unsound?

Some engineers who work with physical materials have a constant reminder of the gravity of the decisions they make. They wear iron rings to be reminded that they have an obligation to the public good, and that actual lives are on the line. I like that idea a lot—I think it’s a concept we as an industry could benefit from if we borrowed from it thematically.

It’d take some organizing to get to a place where we do such a thing. And maybe that’s a good thing—right now it feels like we’re an industry of overpaid, fly-by-night plumbers who have the luxury of saying they don’t believe in using wrenches.

Directed effort

It was a bitter, frustrating, oftentimes thankless task, but we should also acknowledge that web standards won. It took a ton of time and effort to get to this point, but think about what didn’t make it: closed, centralized, brittle technologies that were pay-to-play and difficult to understand and maintain.

We should also think about what technologies are available to us today, how they serve the people that use them, and how so much of it is built from these standards. While it may feel frustrating doing the work now, maybe that inflection point is just beyond the horizon.

Reframing

Selfishly, I’d love a future where it’s commonplace for interview candidates to be selected not only because of their JavaScript prowess, but also because they can offer a sound explanation of why using a button element is important.

I’m really excited to see digital accessibility get more mainstream attention, but I’m also concerned. I don’t want it to have fifteen minutes of fame. I want it to be a first class, top-of-mind consideration for everyone in the industry.

I really admire the people who are using their privilege as an influential industry member to push for this reality. Ethan Marcotte and Sara Soueidan come immediately to mind—they are doing an amazing job lending credibility to the practice as they learn more about the space. This is also not to diminish their other efforts, which have done so much to drive the web forward.

It’s been great seeing more and more accessibility talks appearing on the conference circuit, as well. The subject matter hasn’t historically gotten a lot of mainstream stage presence. This meant that there have been fewer opportunities for people to discover that digital accessibility was even a concern, much less to see it positioned as a glamorous subject worth spending a few thousand dollars on a conference ticket to hear someone talk about.

I also think the push to diversify our industry voices has helped bring accessibility concerns to the forefront, as well as other important topics. And speaking of diversification, I’d be remiss if I didn’t mention Kat Holmes’ work on Inclusive Design. If you want to read a brilliant treatise on reframing, read Mismatch.

Acknowledgment

I also think it’s worth acknowledging that we’re all standing on the shoulders of giants. New voices (such as myself) are speaking about what we’ve learned largely due to the fact that there’s been existing material to learn from. People like Léonie Watson, Marco Zehe, Steve Faulkner, Glenda Sims, Billy Gregory, Lainey Feingold, Mike Paciello, to name a few.

They’ve done incredible work in this space, and are continuing to do so. It’d be wise to listen to what they have to say.


This is a personal post on a personal website, so it’s admittedly a little more rough and glum than what I usually put out. However, I don’t have the right to be tired or demotivated. I’m frustrated for sure, and feelings of defeatism are hard to quell, but the stakes are too high for self-pity.

I’m also not so arrogant as to assume my ideas are new in this space. I don’t have comments on my blog, but if you want to talk about anything this post covered, feel free to chime in on Twitter.

I’ll keep writing, and I’ll keep pushing for what is important. I hope you’ll join me.

Note #13168

Kiwi flower

Kiwi flowers look exactly like you’d think they would. #woah