Where’s My Elephant?

This article is a repost promoting content originally published elsewhere.

The “where’s my elephant?” theory takes its name, of course, from The Simpsons episode in which Bart gets an elephant (season 5, episode 17, to be precise). For those of you who don’t know the episode: Bart wins a radio contest where you have to answer a phone call with the phrase, “KBBL is going to give me something stupid.” That “something stupid” turns out to be either $10,000 or “the gag prize”: a full-grown African elephant. Much to the presenters’ surprise, Bart chooses the elephant — which is a problem for the radio station, since they don’t actually have an elephant to give him. After some attempts at negotiation (the presenters offer Principal Skinner $10,000 to go about with his pants pulled down for the rest of the school year; they offer to use the $10,000 to turn Skinner into “some sort of lobster-like creature”), Bart finds himself kicked out of the radio station, screaming “where’s my elephant?”

…the “where’s my elephant?” theory holds the following:

  1. If you give someone a joke option, they will take it.
  2. The joke option is (usually) a joke option for a reason, and choosing it will cause everyone a lot of problems.
  3. In time, the joke will stop being funny, and people will just sort of lose interest in it.
  4. No one ever learns anything.

For those who were surprised when Trump was elected or when the Brexit referendum passed, the “Where’s My Elephant?” theory of history may provide some solace. With reference to Boaty McBoatface and to the assassination of Qasem Soleimani, Tom Whyman posits that “joke” options will be selected significantly more-often than you’d expect, or than they should be.

Our society is like Bart Simpson. But can we be a better Bart Simpson?

If that didn’t cheer you up: here’s another article, which more-seriously looks at the political long-game that Remainers in Britain might consider working towards.

Regarding the Thoughtful Cultivation of the Archived Internet

This article is a repost promoting content originally published elsewhere.

With 20+ years of kottke.org archives, I’ve been thinking about this issue [continuing to host old content that no longer reflects its authors views] as well. There are many posts in the archive that I am not proud of. I’ve changed my mind in some cases and no longer hold the views attributed to me in my own words. I was too frequently a young and impatient asshole, full of himself and knowing it all. I was unaware of my privilege and too frequently assumed things of other people and groups that were incorrect and insensitive. I’ve amplified people and ideas in the past that I wouldn’t today.

Very much this! As another blogger with a 20+ year archive, I often find myself wondering how much of an impression of me is made upon my readers by some of my older posts, and what it means to retain them versus the possibility – never yet exercised – of deleting them. I certainly have my fair share of posts that don’t represent me well or that are frankly embarrassing, in hindsight!

I was thinking about this recently while following a thread on BoardGameGeek in which a poster advocated for the deletion of a controversial article from the site because, as they said:

…people who stumble on our site and see this game listed could get a very (!!!) bad impression of the hobby…

This is a similar concern: a member of an online community is concerned that a particular piece of archived content does not reflect well on them. They don’t see any way in which the content can be “fixed”, and so they propose that it is removed from the community. Parallels can be drawn to the deletionist faction within Wikipedia (if you didn’t know that Wikipedia had large-scale philosophical disputes before now, you’re welcome: now go down the meta-wiki rabbit hole).

As for my own blog, I fall on the side of retention: it’s impossible to completely “hide” my past by self-censorship anyway, as there’s sufficient archives and metadata to reconstruct it, and moreover it feels dishonest to try. Instead, though, I do occasionally append rebuttals to older posts – where I’ve time! – to help contextualise them and show that they’re outdated. I’ve even considered partially automating this by e.g. adding a “tag” that I can rapidly apply to older posts that haven’t aged well, which would in turn add a disclaimer to the top of them.

Cool URIs don’t change. But the content behind them can. The fundamental message ought to be preserved, where possible, and so appending and retaining history seems to be a more-valid approach than wholesale deletion.

The Simple Joy of “No Phones Allowed”

This article is a repost promoting content originally published elsewhere.

A few nights ago I saw Jack White in concert. It was a wonderful night, and a big part of that was due to a new rule he has imposed on all his tour dates: no phones.

When you arrive, you have to put your phone into a neoprene pouch, supplied by a company called Yondr, which they lock and give back to you. If you want to use your phone during the show, you can go into the concourse and unlock it by touching it to one of several unlocking bases. The concert area itself remains screen-free.

The effect was immediately noticeable upon entering the concert bowl. Aside from the time-travel-like strangeness of seeing a crowd devoid of blue screens, there was a palpable sense of engagement, as though—and it sounds so strange to say it—everyone came just so they could be there.

The most-significant observation in this article, in my mind, was that even putting a 20-second delay between people and their phones – that is, making them walk out to the concourse to unlock their pouches – was sufficient to dramatically reduce the temptation to use them. That’s amazing, but unsurprising: Veritasium recently did a video about boredom and how the desire to avoid ever feeling bored (despite its scientifically-demonstrable benefits), coupled with easy access to instant stimulation from our smart devices, leads us into the well-known check-phone, put-it-away, check-it-again cycle (or else “zombie smartphoning”).

I’ve been trying to be better about paying less-attention to my phone, this year, and it’s gone well… except that (as David also observes in the linked article) I’ve become more acutely aware of the feeling of the conversational/interpersonal “void” created when somebody else chances a “quick check” of their phone under the table. I used to blame social media mostly for this – and I still think that it’s an issue, and it’s certainly true that my Facebook/Twitter/Reddit-heavier-using friends are the biggest culprits for getting lost in their devices – but I’ve come to see it as a bigger, more-human issue, coupled with the availability of the new technologies around us.

Similar to how we eat too much fat, and sugar, and meat… simply because it’s available, we crave the stimulation that we can easily get from the device in our pocket to such an extent that we’ve become unhealthy in our habits.

“Stop Thinking About Consent: It Isn’t Possible and It Isn’t Right”

This article is a repost promoting content originally published elsewhere.

For a philosopher, Helen Nissenbaum is a surprisingly active participant in shaping how we collect, use, and protect personal data. Nissenbaum, who earned her PhD from Stanford, is a professor of information science at Cornell Tech, New York City, where she focuses on the intersection of politics, ethics, and values in technology and digital media — the hard stuff. Her framework for understanding digital privacy has deeply influenced real-world policy.

In addition to several books and countless papers, she’s also coauthored privacy plug-ins for web browsers including TrackMeNot, AdNauseam, and Adnostic. Nissenbaum views these pieces of code as small efforts at rationalizing a marketplace where opaque consent agreements give consumers little bargaining power against data collectors as they extract as much information, and value from this information, as they can. Meanwhile, these practices offer an indefinite value proposition to consumers while compromising the integrity of digital media, social institutions, and individual security.

Neural nets respond to pranks like children do

A recent article by Janelle Shane describes her experience with Microsoft Azure’s image processing API. If you’ve not come across her work before, I recommend starting with her candy hearts, or else new My Little Pony characters, invented by a computer. Anyway:

The Azure image processing API is a software tool powered by a neural net, a type of artificial intelligence that attempts to replicate a particular model of how (we believe) brains work: connecting inputs (in this case, pixels of an image) to the entry nodes of a large, self-modifying network and reading the output, “retraining” the network based on feedback from the quality of the output it produces. Neural nets have loads of practical uses and even more theoretical ones, but Janelle’s article was about how confused the AI got when shown certain pictures containing (or not containing!) sheep.
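To make that “retraining based on feedback” loop a little more concrete, here’s a minimal sketch in Python (using numpy). To be clear, this is a toy of my own construction and has nothing to do with Azure’s actual models, which are vastly larger and not public: it just shows the forward-pass/feedback/weight-nudge cycle on a tiny problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (XOR): the four inputs stand in for "pixels", the targets for labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly-initialised weights: 2 inputs -> 4 hidden nodes -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: feed the inputs through the network and read the output.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Feedback ("retraining"): nudge every weight in whichever direction
    # reduces the error, working backwards from the output.
    error = y - output
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ grad_output * 0.5
    W1 += X.T @ grad_hidden * 0.5

print(output.round(2))  # converges towards [0, 1, 1, 0]
```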

A foggy field, incorrectly identified by an AI as containing sheep.
There are probably sheep in the fog somewhere, but they’re certainly not visible.

The AI had clearly been trained with lots of pictures that contained green, foggy, rural hillsides and sheep, and had come to associate the two. Remember that all the machine is doing is learning to associate keywords with particular features, and it’s clearly been shown many pictures that “look like” this that do contain sheep, and so it’s come to learn that “sheep” is one of the words that you use when you see a scene like this. Janelle took to Twitter to ask for pictures of sheep in unusual places, and the Internet obliged.
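To illustrate quite how easily that kind of association happens, here’s a deliberately-contrived toy (again my own Python sketch, bearing no resemblance to Janelle’s experiments or Azure’s internals): a classifier whose training photos of sheep almost always also show a green hillside ends up letting the hillside carry the predictive weight.

```python
import numpy as np

# Each "photo" is reduced to two hand-picked features:
# [looks like a green hillside, contains a woolly blob].
# In this biased training set, sheep only ever appear on hillsides, and
# the woolly blob - being small - is only spotted half the time.
X = np.array([
    [1, 1],  # hillside, sheep visible     -> "sheep"
    [1, 0],  # hillside, sheep missed      -> "sheep"
    [0, 0],  # city street, nothing woolly -> "no sheep"
    [0, 0],  # living room, nothing woolly -> "no sheep"
], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

# Train a simple logistic-regression classifier by gradient ascent.
w = np.zeros(2)
b = 0.0
for _ in range(1_000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w += X.T @ (y - p) * 0.1
    b += (y - p).sum() * 0.1

def p_sheep(features):
    return 1 / (1 + np.exp(-(np.array(features) @ w + b)))

print(p_sheep([1, 0]))  # empty green hillside: confidently "sheep"
print(p_sheep([0, 1]))  # sheep somewhere unexpected: barely registers
```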

An AI mistakes a sheep for a dog when it is held by a child.
When the sheep is held by a child, it becomes a “dog”.

Many of the experiments resulting from this – such as the one shown above – work well to demonstrate this hyper-focus on context: a sheep up a tree is a bird, a sheep on a lead is a dog, a sheep painted orange is a flower, and so on. And while we laugh at them, there’s something about them that’s actually pretty… “human”.

Annabel with a goat.
Our eldest really loves cats. Also goats, apparently. Azure described this photo as “a person wearing a costume”, but it did include keywords such as “small”, “girl”, “petting”, and… “dog”.

I say this because I’ve observed similar quirks in the way that small children pick up language, too (conveniently, I’ve got a pair of readily-available subjects, aged 4 and 1, for my experiments in language acquisition…). You’ve probably seen it yourself: a toddler whose “training set” of data has principally included a suburban landscape describing the first cow they see as a “dog”. Or when they use a new word or phrase they’ve learned in a way that makes no sense in the current context, like when our eldest interrupted dinner to say, in the most-polite voice imaginable, “for God’s sake would somebody give me some water please”. And just the other day, the youngest waved goodbye to an empty room, presumably because it’s one that he often leaves on his way up to bed.

Annabel snuggling one of Nanna Doreen's cats.
“A cat lying on a blanket”, says Azure, completely overlooking the small child in the picture. I guess the algorithm was trained on an Internet’s worth of cat pictures and didn’t see as much of people-with-cats.

For all we joke, this similarity between the ways in which artificial neural nets and small humans learn language is perhaps the most-accessible evidence that neural nets are a strong (if imperfect) model for how brains actually work! The major differences between the two might be simply that:

  1. Our artificial neural nets are significantly smaller and less-sophisticated than most biological ones.
  2. Biological neural nets (brains) benefit from continuous varied stimuli from an enormous number of sensory inputs, and will even self-stimulate (via, for example, dreaming) – although the latter is something with which AI researchers sometimes experiment.
John looking out of the window.
“Ca’! Ca’! Ca’!” Maybe if he shouts it excitedly enough, one of the cats (or dogs, which are for now just a special kind of cat) he’s spotted will give in and let him pet it. But I don’t fancy his chances.

Things we take as fundamental, such as the nouns we assign to the objects in our world, are actually social/intellectual constructs. Our minds are powerful general-purpose computers, but they’re built on top of a biology with far simpler concerns: about what is and is-not part of our family or tribe, about what’s delicious to eat, about which animals are friendly and which are dangerous, and so on. Insofar as artificial neural nets are an effective model of human learning, the way they react to “pranks” like these might reveal underlying truths about how we perceive the world.

And maybe somewhere, an android really is dreaming of an electric sheep… only it’s actually an electric cat.


An Oxford book store is cashing in on the success of The Good Place by selling the moral philosophy and ethics books Chidi references in the series.

This article is a repost promoting content originally published elsewhere.

An Oxford book store is celebrating the success of The Good Place by selling the moral philosophy and ethics books referenced by Chidi Anagonye (William Jackson Harper) in the series – and its efforts are going viral.

The popular NBC and Netflix series aired its season two finale last week, and to commemorate that, Oxford’s Broad Street branch of Blackwell’s has put up a book stand titled ‘Chidi’s Choice’.

If you’ve not been watching The Good Place then, well: you should have been.

In Defense Of A Boring, Comfortable Life

This article is a repost promoting content originally published elsewhere.

They call it “the easy life” for a reason.

I got my degree from a small, public college in upstate New York.

People went there, got their degrees and then went off to have quiet and sometimes boring lives. Although a degree from the State University of New York (SUNY) system is a valuable commodity outside of New York, when you’re surrounded by hundreds of thousands of graduates in your home state, it doesn’t get you very far. And for most of the people who got a degree from my school, and others like it, that was OK by them. A comfortable, unadventurous life was something they wanted. In part, that’s why they went to a state school in the first place. As SUNY Potsdam will tell anyone who asks, most of their students come from within a two- to three-hour driving distance because they want to be close to home…

Orange

Which came first, orange or orange?

Let me try that again: which came first, the colour or the fruit?

A variety of shades of orange.
Oranges

Still not quite right – one more try: which came first, orange, the English name of the colour, or orange, the English name of the fruit? What I really want to know is: is the fruit named after the colour, or the colour after the fruit? (I find it hard to believe that the two share a name and colour simply by coincidence.)

Orange fruit and blossom hanging from the tree.
Oranges

It turns out that the fruit came first. Prior to the introduction of oranges to Western Europe in around the 16th or 17th century by Portuguese merchants, English-speaking countries referred to the colour by the name ġeolurēad. Say that Old English word out loud and you’ll hear its roots: it’s a combination of the historical versions of the words “yellow” and “red”. Alternatively, people substituted words like “gold” or “amber”: both also words for naturally-occurring substances whose identity is confirmed by their colouration.

Bitter oranges growing in Prague (they don't naturally occur there; these ones are in a botanical garden).
Green oranges. These oranges are what are now known as ‘bitter oranges’, the only variety to grow naturally: the ‘sweet oranges’ you’re used to eating are entirely a domesticated species.

There wasn’t much need for a dedicated word in English to describe the colour before the introduction of the fruit, because there wasn’t much of that colour around. The colour orange isn’t common in nature: a few fruits, copper-rich soils and rocks, a small number of tropical fish, a handful of flowers… and of course autumn leaves during that brief period before they go brown and are washed away by Britain’s encroaching winter weather.

A "rainbow" of the visible spectrum, with key colour "areas" marked.
The names for the parts of the visible spectrum are reasonably arbitrary, but primary colours tend to cover a broader “space” than secondary ones; presumably because it’s easier for humans to distinguish between colours that trigger multiple types of receptors in the eye.

Brent Berlin and Paul Kay theorise that the evolution of a language tends towards the introduction of words for particular colours in a strict order: so words to distinguish between green and blue (famously absent in Japanese, Vietnamese, and Thai) are introduced before brown is added, which in turn appears before the distinction of pink, orange, and grey. At a basic level, this seems to fit: looking at a variety of languages and their words for different colours, you’ll note that the ‘orange’ column is filled far less-often than the ‘brown’ column, which in turn is filled less-often than the ‘green’ column.
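For the curious, that sequence can be sketched as a series of cumulative tiers. This is my own simplification in Python, of course: the actual Berlin–Kay theory allows the early stages to occur in more than one order, and plenty of languages refuse to fit it neatly.

```python
# One common summary of the Berlin & Kay sequence, as cumulative tiers.
BERLIN_KAY_TIERS = [
    {"black", "white"},                    # Stage I: dark vs. light
    {"red"},                               # Stage II
    {"green", "yellow"},                   # Stages III-IV (either order)
    {"blue"},                              # Stage V: green/blue finally split
    {"brown"},                             # Stage VI
    {"purple", "pink", "orange", "grey"},  # Stage VII: orange arrives last
]

def colour_terms(tiers_reached):
    """Basic colour terms a language is predicted to have so far."""
    terms = set()
    for tier in BERLIN_KAY_TIERS[:tiers_reached]:
        terms |= tier
    return terms

print(colour_terms(4))  # distinguishes green from blue, but no 'brown' yet...
print(colour_terms(6))  # ...and 'orange' only appears in the final tier
```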

Electromagnetic spectrum with visible light highlighted
Of course, from a non-anthropocentric perspective, the “visible spectrum” is just a tiny part of the range of frequencies of electromagnetic radiation that we, and other animals, make use of.

This is a rather crude analogy, of course, because some languages go further than others in their refinement of a particular area of the spectrum. Greek, for example, breaks down what we would call “blue” into τυρκουάζ (turquoise) and κυανό (azure), and arguably βιολέ (violet), although a Greek-speaker would probably put the latter down as a shade of purple, rather than of blue. It makes sense, I suppose, that languages are expected to develop a name for the colour “red” before any other (except those needed to differentiate between darkness and lightness) – a lot of important distinctions in biology, food, and safety depend on our ability to communicate about red things! But it seems to me that we’ve still got a way to go, working on our linguistic models of colour.

The CIE 1931 colour space.
Factor in the ability of the human eye to distinguish between different colours, and you get a far more-complex picture than a simple linear spectrum.

If we’d evolved on Mars (and were still a sighted, communicative, pack creature, but – for some reason – still had a comparable range and resolution of colour vision), our languages would probably contain an enormous variety of words for colours in the 650-750 nanometre wavelengths (the colours that English speakers universally call “red”). Being able to navigate the red planet based on the different ratios of hematites in the rocks, plains, soils and dusts would doubtless mean that the ability to linguistically distinguish between a dark-red feature and a medium-red feature could be of great value!

Photograph of Mars as taken by a rover.
Mars. It’s pretty damn red.

The names we have for colours represent a part of our history, and our environment. From an anthropological and linguistic perspective, that’s incredibly interesting.

A rainbow (middle), compared to its computed calculation (below) and a sample of the EM spectrum (top).
All six colours of the rainbow. No, wait… nine? Three? A hundred? It’s all about how you name them.

If it weren’t for the ubiquity of, say, violets and lavender in the Northern hemisphere, perhaps the English language wouldn’t have needed a word for that particular colour, and the rainbow would have six colours instead of seven. And if I said, “Richard Of York Gave Battle In…”, nobody would know how to finish the sentence.

In other news, I recently switched phone network, and I’m now on Orange (after many years on Vodafone). There is no connection between this fact and this blog post; I just thought I’d share.
