Review of Bodleian Library Café

This review of Bodleian Library Café originally appeared on Google Maps. See more reviews by Dan.

Nice cakes and large mugs of hot drinks in the fabulous Blackwell Hall. Beware the “no laptops” policy in the afternoon, though: they’re militant about policing it!

Neural nets respond to pranks like children do

A recent article by Janelle Shane describes her experiences with Microsoft Azure’s image processing API. If you’ve not come across her work before, I recommend starting with her candy hearts or her new My Little Pony characters, both invented by a computer. Anyway:

The Azure image processing API is a software tool powered by a neural net, a type of artificial intelligence that attempts to replicate a particular model of how (we believe) brains work: inputs (in this case, the pixels of an image) are connected to the entry nodes of a large, self-modifying network, the output is read from the far end, and the network is “retrained” based on feedback about the quality of the output it produces. Neural nets have loads of practical uses and even more theoretical ones, but Janelle’s article was about how confused the AI got when shown certain pictures containing (or not containing!) sheep.
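
If you’ve never seen inside one of these things, here’s a minimal sketch in JavaScript of that “retrain on feedback” loop. To be clear, it’s a single toy neuron rather than anything like Azure’s vastly larger model, and the three-number “picture” and its label are entirely made up for illustration:

```javascript
// One toy neuron: weighted inputs, a squashing function, and a feedback
// step that nudges the weights towards better answers. Real image models
// stack millions of these, but the learning loop is the same in spirit.
function trainStep(weights, inputs, target, learningRate) {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0);
  const output = 1 / (1 + Math.exp(-sum)); // confidence between 0 and 1
  const error = target - output;           // feedback: how wrong were we?
  return weights.map((w, i) => w + learningRate * error * inputs[i]);
}

// Hypothetical features: [greenness, fogginess, woolliness] of a picture.
// Show it enough green, foggy scenes labelled "sheep" and it learns
// exactly the association that fooled Azure.
let weights = [0, 0, 0];
for (let i = 0; i < 1000; i++) {
  weights = trainStep(weights, [0.9, 0.8, 0.1], 1, 0.5);
}
```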

A foggy field, incorrectly identified by an AI as containing sheep.
There are probably sheep in the fog somewhere, but they’re certainly not visible.

The AI had clearly been trained with lots of pictures that contained green, foggy, rural hillsides and sheep, and had come to associate the two. Remember that all the machine is doing is learning to associate keywords with particular features: it’s been shown many pictures that “look like” this one and do contain sheep, so it’s come to learn that “sheep” is one of the words you use when you see a scene like this. Janelle took to Twitter to ask for pictures of sheep in unusual places, and the Internet obliged.

An AI mistakes a sheep for a dog when it is held by a child.
When the sheep is held by a child, it becomes a “dog”.

Many of the experiments resulting from this – such as the one shown above – work well to demonstrate this hyper-focus on context: a sheep up a tree is a bird, a sheep on a lead is a dog, a sheep painted orange is a flower, and so on. And while we laugh at them, there’s something about them that’s actually pretty… “human”.

Annabel with a goat.
Our eldest really loves cats. Also goats, apparently. Azure described this photo as “a person wearing a costume”, but it did include keywords such as “small”, “girl”, “petting”, and… “dog”.

I say this because I’ve observed similar quirks in the way that small children pick up language, too (conveniently, I’ve got a pair of readily-available subjects, aged 4 and 1, for my experiments in language acquisition…). You’ve probably seen it yourself: a toddler whose “training set” of data has principally included a suburban landscape describing the first cow they see as a “dog”. Or when they use a new word or phrase they’ve learned in a way that makes no sense in the current context, like when our eldest interrupted dinner to say, in the most-polite voice imaginable, “for God’s sake would somebody give me some water please”. And just the other day, the youngest waved goodbye to an empty room, presumably because it’s one that he often leaves on his way up to bed.

Annabel snuggling one of Nanna Doreen's cats.
“A cat lying on a blanket”, says Azure, completely overlooking the small child in the picture. I guess the algorithm was trained on an Internet’s worth of cat pictures and didn’t see as much of people-with-cats.

For all we joke, this similarity between the ways in which artificial neural nets and small humans learn language is perhaps the most-accessible evidence that neural nets are a strong (if imperfect) model for how brains actually work! The major differences between the two might be simply that:

  1. Our artificial neural nets are significantly smaller and less-sophisticated than most biological ones.
  2. Biological neural nets (brains) benefit from continuous varied stimuli from an enormous number of sensory inputs, and will even self-stimulate (via, for example, dreaming) – although the latter is something with which AI researchers sometimes experiment.

John looking out of the window.
“Ca’! Ca’! Ca’!” Maybe if he shouts it excitedly enough, one of the cats (or dogs, which are for now just a special kind of cat) he’s spotted will give in and let him pet it. But I don’t fancy his chances.

Things we take as fundamental, such as the nouns we assign to the objects in our world, are actually social/intellectual constructs. Our minds are powerful general-purpose computers, but they’re built on top of a biology with far simpler concerns: about what is and is-not part of our family or tribe, about what’s delicious to eat, about which animals are friendly and which are dangerous, and so on. Insofar as artificial neural nets are an effective model of human learning, the way they react to “pranks” like these might reveal underlying truths about how we perceive the world.

And maybe somewhere, an android really is dreaming of an electric sheep… only it’s actually an electric cat.

Games to Play With Your Child in Which You Barely Have to Move or Talk

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Games to Play With Your Child in Which You Barely Have to Move or Talk by Raquel D’Apice (The Ugly Volvo)

I see a lot of ideas online for things to do with your child, but most of them are a lot of work. Many of them involve an unnerving amount of craftiness and/or require going out to buy things. Almost all of them involve moving around which, many days, is fine, but some days can be pretty rough. Not that I don’t love getting down on the floor and playing with my kid (I love it a great deal) but I’m an adult in my mid-thirties. I can pretend to be a dinosaur for about 90 minutes (something I happily list on my professional resume) but after an hour and a half, all bets are off. And given that many days I’m home with my son for over eight hours, things can get a bit dicey.

I’ve taken the liberty of brainstorming some fun child/parent activities in which your child can be adventurous and creative and you can lie on the sofa reading a book. Here’s my list so far.

We’ve Always Hated Girls Online: A Wayback Machine Investigation

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

We’ve Always Hated Girls Online: A Wayback Machine Investigation by Julianne Aguilar (Longreads)

Once upon a time, in 1999, when the internet was small, when it came through your phone and not just on your phone, when the first browser war had not yet been won, when you had to teach yourself a few lines of code if you wanted to exist online, when the idea of broadcasting your real name for anyone to see was unthinkable — in those early days, before Twitter revolutions, before Facebook Live homicides, when the internet was small and most people didn’t understand it, and only the nerds hung out there — even then, it was already happening.

Even then, people hated girls on the internet.

Lunar Lander

Just want to play my game without reading this whole post? Play the game here – press a key, mouse button, or touch the screen to fire the thrusters, and try to land at less than 4 m/s with as much fuel left over as possible.

In 1969, when all the nerds were still excited by sending humans to the moon instead of flinging cars around the sun, the hottest video game was Rocket (or Lunar) for the PDP-8. Originally implemented in FOCAL by high school student Jim Storer and soon afterwards ported to BASIC (the other dominant language to come as standard with microcomputers), Rocket became the precursor to an entire genre of video games called “Lunar Lander games”.

Source code of Rocket and sample output.
Like many pieces of minicomputer software of the time, Rocket was distributed as printed source code that you’d need to carefully type in at the other end.

The aim of these games was to land a spacecraft on the moon or similar body by controlling the thrust (and in some advanced versions, the rotation) of the engine. The spacecraft begins in freefall towards the surface and will accelerate under gravity: this can be counteracted with thrust, but engaging the engine burns through the player’s limited supply of fuel. Furthermore, using fuel lowers the total mass of the vessel (a large proportion of the mass of the Apollo landers was fuel for use in the descent stage) which reduces its inertia, giving the engine more “kick” which must be compensated for during the critical final stages. It sounds dry and maths-y, but I promise that graphical versions can usually be played entirely “by eye”.
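
To make the physics concrete, here’s a minimal sketch in JavaScript of the kind of simulation step such a game runs. It isn’t the source of my game or anybody else’s, and all the constants (and the crude autopilot at the end) are invented purely for illustration:

```javascript
const GRAVITY = 1.62;   // m/s² on the lunar surface
const THRUST = 5000;    // N of engine force while firing (made up)
const BURN_RATE = 2;    // kg of fuel burned per second of thrust (made up)
const DRY_MASS = 2000;  // kg of lander with empty tanks (made up)

// Advance the simulation by dt seconds. Velocity is measured downwards.
function step(state, thrusting, dt) {
  const firing = thrusting && state.fuel > 0;
  const mass = DRY_MASS + state.fuel;             // total mass shrinks as fuel burns...
  const engineAccel = firing ? THRUST / mass : 0; // ...so the same thrust kicks harder
  const velocity = state.velocity + (GRAVITY - engineAccel) * dt;
  return {
    velocity,
    altitude: state.altitude - velocity * dt,
    fuel: firing ? Math.max(0, state.fuel - BURN_RATE * dt) : state.fuel,
  };
}

// e.g. starting in freefall with a part-full tank, firing whenever we're
// descending faster than 3 m/s (a crude "bang-bang" autopilot):
let state = { velocity: 0, altitude: 500, fuel: 250 };
while (state.altitude > 0) {
  state = step(state, state.velocity > 3, 0.1);
}
console.log(state.velocity <= 4 ? "Safe landing!" : "Crash!", state);
```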

Atari's Lunar Lander (1979)
Atari’s 1979 adaptation is perhaps the classic version you’d recognise, although its release was somewhat overshadowed by their other vector-graphics space-themed release in 1979: Asteroids.

Let’s fast-forward a little. In 1997 I enrolled to do my A-levels at what was then called Preston College, where my Computing tutor was a chap called Kevin Geldard: you can see him at 49 seconds into this hilariously low-fi video which I guess must have been originally shot on VHS despite being uploaded to YouTube in 2009. He’s an interesting chap in his own right whose contributions to my career in computing deserve their own blog post, but for the time being all you need to know is that he was the kind of geek who, like me, writes software “for fun” more often than not. Kevin owned a Psion 3 palmtop – part of a series of devices with which I also have a long history and interest – and he taught himself to program OPL by reimplementing a favourite game of his younger years on it: his take on the classic mid-70s-style graphical Lunar Lander.

Psion Series 3
I never owned a Psion Series 3 (pictured), but I bought a Series 5mx in early 2000 out of my second student loan cheque, ultimately wrote most of my undergraduate dissertation using it, and eventually sold it to a collector in about 2009 for almost as much as I originally paid for it. The 5mx was an amazing bit of kit. But I’ll blog about that another day, I guess.

My A-level computing class consisted of a competitive group of geeky lads, and we made a sort-of personal extracurricular challenge of re-implementing Kevin’s take on Lunar Lander using Turbo Pascal, the primary language in which our class was taught. Many hours out-of-class were spent in the computer lab, tweaking and comparing our various implementations (with only occasional breaks to play Spacy, CivNet, or my adaptation of LORD2): later, some of us would extend our competition by going on to re-re-implement in Delphi, Visual Basic, or Java, or by adding additional levels relating to orbital rendezvous or landing on other planetary bodies. I was quite proud of mine at the time: it was highly-playable, fun, and – at least on your first few goes – moderately challenging.

Dan's Lunar Lander (2018)
I sometimes wonder what it would have looked like if I’d implemented my 1997 Lunar Lander today. So I did.

Always game to try old new things, and occasionally finding time between the many things that I do to code, I decided to expand upon my recently-discovered interest in canvas coding to bring back my extracurricular Lunar Lander game of two decades ago in a modern format. My goals were:

  • A one-button version of a classic “straight descent only” lunar lander game (unlike my 1997 version, which had 10 engine power levels, this remake has just “on” and “off”)
  • An implementation based initially on real physics (although not necessarily graphically to scale)… and then adapted as necessary to give a fun/playability balance that feels good
  • Runs in a standards-compliant browser without the need for plugins: HTML5, Canvas, JavaScript
  • Adapts gracefully to any device, screen resolution, and orientation with graceful degradation/progressive enhancement

You can have a go at my game right here in your web browser! The aim is to reach the ground travelling at a velocity of no more than 4 m/s with the maximum amount of fuel left over: this, if anything, is your “score”. My record is 52% of fuel remaining, but honestly anything in the 40%+ range is very good. Touch the screen (if it’s a touchscreen) or press a mouse button or any key to engage your thrusters and slow your descent.
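
Incidentally, the whole “one-button” control scheme amounts to little more than a handful of standard DOM event listeners toggling a single flag that the physics loop reads. A sketch of the idea (again, not my game’s actual source):

```javascript
// Any key, mouse button, or touch engages the thrusters; releasing stops them.
let thrusting = false;
const engage = () => { thrusting = true; };
const release = () => { thrusting = false; };
window.addEventListener('keydown', engage);
window.addEventListener('keyup', release);
window.addEventListener('mousedown', engage);
window.addEventListener('mouseup', release);
window.addEventListener('touchstart', engage);
window.addEventListener('touchend', release);
```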

End point of Dan's Lunar Lander (2018)
“Houston, the Eagle has landed.” Kerbal Space Program, it isn’t. Here’s a very good landing: 3 m/s with 48% of the fuel tank remaining.

And of course it’s all open-source, so you’re more than welcome to take it, rip it apart, learn from it, or make something better out of it.

Digest for February 2018

Summary

This month I developed a workaround to WordPress/Jetpack’s (terrible) CAPTCHA, replaced “love” with “butt” in song lyrics (snortle), and argued with a pension provider that I shouldn’t need a doctor’s letter to justify changing my gender on their website from “not specified” to “not provided” because that isn’t actually a change of gender identity, just a change in how their systems store gender.

I also shared ideas on the Web’s position as a universal platform making it attractive as a universal solution, its approach to URLs as one of the (many) problems with AMP, and a link to one of the most-complex and interactive Abstruse Goose comics ever.

Bureaucracy vs. Common Sense

In addition to the pension I get from my “day job” employer, I maintain a pension pot with a separate private provider which I top up with money from my freelance work. I logged in to that second pension provider’s (reliably shonky, web-standards-violating) website about a month ago and found that I couldn’t do anything because they’d added a new mandatory field to the “My Profile” page and I wasn’t allowed to do anything else until I’d filled it out. No problem, I thought: a few seconds won’t kill me.

Neon sign showing the words "Work Harder"
If I’m lucky, I might be able to afford to retire this century.

The newly-added field turned out to be “Gender”, and as it was apparently unacceptable to leave this unspecified (as would be my preference: after all, I’ll certainly be retiring after November 2018, when gender will cease to have any legal bearing on retirement age), I clicked the drop-down to see what options they’d provided. “Not provided”, “Male”, and “Female” were the options: fine, I thought, I’ll just pick “Not provided” and be done with it. And for a while, everything seemed fine.

Gender field with options "Not provided", "Male", "Female".
Leaving the field as the undefined “Select One” option wasn’t valid (I tried!) so I changed the value.

Over three weeks later I received a message from them saying that they hadn’t yet been able to action the changes to my profile because they hadn’t yet received hard-copy documentary evidence from me. By this point, I’d forgotten about the minor not-really-a-change change I’d made and assumed that whatever they were on about was probably related to my unusual name. I sent a message back to them to ask exactly what kind of evidence they needed to see. And that’s when things got weird.

I received a message back – very-definitely from a human – to say that what they needed to see was evidence of my gender change. That is, my change of gender from “not specified” to “not provided”.

Fluttershy says "If I had fingers, I'd be showing you one."
Fluttershy gets it.

They went on to suggest that I could get my doctor to certify a letter verifying my gender change. Needless to say, I haven’t made an appointment to try to get my GP to sign a document that confirms that my gender is “not provided”. Instead, I’ve emailed back to ask them to read what they just asked me for again, and perhaps this time they’ll engage both brain cells and try to think about what they’re actually asking, rather than getting tied up in knots in their own bureaucratic process. Let’s see how that goes.