Tower of the Five Orders, Oxford OX1 3BW, United Kingdom.
Rating: ⭐⭐⭐⭐⭐
This iconic Oxford landmark is named for the architectural characteristics of each of its five floors. Each exhibits a different order – or “style” – of classical architecture: from
bottom to top – Tuscan, Doric, Ionic, Corinthian and Composite. Part of the joy of “discovering” the tower, visiting as a tourist, comes from the fact that despite its size it’s
unlikely to be the first thing you see as you enter the quad: coming in from the Great Gate, for example, it won’t be until you turn around and look up that you see it… and even at a
glance you won’t necessarily observe its unusual architecture unless you’ve been told to look specifically at the columns.
Nice cakes and large mugs of hot drinks in the fabulous Blackwell Hall. Beware the “no laptops” policy in the afternoon, though: they’re militant about policing it!
The Azure image processing API is a software tool powered by a neural net, a type of artificial intelligence that attempts to replicate a particular model of how (we believe)
brains work: connecting inputs (in this case, pixels of an image) to the entry nodes of a large, self-modifying network and reading the output, “retraining” the network based on
feedback from the quality of the output it produces. Neural nets have loads of practical uses and even more theoretical ones, but Janelle’s article was about how confused the
AI got when shown certain pictures containing (or not containing!) sheep.
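For the curious, querying an image-analysis API like this is pretty mundane. Here’s a rough sketch against Azure’s Computer Vision REST endpoint – the resource name, subscription key, API version, and image URL below are all placeholders, and the exact response shape varies between versions, so treat it as illustrative rather than copy-paste-ready:

```javascript
// Hedged sketch: analyse an image with Azure Computer Vision and print the
// caption plus the keyword "tags" and the network's confidence in each.
// ENDPOINT, KEY, the API version, and the image URL are placeholders.
const ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com";
const KEY = "YOUR-SUBSCRIPTION-KEY";

async function describeImage(imageUrl) {
  const response = await fetch(
    `${ENDPOINT}/vision/v3.2/analyze?visualFeatures=Description,Tags`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const result = await response.json();

  // A human-readable caption, e.g. "a person wearing a costume"...
  console.log(result.description.captions[0].text);

  // ...and the individual keywords it associates with the scene.
  for (const tag of result.tags) {
    console.log(`${tag.name}: ${(tag.confidence * 100).toFixed(1)}%`);
  }
}

describeImage("https://example.com/some-foggy-hillside.jpg");
```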
There are probably sheep in the fog somewhere, but they’re certainly not visible.
The AI had clearly been trained with lots of pictures that contained green, foggy, rural hillsides and sheep, and had come to
associate the two. Remember that all the machine is doing is learning to associate keywords with particular features, and it’s clearly been shown many pictures that “look like” this
that do contain sheep, and so it’s come to learn that “sheep” is one of the words that you use when you see a scene like this. Janelle took to Twitter to ask for pictures of sheep in unusual places, and the Internet obliged.
When the sheep is held by a child, it becomes a “dog”.
Many of the experiments resulting from this – such as the one shown above – work well to demonstrate this hyper-focus on context: a sheep up a tree is a bird, a sheep on a lead
is a dog, a sheep painted orange is a flower, and so on. And while we laugh at them, there’s something about them that’s actually pretty… “human”.
Our eldest really loves cats. Also goats, apparently. Azure described this photo as “a person wearing a costume”, but it did include keywords such as “small”, “girl”, “petting”, and…
“dog”.
I say this because I’ve observed similar quirks in the way that small children pick up language, too (conveniently, I’ve got a pair of readily-available subjects, aged 4 and 1, for my
experiments in language acquisition…). You’ve probably seen it yourself: a toddler whose “training set” of data has principally included a suburban landscape describing the first cow
they see as a “dog”. Or when they use a new word or phrase they’ve learned in a way that makes no sense in the current context, like when our eldest interrupted dinner to say, in the
most-polite voice imaginable, “for God’s sake would somebody give me some water please”. And just the other day, the youngest waved goodbye to an empty room, presumably because
it’s one that he often leaves on his way up to bed.
“A cat lying on a blanket”, says Azure, completely overlooking the small child in the picture. I guess the algorithm was trained on an Internet’s worth of cat pictures and didn’t see
as much of people-with-cats.
For all we joke, this similarity between the ways in which artificial neural nets and small humans learn language is perhaps the most-accessible evidence that neural nets are a
strong (if imperfect) model for how brains actually work! The major differences between the two might be simply that:
Our artificial neural nets are significantly smaller and less-sophisticated than most biological ones.
Biological neural nets (brains) benefit from continuous varied stimuli from an enormous number of sensory inputs, and will even self-stimulate (via, for example, dreaming) –
although the latter is something with which AI researchers sometimes experiment.
“Ca’! Ca’! Ca’!” Maybe if he shouts it excitedly enough, one of the cats (or dogs, which are for now just a special kind of cat) he’s spotted will give in and let him pet it. But I
don’t fancy his chances.
Things we take as fundamental, such as the nouns we assign to the objects in our world, are actually social/intellectual constructs. Our minds are powerful general-purpose computers,
but they’re built on top of a biology with far simpler concerns: about what is and is-not part of our family or tribe, about what’s delicious to eat, about which animals are friendly
and which are dangerous, and so on. Insofar as artificial neural nets are an effective model of human learning, the way they react to “pranks” like these might reveal underlying truths
about how we perceive the world.
Dwitter is a community of <canvas> artists who write Javascript programs of no
more than 140 characters to produce beautiful graphical output. Some favourites: Bluer waves, d/3697, d/4388. If the demoscene were invented
today, this is what it might look like.
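Fancy having a go yourself? A dweet is just the body of a function u(t), run against a shared canvas. Below is my rough approximation of the environment Dwitter provides – it’s not their actual runner, just enough scaffolding to show the idea – and the wavy-squares one-liner inside u() is a throwaway example of mine (it does fit within 140 characters), not one of the favourites linked above:

```javascript
// My approximation of Dwitter's environment, not their actual runner:
// c is the 1920×1080 canvas, x its 2D context, S is Math.sin (Dwitter also
// provides C, T, and R, which this example doesn't need).
const c = document.createElement("canvas");
c.width = 1920;
c.height = 1080;
document.body.appendChild(c);
const x = c.getContext("2d");
const S = Math.sin;

// Your entire "program" is the body of u(t), where t is seconds elapsed.
// This one draws a bobbing wave of little squares.
function u(t) {
  for (let i = 0; i < 192; i++) x.fillRect(i * 10, 540 + S(t * 2 + i / 9) * 99, 8, 8);
}

// Dwitter calls u(t) roughly 60 times per second; this loop approximates
// that, clearing the canvas each frame for simplicity.
let start;
requestAnimationFrame(function frame(now) {
  start = start ?? now;
  x.clearRect(0, 0, c.width, c.height);
  u((now - start) / 1000);
  requestAnimationFrame(frame);
});
```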
I see a lot of ideas online for things to do with your child, but most of them are a lot of work. Many of them involve an unnerving amount of craftiness and/or require going out to
buy things. Almost all of them involve moving around which, many days, is fine, but some days can be pretty rough. Not that I don’t love getting down on the floor and playing with my
kid (I love it a great deal) but I’m an adult in my mid-thirties. I can pretend to be a dinosaur for about 90 minutes (something I happily list on my professional resume) but after an
hour and a half, all bets are off. And given that many days I’m home with my son for over eight hours, things can get a bit dicey.
I’ve taken the liberty of brainstorming some fun child/parent activities in which your child can be adventurous and creative and you can lie on the sofa reading a book. Here’s my list
so far.
Julianne Aguilar | Longreads | February 2018 | 14 minutes (2,894 words)
Once upon a time, in 1999, when the internet was small, when it came through your phone and not just on your phone, when the first browser war had not yet been won,
when you had to teach yourself a few lines of code if you wanted to exist online, when the idea of broadcasting your real name for anyone to see was unthinkable — in those early days,
before Twitter revolutions, before Facebook Live homicides, when the internet was small and most people didn’t understand it, and only the nerds hung out there — even
then, it was already happening.
Hi, I'm Tom Scott. These are some of the things I've made and done. They'll probably come back to haunt me in a few years' time. (Want to get in touch about …
Just want to play my game without reading this whole post? Play the game here – press a key, mouse button, or touch the screen to fire the
thrusters, and try to land at less than 4 m/s with as much fuel left over as possible.
In 1969, when all the nerds were still excited by sending humans to the moon instead of flinging cars around the sun, the hottest video game was Rocket (or Lunar) for the PDP-8. Originally implemented in FOCAL by high school student Jim Storer and soon afterwards ported to BASIC (the other dominant language to come as
standard with microcomputers), Rocket became the precursor to an entire genre of video games called “Lunar Lander games”.
Like many pieces of microcomputer software of the time, Rocket was distributed as printed source code that you’d need to carefully type in at the other end.
The aim of these games was to land a spacecraft on the moon or similar body by controlling the thrust (and in some advanced versions, the rotation) of the engine. The spacecraft begins
in freefall towards the surface and will accelerate under gravity: this can be counteracted with thrust, but engaging the engine burns through the player’s limited supply of fuel.
Furthermore, using fuel lowers the total mass of the vessel (a large proportion of the mass of the Apollo landers was fuel for use in the descent stage) which reduces its inertia,
giving the engine more “kick” which must be compensated for during the critical final stages. It sounds dry and maths-y, but I promise that graphical versions can usually be played
entirely “by eye”.
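In case that sounds more intimidating than it is: here’s roughly the per-frame update a game like this needs. It’s a hedged sketch with made-up constants and variable names, not the code from my implementation (nor from any of the classics):

```javascript
// Illustrative constants only – a real game tunes these for playability.
const GRAVITY = 1.62;   // m/s², lunar surface gravity
const THRUST = 8000;    // N, engine force while the button is held
const BURN_RATE = 2;    // kg of fuel burned per second of thrust
const DRY_MASS = 2000;  // kg, the lander with an empty tank

let altitude = 500;     // m above the surface
let velocity = 0;       // m/s, positive = descending
let fuel = 300;         // kg remaining

function step(dt, thrusting) {
  const mass = DRY_MASS + fuel;
  let accel = GRAVITY;                         // gravity always pulls you down
  if (thrusting && fuel > 0) {
    accel -= THRUST / mass;                    // F = ma: a lighter craft gets more "kick"
    fuel = Math.max(0, fuel - BURN_RATE * dt); // ...and burning fuel is what lightens it
  }
  velocity += accel * dt;
  altitude = Math.max(0, altitude - velocity * dt);
  if (altitude === 0) {
    console.log(velocity < 4 ? "The Eagle has landed." : "You've made a new crater.");
  }
}
```

The only subtle bit is that the thrust acceleration is recomputed from the current mass every frame, which is exactly where that end-of-descent “kick” comes from.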
Atari’s 1979 adaptation is perhaps the classic version you’d recognise, although its release was somewhat overshadowed by their other vector-graphics space-themed release in 1979:
Asteroids.
Let’s fast-forward a little. In 1997 I enrolled to do my A-levels at what was then called Preston College, where my Computing tutor was a chap
called Kevin Geldard: you can see him at 49 seconds into this hilariously low-fi video which I guess must have been originally shot on
VHS despite being uploaded to YouTube in 2009. He’s an interesting chap in his own right whose contributions to my career in computing deserve their own blog post, but for the time
being all you need to know is that he was the kind of geek who, like me, writes software “for fun” more often than not. Kevin owned a Psion 3 palmtop – part of a series of devices with
which I also have a long history and interest – and he taught himself to program in OPL by reimplementing a favourite game of his younger years on it: his take on the classic mid-70s-style graphical Lunar Lander.
I never owned a Psion Series 3 (pictured), but I bought a Series 5mx in early 2000 out of my second student loan cheque, ultimately wrote most of my undergraduate dissertation using
it, and eventually sold it to a collector in about 2009 for almost as much as I originally paid for it. The 5mx was an amazing bit of kit. But I’ll blog about that another day, I
guess.
My A-level computing class consisted of a competitive group of geeky lads, and we made a sort-of personal extracurricular challenge for ourselves of re-implementing Kevin’s take on
Lunar Lander using Turbo Pascal, the primary language in which our class was taught. Many hours out-of-class were spent
in the computer lab, tweaking and comparing our various implementations (with only occasional breaks to play Spacy, CivNet, or my adaptation of LORD2): later, some of us would extend our competition by
going on to re-re-implement in Delphi, Visual Basic, or Java, or by adding additional levels relating to orbital rendezvous or landing on other planetary bodies. I was quite
proud of mine at the time: it was highly-playable, fun, and – at least on your first few goes – moderately challenging.
I sometimes wonder what it would have looked like if I’d have implemented my 1997 Lunar Lander today. So I did.
Always game to try old new things, and occasionally finding time between the many things that I do to code, I decided to expand upon my recently-discovered
interest in canvas coding to bring back my extracurricular Lunar Lander game of two decades ago in a modern format. My goals were:
A one-button version of a classic “straight descent only” lunar lander game (unlike my 1997 version, which had 10 engine power levels, this remake has just “on” and “off”)
An implementation based initially on real physics (although not necessarily graphically to scale)… and then adapted as necessary to give a fun/playability balance that feels good
Runs in a standards-compliant browser without need for plugins: HTML5, Canvas, Javascript
Adapts gracefully to any device, screen resolution, and orientation with graceful degradation/progressive enhancement (the input and resize plumbing is sketched below)
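To give a flavour of what the one-button and fits-any-screen goals mean in practice, here’s a hedged sketch of the input and resize plumbing – the names are illustrative and none of it is lifted from the game’s actual source:

```javascript
// One input to rule them all: any key, mouse button, or touch fires the
// thrusters, and releasing it cuts them.
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d");

let thrusting = false;
for (const on of ["keydown", "mousedown", "touchstart"])
  window.addEventListener(on, () => (thrusting = true));
for (const off of ["keyup", "mouseup", "touchend"])
  window.addEventListener(off, () => (thrusting = false));

// Re-fit the canvas whenever the viewport changes size or orientation.
function resize() {
  canvas.width = window.innerWidth * devicePixelRatio;
  canvas.height = window.innerHeight * devicePixelRatio;
  ctx.scale(devicePixelRatio, devicePixelRatio); // keep drawing units in CSS pixels
}
window.addEventListener("resize", resize);
resize();
```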
You can have a go at my game right here in your web browser! The aim is to reach the ground travelling at a velocity of no more than 4 m/s
with the maximum amount of fuel left over: this, if anything, is your “score”. My record is 52% of fuel remaining, but honestly anything in the 40%+ range is very good. Touch the screen
(if it’s a touchscreen) or press a mouse button or any key to engage your thrusters and slow your descent.
“Houston, the Eagle has landed.” Kerbal Space Program, it isn’t. Here’s a very good landing: 3 m/s with 48% of the fuel tank remaining.
And of course it’s all open-source, so you’re more than welcome to take it, rip it apart, learn from it, or make something better out
of it.
In addition to the pension I get from my “day job” employer, I maintain a pension pot with a separate private provider which I top up with
money from my freelance work. I logged in to that second pension provider’s (reliably shonky, web-standards-violating) website about a month ago and found that I couldn’t do anything
because they’d added a new mandatory field to the “My Profile” page and I wasn’t allowed to do anything else until I’d filled it out. No problem, I thought: a few seconds won’t kill me.
If I’m lucky, I might be able to afford to retire this century.
The newly-added field turned out to be “Gender”, and as it was apparently unacceptable to leave this unspecified (as would be my preference: after all,
I’ll certainly be retiring after November 2018, when gender will cease to have any legal bearing on retirement age), I clicked the drop-down to see what options they’d provided. “Not
provided”, “Male”, and “Female” were the options: fine, I thought, I’ll just pick “Not provided” and be done with it. And for a while, everything seemed fine.
Leaving the field as the undefined “Select One” option wasn’t valid (I tried!) so I changed the value.
Over three weeks later I received a message from them saying that they hadn’t yet been able to action the changes to my profile because they hadn’t yet received hard-copy documentary
evidence from me. By this point, I’d forgotten about the minor not-really-a-change change I’d made and assumed that whatever they were on about was probably related to my unusual name. I sent a message back to them to ask exactly what kind of evidence they needed to see. And that’s when things got weird.
I received a message back – very-definitely from a human – to say that what they needed to see was evidence of my gender change. That is, my change of gender from “not specified” to
“not provided”.
Fluttershy gets it.
They went on to suggest that I could get my doctor to certify a letter verifying my gender change. Needless to say, I haven’t made an appointment to try to get my GP to sign a document
that confirms that my gender is “not provided”. Instead, I’ve emailed back to ask them to read what they just asked me for again, and perhaps this time they’ll engage both
brain cells and try to think about what they’re actually asking, rather than getting tied up in knots in their own bureaucratic process. Let’s see how that goes.