This month I reminisced about that time Paul made and ate a Birmingham Egg (with help from Jon) and used the occasion to comment on Web siloisation and how it may be reducing the
diversity of “weird” content online. I also contemplated what recent observations about neural nets might mean for our understanding of
child psychology and language development (in an only slightly tongue-in-cheek way).
Ruth, JTA, the kids and I took a snowsports holiday to the French Alps, where I also found a
handful of geocaches: GLV85XH3, GLV85X2Z, and GLV85W40.
I normally reserve my “on this day” posts to look back at my own archived content, but once in a while I get a moment of nostalgia for something of
somebody else’s that “fell off the web”. And so I bring you something you probably haven’t seen in over a decade: Paul and Jon’s Birmingham Egg.
It was a simpler time: a time when YouTube was a new “fringe” site (which is probably why I don’t have a surviving copy of the original video) and not yet owned by Google, before
Facebook was universally-available, and when original Web content remained decentralised (maybe we’re moving back in that
direction, but I wouldn’t count on it…). And only a few days after issue 175 of the b3ta newsletter wrote:
* BIRMINGHAM EGG - Take 5 scotch eggs, cut in
half and cover in masala sauce. Place in
Balti dish and serve with naan and/or chips.
We'll send a b3ta t-shirt to anyone who cooks
this up, eats it and makes a lovely little
photo log / write up of their adventure.
It was a simpler time, when, having fewer responsibilities, we were able to do things like this “for the
lulz”. But more than that, it was still at the tail-end of the era in which individuals putting absurd shit online was a legitimate art form on the Web. Somewhere along the
way, the Web got serious and siloed. It’s not all a bad thing, but it does mean that we’re publishing less weirdness than we were back
then.
On a skiing holiday I took a day to go geocaching. This was my third find. Wonderful location, although I came (on foot) from the top and down rather than from the bottom and up. FP awarded.
On a skiing holiday I took a day to go geocaching. This was my second find. Shopkeeper was looking strangely at me through the window so I pretended to be interested in the sculpture
and took photos until he stopped watching. Cache was in the third place I looked.
On a skiing holiday I took a day to go geocaching. This was my first (and easiest) find. Log very wet, unable to sign, but photo attached of me and cache (taken some way away from GZ)
as proof of find.
On a skiing holiday I took a day to go geocaching. Even with the hint, unable to find this cache: suspect it must be buried under ice and snow? If that’s possible, perhaps worth adding
to the cache description or else removing the “winter” attribute? (Or maybe I just gave up too easily!)
Tower of the Five Orders, Oxford OX1 3BW, United Kingdom.
Rating: ⭐⭐⭐⭐⭐
This iconic Oxford landmark is named for the architectural characteristics of each of its five floors. Each exhibits a different order – or “style” – of classical architecture: from
bottom to top – Tuscan, Doric, Ionic, Corinthian and Composite. Part of the joy of “discovering” the tower, visiting as a tourist, comes from the fact that despite its size it’s
unlikely to be the first thing you see as you enter the quad: coming in from the Great Gate, for example, it won’t be until you turn around and look up that you see it… and even at a
glance you won’t necessarily observe its unusual architecture unless you’ve been told to look specifically at the columns.
Nice cakes and large mugs of hot drinks in the fabulous Blackwell Hall. Beware the “no laptops” policy in the afternoon, though: they’re militant about policing it!
The Azure image processing API is a software tool powered by a neural net, a type of artificial intelligence that attempts to replicate a particular model of how (we believe)
brains work: connecting inputs (in this case, pixels of an image) to the entry nodes of a large, self-modifying network and reading the output, “retraining” the network based on
feedback from the quality of the output it produces. Neural nets have loads of practical uses and even more theoretical ones, but Janelle’s article was about how confused the
AI got when shown certain pictures containing (or not containing!) sheep.
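(For the curious, that whole “guess, get feedback, adjust” loop can be sketched in a few lines of code. The toy below is purely my own illustration – it has nothing to do with how the Azure API is actually built – and the single “green foggy hillside” input feature, the scores and the training numbers are all made up. It just shows how a neuron trained only on sheep-on-hillsides pictures ends up learning the hillside, not the sheep.)

```javascript
// Toy sketch only (not Azure's real internals): a single artificial "neuron"
// whose one input is a made-up "green, foggy hillside" score for a picture,
// learning by feedback to predict whether the picture contains sheep.
let weight = Math.random() - 0.5; // strength of the input→output connection
let bias = Math.random() - 0.5;   // baseline inclination to say "sheep"

const sigmoid = z => 1 / (1 + Math.exp(-z)); // squashes any number into 0..1

// A deliberately biased "training set": [hillsideScore, containsSheep].
// Every sheep this neuron has ever "seen" was on a green, foggy hillside.
const trainingSet = [
  [0.90, 1], [0.85, 1], [0.80, 1], // rural hillsides, sheep present
  [0.15, 0], [0.10, 0], [0.20, 0], // other scenes, no sheep
];

// "Retraining based on feedback": nudge the connection towards whatever
// would have reduced the error on each example.
for (let round = 0; round < 2000; round++) {
  for (const [input, label] of trainingSet) {
    const guess = sigmoid(weight * input + bias);
    const error = guess - label;
    weight -= 0.5 * error * input;
    bias   -= 0.5 * error;
  }
}

// The punchline: it hasn't learned "sheep" at all, only "green and foggy".
const sheepiness = score => sigmoid(weight * score + bias);
console.log(sheepiness(0.9)); // empty hillside           → close to 1: "sheep!"
console.log(sheepiness(0.1)); // a sheep in a living room → close to 0: "no sheep"
```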
The AI had clearly been trained with lots of pictures that contained green, foggy, rural hillsides and sheep, and had come to
associate the two. Remember that all the machine is doing is learning to associate keywords with particular features: it’s clearly been shown many pictures that “look like” this and do contain sheep, and so it’s come to learn that “sheep” is one of the words you use when you see a scene like this.
Janelle took to Twitter to ask for pictures of sheep in unusual places, and the Internet obliged.
Many of the experiments resulting from this – such as the one shown above – work well to demonstrate this hyper-focus on context: a sheep up a tree is a bird, a sheep on a lead
is a dog, a sheep painted orange is a flower, and so on. And while we laugh at them, there’s something about them that’s actually pretty… “human”.
I say this because I’ve observed similar quirks in the way that small children pick up language, too (conveniently, I’ve got a pair of readily-available subjects, aged 4 and 1, for my
experiments in language acquisition…). You’ve probably seen it yourself: a toddler whose “training set” of data has principally included a suburban landscape describing the first cow
they see as a “dog”. Or when they use a new word or phrase they’ve learned in a way that makes no sense in the current context, like when our eldest interrupted dinner to say, in the
most-polite voice imaginable, “for God’s sake would somebody give me some water please”. And just the other day, the youngest waved goodbye to an empty room, presumably because
it’s one that he often leaves on his way up to bed.
For all we joke, this similarity between the ways in which artificial neural nets and small humans learn language is perhaps the most-accessible evidence that neural nets are a
strong (if imperfect) model for how brains actually work! The major differences between the two might be simply that:
* Our artificial neural nets are significantly smaller and less-sophisticated than most biological ones.
* Biological neural nets (brains) benefit from continuous varied stimuli from an enormous number of sensory inputs, and will even self-stimulate (via, for example, dreaming) – although the latter is something with which AI researchers sometimes experiment.
* Things we take as fundamental, such as the nouns we assign to the objects in our world, are actually social/intellectual constructs. Our minds are powerful general-purpose computers, but they’re built on top of a biology with far simpler concerns: about what is and is-not part of our family or tribe, about what’s delicious to eat, about which animals are friendly and which are dangerous, and so on.
Insofar as artificial neural nets are an effective model of human learning, the way they react to “pranks” like these might reveal underlying truths about how we perceive the world.
Dwitter is a community of <canvas> artists who write JavaScript programs of no
more than 140 characters to produce beautiful graphical output. Some favourites: Bluer waves, d/3697, d/4388. If the demoscene were invented
today, this is what it might look like.
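For a taste of the format: as I understand Dwitter’s conventions, the site gives you a canvas c, its 2D context x, and shorthands like S and C for Math.sin and Math.cos, then calls your u(t) function around sixty times a second with t in seconds, and only the body of u(t) counts towards the 140 characters. The sketch below is my own throwaway example rather than one of the linked dweets, padded out with enough scaffolding to run on any page containing a <canvas id="c" width="960" height="540"> element.

```javascript
// Rough standalone imitation of Dwitter's environment (assumed, from memory):
// grab the canvas and set up the shorthands the real site provides for you.
const c = document.getElementById('c');
const x = c.getContext('2d');
const S = Math.sin, C = Math.cos;

// The "dweet": only this function body would need to fit in 140 characters
// (minified, this one comes in comfortably under).
function u(t) {
  c.width = 960; // re-setting the width clears the canvas each frame
  for (let i = 0; i < 96; i++) {
    // a row of dots bobbing on overlapping sine waves
    x.fillRect(i * 10, 270 + 99 * S(i / 9 + t) * C(t / 2), 6, 6);
  }
}

// Dwitter's player handles this loop for you; here we do it by hand.
requestAnimationFrame(function loop(ms) {
  u(ms / 1000);
  requestAnimationFrame(loop);
});
```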