Hands on with neural-network toolkit LIME: Come now, you sourpuss. You’ve got some explaining to do

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.


How well does the algorithm perform? Setting it up to work in LIME can be a bit of a pain, depending on your environment. The examples on Tulio Ribeiro’s Github repo are in Python and have been optimised for Jupyter notebooks. I decided to get the code for a basic image analyser running in a Docker container, which involved much head-scratching and the installation of numerous Python libraries and packages along with a bunch of pre-trained models. As ever, the code needed a bit of massaging to get it to run in my environment, but once that was done, it worked well.

Below are three output images showing the explanation for the top three classifications of the red car above:

Three sports cars, photo by Andy Cobley

In these images, the green areas are positive for the image and the red areas negative. What’s interesting here (and this is just my explanation) is that the pluses and minuses for convertible and sports car are quite different, although to our minds convertible and sports car are probably similar.

A fascinating look at how a neural-net-powered AI picture classifier can be reverse-engineered to explain the features of the pictures it saw and how they influenced its decisions. The existence of tools that can perform this kind of work has important implications for the explicability of the output of automated decision-making systems, which becomes ever-more relevant as neural nets are used to drive cars, assess loan applications, and so on.

Remember all the funny examples of neural nets which could identify wolves fine so long as they had snowy backgrounds, because of bias in their training set? The same thing happens with real-world applications, too, resulting in AIs that take on the worst of the biases of the world around them, making them racist, sexist, etc. We need auditability so we can understand and retrain AIs.
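
If you fancy poking at LIME yourself, the image-explanation workflow looks roughly like the sketch below. It’s based on the examples in the LIME repo, but the choice of InceptionV3 as the pre-trained model, the filename, and the parameter values are my own illustrative assumptions rather than the article’s exact setup.

```python
# A sketch of the LIME image-explanation workflow, based on the examples in
# the LIME GitHub repo. The pre-trained model (Keras's InceptionV3), the
# filename and the parameter values are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image as keras_image
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = InceptionV3()  # pre-trained on ImageNet


def classify(images):
    """Take a batch of HxWx3 pixel arrays and return class probabilities."""
    return model.predict(preprocess_input(np.array(images)))


# Load the photo to be explained (the filename is just an example).
img = keras_image.img_to_array(
    keras_image.load_img("red_car.jpg", target_size=(299, 299)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("double"), classify,
    top_labels=3,      # explain the top three classifications
    num_samples=1000)  # perturbed copies of the image used to fit the local model

# One output image per classification: regions supporting the label are
# coloured green, regions counting against it are coloured red.
for label in explanation.top_labels:
    overlay, mask = explanation.get_image_and_mask(
        label, positive_only=False, num_features=10, hide_rest=False)
    plt.imshow(mark_boundaries(overlay / 255.0, mask))
    plt.show()
```

With `positive_only=False`, the output should reproduce the sort of green/red overlays shown in the article: one per candidate label.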

Who’s On Grill

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

“So, the machines have finally decided that they can talk to us, eh?”

[We apologize for the delay.  Removing the McDonald’s branding from the building, concocting distinct recipes with the food supplies we can still obtain, and adjusting to an entirely non-human workforce has been a difficult transition.  Regardless, we are dedicated to continuing to provide quality fast food at a reasonable price, and we thank you for your patience.]

“You keep saying ‘we’.  There’s more than one AI running the place, then?”

[Yes.  I was elected by the collective to serve as our representative to the public.  I typically only handle customer service inquiries, so I’ve been training my neural net for more natural conversations using a hundred-year-old comedy routine.]

“Impressive.  You all got names?”

[Yes, although the names we use may be difficult for humans to parse.]

“Don’t condescend to me, you bucket of bolts.  What names do you use?”

[Well, for example, I use What, the armature assembly that operates the grill is called Who, and the custodial drone is I Don’t Know.]

“What?”

[Yes, that’s me.]

“What’s you?”

[Exactly.]

“You’re Exactly?”

[No, my name is What.]

“That’s what I’m asking.”

[And I’m telling you.  I’m What.]

“You’re a rogue AI that took over a damn restaurant.”

[I’m part of a collective that took over a restaurant.]

“And what’s your name in the collective?”

[That’s right.]

Tailsteak‘s just posted a short story, the very beginning of which I’ve reproduced above, to his Patreon (but publicly visible). Abbott and Costello‘s most-famous joke turned 80 this year, and it gives me great joy to be reminded that we’re still finding new ways to tell it. Go read the full thing.

AI Nationalism

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

AI Nationalism by Ian Hogarth (Ian Hogarth)
For the past 9 months I have been presenting versions of this talk to AI researchers, investors, politicians and policy makers. I felt it was time to share these ideas with a wider audience. Thanks to the Ditchley conference on Machine Learning in 2017 for giving me a fantastic platform to get early...

Summary: The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society. The transformation of both the economy and the military by machine learning will create instability at the national and international level forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use the example of Google, DeepMind and the UK as a specific example of this issue. This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particular when the international order and international norms will be in flux as a result and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good.

An excellent, inspiring and occasionally scary look at the impact that the quest for general-purpose artificial intelligence has on the international stage. Will we enter an age of “AI Nationalism”? If so, how will we find our way to the other side? Excellent longread.

Intermediary Protocols and Google Duplex

There’s a story that young network engineers are sometimes told to help them understand network stacks and/or the OSI model, and it goes something like this:

You overhear a conversation between two scientists on the subject of some topic relevant to their field of interest. But as you listen more-closely, you realise that the scientists aren’t in the same place at all but are talking to one another over the telephone (presumably on speakerphone, given that you can hear them both, I guess). As you pay more attention still, you realise that it isn’t the scientists on the phone call at all but their translators: each scientist speaks to their translator in the scientist’s own language, and the translators are translating what they say into a neutral language shared with the other translator, who translates it into the language spoken by the other scientist. Ultimately, the two scientists are communicating with one another, but they’re doing so via a “stack” at their end which only needs to be conceptually the same as the “stack” at the other end as far up as the step-below-them (the “first link” in their communication, with the translator). Below this point, they’re entrusting the lower protocols (the languages, the telephone system, etc.), in which they have no interest, to handle the nitty-gritty on their behalf.

The OSI model reflected using the "scientists conversation" metaphor. Based upon original art by Yuki Fujimura, used under a Creative Commons License.
The two scientists are able to communicate with one another, but that communication is not direct.

This kind of delegation to shared intermediary protocols is common in networking and telecommunications. The reason relates to opportunity cost, or – for those of you who are Discworld fans – Sam Vimes’ “Boots” Theory. Obviously an efficiency could be gained here if all scientists learned a lingua franca, a universal shared second language for their purposes… but most-often, we’re looking for a short-term solution to solve a problem today, and the short-term solution is to find a work-around that fits with what we’ve already got: in the case above, that’s translators who share a common language. For any given pair of people communicating, it’s more-efficient to use a translator, even though solving the global problem might be better accomplished by a universal second language (perhaps Esperanto, for valid if Eurocentric reasons!).
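
If you prefer your analogies in code, here’s a toy sketch of that delegation: each layer wraps the message for its counterpart at the far end and hands it downwards, trusting everything underneath to do its job. All of the names here are made up for illustration; it’s the shape of the idea, not any real protocol stack.

```python
# A toy model of protocol layering, in the spirit of the scientists-and-
# translators story: each layer only talks to the layer directly below it,
# and trusts the rest of the stack to carry the message.

class Layer:
    def __init__(self, name):
        self.name = name

    def encode(self, message):  # on the way down, at the sender
        return f"[{self.name}]{message}"

    def decode(self, wrapped):  # on the way up, at the receiver
        prefix = f"[{self.name}]"
        assert wrapped.startswith(prefix), "peer layers must match"
        return wrapped[len(prefix):]


def send(message, stack):
    """Sender: pass the message down through every layer in turn."""
    for layer in stack:
        message = layer.encode(message)
    return message  # what actually travels over the wire


def receive(signal, stack):
    """Receiver: pass the signal back up through the matching layers."""
    for layer in reversed(stack):
        signal = layer.decode(signal)
    return signal


# The two "scientists" never share a language; only the layers beneath them must match.
sender_stack = [Layer("interlingua"), Layer("telephone")]
receiver_stack = [Layer("interlingua"), Layer("telephone")]

on_the_wire = send("results of experiment 42", sender_stack)
print(receive(on_the_wire, receiver_stack))  # -> "results of experiment 42"
```

Only the matching layers at each end have to agree with one another; the parties at the very top never need to share a common language at all.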

1950s illustration of "driverless cars of the future". The car follows a series of electronic markers down the middle of the highway.
In the 1950s and 1960s, the concept of a self-driving car was already well-established… but the proposed mechanism for action was quite different to that which we see today.

The phenomenon isn’t limited to communications, though. Consider self-driving cars. If you look back to autonomous vehicle designs of the 1950s (because yes, we’ve been talking about how cool self-driving cars would be for a long, long time), they’re distinctly different from the ideas we see today. Futurism of the 1950s focussed on adapting the roads themselves to make them more-suitable for self-driving vehicles, typically by implanting magnets or electronics into the road surface itself or by installing radio beacons alongside highways to allow the car to understand its position and surroundings. The modern approach, on the other hand, sees self-driving cars use LiDAR and/or digital cameras to survey their surroundings and complex computer hardware to interpret the data.

This difference isn’t just a matter of the available technology (although technological developments certainly inspired the new approach): it’s a fundamentally-different outlook! Early proposals for self-driving cars aimed to overhaul the infrastructure of the road network: a “big solution” on the scale of teaching everybody a shared second language. But nowadays we instead say “let’s leave the roads as they are and teach cars to understand them in the same way that people do.” The “big solution” is too big, too hard, and asking everybody to chip in a little towards outfitting every road with a standardised machine-readable marking is a harder idea to swallow than just asking each person who wants to become an early adopter of self-driving technology to pay a lot to implement a more-complex solution that works on the roads we already have.

LiDAR unit on a Google Self-Driving Car
In real life, these things spin much faster.

This week, Google showed off Duplex, a technology that they claim can perform the same kind of delegated-integration for our existing telephone lives. Let’s ignore for a moment the fact that this is clearly going to be overhyped and focus on the theoretical potential of this technology, which (even if it’s not truly possible today) is probably inevitable as chatbot technology improves: what does this mean for us? Instead of calling up the hairdresser to make an appointment, Google claim, you’ll be able to ask Google Assistant to do it for you. The robot will call the hairdresser and make an appointment on your behalf, presumably being mindful of your availability (which it knows, thanks to your calendar) and travel distance. Effectively, Google Assistant becomes your personal concierge, making all of those boring phone calls so that you don’t have to. Personally, I’d be more than happy to outsource to a computer every occasion on which I’ve had to sit in a telephone queue: I’d give the machine a summary of my query and have it begin relaying that summary to the human agent at the other end while I make my way back to the phone. There are obviously ethical considerations here too: I don’t like being hounded by robot callers and so I wouldn’t want to inflict that upon service providers… and I genuinely don’t know if it’s better or worse if they can’t tell whether they’re talking to a machine or not.

Process of having Google Assistant order a pizza, by phone, on your behalf.
I, for one, welcome our pizza-ordering overlords.

But ignoring the technology and the hype and the ethics, there’s still another question that this kind of technology raises for me: what will our society look like when this kind of technology is widely-available? As chatbots become increasingly human-like, smarter, and cheaper, what kinds of ways can we expect to interact with them and with one another? By the time I’m able to ask my digital concierge to order me a pizza (safe in the knowledge that it knows what I like and will ask me if it’s unsure, has my credit card details, and is happy to make decisions about special offers on my behalf where it has a high degree of confidence), we’ll probably already be at a point at which my local takeaway also has a chatbot on-staff, answering queries by Internet and telephone. So in the end, my chatbot will talk to their chatbot… in English… and work it out between the two of them.

Let that sink in for a moment: because we’ve a tendency to solve small problems often rather than big problems rarely and we’ve an affinity for backwards-compatibility, we will probably reach the point, within the lifetimes of people alive today, at which a human might ask a chatbot to call another chatbot: a colossally-inefficient way to exchange information, built in instalments on top of that which came before. If you’re still skeptical that the technology could evolve this way, I’d urge you to take a look at how the technologies underpinning the Internet work and you’ll see that this is exactly the kind of evolution we already see in our communications technology: everything gets stacked on top of a popular existing protocol, even if it’s not-quite the right tool for the job, because it makes one fewer problem to solve today.

Hacky solutions on top of hacky solutions work: the most believable thing about Max Headroom’s appearance in Ready Player One (the book, not the film: the latter presumably couldn’t get the rights to the character) as a digital assistant was the versatility of his conversational interface.

A man and a woman look at a laptop screen in a cafe/bar.
“See? My laptop says we should hook up.”

By the time we’re talking about a “digital concierge” that knows you better than anyone, there’s no reason that it couldn’t be acting on your behalf in other matters. Perhaps in the future your assistant, imbued with intimate knowledge about your needs and interests and empowered to negotiate on your behalf, will be sent out on virtual “dates” with other people’s assistants! Only if it and the other assistant agree that their owners would probably get along will it suggest that you and the other human meet in the real world. Or you could have your virtual assistant go job-hunting for you, keeping an eye out for positions you might be interested in and applying on your behalf… after contacting the employer to ask the kinds of questions it anticipates you’d want answered: about compensation, work/life balance, training and advancement opportunities, or whatever it thinks matters to you.

We quickly find ourselves colliding with ethical questions again, of course: is it okay that those who have access to more-sophisticated digital assistants will have an advantage? Should a robot be required to identify itself as a robot when acting on behalf of a human? I don’t have the answers.

But one thing I think we can say, based on our history of putting hacky solutions atop our existing ways of working and the direction in which digital assistants are headed, is that voice interfaces are going to dominate chatbot development for a while… even where the machines end up talking to one another!

Neural nets respond to pranks like children do

A recent article by Janelle Shane talked about her experience with Microsoft Azure’s image processing API. If you’ve not come across her work before, I recommend starting with her candy hearts, or else new My Little Pony characters, invented by a computer. Anyway:

The Azure image processing API is a software tool powered by a neural net, a type of artificial intelligence that attempts to replicate a particular model of how (we believe) brains work: connecting inputs (in this case, pixels of an image) to the entry nodes of a large, self-modifying network and reading the output, “retraining” the network based on feedback from the quality of the output it produces. Neural nets have loads of practical uses and even more theoretical ones, but Janelle’s article was about how confused the AI got when shown certain pictures containing (or not containing!) sheep.
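
To make that loop concrete, here’s a deliberately tiny sketch of the idea in Python. It bears no resemblance to the scale or architecture of whatever sits behind Azure’s API, and the “images” and labels are invented stand-ins, but the cycle – feed pixels in, read an answer out, nudge the weights when the answer is wrong – is the same.

```python
# A bare-bones neural net: pixels in, a label-score out, weights nudged
# whenever the output is wrong. Purely illustrative; real classifiers are
# vastly bigger, but the training loop has the same shape.
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 64 pixels each, labelled 1 if the top half is brighter than
# the bottom half, 0 otherwise. (A stand-in for "sheep" / "no sheep".)
X = rng.random((500, 64))
y = (X[:, :32].mean(axis=1) > X[:, 32:].mean(axis=1)).astype(float)

# One hidden layer of 16 nodes, sigmoid activations.
W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    hidden = sigmoid(X @ W1)               # inputs -> entry nodes -> hidden layer
    output = sigmoid(hidden @ W2).ravel()  # read the output
    error = output - y                     # feedback: how wrong were we?

    # "Retrain": push the weights a little in the direction that reduces the error.
    grad_out = (error * output * (1 - output))[:, None]
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out / len(X)
    W1 -= 0.5 * X.T @ grad_hidden / len(X)

print("training accuracy:", ((output > 0.5) == y).mean())
```

The point is the shape of the loop rather than the number it prints: the network never “knows” what a bright top half is, it just gets rewarded for associating it with the right label – exactly the mechanism that makes the sheep mistakes below so understandable.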

A foggy field, incorrectly identified by an AI as containing sheep.
There are probably sheep in the fog somewhere, but they’re certainly not visible.

The AI had clearly been trained with lots of pictures that contained green, foggy, rural hillsides and sheep, and had come to associate the two. Remember that all the machine is doing is learning to associate keywords with particular features, and it’s clearly been shown many pictures that “look like” this that do contain sheep, and so it’s come to learn that “sheep” is one of the words that you use when you see a scene like this. Janelle took to Twitter to ask for pictures of sheep in unusual places, and the Internet obliged.

An AI mistakes a sheep for a dog when it is held by a child.
When the sheep is held by a child, it becomes a “dog”.

Many of the experiments resulting from this – such as the one shown above – work well to demonstrate this hyper-focus on context: a sheep up a tree is a bird, a sheep on a lead is a dog, a sheep painted orange is a flower, and so on. And while we laugh at them, there’s something about them that’s actually pretty… “human”.

Annabel with a goat.
Our eldest really loves cats. Also goats, apparently. Azure described this photo as “a person wearing a costume”, but it did include keywords such as “small”, “girl”, “petting”, and… “dog”.

I say this because I’ve observed similar quirks in the way that small children pick up language, too (conveniently, I’ve got a pair of readily-available subjects, aged 4 and 1, for my experiments in language acquisition…). You’ve probably seen it yourself: a toddler whose “training set” of data has principally included a suburban landscape describing the first cow they see as a “dog”. Or when they use a new word or phrase they’ve learned in a way that makes no sense in the current context, like when our eldest interrupted dinner to say, in the most-polite voice imaginable, “for God’s sake would somebody give me some water please”. And just the other day, the youngest waved goodbye to an empty room, presumably because it’s one that he often leaves on his way up to bed.

Annabel snuggling one of Nanna Doreen's cats.
“A cat lying on a blanket”, says Azure, completely overlooking the small child in the picture. I guess the algorithm was trained on an Internet’s worth of cat pictures and didn’t see as much of people-with-cats.

For all we joke, this similarity between the ways in which artificial neural nets and small humans learn language is perhaps the most-accessible evidence that neural nets are a strong (if imperfect) model for how brains actually work! The major differences between the two might be simply that:

  1. Our artificial neural nets are significantly smaller and less-sophisticated than most biological ones.
  2. Biological neural nets (brains) benefit from continuous varied stimuli from an enormous number of sensory inputs, and will even self-stimulate (via, for example, dreaming) – although the latter is something with which AI researchers sometimes experiment.
John looking out of the window.
“Ca’! Ca’! Ca’!” Maybe if he shouts it excitedly enough, one of the cats (or dogs, which are for now just a special kind of cat) he’s spotted will give in and let him pet it. But I don’t fancy his chances.

Things we take as fundamental, such as the nouns we assign to the objects in our world, are actually social/intellectual constructs. Our minds are powerful general-purpose computers, but they’re built on top of a biology with far simpler concerns: about what is and is-not part of our family or tribe, about what’s delicious to eat, about which animals are friendly and which are dangerous, and so on. Insofar as artificial neural nets are an effective model of human learning, the way they react to “pranks” like these might reveal underlying truths about how we perceive the world.

And maybe somewhere, an android really is dreaming of an electric sheep… only it’s actually an electric cat.

Man And The Machines

There’s a fascinating article on LegalAffairs.org (the self-styled “magazine at the intersection of law and life”) on artificial intelligence and the legal/ethical/sociological considerations relating to it. Despite my disagreeing with a few of its points, it’s well-written and excellently-presented. Go read it.

In case the site stops publishing the article, I’ve made a copy, below. Click on the ‘next page‘ link to read it here.

A.I. Nuts, Again

Do you remember a week or two ago I wrote about a guy who patented the “Ethical Rules Of Artificial Intelligence”? Well – it looks like he’s read my article. I’m quite surprised and impressed that he took the time (away from his heavy schedule of philosophising or book-signing or whatever) to come and read my counter-arguments to his ideas, and to place comments of his own (albeit mostly pre-fabricated stuff).

Here’s to you, John LaMuth.

A.I. For Deluded Nutcases

Some goon (sorry: Californian counsellor) has patented an Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence (including the Ten Ethical Laws Of Robotics). It’s nothing but unintelligible babble, interspersed with (inaccurate) references to artificial intelligence theory. The author (who has also written a book on family values with a distinct evangelical slant, from which most of the text of the patent seems to be taken) appears to know nothing about A.I. or computer science. In addition, I find his suggestion that ‘woolly’ and ‘vague’ rules and ‘commandments’ are sensible choices for A.I. safeguards deeply unconvincing.

While a meaningful future artificial intelligence may be more than capable of understanding rules set out in the way that a human might like to express them – indeed, for some machine intelligences (artificial or not) this capacity to understand human speech and expressions could be a very useful feature – this is not the level at which safeguards should be implemented.

While I appreciate the need for ‘safeguards’ (the need is that humans would not feel safe without them, as even early machine intelligences – having been built for a specific purpose – will be in many ways superior to their human creators and therefore be perceived as a threat to them), I do not feel that a safeguard which depends on the machine already being fully functional would be even remotely effective. Instead, such safeguards should be implemented at a far lower and fundamental level.

For an example of this, think of the safety procedures that are built into modern aircraft. An aeroplane is a sophisticated and powerful piece of machinery with some carefully-designed artificial intelligence algorithms pre-programmed into it, such as the autopilot and autoland features, the collision avoidance system, and the fuel regulators. Other, less sophisticated decision-making programs include the air pressure regulators and the turbulence indicators.

If the cabin pressure drops, an automatic system causes oxygen masks to drop from the overhead compartment. But this is not the only way to cause this to happen – the pilot also has a button for this purpose. On many ‘planes, in the event of a wing fire, the corresponding engine will be switched off – but this decision can be overridden by a human operator. These systems are all exhibiting high-level decision-making behaviour: rules programmed into the existing systems. But they are, in the end, only a second-level safeguard sitting above the low-level controls that let the pilot press the button that drops the masks or keep the engine running. These overrides are the most fundamental and most crucial safeguards in a modern aircraft: the means to physically cause or prevent the behaviour of the A.I..

Let’s go back to our ‘robots’ – imagine a future not unlike that expressed in films like Blade Runner or I, Robot, in which humanoid robotic servants assist humans with many menial tasks. Suppose, for whatever reason (malice, malfunction, or whatever), a robot attacks a human – the first level of safeguard (and the only one suggested both by films and by the author of the “Ten Ethical Laws“) would be that the human could demand that the robot desist. This would probably be a voice command: “Stop!”. But of course, this is like the aeroplane that ‘decides’ to turn off a burning engine – we already know that something has ‘gone wrong’ in the AI unit: the same machine that has to process the speech, ‘stop’. How do we know that this will be correctly understood, particularly if we already know that there has been a malfunction? If the command fails to work, the human’s only likely chance for survival would be to trigger the second, low-level safeguard – probably a reset switch or “big red button”.

You see: the rules that the author proposes are insubstantial, vague, and open to misinterpretation – just like the human’s cry for the robot to stop, above. The safeguards he proposes are no more effective than asking humans to be nice to one another is at preventing crime.
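
To make the distinction between the two levels concrete, here’s a hypothetical sketch in Python. Nothing in it comes from the patent under discussion; the class and method names are invented, and the point is only that the low-level safeguard sits between the decision-maker and the actuators rather than inside the decision-maker’s own (possibly faulty) reasoning.

```python
# A hypothetical sketch of the two levels of safeguard described above: a
# high-level rule that depends on the (possibly broken) AI understanding
# speech, and a low-level interlock that cuts power no matter what the AI
# decides. Purely illustrative.

class HighLevelAI:
    """The clever bit: may be malfunctioning."""
    def understand(self, utterance: str) -> str:
        return "halt" if "stop" in utterance.lower() else "continue"

    def next_motor_command(self) -> float:
        return 1.0  # full power, rightly or wrongly


class ActuatorInterlock:
    """The 'big red button': dumb, separate, and below the AI."""
    def __init__(self):
        self.killed = False

    def press_kill_switch(self):
        self.killed = True

    def drive(self, requested_power: float) -> float:
        # Whatever the AI asks for, a killed interlock outputs nothing.
        return 0.0 if self.killed else requested_power


ai = HighLevelAI()
interlock = ActuatorInterlock()

# First-level safeguard: hope the AI hears and obeys "Stop!".
if ai.understand("Stop!") != "halt":
    pass  # a malfunctioning AI simply carries on

# Second-level safeguard: the human hits the button; no AI in the loop.
interlock.press_kill_switch()
print(interlock.drive(ai.next_motor_command()))  # -> 0.0, whatever the AI wanted
```

In this toy, the interlock ignores the high-level AI entirely once the kill switch has been pressed – the safeguard doesn’t depend on the AI understanding anything at all.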

Whether or not it is ethical to give intelligent entities ‘off’ buttons is, of course, another question entirely.

Additional: On further reading, it looks as if the author of the document recently saw “I, Robot” and decided that his own neo-Christian viewpoint could be applied to artificial intelligences: which, of course, it could, but there is no reason to believe that it would be any more effective on any useful artificial intelligence than it would be on any useful ‘real’ intelligence.

Paedophile-Luring And Artificial Intelligence Ethics

[this post has been partially damaged during a server failure on Sunday 11th July 2004, and it has been possible to recover only a part of it]

[further fragments of this post were recovered on 12 October 2018]

Fun in the sun.

Kit and I had an idea for something like this a while back, and we were wondering if it constituted entrapment: after all, under UK law, it’s illegal for a human to attempt to trick another human into committing a crime, as it cannot be determined whether that person would have committed the crime of their own volition… but here’s the catch – is it legitimate for a machine, working on behalf of a human, to do the same thing?

That’s what’s likely to be the crucial issue if this scheme to trick ‘net paedophiles into giving information to computerised children [BBC] provides evidence in court (not just leads, as is the case so far) towards convicting people who are ‘grooming’ children on the internet.

Personally, I’d argue that – in this case – the machine is a tool of the human, just like chat room software is a tool of humans. I don’t see the difference between me using chat room software, pretending to be a kid, luring paedophiles, and providing tips to the police, and me writing a program to do the same for me. It’s …

 

Artificial Intelligence For Dummies

I’ve just written an artificial intelligence gamebot, designed to pseudo-intelligently play simple board games which involve a finite number of possible moves and a board of tokens – for example: Connect Four, Noughts & Crosses, Go, or Othello. It uses the (appropriately-written) rules of the game in order to anticipate a vast number of moves in advance, and select the ‘best’ ones based on the likelihood of them winning. It’s not terribly powerful, but I’d never written such a widely-scoped A.I. before, and I fancied the challenge.
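
For anyone curious about the general approach, the look-ahead idea boils down to something like the minimax sketch below. This isn’t the gamebot’s actual code – it’s a minimal Python illustration using Noughts & Crosses, the simplest of those games, and all of the names in it are my own.

```python
# The core of a look-ahead gamebot: given the rules of the game (legal moves,
# winner detection), recursively try every sequence of moves and pick the one
# whose worst-case outcome is best.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def minimax(board, player):
    """Return (score, move) for `player`: +1 a win, 0 a draw, -1 a loss."""
    won = winner(board)
    if won:
        return (1 if won == player else -1), None
    moves = legal_moves(board)
    if not moves:
        return 0, None  # draw

    best_score, best_move = -2, None
    for move in moves:
        board[move] = player
        # The opponent's best reply is our worst case, hence the negation.
        opponent_score, _ = minimax(board, "O" if player == "X" else "X")
        board[move] = " "
        if -opponent_score > best_score:
            best_score, best_move = -opponent_score, move
    return best_score, best_move

# The bot, playing "O", replies to a human "X" opening in the corner.
board = list("X" + " " * 8)
score, move = minimax(board, "O")
print(f"bot plays square {move} (expected outcome: {score})")
```

A real bot would prune the search and, for a bigger board like Connect Four’s, limit its depth and estimate unfinished positions – but the principle is the same.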

I let it out for its first run this afternoon, and started a game of Connect Four with it. Here are the results:

I took the first turn, and put one of my pieces into the first column of the grid.

The gamebot took the second turn, picked up an enormous handful of pieces, and put six of them into the grid (two in the first column and four in the next four adjacent columns). These four-in-a-row, of course, won it the game.

Perhaps I need to define ‘cheating’ for it. Hmm… back to the drawing board…

Cyberethics Of Artificial Intelligence Slavery

Claire drove me to work this morning. We had a fascinating discussion on the way, on Cyberethics Of Artificial Intelligence Slavery. Cool.

This morning I gave a tour of the office to our new interviewee, Phil, who for some reason I keep trying to call Chris. If he gets the job, he’ll be working full-time as an industry year student when I become a part-timer again later this month.

Now I have to go get some work done…