A.I. Nuts, Again

Do you remember, a week or two ago, I wrote about a guy who patented the “Ethical Rules Of Artificial Intelligence”? Well – it looks like he’s read my article. I’m quite surprised and impressed that he took the time (away from his heavy schedule of philosophising or book-signing or whatever) to come and read my counter-arguments to his ideas and place comments of his own (albeit mostly pre-fabricated stuff).

Here’s to you, John LaMuth.

A.I. For Deluded Nutcases

Some goon (sorry: Californian counsellor) has patented the Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence (including the Ten Ethical Laws Of Robotics). It’s nothing but unintelligible babble, interspersed with (inaccurate) references to artificial intelligence theory. The author (who has also written a book on family values with a distinct evangelical slant, from which most of the text of the patent seems to be taken) appears to know nothing about A.I. or computer science. In addition, I find his suggestion that ‘woolly’ and ‘vague’ rules and ‘commandments’ are sensible choices for A.I. safeguards entirely unconvincing.

While a meaningful future artificial intelligence may be more than capable of understanding rules set out in the way a human might like to express them – indeed, for some machine intelligences (artificial or not) this capacity to understand human speech and expression could be a very useful feature – this is not the level at which safeguards should be implemented.

While I appreciate the need for ‘safeguards’ (the need being that humans would not feel safe without them, as even early machine intelligences – having been built for a specific purpose – will be in many ways superior to their human creators and therefore be perceived as a threat), I do not feel that a safeguard which depends on the machine already being fully functional would be even remotely effective. Instead, such safeguards should be implemented at a far lower, more fundamental level.

For an example of this, think of the safety procedures that are built into modern aircraft. An aeroplane is a sophisticated and powerful piece of machinery with some carefully-designed artificial intelligence algorithms pre-programmed into it, such as the autopilot and autoland features, the collision avoidance system, and the fuel regulators. Other, less sophisticated decision-making programs include the air pressure regulators and the turbulence indicators.

If the cabin pressure drops, an automatic system causes oxygen masks to drop from the panels overhead. But this is not the only way to make this happen – the pilot also has a button for this purpose. On many ‘planes, in the event of a wing fire, the corresponding engine will be switched off – but this decision can be overridden by a human operator. These systems are all exhibiting high-level decision-making behaviour: rules programmed into the existing systems. But they are, in the end, a second-level safeguard behind the low-level controls that let the pilot press the button that drops the masks, or keep the engine running. These overrides are the most fundamental and most crucial safeguards in a modern aircraft: the means to physically cause or prevent the behaviour of the A.I.

Let’s go back to our ‘robots’ – imagine a future not unlike that depicted in films like Blade Runner or I, Robot, in which humanoid robotic servants assist humans with many menial tasks. Suppose, for whatever reason (malice, malfunction, or whatever), a robot attacks a human. The first level of safeguard (and the only one suggested both by those films and by the author of the “Ten Ethical Laws”) would be that the human could demand that the robot desist – probably a voice command: “Stop!”. But of course, this is like the aeroplane that ‘decides’ to turn off a burning engine – we already know that something has ‘gone wrong’ in the A.I. unit: the very machine that now has to process the speech, ‘stop’. How do we know that this will be correctly understood, particularly when we already know that there has been a malfunction? If the command fails to work, the human’s only likely chance for survival would be to trigger the second, low-level safeguard – probably a reset switch or “big red button”.
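To put the same point in software terms, here’s a minimal sketch (in Python, with entirely hypothetical names – this isn’t anybody’s real robot controller) of why an in-band safeguard, evaluated by the very software that may have malfunctioned, is so much weaker than an out-of-band one that simply removes power:

```python
# Hypothetical sketch: two levels of safeguard for a robot.
# All names here are illustrative assumptions, not a real design.

class RobotController:
    """High-level A.I.: interprets commands and applies rules like 'stop when told'."""

    def __init__(self):
        self.malfunctioning = False

    def hear(self, command: str) -> str:
        # In-band safeguard: the rule is evaluated by the very same
        # (possibly faulty) software it is supposed to restrain.
        if self.malfunctioning:
            return "command garbled; carrying on"
        if command.strip().rstrip("!").lower() == "stop":
            return "stopping"
        return "carrying on"


class EmergencyStop:
    """Low-level safeguard: a physical cut-off outside the controller."""

    def __init__(self, controller: RobotController):
        self.controller = controller
        self.powered = True

    def press(self) -> str:
        # Out-of-band: removes power regardless of what the A.I. 'thinks',
        # so a malfunction in the controller cannot override it.
        self.powered = False
        return "power cut"


robot = RobotController()
robot.malfunctioning = True
print(robot.hear("Stop!"))              # the voice command fails
print(EmergencyStop(robot).press())     # the big red button does not
```

The only point of the sketch is that the “big red button” works without asking the controller’s opinion – which is exactly what the voice command cannot do.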

You see: the rules that the author proposes are insubstantial, vague, and open to misinterpretation – just like the human’s cry for the robot to stop, above. The safeguards he proposes are no more effective than asking humans to be nice to one another is at preventing crime.

Whether or not it is ethical to give intelligent entities ‘off’ buttons is, of course, another question entirely.

Additional: On further reading, it looks as if the author of the document recently saw “I, Robot” and decided that his own neo-Christian viewpoint could be applied to artificial intelligences: which, of course, it could, but there is no reason to believe that it would be any more effective on any useful artificial intelligence than it would be on any useful ‘real’ intelligence.

Paedophile-Luring And Artificial Intelligence Ethics

[this post has been partially damaged during a server failure on Sunday 11th July 2004, and it has been possible to recover only a part of it]

[further fragments of this post were recovered on 12 October 2018]

Fun in the sun.

Kit and I had an idea for something like this a while back, and we were wondering if it constituted entrapment: after all, under UK law, it’s illegal for a human to attempt to trick another human into committing a crime, as it cannot be determined whether that person would have committed the crime of their own volition… but here’s the catch – is it legitimate for a machine, working on behalf of a human, to do the same thing?

That’s what’s likely to be the crucial issue if this scheme to trick ‘net paedophiles into giving information to computerised children [BBC] provides evidence in court (not just leads, as is the case so far) towards convicting people who are ‘grooming’ children on the internet.

Personally, I’d argue that – in this case – the machine is a tool of the human, just like chat room software is a tool of humans. I don’t see the difference between me using chat room software, pretending to be a kid, luring paedophiles, and providing tips to the police, and me writing a program to do the same for me. It’s …


Artificial Intelligence For Dummies

I’ve just written an artificial intelligence gamebot, designed to pseudo-intelligently play simple board games which have a finite upper bound on the number of moves and a board of tokens – for example: Connect Four, Noughts & Crosses, Go, or Othello. It uses the (appropriately-written) rules of the game to look ahead through a vast number of possible moves and select the ‘best’ one based on its likelihood of winning. It’s not terribly powerful, but I’d never written such a widely-scoped A.I. before, and I fancied the challenge.
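The gamebot’s own code isn’t reproduced here, but the approach described – looking ahead through possible moves and scoring each by how likely it is to win – is essentially a minimax game-tree search. A minimal sketch for Noughts & Crosses (function names and structure are mine, not the gamebot’s):

```python
# Minimal minimax sketch for Noughts & Crosses (tic-tac-toe).
# Illustrative only: not the gamebot's actual code.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for `player` to move: +1 win, -1 loss, 0 draw."""
    won = winner(board)
    if won:
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                        # board full: a draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for move in moves:
        board[move] = player
        score, _ = minimax(board, opponent)   # the opponent's best reply...
        board[move] = None
        score = -score                        # ...is our worst outcome
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

board = [None] * 9
board[0] = 'X'                       # the human takes a corner
print(minimax(board, 'O'))           # (0, 4): take the centre, expect a draw
```

For larger games like Connect Four or Othello a full search like this is impractical, so the look-ahead has to be capped at some depth and the resulting positions scored with a heuristic instead.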

I let it out for its first run this afternoon, and started a game of Connect Four with it. Here are the results:

I took the first turn, and put one of my pieces into the first column of the grid.

The gamebot took the second turn, picked up an enormous handful of pieces, and put six of them into the grid (two in the first column and one in each of the next four adjacent columns). Those four in a row, of course, won it the game.

Perhaps I need to define ‘cheating’ for it. Hmm… back to the drawing board…
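For what it’s worth, the usual fix is not to teach the bot ethics but to have the game engine enforce legality: one piece, into one legal column, per turn. A rough sketch of what that check might look like (hypothetical names again, not the gamebot’s real code):

```python
# Sketch of a legality check the game engine could impose on any player,
# human or gamebot alike. Names and structure are illustrative assumptions.

ROWS, COLUMNS = 6, 7

def apply_move(grid, column, player):
    """Drop exactly one piece for `player` into `column`, or refuse."""
    if not 0 <= column < COLUMNS:
        raise ValueError(f"no such column: {column}")
    height = sum(1 for cell in grid[column] if cell is not None)
    if height >= ROWS:
        raise ValueError(f"column {column} is already full")
    grid[column][height] = player   # one piece per call - no handfuls
    return grid

grid = [[None] * ROWS for _ in range(COLUMNS)]
apply_move(grid, 0, "human")     # my opening move
apply_move(grid, 0, "gamebot")   # the bot gets exactly the same treatment
```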

Cyberethics Of Artificial Intelligence Slavery

Claire drove me to work this morning. We had a fascinating discussion on the way, on Cyberethics Of Artificial Intelligence Slavery. Cool.

This morning I gave a tour of the office to our new interviewee, Phil, who for some reason I keep trying to call Chris. If he gets the job, he’ll be working full-time as an industry year student when I become a part-timer again later this month.

Now I have to go get some work done…