A.I. For Deluded Nutcases

Some goon (sorry: Californian counsellor) has patented the Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence (including the Ten Ethical Laws Of Robotics). It’s nothing but unintelligible babble, interspersed with (inaccurate) references to artificial intelligence theory. The author (who also wrote a book on family values with a distinct evangelical slant, from which most of the text of the patent seems to be taken) appears to know nothing about A.I. or computer science. In addition, I find his suggestion that ‘woolly’ and ‘vague’ rules and ‘commandments’ are sensible choices for A.I. safeguards deeply misguided.

While a meaningful future artificial intelligence may well be capable of understanding rules expressed the way a human would naturally phrase them – indeed, for some machine intelligences (artificial or not), the capacity to understand human speech and expression could be a very useful feature – this is not the level at which safeguards should be implemented.

While I appreciate the need for ‘safeguards’ (the need being that humans would not feel safe without them: even early machine intelligences, having been built for a specific purpose, will be in many ways superior to their human creators, and will therefore be perceived as a threat), I do not believe that a safeguard which depends on the machine already being fully functional could be even remotely effective. Instead, such safeguards should be implemented at a far lower, more fundamental level.

For an example of this, think of the safety procedures that are built into modern aircraft. An aeroplane is a sophisticated and powerful piece of machinery with some carefully-designed artificial intelligence algorithms pre-programmed into it, such as the autopilot and autoland features, the collision avoidance system, and the fuel regulators. Other, less sophisticated decision-making programs include the air pressure regulators and the turbulence indicators.

If the cabin pressure drops, an automatic system causes oxygen masks to drop from overhead. But this is not the only way to make this happen – the pilot also has a button for the purpose. On many ‘planes, in the event of a wing fire, the corresponding engine will be switched off – but this decision can be overridden by a human operator. These systems all exhibit high-level decision-making behaviour: rules programmed into the existing systems. But they are, in the end, only a second-level safeguard behind the low-level controls that let the pilot drop the masks or keep the engine running. These overrides are the most fundamental and most crucial safeguards in a modern aircraft: the means to physically cause or prevent the behaviour of the A.I.
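To make the distinction concrete, here’s a toy sketch in Python. Every name in it (EngineController, hardware_override and so on) is invented for illustration, not taken from any real avionics system; the point is simply that the ‘A.I.’ rule lives inside the decision-making software, while the override is applied after it – and regardless of it – like a switch wired directly to the engine.

    class EngineController:
        def __init__(self):
            self.engine_on = True
            self.hardware_override = None  # None = no override; True/False = pilot's choice

        def ai_decision(self, wing_fire_detected):
            """High-level safeguard: a rule running inside the decision-making
            software itself. If that software is faulty, so is this rule."""
            return not wing_fire_detected

        def update(self, wing_fire_detected):
            desired = self.ai_decision(wing_fire_detected)
            # Low-level safeguard: applied after (and regardless of) whatever
            # the decision logic concluded, like a physical switch wired
            # directly to the engine.
            if self.hardware_override is not None:
                desired = self.hardware_override
            self.engine_on = desired

    controller = EngineController()
    controller.update(wing_fire_detected=True)
    print(controller.engine_on)          # False: the 'A.I.' shut the engine down

    controller.hardware_override = True  # the pilot forces the engine to stay on
    controller.update(wing_fire_detected=True)
    print(controller.engine_on)          # True: the override wins, whatever the A.I. thinks

No matter how badly ai_decision() misbehaves, the override is honoured, because it is checked downstream of the decision logic rather than within it.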

Let’s go back to our ‘robots’ – imagine a future not unlike that depicted in films like Blade Runner or I, Robot, in which humanoid robotic servants assist humans with many menial tasks. Suppose that, for whatever reason (malice, malfunction, or whatever), a robot attacks a human. The first level of safeguard (and the only one suggested both by the films and by the author of the “Ten Ethical Laws”) would be that the human could demand that the robot desist. This would probably be a voice command: “Stop!”. But of course, this is like the aeroplane that ‘decides’ to turn off a burning engine – we already know that something has ‘gone wrong’ in the A.I. unit, and it is that same faulty machine that must process the speech, ‘stop’. How do we know that the command will be correctly understood, when we already know that there has been a malfunction? If it fails to work, the human’s only likely chance of survival would be to activate the second, low-level safeguard – probably a reset switch or “big red button”.

You see: the rules that the author proposes are insubstantial, vague, and open to misinterpretation – just like the human’s cry for the robot to stop, above. The safeguards he proposes are no more effective than asking humans to be nice to one another is at preventing crime.

Whether or not it is ethical to give intelligent entities ‘off’ buttons is, of course, another question entirely.

Additional: On further reading, it looks as if the author of the document recently saw “I, Robot” and decided that his own neo-Christian viewpoint could be applied to artificial intelligences – which, of course, it could, but there is no reason to believe that it would be any more effective on any useful artificial intelligence than it would be on any useful ‘real’ intelligence.

Executable Steganography… With A Difference

Somebody’s come up with a program that hides secret messages in executable programs. Well… that in itself is not so impressive – we’ve all hidden secret messages in JPEG files before by using programs to ‘flip’ certain pixels (example). This works by changing the image in ways so subtle that the human eye won’t detect them, but the descrambling application will – a minimal sketch of the idea appears below.
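For the curious, the pixel-‘flipping’ trick goes roughly like this: hide the message in the least significant bit of each pixel’s red channel. This sketch uses the Pillow library, and the function names are my own. Note that real JPEG tools fiddle with DCT coefficients rather than raw pixels (JPEG is lossy), so to keep things simple this version saves a lossless PNG, where the flipped bits survive.

    from PIL import Image

    def hide_message(in_path, out_path, message):
        img = Image.open(in_path).convert("RGB")
        data = message.encode() + b"\x00"          # NUL marks the end of the message
        bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
        pixels = img.load()
        width, height = img.size
        assert len(bits) <= width * height, "image too small for this message"
        for n, bit in enumerate(bits):
            x, y = n % width, n // width
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | bit, g, b)  # overwrite the red channel's LSB
        img.save(out_path, "PNG")                  # lossless, so the bits survive

    def reveal_message(path):
        img = Image.open(path).convert("RGB")
        pixels = img.load()
        width, height = img.size
        data = bytearray()
        byte, count = 0, 0
        for n in range(width * height):
            byte = (byte << 1) | (pixels[n % width, n // width][0] & 1)
            count += 1
            if count == 8:
                if byte == 0:                      # hit the NUL terminator
                    break
                data.append(byte)
                byte, count = 0, 0
        return data.decode()

Changing a pixel’s red value by at most one part in 256 is invisible to the eye, but perfectly legible to the decoder.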

But here’s the clever bit… Typically, when encoding a ‘hidden message’ in an executable, one ‘pads’ the file, making it bigger. The technique used for graphics files can’t be applied to executables, because ‘flipping’ bits of the file would stop the program from working (or at least, working as it should), which might arouse suspicion. But this new tool works by exploiting redundancy in the i386 instruction set, swapping instructions or blocks of instructions for others which are functionally identical. As a result, the file size remains the same and the program retains full functionality. An eavesdropper would have to compare the executable byte-for-byte against a known original even to determine that a message was hidden within it, and (thanks to Blowfish cryptography) would need yet more effort to decode that message.
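A toy model of that instruction-swapping idea, for flavour: the two instruction pairs below are genuine same-length i386 equivalences, but the framework around them is invented for illustration (the real tool operates on actual machine code, and encrypts the message before embedding it).

    # Each pair holds two same-length, functionally identical instructions:
    # which variant appears at a swappable site encodes one bit.
    EQUIVALENT = [
        ("xor eax, eax", "sub eax, eax"),   # both zero eax in two bytes
        ("test eax, eax", "or eax, eax"),   # both set flags from eax in two bytes
    ]
    CHOICES = {v: (pair, bit) for pair in EQUIVALENT for bit, v in enumerate(pair)}

    def embed(program, bits):
        """Rewrite the program, picking variant 0 or 1 at each swappable site."""
        out, todo = [], list(bits)
        for insn in program:
            if insn in CHOICES and todo:
                pair, _ = CHOICES[insn]
                out.append(pair[todo.pop(0)])
            else:
                out.append(insn)
        assert not todo, "not enough swappable instructions for this message"
        return out

    def extract(program):
        """Read the hidden bits back out of the instruction choices."""
        return [CHOICES[insn][1] for insn in program if insn in CHOICES]

    original = ["push ebp", "xor eax, eax", "test eax, eax", "ret"]
    stego = embed(original, [1, 0])
    print(stego)           # ['push ebp', 'sub eax, eax', 'test eax, eax', 'ret']
    print(extract(stego))  # [1, 0]

Because every substitution preserves both instruction length and behaviour, the rewritten program is the same size and does exactly the same thing – only someone with a pristine copy of the original could tell the difference.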

Marvellous.

Parallel Parking For Dummies

Have a go at the parallel parking game – let’s see how good you lot are. I’ve managed to beat the rest of the folks at work with a stunning score of 87.72, parking in 5.7 seconds flat… not bad for the only non-driver!

God Violates Intel Trademark

It’s all news today! Lots of fascinating little stories and nothing about me. I mean, whose blog is it anyway?

According to The Register, the authoritative source on such matters, God may have infringed upon Intel’s intellectual property. Only moderately funny, but nice and geeky.

In other news, I hear that AMD are planning to release a new dual-processor board designed to accommodate two older Athlon chips, optimised for playing ’80s music MP3s. They’re calling it the Duron Duron.

Singing Flowers

Remember those silly plastic flowers that danced? Well, here’s the next inevitable step: a Japanese inventor has developed a device that can be buried in a plant pot and uses vibration to make the leaves of the plant therein oscillate, acting as speakers.

Grow your own speakers. Yay.

Personally, I’d have far too much fun burying such a device under a friend’s vegetable patch, on a long (buried or otherwise concealed) cable, and having his plants ‘talk back’ to him when he talks to them. Where can I get one?

A Selection Of News Items From Around The World

[this post has been partially damaged during a server failure on Sunday 11th July 2004, and it has been possible to recover only a part of it]

[more of this post was recovered on Friday 24 November 2017]

Here’s some stuff I found interesting this weekend:

Swedish health workers, in an effort to stem the growing number of chlamydia cases among young people, have launched a ‘condom ambulance’ [BBC News]. If you find yourself ‘caught short’ in Sweden, just give them a bell and they’ll rush round to your house with a pack-of-three, for the equivalent of about £4.

Chinese researchers have used a carbon nanotube [Wikipedia] as the filament in a new, experimental light bulb [The Register]. The bulb emits more light than a tungsten filament at the same voltage, operates at a lower threshold voltage, and was still functioning fine after being switched on and off 5,000 times. The future of lighting?

And finally, researchers from Hebrew University in Israel may have found a solution to the problems associated with passwords. As it stands, ‘secure’ passwords are hard to remember and often end up written down, whereas insecure ones can be cracked. Plus, for real security, passwords should be …