Paedophile-Luring And Artificial Intelligence Ethics

[this post was partially damaged during a server failure on Sunday 11th July 2004, and it has been possible to recover only part of it]

[further fragments of this post were recovered on 12 October 2018]

Fun in the sun.

Kit and I had an idea for something like this a while back, and we wondered whether it constituted entrapment. After all, under UK law it’s illegal for a human to attempt to trick another human into committing a crime, because it can’t be determined whether that person would have committed the crime of their own volition… but here’s the catch: is it legitimate for a machine, working on behalf of a human, to do the same thing?

That’s likely to be the crucial issue if this scheme to trick ‘net paedophiles into giving information to computerised children [BBC] ends up providing evidence in court (not just leads, as has been the case so far) towards convicting people who are ‘grooming’ children on the internet.

Personally, I’d argue that in this case the machine is a tool of the human, just as chat room software is a tool of humans. I don’t see the difference between me using chat room software to pretend to be a kid, lure paedophiles, and pass tips to the police, and me writing a program to do the same on my behalf. It’s …

 

Artificial Intelligence For Dummies

I’ve just written an artificial intelligence gamebot, designed to pseudo-intelligently play simple board games that have a finite upper bound on the number of moves and a board of tokens – for example: Connect Four, Noughts & Crosses, Go!, or Othello. It uses the (appropriately-written) rules of the game to look ahead through a vast number of possible moves and select the ‘best’ ones according to how likely they are to lead to a win. It’s not terribly powerful, but I’d never written such a widely-scoped A.I. before, and I fancied the challenge.
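The sketch below isn’t the gamebot’s code, just an illustration in Python of the general approach: a plain look-ahead search, shown here on Noughts & Crosses because its game tree is small enough to search exhaustively. (The real bot rates moves by how likely they are to win rather than by this strict worst-case rule, and works from whichever game’s rules it’s given.)

# A minimal look-ahead player for Noughts & Crosses (plain minimax).
# The board is a 9-character string, ' ' for an empty square, read row by row.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def score(board, me, to_play):
    """+1 if `me` can force a win from here, -1 if the opponent can, 0 for a draw."""
    won = winner(board)
    if won:
        return 1 if won == me else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: draw
    other = 'O' if to_play == 'X' else 'X'
    results = [score(board[:i] + to_play + board[i + 1:], me, other) for i in moves]
    return max(results) if to_play == me else min(results)

def best_move(board, me):
    """Pick the empty square whose look-ahead score is best for `me`."""
    other = 'O' if me == 'X' else 'X'
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    return max(moves, key=lambda i: score(board[:i] + me + board[i + 1:], me, other))

# Example: X has taken the centre; the bot, playing O, replies with a corner (square 0).
print(best_move('    X    ', 'O'))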

I let it out for its first run this afternoon, and started a game of Connect Four with it. Here are the results:

I took the first turn, and put one of my pieces into the first column of the grid.

The gamebot took the second turn, picked up an enormous handful of pieces, and put six of them into the grid (two in the first column and one in each of the next four columns). That four-in-a-row, of course, won it the game.

Perhaps I need to define ‘cheating’ for it. Hmm… back to the drawing board…
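By ‘define cheating’ I mean something like a legality check on every turn before the game accepts it: a turn has to add exactly one piece, in the mover’s colour, resting at the bottom of its column. A hypothetical Python sketch for Connect Four (the grid representation is made up for the example, not the gamebot’s own):

# Hypothetical 'is this turn legal?' check for Connect Four.
ROWS, COLS = 6, 7  # standard Connect Four grid

def is_legal_turn(before, after, player):
    # before/after: lists of ROWS strings of length COLS, ' ' for empty,
    # row 0 at the top of the grid; player: the mover's token, e.g. 'R' or 'Y'.
    changes = [(r, c) for r in range(ROWS) for c in range(COLS)
               if before[r][c] != after[r][c]]
    if len(changes) != 1:
        return False  # six pieces in one go is right out
    r, c = changes[0]
    if before[r][c] != ' ' or after[r][c] != player:
        return False  # must turn an empty cell into one of the mover's own pieces
    return r == ROWS - 1 or before[r + 1][c] != ' '  # and it must land on the pile

# One red piece dropped into an empty column 0 passes the check:
empty = [' ' * COLS for _ in range(ROWS)]
one_move = empty[:-1] + ['R' + ' ' * (COLS - 1)]
print(is_legal_turn(empty, one_move, 'R'))  # True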

Cyberethics Of Artificial Intelligence Slavery

Claire drove me to work this morning. We had a fascinating discussion on the way, on Cyberethics Of Artificial Intelligence Slavery. Cool.

This morning I gave a tour of the office to our new interviewee, Phil, who for some reason I keep trying to call Chris. If he gets the job, he’ll be working full-time as an industry year student when I become a part-timer again later this month.

Now I have to go get some work done…