Happy Fun Weekend

It’s so much nicer coming back to the office on a Monday after a weekend that was both relaxing and productive, with lots of happy fun time with friends. Managed to tidy the flat, do heaps of laundry, host a successful Troma Night (three films, a decent crowd, and everybody hung on in ’til the end despite knackeredness), and hold a sedate but moderately successful Geek Night (Carcassonne and Chez Geek). All good.

Plus, I managed to find time to learn a fair bit about mod_rewrite, the Apache module that lets you do all kinds of useful things – canonical URLs, content negotiation, proxying content, fallbacks, and so on – as used on Scatmania to make the ‘nice’ URLs you see, with the date and post name embedded into the pseudo-folder structure. Fab. And I managed to help Bryn with his new web site, which I’m sure you’ll all be seeing later this month.
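By way of illustration – a hypothetical sketch, not the actual Scatmania configuration – an .htaccess file along these lines is how mod_rewrite turns a date-and-name pseudo-folder into a call to a single script, with a fallback rule for everything else:

```apache
# Hypothetical sketch: map /2004/09/my-post/ onto a query-string URL,
# without the visitor ever seeing the 'real' one.
RewriteEngine On

RewriteRule ^([0-9]{4})/([0-9]{2})/([a-z0-9-]+)/?$ index.php?year=$1&month=$2&name=$3 [L,QSA]

# Fallback: anything that doesn't match a real file or directory
# gets handed to the blog engine too.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . index.php [L]
```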

And in actual news, BBC News reports that a Swedish man has been issued with a £90 ticket for illegally parking his snowmobile in Warwick, despite claiming never to have been there and that his snowmobile was in his shed in Bollstabruk at the time.

The Scary Baby Conspiracy

Now here’s an idea for an Illuminati: The Game Of Conspiracy “Illuminati” card – The Scary Baby Conspiracy. Suggestions for the Scary Baby Conspiracy’s unique win condition and any special rules are welcome (from anybody who actually knows what Illuminati is). Fnord.

On which note – it’s Geek Night tonight! Hopefully we can have a couple of games of Carcassonne and perhaps one of Hacker.

A.I. Nuts, Again

Do you remember, a week or two ago, I wrote about a guy who patented the “Ethical Rules Of Artificial Intelligence”? Well – it looks like he’s read my article and left his own comments. I’m quite surprised and impressed that he took the time (away from his heavy schedule of philosophising or book-signing or whatever) to come and read my counter-arguments to his ideas and respond (albeit mostly with pre-fabricated stuff).

Here’s to you, John LaMuth.

Mr. Sinus Theater 3000 In Name-Theft Lawsuit

Mr. Sinus Theater 3000

The Austin Business Journal reports that the makers of Mystery Science Theater 3000 are taking to court the makers of a new show, Mr. Sinus Theater 3000, in which a group of guys sit in front of a cinema screen and take the piss out of the films they watch. And fair enough: it seems to me that this is quite obviously an attempt to cash in on the fame of the now-dead series, MST3K. Although if it’s not – and it is just fair use and parody… then Best Brains would be complaining that their parody-ridden show was itself the victim of a parody… umm…

In any case, go read the story.

Hide & Seek

Following the success of our last game of hide & seek in the Castle, Paul, Bryn, Claire and I went for another game last night. Ah; the simple joys of childish fun – legging it around a darkened ruin at night. Right up until silly screaming girls filled the area. Then we left.

In other news, I’ve been playing far too much of The Ur-Quan Masters (a.k.a. Star Control 2). If you like retro space exploration and trading games, give it a go (it’s free). Fantastic soundtrack, too.

Arriva And The Bus Route Of Doom

Missed my bus this morning. It seems that Arriva made some changes to the local bus timetables at the weekend. This much I knew about. I also knew that my bus would now be leaving the bus station at 0740 (rather than the 0750 I’m used to), so I arrived ten minutes earlier than usual at a bus stop it passes.

I was all in favour of this change. Throughout the day, the buses on the 531 route to Penrhyncoch have all left Aberystwyth at twenty to the hour… except for the aforementioned first one of the day, which leaves at ten to. For those of us who frequently alternate between these two times, it seems particularly unhelpful not to have the first bus of the morning leave at the same minutes-past-the-hour as the rest: it’s only a ten-minute difference. This is what I assumed they’d done.

Somehow, I managed to miss my bus, so I returned home and downloaded the new timetable.

It seems that not only have Arriva changed the start time of my service, but also its frequency, its number, and even its route. The lightly-worded poster discreetly placed in my bus didn’t hint at the magnitude of the upcoming change (“a few changes to schedules”, it said). Bloody hell.

So now there’s no Sunday service, no late service, and an even harder-to-memorise timetable (with the first two and the last buses in each direction now on a different schedule pattern from the rest). The only possible benefit to me is that the bus makes fewer stops: for example, it no longer stops at the bus stop I was waiting at this morning…

<sighs>

This isn’t a good start.

Amazon Pre-Order For Half-Life 2

I can’t believe that it’s been 13 and a half months since I pre-ordered Half-Life 2 on Amazon. I went to check on my order earlier today:

Dan's pre-order for Half-Life 2 on Amazon... with delivery estimate 2003.

It seems that my delivery estimate hasn’t been fixed – I can still (apparently) expect to receive my copy of this long-awaited game by 2nd October… 2003. Hmm.

Bryn – Afro Man!

Picture removed. The joke’s gotten old. – Dan, 5 June 2007

Picture reinstated. The joke’s so old that it’s now dead, and this page can serve as a permanent archive to what once was. – Dan, 7 October 2018

Picture removed again. I’d misunderstood a conversation in which I believed Bryn had approved of its reinstating. – 10 June 2019

Troma Night Website Integrates With Abnib

Woo and indeed hoo! I’m really starting to enjoy programming RSS feeds into my web sites now. I’ve just done a little bit of recoding of the Troma Night website to allow the newly-relaunched Aberystwyth weblog aggregator, Abnib, to syndicate it. Now, Abnib will show the details of the next upcoming Troma Night… and not a moment too soon – if you’re viewing this post on Abnib, you’ll see the announcement of tonight’s Troma Night just below it. Yay!
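For the curious: syndicating a site like this just means publishing a small XML document that aggregators such as Abnib poll. A minimal RSS 2.0 feed – a hypothetical one with made-up URLs, not the actual Troma Night feed – looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Troma Night</title>
    <link>http://example.com/tromanight/</link>
    <description>Details of upcoming Troma Nights</description>
    <item>
      <title>Troma Night: this Saturday</title>
      <link>http://example.com/tromanight/next/</link>
      <pubDate>Sat, 04 Sep 2004 20:00:00 +0000</pubDate>
      <description>Three films, starting at 8pm.</description>
    </item>
  </channel>
</rss>
```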

Abnib’s Back!

Abnib Journals button

(much thanks due to Gareth)

Abnib, the Aberystwyth Weblog Aggregator (bringing you niblets of the best of Aber’s weblogs) is back online, after months of absence. Take a look and see who you recognise.

Haven’t quite gotten around to putting everybody’s ‘mugshots’ in there (as I’m having some difficulty with semitransparent PNGs), but Gareth’s made a good few to get us going. Yay! Hooray for Gareth! And Aberystwyth! And Abnib! And RSS!

Thunderbirds Are… NO!

I saw Thunderbirds at the cinema last night. Jeez; was that awful. Unlike Bryn (who’s complained at length about the film already), I’m not a long-standing fan of the original TV series, and so the film didn’t ‘ruin’ it for me (although I did notice several major inconsistencies). Nonetheless, I still found the film to be quite abysmal.

The whole thing feels like a bad re-make of Spy Kids. It’s riddled with continuity errors (where did that door-opening switch go?), conveniences (an electronic lock suddenly becomes a mechanical one later in the film, so that Parker can pick it), plotholes (The Hood states that he was born with his powers, then later states that he gained them after the Thunderbirds failed to rescue him), false geography (I must take that trans-Thames monorail someday), false physics (you’re landing that rocket how?), bad sound effects (reminiscent of 60s cartoons, but no, not authentic to the style of the original series), awful acting (look, I’m scared – look, I’m concerned – look… umm), and characters with no common sense (let’s all leave the base undefended during this period of suspicious activity, for no reason whatsoever – and – my being a Thunderbird is a secret, so I’ll be seen to exit a disaster scene with them… in my flying car)…

The best thing about the film was the subtle and less-subtle jokes they made about the original series: “Look at him, like a puppet on a string!” says The Hood, as he uses his mind control powers on Brains. In another scene, with a close-up of a character’s hand, strings can clearly be seen supporting it (in the original series, the characters were puppets but for close-up scenes real hands were used).

It’s currently averaging 4.5 on the IMDB. I’d give it a 3, and it’s only that high because (a) I’m not a Thunderbirds fan and (b) I’ve seen a lot of awful films this last year.

How Google Could De-Throne AIM, And Other Geeky News

There’s an article on how Google could overthrow AIM/ICQ (link removed; apple-x.net now seems to be occupied by domain squatters), and perhaps even MSN Messenger, from their dominant positions in the instant messenger market, and improve internet standards usage and accessibility, by releasing their own instant messenger tool powered by the (wonderful) Jabber protocol. It’s a lovely idea, but (sadly) not one which is likely to happen.

On similarly geeky news, there’s a new web site, BrowseHappy, which aims to help everyday users make the switch away from Internet Explorer to safer, simpler, faster, better browsers. If you’re still using IE, take a look. If you’re already enlightened, show it to your unenlightened friends. It’s a very approachable site in nice, easy language.

And finally, there’s apparently a new worm doing the rounds, “Peeping Tom”, which, upon infection, turns on the victim’s webcam and microphone, and begins broadcasting to the world. What a lovely idea for a novelty virus.

Thanks for listening

A.I. For Deluded Nutcases

Some goon (sorry: Californian counsellor) has patented the Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence (including the Ten Ethical Laws Of Robotics). It’s nothing but unintelligible babble, interspersed with (inaccurate) references to artificial intelligence theory. The author (who has also written a book on family values with a distinct evangelical slant, from which most of the text of the patent seems to be taken) appears to know nothing about A.I. or computer science. In addition, I find his suggestion that ‘woolly’ and ‘vague’ rules and ‘commandments’ are sensible choices for A.I. safeguards entirely unconvincing:

While a meaningful future artificial intelligence may be more than capable of understanding rules set out in the way a human might express them – indeed, for some machine intelligences (artificial or not), this capacity to understand human speech and expression could be a very useful feature – this is not the level at which safeguards should be implemented.

While I appreciate the need for ‘safeguards’ (the need being that humans would not feel safe without them: even early machine intelligences – having been built for a specific purpose – will be in many ways superior to their human creators, and will therefore be perceived as a threat), I do not feel that a safeguard which depends on the machine already being fully functional could be even remotely effective. Instead, such safeguards should be implemented at a far lower, more fundamental level.

For an example of this, think of the safety procedures that are built into modern aircraft. An aeroplane is a sophisticated and powerful piece of machinery with some carefully-designed artificial intelligence algorithms pre-programmed into it, such as the autopilot and autoland features, the collision avoidance system, and the fuel regulators. Other, less sophisticated decision-making programs include the air pressure regulators and the turbulence indicators.

If the cabin pressure drops, an automatic system causes oxygen masks to drop from the overhead compartment. But this is not the only way to make this happen – the pilot also has a button for the purpose. On many ‘planes, in the event of a wing fire, the corresponding engine will be switched off – but this decision, too, can be overridden by a human operator. These systems all exhibit high-level decision-making behaviour: rules programmed into the existing systems. But they are, in the end, only a second-level safeguard behind the low-level one: the pilot who presses the button that drops the masks, or who keeps the engine running. These overrides are the most fundamental and most crucial safeguards in a modern aircraft: the means to physically cause or prevent the behaviour of the A.I.

Let’s go back to our ‘robots’ – imagine a future not unlike that depicted in films like Blade Runner or I, Robot, in which humanoid robotic servants assist humans with many menial tasks. Suppose that, for whatever reason (malice, malfunction, or whatever), a robot attacks a human. The first level of safeguard (and the only one suggested both by those films and by the author of the “Ten Ethical Laws”) would be for the human to demand that the robot desist – probably a voice command: “Stop!”. But of course, this is like the aeroplane that ‘decides’ to turn off a burning engine: we already know that something has ‘gone wrong’ in the A.I. unit, and it is that same malfunctioning machine which must now process the speech, ‘stop’. How do we know that the command will be correctly understood, particularly when we already know there has been a malfunction? If it fails to work, the human’s only likely chance of survival would be to activate the second, low-level safeguard – probably a reset switch or “big red button”.
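The two-level distinction can be sketched in a few lines of Python (purely illustrative, obviously – there is no such robot API): the high-level safeguard has to be processed by the very same, possibly faulty, A.I., while the low-level one cuts power without consulting it at all.

```python
class Robot:
    """Toy model of the two safeguard levels discussed above."""

    def __init__(self):
        self.powered = True
        self.ai_faulty = False  # a malfunction may set this

    def hear(self, command):
        # High-level safeguard: a voice command, routed through the
        # same (possibly malfunctioning) A.I. that is misbehaving.
        if self.ai_faulty:
            return  # a faulty A.I. may ignore or misparse the command
        if command == "stop":
            self.powered = False

    def big_red_button(self):
        # Low-level safeguard: a physical switch that bypasses the
        # A.I. entirely, so it works regardless of the A.I.'s state.
        self.powered = False


robot = Robot()
robot.ai_faulty = True
robot.hear("stop")        # high-level safeguard fails...
assert robot.powered      # ...the robot is still running
robot.big_red_button()    # low-level safeguard always works
assert not robot.powered
```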

You see: the rules that the author proposes are insubstantial, vague, and open to misinterpretation – just like the human’s cry for the robot to stop, above. The safeguards he proposes are no more effective than asking humans to be nice to one another is at preventing crime.

Whether or not it is ethical to give intelligent entities ‘off’ buttons is, of course, another question entirely.

Additional: On further reading, it looks as if the author of the document recently saw “I, Robot” and decided that his own neo-Christian viewpoint could be applied to artificial intelligences: which, of course, it could, but there is no reason to believe that it would be any more effective on any useful artificial intelligence than it would be on any useful ‘real’ intelligence.