On This Day In 2004

Looking Back

On this day in 2004 I handed in my dissertation, contributing towards my BEng in Software Engineering. The topic of my dissertation was the Three Rings project, then in its first incarnation, a web application originally designed to help university Nightlines to run their services.

An early Three Rings Directory page. If you remember when Three Rings used to look like this, then you're very old.

I’d originally started developing the project early in the previous academic year, before I’d re-arranged how I was going to finish my course (Three Rings celebrates its tenth birthday this year). This might be considered to have given me a head start over my peers, but in actual fact it just meant that I had even more to write up at the end. Alongside my work at SmartData a few days a week (and sometimes at weekends), it meant that I’d been pretty damn busy.

A page from my dissertation, covering browser detection and HTTPS support (then, amazingly, still not-quite-universal in contemporary browsers).

I’d celebrated hitting 10,000 words – half of the amount that I estimated I’d need – but little did I know that my work would eventually weigh in at over 30,000 words, and well over the word limit! In the final days, I scrambled to cut back on text and shunt entire chapters into the appendices (A through J), where they’d be exempt, while a team of volunteers helped to proofread everything I’d done so far.

Go on then; have another screenshot of an ancient web application to gawk at.

Finally, I was done, and I could relax. Well: right up until I discovered that I was supposed to have printed and bound two copies, and I had to run around a busy and crowded campus to get another copy run off at short notice.

Looking Forward

Three Rings went from strength to strength, as I discussed in an earlier “on this day”. When Bryn came on board and offered to write programs to convert Three Rings 1 data into Three Rings 2 data, in 2006, he borrowed my dissertation as a reference. He then forgot that he still had it, and finally returned it last month.

The inside front cover of my dissertation, along with a note from Bryn.

Later still, in 2009, Ruth expanded Three Rings as part of her Masters dissertation, in a monumental effort to add much-needed features at the same time as getting herself a degree. After handing it in and undergoing her defense (which went better than she expected), she got a first.

My dissertation (left) back on my bookshelf, where it belongs.

Today, Three Rings continues to eat a lot of my time, and now supports tens of thousands of volunteers at hundreds of different helplines and other charities, including virtually every Nightline and the majority of all Samaritans branches.

It’s grown even larger than I ever imagined, back in those early days. I often tell people that it started as a dissertation project, because it’s simpler than the truth: that it started a year or two before that, and provided a lot of benefit to a few Nightlines, and it was just convenient that I was able to use it as a part of my degree because otherwise I probably wouldn’t have had time to make it into what it became. Just as I’m fortunate now to have had, over the last few years, the input of such talented people: alone, I couldn’t have made it into the world-class service that it’s becoming.

This blog post is part of the On This Day series, in which Dan periodically looks back on years gone by.


Visitor Tracking Without Cookies (or How To Abuse HTTP 301s)

Last week I was talking to Alexander Dutton about an idea that we had to implement cookie-like behaviour using browser caching. As I first mentioned last year, new laws are coming into force across Europe that will require websites to ask for your consent before they store cookies on your computer. Regardless of their necessity, these laws are badly-defined and ill thought-out, and there’s been a significant lack of information to support web managers in understanding and implementing the required changes.

British Telecom’s implementation of the new cookie laws. Curiously, if you visit their site using the Opera web browser, it assumes that you’ve given consent, even if you click the button to not do so.

To illustrate one of the ambiguities in the law, I’ve implemented a tool which tracks site visitors almost as effectively as cookies (or similar technologies such as Flash Objects or Local Storage), but which must necessarily fall into one of the larger grey areas. My tool abuses the way that “permanent” (301) HTTP redirects are cached by web browsers.

[callout][button link=”http://c301.scatmania.org/” align=”right” size=”medium” color=”green”]See Demo Site[/button]You can try out my implementation for yourself. Click on the button to see the sample site, then close down all of your browser windows (or even restart your computer) and come back and try again: the site will recognise you and show you the same random number as it did the first time around, as well as identifying when your first visit was.[/callout]

Here’s how it works, in brief:

  1. A user visits the website.
  2. The website contains a <script> tag, pointing at a URL where the user’s browser will find some Javascript.
  3. The user’s browser requests the Javascript file.
  4. The server generates a random unique identifier for this user.
  5. The server uses an HTTP 301 response to tell the browser “this Javascript can be found at a different web address,” and provides an address that contains the new unique identifier.
  6. The user’s browser requests the new document (e.g. /javascripts/tracking/123456789.js, if the user’s unique ID was 123456789).
  7. The resulting Javascript is generated dynamically to automatically contain the ID in a variable, which can then be used for tracking purposes.
  8. Subsequent requests to the server, even after closing the browser, skip steps 3 through 5, because the user’s browser will cache the 301 and re-use the unique web address associated with that individual user.
How my “301-powered ‘cookies'” work.
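The flow above is easy to model in a few lines of Ruby. This is a toy simulation, not the actual demo code: the `Browser` class stands in for a real browser’s redirect cache, and `serve` stands in for the server.

```ruby
require 'securerandom'

# The "server": redirects the generic tracker URL to an ID-bearing one.
def serve(path)
  case path
  when '/tracker.js'
    ["/tracker/#{SecureRandom.hex(8)}.js", 301]   # steps 4-5: mint an ID, issue a permanent redirect
  when %r{\A/tracker/(\h+)\.js\z}
    ["var trackingId = '#{$1}';", 200]            # steps 6-7: Javascript with the ID baked in
  end
end

# The "browser": caches permanent redirects, as real browsers do.
class Browser
  def initialize
    @redirect_cache = {}
  end

  def fetch(path)
    target = @redirect_cache.fetch(path, path)    # step 8: a cached 301 short-circuits the lookup
    body, status = serve(target)
    if status == 301
      @redirect_cache[path] = body                # remember the permanent redirect
      fetch(path)
    else
      body
    end
  end
end
```

Two fetches from the same `Browser` return the same `trackingId`, even though no cookie is ever set; a fresh `Browser` (a different visitor) gets a new one.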

Compared to conventional cookie-based tracking (e.g. Google Analytics), this approach:

  • Is more-fragile (clearing the cache is a more-common user operation than clearing cookies, and a “force refresh” may, in some browsers, result in a new tracking ID being issued).
  • Is less-blockable using contemporary privacy tools, including the W3C’s proposed one: it won’t be spotted by any cookie-cleaners or privacy filters that I’m aware of (though it won’t penetrate incognito mode or other browser “privacy modes”).

Moreover, this technique falls into a slight legal grey area. It would certainly be against the spirit of the law to use this technique for tracking purposes (although it would be trivial to implement even an advanced solution which “proxied” requests, using a database to associate conventional cookies with unique IDs, through to Google Analytics or a similar solution). However, it’s hard to legislate against the use of HTTP 301s, which are an even more-fundamental and required part of the web than cookies are. Also, and for the same reasons, it’s significantly harder to detect and block this technique than it is conventional tracking cookies. However, the technique is somewhat brittle and it would be necessary to put up with a reduced “cookie lifespan” if you used it for real.

[callout][button link=”http://c301.scatmania.org/” align=”right” size=”medium” color=”green”]See Demo Site[/button] [button link=”https://gist.github.com/avapoet/5318224″ align=”right” size=”medium” color=”orange”]Download Code[/button] Please try out the demo, or download the source code (Ruby/Sinatra) and see for yourself how this technique works.[/callout]

Note that I am not a lawyer, so I can’t make a statement about the legality (or not) of this approach to tracking. I would suspect that if you were somehow caught doing it without the consent of your users, you’d be just as guilty as if you used a conventional approach. However, it’s certainly a technically-interesting approach that might have applications in areas of legitimate tracking, too.

Update: The demo site is down, but I’ve updated the download code link so that it still works.


8-Bit Google Maps

It’s like stepping back in time through videogaming history. And also sideways, into a parallel universe of knights and dragons.

8-bit Google Maps. At different view levels, you’ll see mountainous areas (Wales is worth looking at) and sprites for cities of different sizes.

It’s like Google Maps, but in the style of retro top-down, turn-based RPGs. It’s more than a little impressive: it’s presumably being generated at least semi-dynamically, as it covers the whole world. It sometimes makes mistakes with rivers – perhaps where their visibility from the air is low – but it’s nonetheless an interesting feat from a technical perspective.

There’s “8-bit Street View”, too.

Nice one, Google. Go take a look.

Searching For A Virgin

You just can’t rely on GMail’s “contacts” search any more. Look what it came up with:

Not a result I'd commonly associate with the word "virgin".

With apologies to those of you who won’t “get” this: the person who came up in the search results bears a name that is far, far away, in my mind, from the word “virgin”.

In not-completely-unrelated news, I use a program called SwiftKey X on my phone, which uses Markov chains (as I’ve described before) to intelligently suggest word completion and entire words and phrases based on the language I naturally use. I had the software thoroughly parse my text messages, emails, and even this blog to help it learn my language patterns. And recently, while writing a text message to my housemate Paul, it suggested the following sentence as the content of my message:

I am a beautiful person.

I have no idea where it got the idea that that’s something I’m liable to say with any regularity. Except now that it’s appeared on my blog, it will. It’s all gone a little recursive.
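For the curious, the underlying idea is simple to sketch: count which word most often follows each word in a training corpus, then suggest the most frequent follower. This is a toy bigram model in Ruby; SwiftKey’s real implementation is, of course, far more sophisticated.

```ruby
# Build a bigram model: for each word, count the words that follow it.
def train(text)
  model = Hash.new { |h, k| h[k] = Hash.new(0) }
  words = text.downcase.scan(/[a-z']+/)
  words.each_cons(2) { |a, b| model[a][b] += 1 }
  model
end

# Suggest the most frequent follower of a word (nil if unseen).
def suggest(model, word)
  model[word.downcase].max_by { |_, count| count }&.first
end
```

Trained on enough of your own text messages and blog posts, a model like this starts to echo your own turns of phrase back at you.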


Remembering December 2003

For the first time in over seven years, I’ve recovered a handful more of my “lost” blog posts: articles that were written prior to the great server fuck-up of July 2004.

scatmania.org in August 2003, showing off the simplistic look it had before it was deleted.

Since then, I’ve kept regular backups. A lot of the old stuff is sometimes cringeworthy (in a “did I really used to be such a dick?” way), and I’m sure that someday I’ll look back at my blog posts from today, too, and find them shockingly un-representative of me in the future. That’s the nature of getting older.

Nostalgia’s awesome, which I choose to represent with this photo of me and my parents on a hilltop somewhere. You have permission to “aww”.

But it’s still important to me to keep all of this stuff. My blog is an extension to my diary: the public-facing side of what’s going on in my life. I back-link furiously, especially in the nostalgia-ridden “On This Day” series of blog posts I throw out once in a while.

Castle of the Four Winds in early 1999.
If you remember my blog when it used to look like this, back in the late 1990s, then you’ve been following me for longer than most folks have been on the Web at all.

The blog posts I’ve newly recovered are:

Andy & Sian, the adorable couple who I declared “most surprising” of the new relationships to get underway late in 2003. The pair married in 2010.

So there you go: especially for you stalkers or those of you looking for a trip down memory lane – some links to what I was up to in December 2003.

For those of you who are really on a roll, here’s some further reading from the period: Kit, on the last Troma Night of the year; a religious argument that Alec kicked off (thanks, archive.org); Liz starting her first blog; Paul applying for a hardship loan.


HDMI Virus, or How I Became An Old Person

So I saw this HDMI cable online:

Apparently the plastic coating around this cable helps to prevent 'virus noises', whatever those are. Red scribbles added by me.

Somehow, this triggered a transformation in me. You know how when Eric eats a banana, an amazing transformation occurs? A similar thing happened to me: this horrendously-worded advertisement turned me into an old person. I wanted to write a letter to them.

My letter... er... email to Bluemouth Interactive.

There were so many unanswered questions in my mind: what is a “virus noise” (is it a bit like the sound of somebody sneezing)? How does a polyester coating protect against them? And what kind of viruses are transmitted down video cables, anyway?

It took them five days but, fair play to them, they – despite Reddit’s expectations – wrote back.

Bluemouth's response to me. Like the other pictures, you can click it to see it in full.

Their explanation? The ‘Virus’ was transcribed from French terminology for interference. It’s not a computer virus or anything like that.

The world is full of examples of cables being over-sold, especially HDMI cables and things like “gold-plated optical cables” (do photons care about the conductivity of gold, now?).

Does anybody have enough of a familiarity with the French language to let me know if their explanation is believable?


New Computer #2 – Dana

The other week I built Tiffany2, New Earth‘s new media centre computer. She’s well-established and being used to watch movies, surf the web, and whatnot, now, so I thought I’d better fulfil my promise of telling you about my other new smaller-than-average computer, Dana, whose existence was made possible by gifts from my family over Christmas and my birthday.

Dana‘s size and power consumption are so small that they make Tiffany2 look like a bloated monster. That’s because Dana is a DreamPlug, an open-architecture plug computer following in the footsteps of the coveted SheevaPlug and GuruPlug.

A dreamplug (seen here with a two-pin power connector, which helps to give you a sense of its size).

The entire computer including its detachable power supply is only a little larger than the mobile telephones of the mid-nineties, and the entire device can be plugged straight into the wall. With no hard disk (it uses SD cards) and no fans, the DreamPlug has no moving parts to wear out or make noise, and so it’s completely silent. It’s also incredibly low-power – mine idles at about 4 watts – that’s about the same as a radio alarm clock, and about a hundredth of what my desktop PCs Toni and Nena run at under a typical load.

I’ve fitted up mine with a Mimo Mini-Monster 10″: a dinky little self-powered USB-driven touchscreen monitor about the size of an iPad. Right now the whole assembly – about the size of a large picture frame – sits neatly in the corner of my desk and (thanks to the magic of Synergy) forms part of my extended multi-monitor desktop, as well as acting as a computer in her own right.

Dana's Mimo Mini-Monster touchscreen: Dana herself is completely concealed behind the screen.

So on the surface, she’s a little bit like a wired tablet computer, which would seem a little silly (and indeed: at a glance you’d mistake her for a digital photo frame)! But because she’s a “real” computer underneath, with a 1.2GHz processor, 512MB RAM, USB, WiFi, and two Ethernet ports, there’s all kinds of fun things that can be done with her.

For a start, she provides an ultra low-power extension to my existing office development environment. I’ve experimented with “pushing” a few tasks over to her, like watching log file output, downloading torrents, running a web server, reading RSS feeds, and so on, but my favourite of her tasks is acting as a gateway between the rest of the world and my office.

A network diagram showing the layout of the computer networks on New Earth. It's more-complex than your average household.

While they’ve come a long way, modern ADSL routers are still woefully inadequate at providing genuine customisability and control over my home network. But a computer like this – small, silent, and cheap – makes it possible to use your favourite open-source tools (iptables, squid, sshd, etc.) as a firewall to segregate off a part of the network. And that’s exactly what I’ve done. My office – the pile of computers in the upper-right of the diagram, above – is regulated by Dana, whose low footprint means that I don’t feel bad about leaving her turned always-on.

That means that, from anywhere in the world (and even from my phone), I can now:

  1. Connect into Dana using SSH.
  2. Send magic packets to Toni, Nena, or Tiffany2 (all of which are on wired connections), causing them to turn themselves on.
  3. Remotely control those computers to, for example, get access to my files from anywhere, set them off downloading something I’ll need later, or whatever else.
  4. Turn them off when I’m done.

That’s kinda sexy. There’s nothing new about it – the technologies and standards involved are as old as the hills – but it’s nice to be able to do it using something that’s barely bigger than a postcard.
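Step 2’s “magic packet”, incidentally, is nothing more than a specific UDP broadcast, and it’s easy to construct by hand. Here’s a minimal Ruby sketch (the MAC address in the example is made up):

```ruby
require 'socket'

# A Wake-on-LAN "magic packet" is 6 bytes of 0xFF followed by the
# target machine's MAC address repeated 16 times (102 bytes in all).
def magic_packet(mac)
  ("\xFF".b * 6) + ([mac.delete(':-')].pack('H*') * 16)
end

# Broadcast it on UDP port 9 so the sleeping machine's NIC hears it.
def wake(mac, broadcast: '255.255.255.255', port: 9)
  UDPSocket.open do |sock|
    sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_BROADCAST, true)
    sock.send(magic_packet(mac), 0, broadcast, port)
  end
end

# e.g. wake('00:11:22:33:44:55')
```

The target machine’s BIOS and network card both need Wake-on-LAN enabled, and it only works over a wired connection, which is why Toni, Nena, and Tiffany2 qualify.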

I have all kinds of ideas for future projects with Dana. It’s a bit like having a souped-up (and only a little bigger) Arduino to play with, and it’s brimming with potential. How about a webcam for my bird feeder? Or home-automation tools (y’know: so I can turn on my bedroom light without having to get out of bed)? Or a media and file server (if I attached a nice, large, external hard disk)? And then there’s the more far-fetched ideas: it’s easily low-power enough to run from a car battery – how about in-car entertainment? Or home-grown GPS guidance? What about a “delivered ready-to-use” intranet application, as I was discussing the other day with a colleague, that can be simply posted to a client, plugged in, and used? There’s all kinds of fun potential ideas for a box like this, and I’m just beginning to dig into them.


Looking for Wikipedia?

As you may have noticed, the English-speaking Wikipedia is “blacking out” in protest at SOPA/PIPA. This is a very important thing: SOPA/PIPA are potentially extremely dangerous bits of legislation (if you’re looking for a short explanation of why, here’s a great video).

I’m going to assume that you’re aware of the issues and have already taken action appropriate to your place – if you’re in the US, you’ve written to your representatives; if you’re in the rest of the English-speaking world, you’ve donated to the EFF (this issue affects all of us), etc. But if you’re in need of Wikipedia, here’s the simplest way to view it, today:

Accessing Wikipedia during the blackout

  1. Go to the English-language Wikipedia as normal. You’ll see the “SOPA blackout” page after a second or so.
  2. Copy-paste the following code into the address bar of the browser:

javascript:(function()%7Bdocument.getElementById('content').style.display='block';document.getElementById('mw-sopaOverlay').style.display='none'%7D)()

That’s all. You don’t even have to turn off Javascript in your browser, as others are suggesting: just surf away.

If you get sick of copy-pasting on every single Wikipedia page you visit… you can drag this link to your bookmarks toolbar (or right click it and select “add to bookmarks”) and then just click it from your bookmarks whenever you want to remove the blackout.

And if you just came here for the shortcut without making yourself aware of the issues, shame on you.

New Computer #1 – Tiffany2

This weekend, I integrated two new computers into the home network on New Earth. The first of these is Tiffany2.

Tiffany2 is a small "media centre" style computer with an all-in-one remote keyboard/mouse.

Tiffany2 replaces Tiffany, the media centre computer I built a little under four years ago. The original Tiffany was built on a shoestring budget of under £300, and provided the technical magic behind the last hundred or so Troma Nights, as well as countless other film and television nights, a means to watch (and record and pause) live TV, surf the web, and play a game once in a while.

The problem with Tiffany is that she was built dirt-cheap at a time when building a proper media centre PC was still quite expensive. So she wasn’t very good. Honestly, I’m amazed that she lasted as long as she did. And she’s still running: but she “feels” slow (and takes far too long to warm up) and she makes a noise like a jet engine… which isn’t what you want when you’re paying attention to the important dialogue of a quiet scene.

Tiffany and Tiffany2. Were this a histogram of their relative noise levels, the one on the left would be much, much larger.

Tiffany2 is virtually silent and significantly more-powerful than her predecessor. She’s also a lot smaller – not much bigger than a DVD player – and generally more feature-rich.

This was the first time I’d built an ITX form-factor computer (Tiffany2 is Mini-ITX): I wanted to make her small, and it seemed like the best standard for the job. Assembling some of her components felt a little like playing with a doll’s house – she has a 2.5″ hard disk and a “slimline” optical drive: components that in the old days we used to call “laptop” parts, which see new life in small desktop computers.

Examples of six different hard drive form factors. Tiffany2 uses the third-smallest size shown in this picture. The computer you're using, unless it's a laptop, probably uses the third-largest (picture courtesy Paul R. Potts, CC-At-SA).

In order to screw in some of the smaller components, I had to dig out my set of watchmaker’s screwdrivers. Everything packs very neatly into a very small space, and – building her – I found myself remembering my summer job long ago at DesignPlan Lighting, where I’d have to tuck dozens of little components, carefully wired-together, into the shell of what would eventually become a striplight in a tube train or a prison, or something.

She’s already deployed in our living room, and we’ve christened her with the latest Zero Punctuation, a few DVDs, some episodes of Xena: Warrior Princess, and an episode of Total Wipeout featuring JTA‘s old history teacher as a contestant. Looks like she’s made herself at home.

(for those who are sad enough to care, Tiffany2 is running an Intel Core i3-2100 processor, underclocked to 3GHz, on an mITX Gigabyte GA-H61N-USB3 motherboard with 4GB RAM, a 750GB hard disk, and DVD-rewriter, all wrapped up in an Antec ISK 300-150 case with a 150W power supply: easily enough for a media centre box plus some heavy lifting if I ever feel the need to give her any)


The men who really invented the GUI

This link was originally posted to /r/technology. See more things from Dan's Reddit account.

The original link was: http://www.computeractive.co.uk/pcw/pc-help/1925325/the-invented-gui

The IBM PC is not alone in having a significant anniversary this year. It is 25 years since Steve Jobs and Steve Wozniak started flogging Apple 1 circuit boards from a Palo Alto garage. But it was not until 1984 that the first Apple Mac made its appearance, with its revolutionary mouse-driven graphical user interface (GUI).

Apple’s achievement in recognising the potential of the GUI and putting it into a mass-market machine cannot be denied. But Apple did not invent the system, as many still believe.

The basic elements of both the MacOS and Windows were developed at Xerox’s Palo Alto Research Centre. Xerox did not patent them and blithely showed them off to Jobs, who promptly snaffled the lot.

The roots of the system go back still further. Every computer history website will tell you that Doug Engelbart, hired by the US Defense Department to find new ways of harnessing the computer, invented the mouse in 1963.

But this is true only up to a point. Engelbart’s contribution was important, but his ideas didn’t come out of the blue.

Roots in radar

Like the pulse circuits that provide the heartbeat of computing, the GUI has its roots in early radar systems. It was wartime radar work that got Engelbart thinking about dynamic information displays, and radar engineers were the first to encounter the problem of how to use these displays to communicate with an intelligent machine.

Two engineers came up with a trackball, the innards of the mouse, a full 11 years before Engelbart unveiled his device. Moreover, it was used to select a position on a screen to convey information to a processor, which is the fundamental operation of a GUI. One of the engineers, 80-year-old Tom Cranston, is still alive and living in Scotland.

Cranston’s early career nicely mirrors the shift the electronics industry went through in the 1940s and 1950s. Pre-war electronics was overwhelmingly analog, using thermionic valves as amplifiers, oscillators and detectors.

Cranston, who was born in Canada, spent World War II in Britain maintaining Air Force analog radio equipment.

After the war he took an electronics-focused engineering physics degree at the University of Toronto, before joining Ferranti Canada at a time when it was trying to gain a foothold in the nascent computer industry.

This used valves predominantly in switch mode for logic circuits. “What I studied in electronic circuits at university had nothing to do with what was set before us at Ferranti,” he said.

The Datar system – starting from scratch

Cranston was project engineer with a team working on a system for the Canadian Navy called Datar, an attempt to marry radar to digital computers which was way ahead of its time when it started in 1949.

Datar enabled a group of ships to share sonar and radar information. Up to 500 objects could be identified and tracked, and each ship saw the whole position plotted relative to its own moving position.

These calculations would be trivial today, but for Datar the logic had to be hard-wired using around 10,000 valves per ship.

Everything had to be done from scratch. The young engineers recruited for the project even had to prove that data could be transmitted by radio – a demonstration (using pulse-code modulation) that finally persuaded the cash-strapped Canadian government to back the scheme. Positional information was stored on a magnetic drum, a precursor of the hard disk.

The demonstration system on Lake Ontario used standard radar displays with a rotating beam that showed the blips of nearby aircraft, and ships; sonar data from notional submarines was simulated. They needed a way for an operator to identify a target blip and to enter its position.

These displays were drawn by conventional analog circuitry: there was no video RAM to play with. An electronic dot cursor could be thrown up during a brief flyback period between screen sweeps; the engineers needed to find a way that the operator could position this cursor smoothly over a target blip and store the co-ordinates.

To Cranston and his colleague Fred Longstaff, this was just another problem to be solved. “It didn’t seem a big thing… there was a tremendous urgency about all this and it is hard to recreate that atmosphere.”

The simplest answer would have been to set the dot’s X and Y deflections separately using two variable resistances, as used in nearly all electronic level controls, and then translate these values into digital co-ordinates.

Cranston and Longstaff came up with a far more elegant solution that used one control instead of two, and delivered the co-ordinates directly.

The wheel thing

Cranston, while on a visit to a naval establishment, had seen someone using a wheel on a stick, like a miniature pedometer, to measure distances on a chart. “We need something like that which works simultaneously in two dimensions,” he said to Longstaff.

Longstaff then came up with the idea of two follower wheels resting at right angles to a ball that was free to roll in any direction. The prototype actually used two pairs of wheels driven by a standard 4in Canadian bowling ball resting on an air bearing, a feature that is simpler to make than it sounds.

“You just mix up some plaster and stick a ball in it when it is beginning to set,” explained Cranston. “Then you let the plaster harden, take the ball out, drill holes into the plaster, and pump air through them. The result is like magic.”

A circle of holes close to the rim of each wheel passed a beam of light to a photo-sensor, which produced a string of countable pulses as the wheel rotated. Counting circuits were well understood by then, Cranston recalls.

One wheel measured upward movement and its opposite registered down, and the count was incremented or decremented accordingly to provide the Y co-ordinate; the other pair worked similarly to get the X co-ordinate.

Shutters blocked light from the two wheels’ measuring movements opposite to the current rotation. A button – the equivalent of a mouse click – was pressed to indicate a target.

Now and then

Through today’s eyes, this arrangement seems over-elaborate: why not use two wheels and a direction flag? Half a century later, Cranston cannot recall the details of why it was done in this way, but it seems to have been a matter of using what was at hand. Nowadays, a single line of code could cope with the changing directions; the Datar team had to hard-wire everything.

Also routine now is the control of screen positions by numbers, but it was new and intriguing to Cranston and Longstaff. An analog control would have a unique position for each screen co-ordinate, but there was no such direct relationship in the case of the trackball: if you moved the cursor by altering the stored number, the ball would still work regardless of its orientation.

They thought of the device as “centreless” and Longstaff jokingly referred to it as the “turbo-encabulator”.

The whole exercise was what in today’s jargon would be called a proof of concept. The team had to show Datar could work in order to raise the money to refine it, and it needed a lot of money. Valves were unreliable and not really suitable for use on a ship, so the whole system would eventually need rebuilding round new-fangled transistors.

Canada could not afford to do this itself and was seeking a partnership with another country. A system was demonstrated to a succession of military and technical decision makers. One US military observer was so astonished by the sophisticated display that he peered under a table to ensure there was no tomfoolery going on.

Nobody bought into the system. Britain and the US, the most likely partners, had their own projects and there was probably a “not invented here” factor.

Ironically, a prototype US system that Cranston saw later at MIT didn’t need a trackball because it was more advanced: targets were identified and tracked automatically.

Research unrewarded

Many people, though, had seen the trackball. The question of patenting it never arose. Ferranti UK, the parent company, had limited contact with its Canadian arm. Executives had little idea of what was going on at the research level.

Cranston said: “Think about the state of play in the computer world in 1952. There were only a handful of operating computers in the world. Almost all were unreliable. There was no common software language… pulse rates were only 50-100kHz. The idea of using a ball to control a cursor which could intervene and change program execution was a million miles ahead.”

Ball resolvers were not new. They had appeared in navigational and ballistic control mechanisms. The achievement of Longstaff and Cranston was to see how one could be used in conjunction with an electronic display. It was, Cranston says, a generation before its time.

Where Datar went

The Datar experience went into a programmable computer called the FP-6000 which was launched in 1961 by Ferranti Packard – the original company merged in 1958 with Packard Electric.

The FP-6000 was one of the first to use an operating system and was ahead of IBM rivals in its ability to multi-task. Its chief architect was Longstaff. He ended his career as a comms guru with Motorola and died five years ago.

The FP-6000 ended up with ICL, after being bought by Britain’s International Computers and Tabulators, and the two UK firms sold 3000 of them worldwide as the 1900 series.

Cranston left Ferranti in 1956 to take what he describes as a “giant leap backwards”. He joined the Canadian arm of a US company making data loggers and alarm scanners for the Canadian power industry that used logic in the form of mechanical switch arrays.

Electronic computers were considered too unreliable and too expensive for the task. Telephone relay logic filled the gap for another decade.

In fact, Intel’s seminal 4004 was designed originally for tasks like this.

Cranston left after 11 years and moved near to Inverness with his Scottish wife, setting up home in an old mill that he converted himself. He taught for several years in the local technical college, introducing students to the mysteries of the microprocessor.

Surprisingly, Cranston does not have a computer. “They are too fascinating,” he said. “I’d get so involved, I wouldn’t have time for anything else.”

Banned From Facebook

Apparently I’m too cool for Facebook.

The message I see when I try to log in to Facebook. Sadly, I'm also prohibited from using Pidgin to connect to Facebook Chat, which is just about the only thing I use Facebook for these days.

Okay, that’s not what that message actually says, but that’s how I chose to read it. It turns out that my name isn’t real. I went through their forms to tell them that “no, really, this is my name”. They also asked me “what I use Facebook for”, to which I – of course – answered “chatting to friends and stalking exes, same as everybody else – why, what do YOU use Facebook for?” But when I submitted the form, it just ran me around in a circle, back to where I started.

Also: Facebook! Is that exposed HTML code in your message? Dear me.

I’d be less frustrated if I hadn’t just sent them a copy of my driving licence earlier this year, in order to prove that my name was really my name. I guess that the media claims that Facebook keeps all of your information indefinitely aren’t true, and in actual fact they have the memory of a proverbial goldfish.

I’d be more frustrated if I actually used Facebook for anything more than pushing blog posts out to people who prefer to see them on Facebook, and occasionally chatting to people, thanks to the wonderful pidgin-facebookchat plugin.

So on average, I suppose, I’m pretty indifferent. That’s the Facebook way.


Cold-call scam attempts to trick users into thinking their PC has a fault, sells them a “solution”. Here’s a recording.

This link was originally posted to /r/technology. See more things from Dan's Reddit account.

The original link was: https://news.bbc.co.uk/today/hi/today/newsid_9637000/9637033.stm

More and more people are facing calls from companies claiming falsely that their computer is infected by a virus.

Technology correspondent Rory Cellan-Jones had one such call and outlines how the scam works and how to avoid it.

Goodbye Reader

Goodbye, Google Reader. It was fun while it lasted.

Long ago, I used desktop RSS readers. I was only subscribed to my friends’ blogs back then anyway, so it didn’t matter that I could only read them from my home computer. But then RSS feeds started appearing on news sites, and tech blogs started appearing about things related to my work. And smartphones took over the world, and I wanted to be able to synchronise my reading list everywhere. There were a few different services that competed for my attention, but Google Reader was the best. It was simple, and fast, and easy, and it Just Worked in that way that Google products often do.

I put up with the occasional changes to the user interface. Hey, it’s a beta, and it’s still the best thing out there. Hey, it’s free, what can you say? I put up with the fact that from time to time, they changed the site in ways that were sometimes quite hostile to Opera, my web browser of choice. I put up with the fact that it had difficulty with unsigned HTTPS certificates (it’s fine now) and that it didn’t provide a mechanism to authenticate against services like LiveJournal (it still doesn’t). I even worked around the latter, releasing my own tool and updating it a few times until LiveJournal blocked it (twice) and I had to instead recommend that people switched to rival service FreeMyFeed.

The new Google Reader (with my annotations - click to embiggen). It sucks quite a lot.

But the final straw came this week when Google “updated” Reader once again, with two awful new changes:

  1. I know that they’re ever-so-proud of the Google+ user interface, but rebranding all of the other services to look like it just isn’t working. It’s great for Google+, not-bad for Search, bad for GMail (but at least you can turn it off!), and fucking awful for Reader. I like distinct borders between my items. I don’t like big white spaces and buttons that eat up half the screen.
  2. The sharing interface is completely broken. After a little while, I worked out that I still can share things with other people, but I can’t any longer see what other people are sharing without clicking over to Google+. This sucks a lot. No longer can I keep track of which shared items I have and haven’t read, and no longer can I read the interesting RSS feeds my friends have shared in the same place as I read (and share) my own.

So that’s the last straw. Today, I switched everything over to Tiny Tiny RSS.

Tiny Tiny RSS - it's simple, clean, and (in an understated way) beautiful.

Originally I felt that I was being pushed “away” from Google Reader, but the more I’ve played with it, the more I’ve realised that I’m being drawn “towards” Tiny Tiny, and wishing that I’d made the switch sooner. The things that have really appealed are:

  • It’s self-hosted. Tiny Tiny RSS is a free, open-source solution that you host for yourself (or I suppose you can use a shared host; there are a few around). I know that this is a downside to most people, but to me, it’s a serious selling point: now, I’m in control of what updates are applied, when, and if I don’t like the functionality of a part of the system, I can change it – I’m in control.
  • It’s simple and clean. It’s got a great user interface, in an understated, minimalist way. It’s somewhat reminiscent of desktop email clients, replacing the “stream of feeds” idea with a two- or three-pane view (your choice). That sounds like it’d be a downside, until you realise…
  • …with great keyboard controls. Tiny Tiny RSS is great for keyboard lovers like me. The default key-commands (which are of course customisable) are based on Emacs, so if that’s your background then it’s easy to be right at home in minutes and browsing feeds faster than ever.
  • Plus: it’s got a stack of nice features. I’m loving the “fresh” filter, that helps me differentiate between the stuff I’ve “saved for later” reading and the stuff that’s actually new and interesting. I’m also impressed by the integrated authentication, which removes my dependency on FreeMyFeed-like services and (because it’s self-hosted) lets me keep my credentials securely under my own control. It supports authentication using SSL certificates, a beautiful and underused technology. It allows you to customise the update frequency of your feeds, so I can stalk my friends’ blogs at lightning-quick rates and stall my weekly update subscriptions so they don’t get checked so frequently. And unlike Google Reader, it actually tells me when feeds break, so I don’t just “get no updates” for a while before I think to check the site (and it’ll even let me change the URLs when this happens, rather than unsubscribing and resubscribing).

Put simply: all of my major gripes with Google Reader over the last few years have been answered all at once in this wonderful little program. If people are interested in how I set up Tiny Tiny RSS and made the switchover as simple and painless as possible, I’ll write a blog post to talk you through it.

I’ve had just one problem: it’s not quite so tolerant of badly-formed XML as Google Reader. There’s one feed in my list which, it turns out, contains (very) invalid XML, which Google Reader managed to ignore and breeze over but Tiny Tiny RSS chokes on. I’ve contacted the site owner to try to get it fixed, but if they don’t, I might have to hack some code to make a workaround. Not ideal, and not something that everybody would necessarily want to deal with, so be aware!

If, like me, you’ve become dissatisfied by Google Reader this week, you might also like to look at rssLounge, the other worthy candidate I considered as a replacement. I had a quick play but didn’t find it quite as suitable for my needs, but it might be to your taste: take a look.

The new sidebar, showing what I'm reading in my RSS reader lately.

Oh, and one more thing: if you used to “follow” me on Google Reader (or even if you didn’t) and you want to continue to subscribe to the stuff I “share”, then you’ll want to subscribe to this new RSS feed of “my shared stuff”, instead: it can also be found syndicated in the right-hand column of my blog.

Update: this guy’s made a bookmarklet that makes the new Google Reader theme slightly less hideous. It doesn’t fix the other problems, but if you’re not quite pissed-off enough to jump ship, it might make your experience more-bearable.

Update 2: others in the blogosphere are saying good things about Reader rival NewsBlur, which recently turned one year old. If you’re looking for a hosted service, rather than something “roll-your-own” like Tiny Tiny RSS, perhaps it’s the tool for you?


QR Codes of the Bodleian

The Treasures of the Bodleian exhibition opened today, showcasing some of the Bodleian Libraries‘ most awe-inspiring artefacts: fragments of original lyrics by Sappho, papyrus from Herculaneum charred by the eruption of Mt. Vesuvius in 79 CE, and Conversation with Smaug, a watercolour by J. R. R. Tolkien to illustrate The Hobbit, are three of my favourites. Over the last few weeks, I’ve been helping out with the launch of this exhibition and its website.

Photograph showing a laptop running Ubuntu, in front of a partially-constructed exhibition hall in which museum artefacts are being laid-out in glass cases.
From an elevated position in the exhibition room, I run a few tests of the technical infrastructure whilst other staff set up, below.

In particular, something I’ve been working on is the QR codes. This experiment – very progressive for a sometimes old-fashioned establishment like the Bodleian – involves placing small two-dimensional barcodes with the exhibits. Each barcode encodes the web address of that exhibit’s page on the exhibition website. Visitors who scan them – using a tablet computer, smartphone, or whatever – are directed to a web page where they can learn more about the item in front of them, discuss it with other visitors, or “vote” on it. That voting is another exciting new feature of this exhibition: we’re trying quite hard to engage academics and the public in debate about the nature of “treasures” – what is a treasure?

Close-up photograph showing a small plaque with a QR code alongside interprative text, in an exhibition case.
A QR code in place at the Treasures of the Bodleian exhibition.

In order to strengthen the perceived “connection” between each QR code and its object – and to encourage visitors to scan the codes despite perhaps having little or no instruction – we opted to embed images relating to the objects into the QR codes themselves. By cranking up the error-correction level of a QR code, it’s possible to “damage” it quite significantly and still have it scan perfectly well.

One of my “damaged” QR codes. This one corresponds to The Laxton Map, a 17th Century map of common farming land near Newark on Trent.

We hope that the visual association between each artefact and its QR code will help to make it clear that the code is related to the item (and isn’t, for example, some kind of asset tag for the display case or something). We’re going to be monitoring usage of the codes, so hopefully we’ll get some meaningful results that could be valuable for future exhibitions: or for other libraries and museums.

Rolling Your Own

If you’re interested in making your own QR codes with artistic embellishment (and I’m sure a graphic designer could do a far better job than I did!), here’s my approach:

  1. I used Google Infographics (part of Chart Tools) to produce my QR codes. It’s fast, free, simple, and – crucially – allows control over the level of error correction used in the resulting code. Here’s a sample URL to generate the QR code above:

https://chart.googleapis.com/chart?chs=500x500&cht=qr&chld=H|0&chl=HTTP://TREASURES.BODLEIAN.OX.AC.UK/T7

  • 500x500 is the size of the QR code. I was ultimately producing 5cm codes because our experiments showed that this was about the right size for our exhibition cabinets, the distance from which people would be scanning them, etc. For laziness, then, I produced codes 500 pixels square at a resolution of 100 pixels per centimetre.
  • H specifies that we want an error-correction level of 30%, the maximum possible. In theory, at least, this allows us to do the maximum amount of “damage” to our QR code, by manipulating it, and still have it work; you could try lower levels if you wanted, and possibly get less-complex-looking codes.
  • 0 is the width of the border around the QR code. I didn’t want a border (as I was going to manipulate the code in Photoshop anyway), so I used a width of 0.
  • The URL – HTTP://TREASURES.BODLEIAN.OX.AC.UK/T7 – is presented entirely in capitals. This is because capital letters use fewer bits when encoded as QR codes. “http” and domain names are case-insensitive anyway, and we selected our QR code path names to be in capitals too. We also shortened the URL as far as possible: owing to some complicated technical and political limitations, we weren’t able to lean on URL-shortening services like bit.ly, so we had to roll our own. In hindsight, it’d have been nice to have set up the subdomain “t.bodleian.ox.ac.uk”, but this wasn’t possible within the time available. Remember: the shorter the web address, the simpler the code, and simpler codes are easier and faster to read.
  • Our short URLs redirect to the actual web pages of each exhibit, along with an identifying token that gets picked up by Google Analytics to track how widely the QR codes are being used (and which ones are most popular amongst visitors).
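As a rough illustration of the saving from capital letters, here’s a back-of-envelope comparison. The per-character figures come from the QR specification’s alphanumeric and byte modes; a real code adds mode and length headers, which this sketch ignores:

```python
# Approximate data bits needed to encode a QR payload.
# Alphanumeric mode (digits, A-Z and a few symbols, including
# ":", "/" and ".") packs 2 characters into 11 bits, with 6 bits
# for a final odd character; mixed-case text forces byte mode,
# which costs 8 bits per character.

def alphanumeric_bits(payload: str) -> int:
    pairs, odd = divmod(len(payload), 2)
    return pairs * 11 + odd * 6

def byte_bits(payload: str) -> int:
    return len(payload) * 8

url = "HTTP://TREASURES.BODLEIAN.OX.AC.UK/T7"
print(alphanumeric_bits(url))  # 204 bits in all-caps
print(byte_bits(url.lower()))  # 296 bits if lowercase forces byte mode
```

That’s roughly 45% more data for the mixed-case version, which in practice can mean a denser (harder-to-scan) code.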
By now, you’ll have a QR code that looks a little like this.
  2. Load that code up in Photoshop, along with the image you’d like to superimpose onto it. Many of the images I’ve had to work with are disturbingly “square”, so I’ve simply taken them and given them a white or black border (depending on whether they’re dark- or light-coloured). With others, though, I’ve been able to cut around some of the more-attractive parts of the image in order to produce something with a nicer shape to it. In any case, put your image in as a layer on top of your QR code.
  3. Move the image around until you have something that’s aesthetically-appealing. With most of my square images, I’ve just plonked them in the middle and resized them to cover a whole number of “squares” of the QR code. With the unusually-shaped ones, I’ve positioned them such that they fit in with the pattern of the QR code, somewhat, then inserted another layer in-between the two and used it to “white out” the QR code squares that intersect with my image, giving a jagged, “cut out” feel.
  4. Test! Scan the QR code from your screen, and again later from paper, to make sure that it’s intact and functional. If it’s not, adjust your overlay so that it covers less of the QR code. Test on a variety of devices. In theory, it should be possible to calculate how much damage you can cause to a QR code before it stops working (and where it’s safe to cause the damage), but in practice it’s faster to use trial-and-error. After a while, you get a knack for it, and you almost feel as though you can see where you need to put the images so that they just-barely don’t break the codes. Good luck!
Another of my “damaged” QR codes. I’m reasonably pleased with this one.

Give it a go! Make some QR codes that represent your content (web addresses, text, vCards, or whatever) and embed your own images into them to make them stand out with a style of their own.
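For what it’s worth, the short-URL redirection described above can be sketched in a few lines. The mapping, the path, and the `utm_source=qr` token here are purely illustrative – not the Bodleian’s actual setup:

```python
# Hypothetical sketch of a tiny short-URL redirector: a short path
# like /T7 maps to the exhibit's real page, with a source token
# appended so an analytics package can attribute the visit to a
# scanned QR code. All names here are made up for illustration.
from typing import Optional

EXHIBITS = {
    "/T7": "/exhibits/the-laxton-map",  # short path -> exhibit page
}

def redirect_target(short_path: str) -> Optional[str]:
    """Return the full redirect URL for a short path, or None for a 404."""
    page = EXHIBITS.get(short_path.upper())  # paths are case-insensitive
    if page is None:
        return None
    return page + "?utm_source=qr"

print(redirect_target("/t7"))    # /exhibits/the-laxton-map?utm_source=qr
print(redirect_target("/nope"))  # None
```

In a real deployment this lookup would sit behind a web server issuing HTTP 302 responses, with one token per code so that individual codes can be compared.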


What’s Wrong With My Phone

In my review of my new HTC Sensation earlier this month, I tried to explain how my new phone – with its swish and simple interface – didn’t feel quite… geeky enough for me. I picked up on the way that its process management works, but I’ve since realised that this is only symptomatic of a deeper problem: the difference between traditional computers (of which my old N900 was one) and modern consumer-centric devices which – inspired by the iPod/iPhone/iPad/etc. – try to simplify things for the end-user and provide strong support for centralised repositories of pre-packaged “apps” for every conceivable purpose.

To take an example of the difference: my N900 ran Linux, and felt like it ran Linux. As a reasonably-sensible operating system, this meant that all of the applications on it used pretty much the same low-level interfaces to do things. If I wanted, I could have installed (okay, okay – compiled) sshfs, and been reasonably confident that every application on my phone, whether it’s a media player or a geocaching application or whatever, would use that new filesystem. I could store my geocaching .gpx files on an SSH-accessible server somewhere, and my phone could access them, and my geocaching app wouldn’t know the difference because I’d have that level of control over the filesystem abstraction layer.

Similarly, if I installed a game which made use of Ogg Vorbis to store its sound files, which therefore installed the Vorbis codecs, then I can expect that my media player software will also be able to make use of those codecs, because they’ll be installed in the standard codec store. This kind of thing “just works”. Okay, okay: you know as well as I do that computers don’t always “just work”, but the principle is there such that it can “just work”, even if it doesn’t always.

On these contemporary smartphones, like the iPhone, Android devices, and (I assume) modern BlackBerrys, the model is different: individual applications are sandboxed and packaged up into neat little bundles with no dependencies outside of that provided by the platform. If you have two applications installed that both use sshfs, then they both have to include (or implement) the relevant bundle! And having them installed doesn’t automatically give sshfs-like functionality to your other filesystem-accessing tools.

It’s not all bad, of course: this “new model” is great for helping non-technical users keep their devices secure, for example, and it means that there’s almost no risk of dependency hell. It’s very… easy. But I’m still not sure it quite works: I’ll bet that 90% of users would install an application that demands dubious levels of permissions (and could, for example, be stealing their address book data for sale to scammers) without even thinking about the risks, so the security benefits are somewhat nullified.

In summary:

Traditional-computing device (e.g. N900)

Pros:

  • User actually “owns” the device
  • Applications can be combined (e.g. pipes, automation, new middleware)
  • Open-source development is likely to thrive

Cons:

  • User can probably “brick” the device
  • Full potential requires a learning investment
  • Harder to monetise the platform, as a developer

“New model” device (e.g. iPhone, Android)

Pros:

  • Easy for non-technical users
  • More secure (in theory), as the platform exposes little
  • Centralised “app store”/“marketplace”

Cons:

  • Potentially limiting for technical users
  • Only as secure as the user is savvy
  • Centralised “app store” can act as a “lock-in”

Needless to say, the new model devices are winning, and already tablet computers powered by the very same platforms as the mobile phones are beginning to be seen as a simpler, easier alternative to conventional laptops. It’s to be expected: most of today’s users don’t want a learning curve before they can use their smartphone: they just want to make some calls, play Angry Birds a bit, keep up with their Facebook friends, and so on. But I hope that there’ll always be room for a few folks like me: folks who want to tinker, want to play, want to hack code for no real benefit but their own pleasure… and without having to shell out for a developer licence in order to do so!