My geek-crush Ben Foxall posted on Twitter on Monday morning to share
that he’d had a moment of fun nostalgia when he’d come into the office to discover that somebody in his team had covered his monitor with two layers of Post-It notes. The bottom layer
contained numbers – and bombs! – to represent the result of a Minesweeper board, and the upper layer ‘covered’ them so that individual Post-Its could be removed to reveal what lay
beneath. Awesome.
Not to be outdone, I hunted around my office and found some mini-Post-Its. Being smaller meant that I could fit more of them onto a monitor and thus make a more-sophisticated (and
more-challenging!) play space. But how to generate the board? Sure: I could do it by hand, but that doesn’t seem very elegant at all – plus, humans make really bad random number generators! I didn’t need quantum-tunnelling-seeded Minesweeper (yes, that’s a thing) levels of entropy, sure, but it’d still be nice to outsource the heavy lifting
to a computer, right?
So naturally, I wrote a program to do it for me. Want to see? It’s at danq.me/minesweeper. Just line up some Post-Its on a co-worker’s monitor to work out
how many you can fit across it in each dimension (I found that I could get 6 × 4 standard-sized Post-Its but 7 × 5 or even 8 × 5 mini-sized Post-Its very comfortably onto one of the typical widescreen monitors in my office), decide how many mines you want, and click
Generate. Don’t like the board you get? Click it again!
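If you're curious what the generator actually has to do, the core of it is tiny. Here's a quick Ruby sketch – an illustration, not the actual code behind danq.me/minesweeper – which scatters the mines at random and then numbers each remaining cell with a count of its mined neighbours:

# Build a height×width grid, drop `mines` mines into random distinct cells,
# and increment the neighbour-count of every adjacent non-mine cell.
def minesweeper(width, height, mines)
  board = Array.new(height) { Array.new(width, 0) }
  [*0...width].product([*0...height]).sample(mines).each do |x, y|
    board[y][x] = :mine
    (-1..1).each do |dy|
      (-1..1).each do |dx|
        ny, nx = y + dy, x + dx
        next if ny < 0 || nx < 0 || ny >= height || nx >= width
        board[ny][nx] += 1 unless board[ny][nx] == :mine
      end
    end
  end
  board
end

# An 8 × 5 mini-Post-It board with ten mines:
minesweeper(8, 5, 10).each { |row| puts row.map { |c| c == :mine ? '*' : c.to_s }.join(' ') }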
And because I was looking for a fresh excuse to play with Periscope, I broadcast the first game I set up live to the Internet. In the end, 66
people watched some or all of a paper-based game of Minesweeper played by my colleague Liz, including moments of cheering her on
and, in one weird moment, despair at the revelation that she was married. The internet’s strange, yo.
Anyway: in case you missed the Periscope broadcast, I’ve put it on YouTube. Sorry about the portrait-orientation filming: I
think it’s awful, too, but it’s a Periscope thing and I haven’t installed the new update that
fixes it yet.
Now go set up a game of Post-It Minesweeper for a friend or co-worker.
What’s the hardest word to guess, when playing hangman? I’ll come back to that.
Last year, Nick Berry wrote a fantastic blog post about the optimal strategy for Hangman. He showed that the best guesses
to make to get your first “hit” in a game of hangman are not the most-commonly occurring letters in written English, because these aren’t the most commonly-occurring
letters in individual words. He also showed that the first guesses should be adjusted based on the length of the word (the most common letter in 5-letter words is ‘S’, but the most
common letter in 6-letter words is ‘E’). In short: hangman’s a more-complex game than you probably thought it was! I’d like to take his work a step further, and work out which word is
the hardest word: that is – assuming you’re playing an optimal strategy, what word takes the most-guesses?
First, though, we need to understand how hangman is perfectly played. Based on the assumption that the “executioner” player is choosing words randomly, and that no clue is given as to
the nature of the word, we can determine the best possible move for all possible states of the game by using a data structure known as a tree. Suppose our opponent has chosen a
three-letter word, and has drawn three dashes to indicate this. We know from Nick’s article that the best letter to guess is A. And then, if our guess is wrong, the next
best letter to guess is E. But what if our first guess is right? Well, then we’ve got an “A” in one or more positions on the board, and we need to work out the next best
move: it’s unlikely to be “E” – very few three-letter words have both an “A” and an “E” – and of course what letter we should guess next depends entirely on what positions
the letters are in.
What we’re actually doing here is a filtering exercise: of all of the possible letters we could choose, we’re considering what possible results each choice could have. Then for
each of those results, we’re considering what guesses we could make next, and so on. At each stage, we compare all of the possible moves to a dictionary of all possible
words, and filter out all of the words it can’t be: after our first guess in the diagram above, if we guess “A” and the board now shows “_ A _”, then we know that of the
600+ three-letter words in the English language, we’re dealing with one of only about 134. We further refine our guess by playing the odds: of those words, more of them have a “C” in
than any other letter, so that’s our second guess. If it has a C in, that limits the options further, and we can plan the next guess accordingly. If it doesn’t have a C
in, that still provides us with valuable information: we’re now looking for a three-letter word with an A in the second position and no letter C: that cuts it
down to 124 words (and our next guess should be ‘T’). This tree-based mechanism for working out the best moves is comparable to that used by other game-playing computers. Hangman is
simple enough that it can be “solved” by contemporary computers (like draughts –
solved in 2007 – but unlike chess: while modern chess-playing
computers can beat humans, it’s still theoretically possible to build future computers that will beat today’s computers).
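The full tree search is overkill for getting a feel for the idea, though. Here's a minimal Ruby sketch of just the greedy filtering loop described above – an illustration of the approach, not the script linked below:

# Filter the dictionary to words consistent with the board so far, then
# guess the letter that appears in the most remaining candidate words.
def best_guess(candidates, guessed)
  ('a'..'z').reject { |l| guessed.include?(l) }
            .max_by { |l| candidates.count { |w| w.include?(l) } }
end

def consistent?(word, board, guessed)
  return false unless word.length == board.length
  word.chars.zip(board.chars).all? do |w, b|
    b == '_' ? !guessed.include?(w) : w == b
  end
end

# Returns the total number of guesses needed for a given secret word.
def play(secret, dictionary)
  board, guessed = '_' * secret.length, []
  until board == secret
    candidates = dictionary.select { |w| consistent?(w, board, guessed) }
    guessed << best_guess(candidates, guessed)
    board = secret.chars.map.with_index { |c, i| guessed.include?(c) ? c : board[i] }.join
  end
  guessed.length
end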
Now that we can simulate the way that a perfect player would play against a truly-random executioner, we can use this to simulate games of hangman for every possible word
(I’m using version 0.7 of this British-English dictionary).
In other words, we set up two computer players: the first chooses a word from the dictionary, the second plays “perfectly” to try to guess the word, and we record how many guesses it
took. So that’s what I did. Here’s the Ruby code I used. It’s heavily-commented and
probably pretty understandable/good learning material, if you’re into that kind of thing. Or if you fancy optimising it, there’s plenty of scope for that too (I knocked it out on a
lunch break; don’t expect too much!). Or you could use it as the basis to make a playable hangman game. Go wild.
Running the program, we can see that the hardest three-letter word is “xxv”, which would take 22 guesses (20 of them wrong!) to get. But aside from the roman numeral for 25, I don’t
think that “xxv” is actually a word. Perhaps my dictionary’s not very good. “Oak”, though, is definitely a word, and at 20 guesses (17 wrong), it’s easily enough to hang your opponent
no matter how many strokes it takes to complete the gallows.
There are tougher words in the four-letter set, like the devious “quiz”, “jazz”, “zinc”, and “faux”. Pick one of those and your opponent – unless they’ve seen this blog post! – is
incredibly unlikely to guess it before they’re swinging from a rope.
As we get into the 5, 6, and 7-letter words you’ll begin to notice a pattern: the longer the words get, the easier even the hardest of them become. That’s kind of what you’d expect, I suppose: if there were a hypothetical word that contained every letter in the alphabet, then nobody would ever fail to (eventually) get it.
When we make a graph of each word length, showing which proportion of the words require a given number of “wrong” guesses (by an optimised player), we discover a “sweet spot” window in
which we’ll find all of the words that an optimised player will always fail to guess (assuming that we permit up to 10 incorrect guesses before they’re disqualified). The
window seems small for the number of times I remember seeing people actually lose at hangman, which implies to me that human players consistently play sub-optimally, and do not
adequately counteract that failing by applying an equal level of “smart”, intuitive play (knowing one’s opponent and their vocabulary, looking for hints in the way the game is
presented, etc.).
In case you’re interested, then, here are the theoretically-hardest words to throw at your hangman opponent. While many of the words there feel like they would quite-rightly be
difficult, others feel like they’d be easier than their ranking would imply: this is probably because they contain unusual numbers of vowels or vowels in unusual-but-telling positions,
which humans (with their habit, inefficient under normal circumstances, of guessing an extended series of vowels to begin with) might be faster to guess than a
computer.
The explosion of smartphone ownership over the last decade has put powerful multi-function computers into the pockets of almost
half of us. But despite the fact that the average smartphone contains at least as much personally-identifiable information as its owner keeps on their home
computer (or in dead-tree form) at their house – and is significantly more-prone to
opportunistic theft – many users put significantly less effort into protecting their mobile’s data than they do the data they keep at home.
I have friends who religiously protect their laptops and pendrives with TrueCrypt, axCrypt, or similar, but still carry around an unencrypted mobile phone. What we’re talking about here is a device that contains all of the contact
details for you and everybody you know, as well as potentially copies of all of your emails and text messages, call histories, magic cookies for social networks and other services, saved passwords, your browsing history (some people would say that’s the
most-incriminating thing on their phone!), authentication apps, photos, videos… more than enough information for an attacker to pursue a highly-targeted identity theft or
phishing attack.
“Pattern lock” is popular because it’s fast and convenient. It might be good enough to stop your kids from using your phone without your permission (unless they’re smart enough to do
some reverse smudge engineering: looking for the smear-marks made by your fingers as you unlock the device; and let’s face
it, they probably are), but it doesn’t stand up to much more than that. Furthermore, gesture unlock solutions dramatically reduce the number of permutations, because you can’t repeat a
digit: so much so, that you can easily perform a rainbow table attack on the SHA1 hash to
reverse-engineer somebody’s gesture. Even if Android applied a per-device pseudorandom salt to the gesture pattern (they don’t, so you can download a prefab table), it doesn’t take long
to generate an SHA1 lookup of just 895,824 codes (maybe Android should have listened to Coda Hale’s advice and used BCrypt, or else something better still).
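To illustrate why that attack is so cheap: as I understand it, Android stored the pattern as a plain, unsalted SHA1 of the sequence of grid-cell indices the finger passes through – in Ruby terms, something like this (the gesture here is a made-up example):

require 'digest'

# Cells are numbered 0-8, left-to-right, top-to-bottom; this is an "L".
# With no salt, the same gesture hashes identically on every device -
# which is exactly what makes a precomputed lookup table possible.
gesture = [0, 3, 6, 7, 8]
puts Digest::SHA1.hexdigest(gesture.pack('C*'))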
These attacks, though (and the iPhone isn’t bulletproof, either), are all rather academic, because they are
trumped by the universal rule that once an
attacker has physical access to your device, it is compromised. In this fundamental way, mobile security should be considered equivalent to computer security.
All of the characteristics distinct to mobile devices (portability, ubiquity, processing power, etc.) are weaknesses, and that’s why smartphones deserve at least
as much protection as desktop computers protecting the same data. Mobile-specific features like “remote wipe” are worth having, but can’t be relied upon alone – a wily attacker
could easily keep your phone in a lead box or otherwise disable its connectivity features until it’s cracked.
The only answer is to encrypt your device (with a good password). Having to tap in a PIN or password may be less-convenient than just “swipe to unlock”, but it gives
you a system that will resist even the most-thorough efforts to break it, given physical access (last year’s
iPhone 4 vulnerability notwithstanding).
It’s still not perfect – especially here in the UK, where the RIPA can be used (and has been used) to force key surrender. What we really need is meaningful, usable
“whole system” mobile encryption with plausible deniability. But so long as you’re only afraid of identity thieves and phishing scammers, and not
being forced to give up your password by law or under duress, then it’s “good enough”.
Of course, it’s only any use if it’s enabled before your phone gets stolen! Like backups, security is one of those things that everybody should make a habit of thinking
about. Go encrypt your smartphone; it’s remarkably easy –
As web developers, we’re used to working around the bugs in Microsoft Internet Explorer. The older versions are worst, and I’m certainly glad to not have to write code that
works in Internet Explorer 6 (or, increasingly, Internet Explorer 7) any more: even Microsoft are glad to see Internet Explorer 6 dying out, and IE8 is pretty ropey too. And despite what Microsoft claim, I’m afraid IE9 isn’t really a “modern” browser either (although it is a huge step forwards over its
predecessors).
But imagine my surprise when I this week found what I suspect might be a previously undiscovered bug in Internet Explorer 8 and below. Surely they’ve all been found (and some of them
even fixed) by now? But no. It takes a very specific set of circumstances for the bug to manifest itself, but it’s not completely unbelievable – I ran into it by accident while
refactoring parts of Three Rings.
Here’s the crux of it: if you’re –
Using Internet Explorer 8 or lower, and
You’re on a HTTPS (secure) website, and
You’re downloading one of a specific set of file types: Bitmap files (Content-Type: image/bmp), for example, are a problem, but JPEG files aren’t, and
The web server indicates that the file you’re downloading should be treated as something to be “saved”, rather than something to be viewed in your browser
(Content-Disposition: attachment), and
The web server passes a particular header to ask that Internet Explorer does not cache a copy of the file (Cache-Control: no-cache),
Then you’ll see a dialog box like the one shown above. Switching any of the prerequisites in that list out makes the problem go away: even switching the header from a strict “no-cache”
to a more-permissive “private” makes all the difference.
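If you fancy reproducing it, all your server-side script needs to do is emit that combination of headers over HTTPS. Here's a minimal PHP sketch – not the exact code of my published experiment, and "test.bmp" is a placeholder filename:

<?php
// Serve this over HTTPS to IE8-or-below to trigger the dialog:
header('Content-Type: image/bmp');
header('Content-Disposition: attachment; filename="test.bmp"');
header('Cache-Control: no-cache');
readfile('test.bmp'); // any bitmap file will do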
I’ve set up a test environment where you can see this for yourself: HTTP version; HTTPS version. The source code of my experiment (PHP) is also available. Of course, if you try it in a functional, normal web browser, it’ll all work fine. But if
you’ve got access to a copy of Internet Explorer 8 on some old Windows XP box somewhere (IE8 is the last version of the browser made available for XP), then try it in that and see for
yourself what a strange error you get.
Last week I was talking to Alexander Dutton about an idea that we had to implement cookie-like behaviour using browser caching. As I first mentioned last year, new laws are coming into force across Europe that will require
websites to ask for your consent before they store cookies on your computer.
Regardless of their necessity, these laws are badly-defined and ill thought-out, and there’s been a significant lack of information to support web managers in understanding and
implementing the required changes.
To illustrate one of the ambiguities in the law, I’ve implemented a tool which tracks site visitors almost as effectively as cookies (or similar technologies such as Flash Objects or
Local Storage), but which must necessarily fall into one of the larger grey areas. My tool abuses the way that “permanent” (301) HTTP redirects are cached by web browsers.
You can try out my implementation for yourself at http://c301.scatmania.org/. Visit the demo site, then close down all of your browser windows (or even restart your computer) and come back and try again: the site will recognise you and show you the same random number as it did the first time around, as well as identifying when your first visit was.
Here’s how it works, in brief:
A user visits the website.
The website contains a <script> tag, pointing at a URL where the user’s browser will find some Javascript.
The user’s browser requests the Javascript file.
The server generates a random unique identifier for this user.
The server uses a HTTP 301 response to tell the browser “this Javascript can be found at a different web address,” and provides an address that contains the new unique identifier.
The user’s browser requests the new document (e.g. /javascripts/tracking/123456789.js, if the user’s unique ID was 123456789).
The resulting Javascript is generated dynamically to automatically contain the ID in a variable, which can then be used for tracking purposes.
Subsequent requests to the server, even after closing the browser, skip steps 3 through 5, because the user’s browser will cache the 301 and re-use the unique web
address associated with that individual user.
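If you'd rather see those steps in code, here's a minimal Ruby/Sinatra sketch of the same idea – a simplified illustration, not the actual demo code linked below:

require 'sinatra'
require 'securerandom'

# Step 2: pages embed <script src="/tracker.js"></script>.
get '/tracker.js' do
  # Steps 4-5: mint an ID and 301-redirect to an ID-specific address. The
  # browser caches the redirect, so repeat visits skip straight to step 6.
  cache_control :public, max_age: 31_536_000
  redirect "/javascripts/tracking/#{SecureRandom.hex(8)}.js", 301
end

# Steps 6-7: serve Javascript with the ID baked into a variable.
get '/javascripts/tracking/:id.js' do
  content_type 'application/javascript'
  "var trackingId = '#{params[:id]}';"
end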
Compared to conventional cookie-based tracking (e.g. Google Analytics), this approach:
Is more-fragile (clearing the cache is a more-common user operation than clearing cookies, and a “force refresh” may, in some browsers, result in a new tracking ID
being issued).
Is less-blockable using contemporary privacy tools, including the W3C’s
proposed one: it won’t be spotted by any cookie-cleaners or privacy filters that I’m aware of (it won’t penetrate incognito mode or other browser “privacy modes”, though).
Moreover, this technique falls into a slight legal grey area. It would certainly be against the spirit of the law to use this technique for tracking purposes (although it
would be trivial to implement even an advanced solution which “proxied” requests, using a database to associate conventional cookies with unique IDs, through to Google Analytics or a
similar solution). However, it’s hard to legislate against the use of HTTP 301s, which are an even more-fundamental and required part of the web than cookies are. Also, and for the same
reasons, it’s significantly harder to detect and block this technique than it is conventional tracking cookies. However, the technique is somewhat brittle and it would be necessary to
put up with a reduced “cookie lifespan” if you used it for real.
Please try out the demo at http://c301.scatmania.org/, or download the source code (Ruby/Sinatra) from https://gist.github.com/avapoet/5318224 and see for yourself how this technique works.
Note that I am not a lawyer, so I can’t make a statement about the legality (or not) of this approach to tracking. I would suspect that if you were somehow caught doing
it without the consent of your users, you’d be just as guilty as if you used a conventional approach. However, it’s certainly a technically-interesting approach that might have
applications in areas of legitimate tracking, too.
Update: The demo site is down, but I’ve updated the download code link so that it still works.
As I indicated in my last blog post, my new blog theme has a “pop up” Dan in the
upper-left corner. Assuming that you’re not using Internet Explorer, then when you move your mouse cursor over it, my head will “duck” back behind the bar below it.
This is all done without any Javascript whatsoever: it’s pure CSS. Here’s how it’s done:
<div class="sixteen columns">
  <div id="dans-creepy-head"></div>
  <h1 id="site-title" class="graphic">
    <a href="/" title="Scatmania">Scatmania</a>
  </h1>
  <span class="site-desc graphic">
    The adventures and thoughts of "Scatman" Dan Q
  </span>
</div>
The HTML for the header itself is pretty simple: there’s a container (the big blue bar) which contains, among other things, a <div> with the id
"dans-creepy-head". That’s what we’ll be working with. Here’s the main CSS:
The CSS sets a size, position, and background image to the <div>, in what is probably a familiar way. A :hover selector changes the style to increase the
distance from the top of the container (from -24px to 100px) and to decrease the height, cropping the image (from 133px to 60px – this was necessary
in this case to prevent the bottom of the image from escaping out from underneath the masking bar that it’s supposed to be “hiding behind”). With just that code, you’d have a perfectly
workable “duck”, but with a jerky, one-step animation.
The transition directive (and the browser-prefixed versions -o-transition, -webkit-transition, and -moz-transition, for compatibility) is what makes the magic happen. It specifies that when any ("all") style on this element is changed (whether via CSS directives, as in this case, or by a change of class or properties by a Javascript function), a transition effect will be applied to that change. My use of "all" is a lazy catch-all – I could have specified the individual properties (top and height) that I was interested in changing, and even put different periods on each, but I’ll leave it to you to learn about CSS3 transition options for yourself. The 800ms is the duration of the transition: in my case, 0.8 seconds.
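For example – hypothetically; this isn’t what my theme actually uses – a head that moved quickly but shrank slowly could be written:

#dans-creepy-head { transition: top 400ms, height 1200ms; }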
I apply some CSS to prevent the :hover effect from taking place in Internet Explorer, which doesn’t support transitions. The "ie" class is applied to the
<html> tag using Paul Irish’s technique, so it’s easy to detect and handle IE users without loading separate
stylesheet files for them. And finally, in order to fit with my newly-responsive design, I make the pop-up head disappear when the window is under 780px wide (at which point there’d be
a risk of it colliding with the title).
That’s all there is to it! A few lines of CSS, and you’ve got an animation that degrades gracefully. You could equally-well apply transitions to links (how about making them fade in
or out, or change the position of their background image?) or, with a little Javascript, to your tabstrips and drop-down menus.
Somehow, this triggered a transformation in me. You know how when Eric eats
a banana, an amazing transformation occurs? A similar thing happened to me: this horrendously-worded advertisement turned me into an old person. I wanted to write a letter
to them.
There were so many unanswered questions in my mind: what is a “virus noise” (is it a bit like the sound of somebody sneezing)? How does a polyester coating protect against them? And what
kind of viruses are transmitted down video cables, anyway?
Their explanation? The ‘Virus’ was transcribed from French terminology for interference. It’s not a computer virus or anything like that.
The world is full of examples of cables being over-sold, especially HDMI cables and things like “gold-plated optical cables” (do photons care about the conductivity of gold, now?).
Does anybody have enough of a familiarity with the French language to let me know if their explanation is believable?
On this day in 2011 I launched
FreeDeedPoll.org.uk, a site that tries to make it as easy as possible for British citizens to change their names (and have those new names
accepted as being legally-recognised).
The thing that people often don’t realise is that, as a British citizen, you have the right to be known by pretty-much any name you like. You don’t need a solicitor to change your name. You don’t even need any
money. You can just start using it. A deed poll, which you can make all by yourself for free, is just a piece of paper on which you write a promise that you
consider your “new” name to be your primary moniker, and not your “old” one.
Over the last year, almost 3,000 deed polls have been generated using the site, including ones for my partner Ruth (who opted to keep her maiden name as a middle name after she got married) and my friend Jen (who now has among the coolest – and most hippyish – collection of names I’ve ever seen). As to how many of the other thousands of deed polls
have actually been used, I simply don’t know: as a commitment to privacy, no logs are kept of the names people enter onto the form, so for all I know there are 2,000 all the same and
998 “blank” submissions.
Looking Forward
I’ve become a minor Internet guru on the topic of name changes, it turns out. The other week, a transgendered stranger contacted me via the “chat to Dan” link, to ask about the legal
aspects of their (slightly more-complex than most) case for changing their name. And because I’m a fan of helping people, I did a little research with them in order to find the answers.
I felt the need to keep stressing that IANAL, but I’m
pretty sure I managed to help, anyway.
And over the New Year, when there were a few days of downtime for the site (I was part of an exodus of domains from my SOPA-supporting previous registrar, and they made the process
difficult), I received messages from people asking when it would be back up again, so it’s obviously getting some use.
Most recently, a few days ago, a stranger emailed me asking for advice on the legal issues in changing the names of his children. After doing the necessary research, I’m now thinking of
expanding the site to make this easier, too.
A strange feeling for me has been that this project is, and has been for the last year, “finished”. I’m not very good at finishing technical projects: one of the biggest and most
important things that I’ve worked on – Three Rings – is now in its tenth year and shows no sign of being “finished”. So it feels odd to have developed a website that’s complete, done and dusted, and probably won’t require more than a modicum
of maintenance over the coming decades to keep it running.
It’s good, though, that I’ve been able to help people with something about which far too many are underinformed. It gives me a warm fuzzy feeling, and I like it.
To mark the occasion, I’ve updated the open-source version of the
tool so that it works “out of the box”: it now includes all of the (free) fonts you need to get started, and can be used without setting up reCAPTCHA if you like. For more information about the history of the project, see my project page about it.
This blog post is part of the On This Day series, in which Dan periodically looks back on
years gone by.
The other week I built Tiffany2, New Earth‘s new media centre computer. She’s well-established and being used to watch movies, surf the web, and whatnot, now, so I thought I’d
better fulfil my promise of telling you about my other new smaller-than-average computer, Dana, whose existence was made possible by gifts from my family over Christmas
and my birthday.
Dana‘s size and power-consumption is so small that it makes Tiffany2 look like a bloated monster. That’s because Dana is a DreamPlug, an open-architecture plug computer following in the footsteps of the coveted SheevaPlug and GuruPlug.
The entire computer including its detachable power supply is only a little larger than the mobile telephones of the mid-nineties, and the entire device can be plugged straight into the
wall. With no hard disk (it uses SD cards) and no fans, the DreamPlug has no moving parts to wear out or make noise, and so it’s completely silent. It’s also incredibly low-power – mine
idles at about 4 watts – that’s about the same as a radio alarm clock, and about a hundredth of what my desktop PCs Toni and Nena run at under a typical load.
I’ve fitted up mine with a Mimo Mini-Monster 10″: a dinky little self-powered USB-driven touchscreen monitor about the
size of an iPad. Right now the whole assembly – about the size of a large picture frame – sits neatly in the corner of my desk and (thanks to the magic of Synergy) forms part of my extended multi-monitor desktop, as well as acting as a computer in her own right.
So on the surface, she’s a little bit like a wired tablet computer, which would seem a little silly (and indeed: at a glance you’d mistake her for a digital photo frame)! But because
she’s a “real” computer underneath, with a 1.2GHz processor, 512MB RAM, USB, WiFi, and two Ethernet ports, there’s all kinds of fun things that can be done with her.
For a start, she provides an ultra low-power extension to my existing office development environment. I’ve experimented with “pushing” a few tasks over to her, like watching log file
output, downloading torrents, running a web server, reading RSS feeds, and so on, but my favourite of her tasks is acting as a gateway between the rest of the world and my office.
While they’ve come a long way, modern ADSL routers are still woefully inadequate at providing genuine customisability and control over my home network. But a computer like this – small,
silent, and cheap – makes it possible to use your favourite open-source tools (iptables, squid, sshd, etc.) as a firewall to segregate off a part of the network. And that’s exactly what
I’ve done. My office – the pile of computers in the upper-right of the diagram, above – is regulated by Dana, whose low footprint means that I don’t feel bad about leaving her
turned always-on.
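To give a flavour of the sort of rules involved – these aren’t my actual rules, and the interface names are examples – segregating an “office” segment might look like:

# eth0 faces the house/Internet; eth1 faces the office segment.
iptables -P FORWARD DROP                        # segregate by default
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT   # office may reach out...
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT   -i eth0 -p tcp --dport 22 -j ACCEPT   # ...and SSH may come in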
That means that, from anywhere in the world (and even from my phone), I can now:
Connect into Dana using SSH.
Send magic packets to Toni, Nena, or Tiffany2 (all of which are on wired connections), causing them to turn themselves on – see the sketch after this list.
Remotely control those computers to, for example, get access to my files from anywhere, set them off downloading something I’ll need later, or whatever else.
Turn them off when I’m done.
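For the curious: a “magic packet” is nothing special – just six 0xFF bytes followed by the target machine’s MAC address repeated sixteen times, sent as a UDP broadcast. A quick Ruby sketch (the MAC address is a made-up example):

require 'socket'

def wake(mac, broadcast = '255.255.255.255', port = 9)
  payload = ['ff' * 6 + mac.delete(':-') * 16].pack('H*')  # 6×FF + 16×MAC
  socket = UDPSocket.new
  socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_BROADCAST, true)
  socket.send(payload, 0, broadcast, port)
  socket.close
end

wake('01:23:45:67:89:ab')   # the target's wired NIC does the rest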
That’s kinda sexy. There’s nothing new about it – the technologies and standards involved are as old as the hills – but it’s nice to be able to do it using something that’s barely
bigger than a postcard.
I have all kinds of ideas for future projects with Dana. It’s a bit like having a souped-up (and only a little bigger) Arduino to play with, and it’s brimming with potential. How about a webcam for my bird feeder? Or home-automation tools
(y’know: so I can turn on my bedroom light without having to get out of bed)? Or a media and file server (if I attached a nice, large, external hard disk)? And then there’s the more
far-fetched ideas: it’s easily low-power enough to run from a car battery – how about in-car entertainment? Or home-grown GPS guidance? What about a “delivered ready-to-use” intranet
application, as I was discussing the other day with a colleague, that can be simply posted to a client, plugged in, and used? There’s all kinds of fun potential ideas for a box like
this, and I’m just beginning to dig into them.
This weekend, I integrated two new computers into the home network on New Earth. The first of these is
Tiffany2.
Tiffany2 replaces Tiffany, the media centre computer I built a little under four years ago. The original Tiffany was built on a shoestring budget of
under £300, and provided the technical magic behind the last hundred or so Troma Nights, as well as countless other film and television nights, a means to watch (and record and pause)
live TV, surf the web, and play a game once in a while.
The problem with Tiffany is that she was built dirt-cheap at a time when building a proper media centre PC was still quite expensive. So she wasn’t very good. Honestly, I’m
amazed that she lasted as long as she did. And she’s still running: but she “feels” slow (and takes far too long to warm up) and she makes a noise like a jet engine… which isn’t what
you want when you’re paying attention to the important dialogue of a quiet scene.
Tiffany2 is virtually silent and significantly more-powerful than her predecessor. She’s also a lot smaller – not much bigger than a DVD player – and generally more
feature-rich.
This was the first time I’d built an ITX form-factor computer (Tiffany2 is Mini-ITX): I wanted to make her small, and it seemed like the best standard for the job. Assembling some of her components
felt a little like playing with a doll’s house – she has a 2.5″ hard disk and a “slimline” optical drive: components that in the old days we used to call “laptop” parts, which see new
life in small desktop computers.
In order to screw in some of the smaller components, I had to dig out my set of watchmaker’s screwdrivers. Everything packs very neatly into a very small space, and – building her – I
found myself remembering my summer job long ago at DesignPlan Lighting, where I’d have to tuck dozens of little
components, carefully wired-together, into the shell of what would eventually become a striplight in a tube train or a prison, or something.
She’s already deployed in our living room, and we’ve christened her with the latest Zero Punctuation, a few DVDs, some episodes of Xena: Warrior Princess, and an episode of Total Wipeout featuring JTA‘s old history teacher
as a contestant. Looks like she’s made herself at home.
(for those who are sad enough to care, Tiffany2 is running an Intel Core i3-2100 processor, underclocked to 3GHz, on an mITX Gigabyte GA-H61N-USB3 motherboard with 4GB RAM, a 750GB hard disk, and DVD-rewriter, all wrapped up in an
Antec ISK 300-150 case with a 150W power supply: easily enough for a media centre box
plus some heavy lifting if I ever feel the need to give her any)
Long ago, I used desktop RSS readers. I was only subscribed to my friends’ blogs back then anyway, so it didn’t matter that I could only read them from my home computer. But then RSS
feeds started appearing on news sites, and tech blogs started appearing about things related to my work. And smartphones took over the world, and I wanted to be able to synchronise my
reading list everywhere. There were a few different services that competed for my attention, but Google Reader was the best. It was simple, and fast, and easy, and it Just Worked in
that way that Google products often do.
I put up with the occasional changes to the user interface. Hey, it’s a beta, and it’s still the best thing out there. Hey, it’s free, what can you say? I put up with the fact that from
time to time, they changed the site in ways that were sometimes quite hostile to Opera, my web
browser of choice. I put up with the fact that it had difficulty with unsigned HTTPS certificates (it’s fine now) and that it didn’t provide a mechanism to authenticate against services
like LiveJournal (it still doesn’t). I even worked around the latter, releasing my own tool and updating it a few times until LiveJournal blocked it (twice) and I had to instead recommend that
people switched to rival service FreeMyFeed.
I know that they’re ever-so-proud of the Google+ user interface, but rebranding all of the other services to look like
it just isn’t working. It’s great for Google+, not-bad for Search, bad for GMail (but at least you can turn it off!), and fucking awful for Reader. I like
distinct borders between my items. I don’t like big white spaces and buttons that eat up half the screen.
The sharing interface is completely broken. After a little while, I worked out that I still can share things with other people, but I can’t any longer see what other
people are sharing without clicking over to Google+. This sucks a lot. No longer can I keep track of which shared items I have and haven’t read, and no longer can I read the interesting
RSS feeds my friends have shared in the same place as I read (and share) my own.
So that’s the last straw. Today, I switched everything over to Tiny Tiny RSS.
Originally I felt that I was being pushed “away” from Google Reader, but the more I’ve played with it, the more I’ve realised that I’m being drawn “towards” Tiny Tiny, and wishing that
I’d made the switch sooner. The things that have really appealed are:
It’s self-hosted. Tiny Tiny RSS is a free, open-source solution that you host for yourself (or I suppose you can use a shared host; there are a few around). I know
that this is a downside to most people, but to me, it’s a serious selling point: now, I’m in control of what updates are applied, when, and if I don’t like the
functionality of a part of the system, I can change it – I’m in control.
It’s simple and clean. It’s got a great user interface, in an understated and simplistic way. It’s somewhat reminiscent of desktop email clients,
replacing the “stream of feeds” idea with a two- or three-pane view (your choice). That sounds like it’d be a downside, until you realise…
…with great keyboard controls. Tiny Tiny RSS is great for keyboard lovers like me. The default key-commands (which are of course customisable) are based on
Emacs, so if that’s your background then it’s easy to be right at home in minutes and browsing feeds faster than ever.
Plus: it’s got a stack of nice features. I’m loving the “fresh” filter, that helps me differentiate between the stuff I’ve “saved for later” reading and the
stuff that’s actually new and interesting. I’m also impressed by the integrated authentication, which removes my dependency on FreeMyFeed-like services and (because it’s self-hosted)
lets me keep my credentials securely under my own control. It supports authentication using SSL certificates, a beautiful and underused technology. It allows you to customise the
update frequency of your feeds, so I can stalk my friends’ blogs at lightning-quick rates and stall my weekly-update subscriptions so they don’t get checked so frequently. And unlike
Google Reader, it actually tells me when feeds break, so I don’t just “get no updates” for a while before I think to check the site (and it’ll even let me change the
URLs when this happens, rather than unsubscribing and resubscribing).
Put simply: all of my major gripes with Google Reader over the last few years have been answered all at once in this wonderful little program. If people are interested in how I set up
Tiny Tiny RSS and made the switchover as simple and painless as possible, I’ll write a blog post to talk you through it.
I’ve had just one problem: it’s not quite so tolerant of badly-formed XML as Google Reader. There’s one feed in my list which, it turns out, has (very) invalid XML in its
feed, that Google Reader managed to ignore and breeze over, but Tiny Tiny RSS chokes on. I’ve contacted the site owner to try to get it fixed, but if they don’t, I might have to hack
some code to try to make a workaround. Not ideal, and not something that everybody would necessarily want to deal with, so be aware!
If, like me, you’ve become dissatisfied by Google Reader this week, you might also like to look at rssLounge, the
other worthy candidate I considered as a replacement. I had a quick play but didn’t find it quite as suitable for my needs, but it might be to your taste: take a look.
Oh, and one more thing: if you used to “follow” me on Google Reader (or even if you didn’t) and you want to continue to subscribe to the stuff I “share”, then
you’ll want to subscribe to this new RSS feed of “my shared stuff”, instead: it can also be found syndicated in
the right-hand column of my blog.
Update: this guy’s made a
bookmarklet that makes the new Google Reader theme slightly less hideous. Doesn’t fix the other problems, though, but if you’re not quite pissed-off enough to jump ship, it
might make your experience more-bearable.
Update 2: others in the blogosphere are saying good things about Reader rival NewsBlur, which recently turned one year old. If you’re looking for a hosted service, rather than something “roll-your-own” like
Tiny Tiny RSS, perhaps it’s the tool for you?
In particular, something I’ve been working on is the QR codes. This
experiment – very progressive for a sometimes old-fashioned establishment like the Bodleian – involves small two-dimensional barcodes being placed with the exhibits. The barcodes are
embedded with web addresses for each exhibit’s page on the exhibition website. Visitors who scan them – using a tablet computer, smartphone, or whatever – are directed to a web page
where they can learn more about the item in front of them and can there discuss it with other visitors or can “vote” on it: another exciting new feature in this exhibition is that we’re
trying quite hard to engage academics and the public in debate about the nature of “treasures”: what is a treasure?
In order to improve the perceived “connection” between the QR codes and the objects, and to encourage visitors to scan the codes despite perhaps having little or no instruction, we
opted to embed images relating to each exhibit into its QR code. By cranking up the error-correction level of a QR code, it’s possible to “damage” them quite significantly and still have them scan perfectly well.
We hope that the visual association between each artefact and its QR code will help to make it clear that the code is related to the item (and isn’t, for example, some kind of asset tag
for the display case or something). We’re going to be monitoring usage of the codes, so hopefully we’ll get some meaningful results that could be valuable for future exhibitions: or for
other libraries and museums.
Rolling Your Own
If you’re interested in making your own QR codes with artistic embellishment (and I’m sure a graphic designer could do a far better job than I did!), here’s my approach:
I used Google Infographics (part of Chart Tools) to produce my QR codes. It’s fast, free,
simple, and – crucially – allows control over the level of error correction used in the resulting code. Here’s a sample URL to generate the QR code above:
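Reconstructed from the parameters described below (using the Chart API’s qr chart type), such a URL looks something like this:

https://chart.googleapis.com/chart?cht=qr&chs=500x500&chld=H|0&chl=HTTP://TREASURES.BODLEIAN.OX.AC.UK/T7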
500×500 is the size of the QR code. I was ultimately producing 5cm codes because our experiments showed that this was about the right size
for our exhibition cabinets, the distance from which people would be scanning them, etc. For laziness, then, I produced codes 500 pixels square at a resolution of 100 pixels per
centimetre.
H specifies that we want to have an error-correction level of 30%, the maximum possible. In theory, at least, this allows us to do the maximum
amount of “damage” to our QR code, by manipulating it, and still have it work; you could try lower levels if you wanted, and possibly get less-complex-looking codes.
0 is the width of the border around the QR code. I didn’t want a border (as I was going to manipulate the code in Photoshop anyway), so I use
a width of 0.
The URL – HTTP://TREASURES.BODLEIAN.OX.AC.UK/T7 – is presented entirely in capitals. This is because capital letters use fewer bits
when encoded as QR codes. “http” and domain names are case-insensitive anyway, and we selected our QR code path names to be in capitals. We also shortened the URL as far as possible:
owing to some complicated technical and political limitations, we weren’t able to lean on URL-shortening services like bit.ly, so we had to roll our own. In
hindsight, it’d have been nice to have set up the subdomain “t.bodleian.ox.ac.uk”, but this wasn’t possible within the time available. Remember: the shorter the web address, the simpler
the code, and simpler codes are easier and faster to read.
Our short URLs redirect to the actual web pages of each exhibit, along with an identifying token that gets picked up by Google Analytics to track how widely the QR codes are being used (and which ones are most-popular amongst visitors).
Load that code up in Photoshop, along with the image you’d like to superimpose into it. Many of the images I’ve had to work with are disturbingly “square”, so I’ve simply taken
them and given them a white or black border (depending on whether they’re dark or light-coloured). With others, though, I’ve been able to cut around some of the more-attractive parts of
the image in order to produce something with a nicer shape to it. In any case, put your image in as a layer on top of your QR code.
Move the image around until you have something that’s aesthetically-appealing. With most of my square images, I’ve just plonked them in the middle and resized them to cover a whole
number of “squares” of the QR code. With the unusually-shaped ones, I’ve positioned them such that they fit in with the pattern of the QR code, somewhat, then I’ve inserted another
layer in-between the two and used it to “white out” the QR code’s squares that intersect with my image, giving a jagged, “cut out” feel.
Test! Scan the QR code from your screen, and again later from paper, to make sure that it’s intact and functional. If it’s not, adjust your overlay so that it covers less of the QR
code. Test in a variety of devices. In theory, it should be possible to calculate how much damage you can cause to a QR code before it stops working (and where it’s safe to cause the
damage), but in practice it’s faster to use trial-and-error. After a while, you get a knack for it, and you almost feel as though you can see where you need to put the images so that
they just-barely don’t break the codes. Good luck!
Give it a go! Make some QR codes that represent your content (web addresses, text, vCards, or whatever) and embed your own images into them to make them stand out with a style of their
own.
In my review of my new HTC Sensation earlier this month, I tried to explain how
my new phone – with its swish and simple interface – didn’t feel quite… geeky enough for me. I picked up on the way that its process management works, but I’ve since
realised that this is only symptomatic of a deeper problem. This is entirely to do with the difference between traditional computers (of which my old N900 was one) and modern consumer-centric devices which, inspired by the iPod/iPhone/iPad/etc., try
to simplify things for the end-user and provide strong support for centralised repositories of pre-packaged “apps” for every conceivable purpose.
To take an example of the difference: my N900 ran Linux, and felt like it ran Linux. As a reasonably-sensible operating system, this meant that all of the applications on it
used pretty much the same low-level interfaces to do things. If I wanted, I could have installed (okay, okay – compiled) sshfs, and be reasonably confident that every application on my phone, whether it’s a media player or a geocaching application
or whatever, would use that new filesystem. I could store my geocaching .gpx files on an SSH-accessible server somewhere, and my phone could access them, and my geocaching
app wouldn’t know the difference because I’d have that level of control over the filesystem abstraction layer.
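For anyone who’s not met it: sshfs mounts a directory from any SSH server as if it were local, below the same filesystem layer that every program already uses. It’s a single command – the paths here are made-up examples:

# Mount a remote geocache directory where every app can see it:
sshfs dan@example.com:/home/dan/gpx /home/user/MyDocs/gpx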
Similarly, if I installed a game which made use of Ogg Vorbis to store
its sound files, which therefore installed the Vorbis codecs, then I can expect that my media player software will also be able to make use of those codecs, because
they’ll be installed in the standard codec store. This kind of thing “just works”. Okay, okay: you know as well as I do that computers don’t always “just work”, but the principle is
there such that it can “just work”, even if it doesn’t always.
On these contemporary smartphones, like the iPhone, Android devices, and (I assume) modern BlackBerrys, the model is different: individual applications are sandboxed and packaged up
into neat little bundles with no dependencies outside of that provided by the platform. If you have two applications installed that both use sshfs, then they both have to
include (or implement) the relevant bundle! And having them installed doesn’t automatically give sshfs-like functionality to your other filesystem-accessing tools.
It’s not all bad, of course: this “new model” is great for helping non-technical users keep their devices secure, for example, and it means that there’s almost no risk of dependency hell. It’s very… easy. But I’m still not
sure it quite works: I’ll bet that 90% of users would install an application that demands dubious levels of permissions (and could, for example, be stealing their address book data for
sale to scammers) without even thinking about the risks, so the security benefits are somewhat nullified.
In summary:

Traditional-computing device (e.g. N900)
Pros: The user actually “owns” the device; applications are able to be combined (e.g. pipes, automation, new middleware).
Cons: Only as secure as the user is savvy.

Centralised “app store”/“marketplace” device (e.g. iPhone, Android)
Pros: More secure (in theory), as the platform exposes little.
Cons: Potentially limiting for technical users; the centralised “app store” can act as a “lock-in”.
Needless to say, the new model devices are winning, and already tablet computers powered by the very same platforms as the mobile phones are beginning to be seen as a simpler, easier
alternative to conventional laptops. It’s to be expected: most of today’s users don’t want a learning curve before they can use their smartphone: they just want to make some calls, play
Angry Birds a bit, keep up with their Facebook friends, and so on.
But I hope that there’ll always be room for a few folks like me: folks who want to tinker, want to play, want to hack code for no real benefit but their own pleasure… and without
having to shell out for a developer license in order to do so!
I’ve recently gotten a new phone – a HTC Sensation running
Android 2.3, and I thought I’d offer up a few thoughts on
it. But first…
Hang on: what was wrong with your old phone?
Well-remembered! You’re right, of course, that last year I got a Nokia N900, and that it was the
best mobile communications device I’d ever owned. I don’t care so much about a slim profile or an “app store”, but I do care about raw power and geeky hardware features, and the N900
delivers both of those in spades. I’ve had several phones that have, at the time, been the “best phone I’ve ever owned” – my 7110 and my N96 both also earned that distinction, whereas my 7610 and my C550 – the latter of which had only one redeeming feature – fell far short.
Awesome though it is, with its beautiful hardware keyboard, mighty processor, FM receiver and transmitter, Bluetooth and IR, etc., and completely unlocked,
tamper-friendly architecture, the N900 suffers from one terrible, terrible flaw: for some reason, the engineers who built it decided to mount the Micro-B USB port (used for charging,
tethering, mounting etc. the phone) not to the hard plastic case, but to the fragile inner circuit board. Allow me to illustrate:
Why is this a problem? Well, as Katie explained to me at the New Earth housewarming party,
most of her other friends who’d had N900s had encountered a problem by now, whereby the USB cable used to charge the device eventually puts a strain on the connection between the port
and the board, tearing them apart. “Nope,” I told her, “I’ve never had any such problem with mine.”
Looks like I spoke too soon, because that very week, I managed to break my N900 in exactly this way. My theory: that girl is cursed. I shall be attempting to
exorcise the anti-technology demons in her the very next time I see her, possibly in some kind of ceremony involving high-voltage direct current. In any case, I found myself with a
phone that I couldn’t charge.
So you replaced it?
No, of course not. My N900 remains a fantastic palmtop and a great device. It’s just got a minor problem in that it’s no longer possible to charge or “hard”-tether it to anything any
more. The former problem was an easy one to fix: a separate battery charger (I already carry a spare battery for it, so this was no hardship), bought for about £4 on eBay, made it easy
to keep the device rolling. The second problem’s not so much of an issue, because I tend to do all of my synchronisation by Bluetooth and WiFi anyway. But even if
these were an issue, it looks like a pretty
simple job to re-solder the USB port (and epoxy it to the case, as it should have been to begin with!). I might give it a go, some day, but my current soldering iron is a little big
and chunky for such fine and delicate work, and I’m a little out of practice, so I’ll save that project for another day.
However, I’m a big believer in the idea that when the Universe wants you to have a new phone, it finds a fault with your current phone. Perhaps this is the geek equivalent of thinking
that “When God
closes a door, He opens a window”.
So: I’ve got myself a HTC Sensation, which narrowly beat the Sony Ericsson Xperia Arc after carefully weighing up the reviews. I’d always planned that I’d try an
Android device next, but I’d originally not expected to do so until Ice Cream Sandwich, later this year. But… when the Universe closes your USB Port, it
opens a Gingerbread shop… right?
The New Sensation
After a few difficulties relating to my name – it turns out that my mobile
phone network has recorded my name correctly in their database, and I can’t change it, but whenever I use their web-based checkout it asks me to enter a longer surname even though I
don’t have a surname field to change – I finally received my new phone.
The first thing one notices about this phone is that it’s fast. Blindingly fast. I’ve used a variety of Android-powered HTC devices before, as well as other modern
touchscreen smartphones like the iPhone, and I’m yet to use anything that consistently ramps up high-end graphics and remains slick and responsive like this does. Its mighty dual-core
1.2GHz processor’s the cause of this, little doubt. I originally worried that battery life might be limited as a result – I don’t mind charging my phone every night, but I don’t want to
have to charge it during the day too! – but it’s actually been really good. Using WiFi, GPRS, GPS, playing videos, surfing the web, and other “everyday” tasks don’t put a dent in the
battery: I’ve only once seen it dip to under 10% battery remaining, and that was after 40 hours of typical use during a recent camping weekend (with no access to electricity).
It’s been really well-designed from a usability perspective, too. Those familiar with Android would probably just start using it, but I’ve not had so much exposure to the platform
and was able to come to it with completely fresh eyes. Between Android 2.3 and HTC Sense 3, there’s a nice suite of “obvious” apps, and I didn’t have any difficulty synchronising my contacts, hooking
up my various email accounts, and so on. There are some really nice “smart” touches, like that the phone rings loudly if it thinks it’s in a bag or pocket, more quietly after you pick
it up, and silences the ringer completely if you pick it up from a table and flip it from face-up to face-down. These simple gestural touches are a really nice bit of user interface
design, and I appreciate the thought that’s gone into them.
The Android Marketplace is reasonable, although I feel as though I’ve been spoiled. On the N900, if there was an application I needed, I usually already knew what it was and where I’d
find it: then I’d either apt-get it, or download the source and
compile it, right there on the device. For somebody who’s already perfectly confident at a *nix command-line, the N900 is fab, and it feels a little restrictive to have to
find equivalent apps in a closed-source environment. It’s not that the pricing is unreasonable – most of the applications I’ve wanted have been under a quid,
and all have been under £4 – it’s just that I know that there are FOSS alternatives that would have been easy to compile on my old device: I guess it’s just a transition.
On the other hand, the sheer volume of applications so-easily available in the Android Market is staggering. I’ve been filled with app ideas, but all but one or
two of them already exist and are just waiting to be installed. It’s a little like being a kid in a candy store.
It’s also taking me quite some time to get used to the way that process management works on an Android device. On Android devices, like the iPhone/iPad, returning to the home screen
doesn’t (necessarily) close the application, but it might – that’s up to the developer. If it doesn’t, the application will probably be “paused” (unless it’s
a media player or it’s downloading or something, then it’ll likely keep going in the background). And when you re-launch the same application, it could be
simply unpausing, or perhaps it’s relaunching (in which case it may or may not restore its previous state, depending on the whim of the developer)… You see
all of the keywords there: might, probably, likely, could, perhaps. Great for most users, who don’t want to have to
think about what their phone is doing in the background, but it feels like a step backwards to me: I’m used to being able to ALT-TAB between
my currently-running applications, to know what’s running, when (and I can always use top and find out exactly what resources a process is eating). Putting all of this process management into the hands of
developers feels to me like giving up control of my device, and it’s a challenging change to undergo. Yes: despite the openness of the platform, Android feels just
a little out of my control compared to what I’m used to.
Switching from a physical to a virtual keyboard for the first time is a significant change, too, and it’s slowed me down quite a lot, although applications like SwiftKey X – with its incredibly intelligent personalised predictions – and Hacker’s Keyboard – which gives me back some of the keys I was “missing” – have helped to ease the
transition a lot.
In summary: the HTC Sensation seems to be a fantastic device, and I’m really enjoying using it. I’ve got a few niggles to contend with, but these are all things that were destined to
catch me out upon switching away from a platform as open as the N900, and they’re not severe enough to make me give up and get an N950 instead: I’m reasonably confident that I’ll come
to love the Sensation and we’ll go on to be very happy together.
But will it become my latest “best phone ever”? Time will tell, I guess.