Chrome is the new Safari. And so are Edge and Firefox.

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I am not saying Apple’s approach is wrong. What Apple is doing is important too, and I applaud the work Apple has been doing in improving privacy on the web.

But it can’t be the only priority. Just imagine what the web would look like if every browser had taken that approach 20 years ago.

Actually, no, don’t imagine it at all. Just think back to Internet Explorer 6; that is what the web looked like 20 years ago.

There can only be one proper solution: Apple needs to open up their App Store to browsers with other rendering engines. Scrap rule 2.5.6 and allow other browsers on iOS and let them genuinely compete.

As a reminder, Safari is the only web browser on iOS. You might have been fooled to think otherwise by the appearance of other browsers in the App Store or perhaps by last year’s update that made it possible at long last to change the default browser, but it’s all an illusion. Beneath the mask, all browsers on iOS are powered by Safari’s WebKit, or else they’re booted from the App Store.

Comic showing Scooby-Doo character Fred removing the mask from the ghosts of different iOS web browsers to find Safari's WebKit beneath all of them.

Niels’s comparison to Internet Explorer 6 is a good one, but as I’ve long pointed out, there’s a big and important difference between Microsoft’s story during the First Browser War and Apple’s today:

  • Microsoft bundled Internet Explorer with Windows, raising the barrier to using a different web browser, which a court ruled as monopolistic and recommended that Microsoft be broken into smaller companies (this recommendation was scaled back on appeal).
  • Apple bundle Safari with iOS and prohibit any other browser rendering engine on that platform, preventing the use of a genuinely different web browser. Third-party applications have been available for iOS – except, specifically, other browser rendering engines and a handful of other things – for 13 years now, but it still seems unlikely we’ll see an antitrust case anytime soon.

Apple are holding the Web back… and getting away with it.

Exploiting vulnerabilities in Cellebrite UFED and Physical Analyzer from an app’s perspective

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Cellebrite makes software to automate physically extracting and indexing data from mobile devices. They exist within the grey – where enterprise branding joins together with the larcenous to be called “digital intelligence.” Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere. A few months ago, they announced that they added Signal support to their software.

Their products have often been linked to the persecution of imprisoned journalists and activists around the world, but less has been written about what their software actually does or how it works. Let’s take a closer look. In particular, their software is often associated with bypassing security, so let’s take some time to examine the security of their own software.

Moxie Marlinspike (Signal)

Recently Moxie, co-author of the Signal Protocol, came into possession of a Cellebrite Extraction Device (phone cracking kit used by law enforcement as well as by oppressive regimes who need to clamp down on dissidents) which “fell off a truck” near him. What an amazing coincidence! He went on to report, this week, that he’d partially reverse-engineered the system, discovering copyrighted code from Apple – that’ll go down well! – and, more-interestingly, unpatched vulnerabilities. In a demonstration video, he goes on to show that a carefully crafted file placed on a phone could, if attacked using a Cellebrite device, exploit these vulnerabilities to take over the forensics equipment.

Obviously this is a Bad Thing if you’re depending on that forensics kit! Not only are you now unable to demonstrate that the evidence you’re collecting is complete and accurate, because it potentially isn’t, but you’ve also got to treat your equipment as untrustworthy. This basically makes any evidence you’ve collected inadmissible in many courts.

Moxie goes on to announce a completely unrelated upcoming feature for Signal: a minority of functionally-random installations will create carefully-crafted files on their devices’ filesystem. You know, just to sit there and look pretty. No other reason:

In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software. Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files.

That’s just beautiful.

Spoiled by the Web

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Back in 2016, I made an iMessage app called Overreactions. Actually, the term “app” is probably generous: It’s a collection of static and animated silly faces you can goof around with in iMessage. Its “development” involved many PNGs but zero lines of code.

Just before the 2019 holidays, I received an email from Apple notifying me that the app “does not follow one or more of the App Store Review Guidelines.” I signed in to Apple’s Resource Center, where it elaborated that the app had gone too long without an update. There were no greater specifics, no broken rules or deprecated dependencies, they just wanted some sort of update to prove that it was still being maintained or they’d pull the app from the store in December.

Here’s what it took to keep that project up and running…

There’s always a fresh argument about Web vs. native (alongside all the rehashed ones, of course). But here’s one you might not have heard before: nobody ever wrote a Web page that met all the open standards only to be told that they had to re-compile it a few years later for no reason other than that the browser manufacturers wanted to check that the author was still alive.

But that’s basically what happened here. The author of an app which had worked (and still did work) fine was required to re-install the development environment and toolchain, recompile, and re-submit a functionally-identical version of their app (which every user of the app then had to re-download along with their other updates)… just because Apple think that an app shouldn’t ever go more than 3 years between updates.

The Route of a Text Message

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

With each tap, a small electrical current passes from the screen to her hand. Because electricity flows easily through human bodies, sensors on the phone register a change in voltage wherever her thumb presses against the screen. But the world is messy, and the phone senses random fluctuations in voltage across the rest of the screen, too, so an algorithm determines the biggest, thumbiest-looking voltage fluctuations and assumes that’s where she intended to press.

Figure 0. Capacitive touch.

So she starts tap-tap-tapping on the keyboard, one letter at a time.

I-spacebar-l-o-v-e-spacebar-y-o-u.

I’ve long been a fan of “full story” examinations of how technology works. This one looks at the sending and receipt of an SMS text message from concept through touchscreen, encoding and transmission, decoding and display. It’s good to be reminded that whatever technology you build, even a “basic” Arduino project, a “simple” website or a “throwaway” mobile app, you’re standing on the shoulders of giants. Your work sits atop decades or more of infrastructure, standards, electronics and research.

Sometimes it feels pretty fragile. But mostly it feels like magic.

The Simple Joy of “No Phones Allowed”

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A few nights ago I saw Jack White in concert. It was a wonderful night, and a big part of that was due to a new rule he has imposed on all his tour dates: no phones.

When you arrive, you have to put your phone into a neoprene pouch, supplied by a company called Yondr, which they lock and give back to you. If you want to use your phone during the show, you can go into the concourse and unlock it by touching it to one of several unlocking bases. The concert area itself remains screen-free.

The effect was immediately noticeable upon entering the concert bowl. Aside from the time-travel-like strangeness of seeing a crowd devoid of blue screens, there was a palpable sense of engagement, as though—and it sounds so strange to say it—everyone came just so they could be there.

The most-significant observation in this article, in my mind, was that even a 20-second delay on using a phone – that is, having to walk out to the concourse to unlock the bag – was sufficient to dramatically reduce the temptation to use it. That’s amazing, but unsurprising: Veritasium recently did a video about boredom and how the desire to avoid ever feeling bored (despite its scientifically-demonstrable benefits), coupled with easy access to instant stimulation from our smart devices, leads us into the well-known check phone, replace, check again cycle (or else “zombie smartphoning”).

I’ve been trying to be better about paying less-attention to my phone, this year, and it’s gone well… except that (as David also observes in the linked article) I’ve become more acutely aware of the feeling of the conversational/interpersonal “void” created when somebody else chances a “quick check” of their phone under the table. I used to blame social media mostly for this – and I still think that it’s an issue, and it’s certainly true that my Facebook/Twitter/Reddit-heavier-using friends are the biggest culprits for getting lost in their devices – but I’ve come to see it as a bigger, more-human issue, coupled with the availability of the new technologies around us.

Similar to how we eat too much fat, and sugar, and meat… simply because it’s available, we crave the stimulation that we can easily get from the device in our pocket to such an extent that we’ve become unhealthy in our habits.

Intercepting HTTPS Traffic from Android Emulator

Mostly for my own benefit, as most other guides online are outdated, here’s my set-up for intercepting TLS-encrypted communications from an emulated Android device (in Android Emulator) using Fiddler. This is useful if you want to debug, audit, reverse-engineer, or evaluate the security of an Android app. I’m using Fiddler 5.0 and Android Studio 2.3.3 (but it should work with newer versions too) to intercept connections from an Android 8 (Oreo) device using Windows. You can easily adapt this set-up to work with physical devices, and it’s not hard to adjust these instructions for other configurations either.

Intercepting a HTTPS connection to DanQ.me on a virtual Android device.

1. Configure Fiddler

Install Fiddler and run it.

Configuring Fiddler

Under Tools > Options > HTTPS, enable “Decrypt HTTPS traffic” and allow a root CA certificate to be created.

Click Actions > Export Root Certificate to Desktop to get a copy of the root CA public key.

Fiddler's Connections settings

On the Connections tab, ensure that “Allow remote computers to connect” is ticked. You’ll need to restart Fiddler after changing this and may be prompted to grant it additional permissions.

If Fiddler changed your system proxy, you can safely change this back (and it’ll simplify your output if you do because you won’t be logging your system’s connections, just the Android device’s ones). Fiddler will complain with a banner that reads “The system proxy was changed. Click to reenable capturing.” but you can ignore it.

2. Configure your Android device

Android Device Manager - New Device

Install Android Studio. Click Tools > Android > AVD Manager to get a list of virtual devices. If you haven’t created one already, create one: it’s now possible to create Android devices with Play Store support (look for the icon, as shown above), which means you can easily intercept traffic from third-party applications without doing APK-downloading hacks: this is great if you plan on working out how a closed-source application works (or what it sends when it “phones home”).

Android emulator showing network settings

In Android’s Settings > Network & Internet, disable WiFi. Then, under Mobile Network > Access Point Names > {Default access point, probably T-Mobile}, set Proxy to the local IP address of your computer and Port to 8888. Now all traffic will go over the virtual cellular data connection, which uses the proxy server you’ve configured in Fiddler.

Android network proxy settings

Drag the root CA file you exported to your desktop to your virtual Android device. This will automatically copy the file into the virtual device’s “Downloads” folder (if you’re using a physical device, copy via cable or network). In Settings > Security & Location > Encryption & Credentials > Install from SD Card, use the hamburger menu to get to the Downloads folder and select the file: you may need to set up a PIN lock on the device to do this. Look under Trusted credentials > User to confirm that it’s there, if you like.

Installing a Root CA in Android.

Test your configuration by visiting a HTTPS website: as you browse on the Android device, you’ll see the (decrypted) traffic appear in Fiddler. This also works with apps other than the web browser, of course, so if you’re reverse-engineering an API-backed application then encryption doesn’t have to impede you.

3. Not working? (certificate pinning)

A small but increasing number of Android apps implement some variation of built-in key pinning, like HPKP but usually baked into the application’s code (which is fine, because most people auto-update their apps). What this does is ensure that the certificate presented by the server is signed by a certification authority from a trusted list (a trusted list that doesn’t include Fiddler’s CA!). But remember: the app is running on your device, so you’re ultimately in control – FRIDA’s bypass script “fixed” all of the apps I tried, but if it doesn’t then I’ve heard good things about Inspeckage‘s “SSL uncheck” action.
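
The pin check an app performs boils down to comparing a hash of the server’s public key against a baked-in list. Here’s a minimal sketch of the idea, in the HPKP style of base64-encoded SHA-256 hashes of the SubjectPublicKeyInfo; the key bytes and names below are made up for illustration, not taken from any real app:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # An HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo DER bytes))
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def check_pin(spki_der: bytes, pinned: set[str]) -> bool:
    # The app accepts the connection only if the server's key matches a pin
    return spki_pin(spki_der) in pinned

# Made-up key bytes standing in for a real DER-encoded public key:
server_key = b"0\x82\x01\x22-not-a-real-SubjectPublicKeyInfo"
pinned_hashes = {spki_pin(server_key)}

print(check_pin(server_key, pinned_hashes))                # True
print(check_pin(b"attacker-or-proxy-key", pinned_hashes))  # False
```

Fiddler’s CA-signed certificate carries a different public key, so it fails exactly this kind of check – which is why a tool like FRIDA, which can hook and neuter the check at runtime, is needed.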

Summary of steps

If you’re using a distinctly different configuration (different OS, physical device, etc.) or this guide has become dated, here are the fundamentals of what you’re aiming to achieve:

  1. Set up a decrypting proxy server (e.g. Fiddler, Charles, Burp, SSLSplit – note that Wireshark isn’t suitable) and export its root certificate.
  2. Import the root certificate into the certificate store of the device to intercept.
  3. Configure the device to connect via the proxy server.
  4. If using an app that implements certificate pinning, “fix” the app with FRIDA or another tool.

Review of Vodafone

This review of Vodafone originally appeared on Google Maps. See more reviews by Dan.

Like most mobile phone shops, this one enjoys a confusing layout and less-than-completely-helpful staff.

If you know exactly what you want then you can have a reasonable experience, but if you know exactly what you want then you’d be better to go online. If you’re not sure what you want, you’re going to have an unhappy time full of upselling and jargon… so you’d do better to go online.

The network itself is good: broad coverage, good data speeds, nice international deals. But this shop is a perfect example of why their shops ought to already be a relic of history.

Map of 51.7522757,-1.2577140

AMP Is Poisonous

If you’re a web developer and you haven’t come across the Google AMP project yet… then what stone have you been living under? But just in case you have been living under such a stone – or you’re not a web developer – I’ll fill you in. If you believe Google’s elevator pitch, AMP is “…an open-source initiative aiming to make the web better for all… consistently fast, beautiful and high-performing across devices and distribution platforms.”

I believe that AMP is fucking poisonous and that the people who’ve come out against it by saying it’s “controversial” so far don’t go remotely far enough. Let me tell you about why.

AMP logo in handcuffs

When you configure your website for AMP – like the BBC, The Guardian, Reddit, and Medium already have – you deliver copies of your pages written using AMP HTML and AMP JS rather than the HTML and Javascript that you normally would. This provides a subset of the functionality you’re used to, but it’s quite a rich subset and gives you a lot of power with minimal effort, whether you’re trying to make carousels, video players, social sharing features, or whatever. Then, when your site is found via Google Search on a mobile device, instead of delivering the user to your AMP HTML page or its regular-HTML alternative… Google serves your site for you, as an ultra-fast precached copy, from their own network. So far, a mixed bag, right? Wrong.

What’s poisonous about Google AMP

Ignoring the facts that you can get locked-in if you try it once, it makes the fake news problem worse than ever, and it breaks the core concepts of a linkable web, the thing that worries me the most is that AMP represents the most-subtle threat to Net Neutrality I’ve ever seen… and it’s from an organisation that is nominally in favour of a free and open Internet but that stands to benefit from a more-closed Internet so long as it’s one that they control.

Google’s stated plan to favour pages that use AMP creates a publisher’s arms race in which content creators are incentivised to produce content in the (open-source but) Google-controlled AMP format to rank higher in the search results, or at least regain parity, versus their competitors. Ultimately, if everybody supported AMP then – ignoring the speed benefits for mobile users (more on that in a moment) – the only winner is Google. Google, who would then have a walled garden of Facebook-beating proportions around the web. Once Google delivers all of your content, there’s no such thing as a free and open Internet any more.

So what about those speed increases? Yes, the mobile web is slower than we’d like and AMP improves that. But with the exception of the precaching – which is something that could be achieved by other means – everything that AMP provides can be done using existing technologies. AMP makes it easy for lazy developers to make their pages faster, quickly, but if speed on mobile devices is the metric for your success: let’s just start making more mobile-friendly pages! We can make the mobile web better and still let it be our Web: we don’t need to give control of it to Google in order to shave a few milliseconds off the load time.

We need to reject AMP, and we need to reject it hard. Right now, it might be sufficient to stand up to your boss and say “no, implementing AMP on our sites is a bad idea.” But one day, it might mean avoiding the use of AMP entirely (there’ll be browser plugins to help you, don’t worry). And if it means putting up with a slightly-slower mobile web while web developers remain lazy, so be it: that’s a sacrifice I’m willing to make to help keep our web free and open. And I hope you will be, too.

Like others, I’m just hoping that Sir Tim will feel the urge to say something about this development soon.

Pocket Searching

Pocket dialling was bad enough. I once received a phone call from a friend whose phone called me – as the last number he’d dialled – just as he was putting on a harness in anticipation of doing a bungee jump. So all I got to hear was rustling, and shuffling… and then a blood-curdling scream. Nice one.

A cat on a mobile phone.
Hello? Yes, this is cat. Just thought I’d press some buttons and see who I got.

But in this age of smartphones, the pocket search has become a new threat. Thanks to the combination of touchscreens, anticipatory keyboards (I use SwiftKey, and I’m beginning to think that it knows me better than I do myself), and always-online devices, we’re able to perform quite complex queries quite accidentally. I’ve got a particular pair of trousers which seems to be especially good at unlocking my phone, popping up a search engine, typing a query (thanks to the anticipatory keyboard, usually in full words), and then taking a screenshot and saving it for me, so that I can’t later deny having searched for… whatever it was.

This morning, while cycling to work, I searched for the following (which I’ve reformatted by inserting line breaks, in order to transform it into the sort-of poem you might expect from somebody both insane and on hallucinogens):

thanks again
and it all goes on
and I will Also
Also A bit LIKE THAT
THE ANSWER is
That you are looking at your Local Ryanair
and a ripening
and a ripening
I can assure are a BIT
and see the new template by clicking here
for
for YOU GUYS
GUYS HAVE YOU ANY COMMENTS
ON MY WAY BACK FROM YOU
And the other side and I will have the same
as a friend or relative
relative humidity
humidity
to you
you are here car
car
and
and embarrassing
embarrassing
the best thing is the first three years and over
over?

Maybe my phone is gradually becoming sentient and is trying to communicate with me. I for one welcome our new robot overlords.

Phone Security == Computer Security

The explosion of smartphone ownership over the last decade has put powerful multi-function computers into the pockets of almost half of us. But despite the fact that the average smartphone contains at least as much personally-identifiable information as its owner keeps on their home computer (or in dead-tree form) at their house – and is significantly more-prone to opportunistic theft – many users put significantly less effort into protecting their mobile’s data than they do the data they keep at home.

Nokia E7, showing lock screen.
Too late, little Nokia E7: I’ve got physical access to you now.

I have friends who religiously protect their laptops and pendrives with TrueCrypt, axCrypt, or similar, but still carry around an unencrypted mobile phone. What we’re talking about here is a device that contains all of the contact details for you and everybody you know, as well as potentially copies of all of your emails and text messages, call histories, magic cookies for social networks and other services, saved passwords, your browsing history (some people would say that’s the most-incriminating thing on their phone!), authentication apps, photos, videos… more than enough information for an attacker to pursue a highly-targeted identity theft or phishing attack.

Pattern lock configuration on an Android mobile phone.
Android pattern lock: no encryption, significantly less-random than an equivalent-length PIN, and easily broken by a determined attacker.

“Pattern lock” is popular because it’s fast and convenient. It might be good enough to stop your kids from using your phone without your permission (unless they’re smart enough to do some reverse smudge engineering: looking for the smear-marks made by your fingers as you unlock the device; and let’s face it, they probably are), but it doesn’t stand up to much more than that. Furthermore, gesture unlock solutions dramatically reduce the number of permutations, because you can’t repeat a dot: so much so, that you can easily perform a rainbow table attack on the SHA1 hash to reverse-engineer somebody’s gesture. Even if Android applied a per-device pseudorandom salt to the gesture pattern (they don’t, so you can download a prefab table), it doesn’t take long to generate an SHA1 lookup of just 985,824 codes (maybe Android should have listened to Coda Hale’s advice and used BCrypt, or else something better still).
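
To see just how small that space is, here’s a quick sketch. It counts 4-to-9-dot sequences as simple permutations (ignoring Android’s adjacency rules, so it’s an upper bound) and builds a toy unsalted-SHA1 lookup table over just the 4-dot patterns – the same approach scales trivially to the full space:

```python
import hashlib
from itertools import permutations
from math import perm

def pattern_hash(pattern: tuple[int, ...]) -> str:
    # Older Android versions stored the unsalted SHA-1 of the pattern's
    # cell indices as raw bytes - so the whole space can be precomputed.
    return hashlib.sha1(bytes(pattern)).hexdigest()

# Upper bound on the pattern space: ordered selections of 4-9 of 9 dots
total = sum(perm(9, k) for k in range(4, 10))
print(total)  # 985824

# A toy lookup table covering only the 3,024 four-dot patterns:
table = {pattern_hash(p): p for p in permutations(range(9), 4)}
target = pattern_hash((0, 4, 8, 5))   # "hash found on a device"
print(table[target])  # (0, 4, 8, 5)
```

Under a million SHA1 operations is seconds of work on any modern machine, which is the whole problem: without a salt and with such a tiny keyspace, the hash offers effectively no protection against an attacker with the hash file.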

iPhone showing the PIN lock screen.
An encrypted iPhone can be configured to resist brute-force attacks by wiping the phone after repeated failures, which replaces one security fault (brute-force weakness) with another (a denial of service attack that’s so easy that your friends can do it by accident).

These attacks, though (and the iPhone isn’t bulletproof, either), are all rather academic, because they are trumped by the universal rule that once an attacker has physical access to your device, it is compromised. This is fundamentally why mobile security should be considered equivalent to computer security. All of the characteristics distinct to mobile devices (portability, ubiquity, processing power, etc.) are weaknesses, and that’s why smartphones deserve at least as much protection as desktop computers protecting the same data. Mobile-specific features like “remote wipe” are worth having, but can’t be relied upon alone – a wily attacker could easily keep your phone in a lead box or otherwise disable its connectivity features until it’s cracked.

A finger swipes-to-unlock a Samsung mobile phone.
The bottom line: if the attacker gets hold of your phone, you’re only as safe as your encryption.

The only answer is to encrypt your device (with a good password). Having to tap in a PIN or password may be less-convenient than just “swipe to unlock”, but it gives you a system that will resist even the most-thorough efforts to break it, given physical access (last year’s iPhone 4 vulnerability notwithstanding).

It’s still not perfect – especially here in the UK, where the RIPA can be used (and has been used) to force key surrender. What we really need is meaningful, usable “whole system” mobile encryption with plausible deniability. But so long as you’re only afraid of identity thieves and phishing scammers, and not being forced to give up your password by law or under duress, then it’s “good enough”.

Of course, it’s only any use if it’s enabled before your phone gets stolen! Like backups, security is one of those things that everybody should make a habit of thinking about. Go encrypt your smartphone; it’s remarkably easy.

Cardless Cashpoints

My mobile banking app, showing me a special six digit code.
The mobile app presents you with a special six-digit code that is used to withdraw the cash.

RBS Group this week rolled out a service to all of its customers, allowing them to withdraw cash from an ATM without using their bank card. The service is based upon the same technology that’s used to provide emergency access to cash for people who’ve had their cards stolen, but integrates directly into the mobile banking apps of the group’s constituent banks. I decided to give it a go.

The first step is to use the mobile app to request a withdrawal. There’s an icon for this, but what it does is a bit of a mystery unless you already know what you’re looking for. You can’t make a request from online banking without using the mobile app, which seems to be an oversight (in case you can’t think of a reason that you’d want to do this, read on: there’s one at the end). I opted to withdraw £50.

Next, it’s off to find a cash machine. I struck out, without my wallet, to try to find the nearest Royal Bank of Scotland, NatWest, or Tesco cashpoint. The mobile app features a GPS tool to help you find these, although it didn’t seem to think that my local Tesco cashpoint existed, walking me on to a branch of NatWest.

Cash machine: "Do you wish to carry out a Get Cash or Emergency Cash transaction? [No] [Yes]"
The readout of the cash machine demonstrates that the roots of the “Get Cash” system lie in the older “Emergency Cash” feature: the two are functionally the same thing.

As instructed by the app, I pressed the Enter key on the keypad of the cash machine. This bypasses the usual “Insert card” prompt and asks, “Do you wish to carry out a Get Cash or Emergency Cash transaction?” I pressed Yes.

Entering a 6-digit code from a mobile phone into a cash machine.
The number displayed upon the screen is entered into the cash machine.

The ATM asked for the PIN I’d been given by the mobile app: a 6-digit code. Each code is only valid for a window of 3 hours and can only be used once.

A cashpoint asking for the PIN a second time, and then asking for the amount of money to withdraw.
The cash machine asks for the PIN a second time, and then asks for the sum of money to be withdrawn.

I’m not sure why, but the ATM asks that the PIN is confirmed by being entered a second time. This doesn’t make a lot of sense to me – if it was mistyped, it’d surely fail anyway (unless I happened to guess another valid code, within its window), and I’d simply be able to try again. And if I were an attacker, trying to guess numbers, then there’s no difficulty in typing the same number twice.

It’s possible that this is an attempt at human-tarpitting, but that wouldn’t be the best way to do it. If the aim is to stop a hacker from attempting many codes in quick succession, simply imposing a delay would be far more effective (this is commonplace with cash machines anyway: ever notice that you can’t put a card in right after the last transaction has finished?). Strange.
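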
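
A back-of-envelope calculation shows why the delay matters far more than the double-entry (the attempt time here is my assumption, not anything the bank has published):

```python
# Odds of brute-forcing one 6-digit one-time code within its validity window
codes = 10 ** 6          # possible 6-digit codes
attempt_seconds = 30     # assumed time per manual ATM attempt (incl. delays)
window_hours = 3         # each code is valid for 3 hours

attempts = window_hours * 3600 // attempt_seconds
print(attempts)          # 360

p_success = attempts / codes
print(p_success)         # 0.00036
```

Even at an implausibly brisk one attempt per 30 seconds, an attacker standing at the machine for the full three-hour window has well under a 0.1% chance of hitting a valid code – and typing it twice doesn’t change those odds at all.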

Finally, the ATM asks what value of cash was agreed to be withdrawn. I haven’t tried putting in an incorrect value, but I assume that it would refuse to dispense any cash if the wrong number was entered – this is presumably a final check that you really are who you claim to be.

Cash machine: "Please take your cash and your receipt."
It feels strange taking money and a receipt from a cashpoint without first having to retrieve my card. I spent a few minutes after the experience with a feeling that I’d forgotten something.

It worked. I got my money. The mobile app quickly updated to reflect the change to my balance and invalidated the code: the system was a success.

The banks claim that this will be useful for times that you’ve not got your card with you. Personally, I don’t think I ever take my phone outdoors without also taking my wallet with me, so the chance of that is pretty slim. If my card were stolen, I’d be phoning the bank to cancel the card anyway, so it wouldn’t save me a call, either, if I needed emergency cash. But there are a couple of situations in which I’d consider using this neat little feature:

  • If I was suspicious of a possible card-skimming device on a cash machine, but I needed to withdraw money and there wasn’t an un-tampered ATM in the vicinity. It’d be nice to know that you can avoid having your card scanned by some kid with a skimmer just by using your phone to do the authentication rather than a valuable piece of plastic.
  • To send money to somebody else. Using this tool is cheaper than a money order and faster than a bank transfer: it’s an instantaneous way to get small sums of cash directly into the hands of a distant friend. “Sure, I’ll lend you £50: just go to a cash machine and type in this code.” I’m not sure whether or not this is a legitimate use of the service, but I can almost guarantee that it’ll be the most-popular. It’ll probably be reassuring to parents of teenagers, for example, who know that they can help their offspring get a taxi home when they’ve got themselves stranded somewhere.

What do you think? If you’re with RBS, NatWest or Tesco, have you tried this new mobile banking feature? Do you think there’s mileage in it as an idea, or is it a solution in need of a problem?

One Hundred And Sixty

When I first went to university, in 1999, I got my first mobile phone. Back then, messaging features on mobiles were a bit more simplistic than they are today.

For example, phones were only just starting to appear that could handle multi-SMS messages. For those without this feature there was a new skill to be learned.

With practice, we got to be particularly good at cutting our messages down to the requisite number of characters to fit into a single SMS: just 160 characters.

We even learned how to meaningfully split messages in our heads, with indicators (ellipses, or numbers showing message parts), to carry longer concepts. (4/19)

Even when multi-message capable phones came out (I got one in 2000), these skills were still useful. At 10p or 12p per message, you soon learned to be concise.

Nowadays, this skill has lost its value. With more and more people having “unlimited SMS” plans or enormous quantities of credits, there’s no need to be brief.

If you’ve got an iPhone, you don’t even get told how long your message is, I hear. You just keep typing. And that’s not uncommon on other kinds of handset too.

Your phone’s still splitting your message up, in the background. Putting markers in, so that other phones can understand. And these markers are human-readable.

Just in case your message is going to a phone that’s over about 12 years old, your smartphone makes sure that the markers would be understood by humans. (9/19)

So now we’ve got smartphones talking to each other in a language that humans designed to talk to one another in. Does that feel really strange to anybody else?

I looked at my phone while I wrote a message, today. I noticed that number in the corner, that indicated that my message would span 3 texts. And I didn’t care.

Why would I? It’s a vestige of an older form of communication. Someday, it’ll look as primitive as the paintings on the walls of caves, daubed by early humans.

But for now, I remember. And, somehow, the skill I learned all those years ago – a trick that’s alien to almost anybody younger than me – has a new, fresh use.

Twitter. 140 character messages. A little bit less than a text, which seems strange. Are they really trying to make us even more brief than those early phones?

The skill is still the same. Think ahead. Prune. Plan. Snip. And, if you absolutely must span several messages, make it clear to your reader so that they know.

I see a whole new generation of people learning this skill that I once learned. It’s not the same (it never will be): they don’t pay 10p every time they tweet.

But you know what? It’s just as pointless now as it was the first time around. If you want to say something, say it. If 36p is too much, risk a 10-second call!

And in the case of the Twitter generation: if your message doesn’t fit on Twitter, then it probably doesn’t belong on Twitter. I’m a 160-character-or-more man.

I’m not sure I’m cut out for the Twitterverse with its 140-character limits. But it’s nice to remember how to think in 160, just like I have in this blog post.
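The splitting trick described in the post above, chopping a long message into single-SMS-sized parts and tagging each with a human-readable “(n/m)” marker, can be sketched roughly in Python. This is a toy illustration under my own assumptions (the function name and marker format are mine; real handsets use a binary User Data Header for concatenation rather than anything like this):

```python
def split_sms(message, limit=160):
    """Split a long message into parts that each fit in one SMS, with a
    human-readable "(n/m)" marker on the end of every part. Words longer
    than a single part are not handled: this is a sketch, not firmware."""
    if len(message) <= limit:
        return [message]

    marker_room = len(" (99/99)")   # reserve space for the marker
    chunk_size = limit - marker_room

    parts, current = [], ""
    for word in message.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > chunk_size and current:
            parts.append(current)   # this part is full; start a new one
            current = word
        else:
            current = candidate
    if current:
        parts.append(current)

    total = len(parts)
    return ["{} ({}/{})".format(part, n, total)
            for n, part in enumerate(parts, 1)]
```

The key judgement the old manual skill embodied is the same one the code makes: split on word boundaries, and leave room for the marker so every part still fits in one message.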

Searching For A Virgin

You just can’t rely on GMail’s “contacts” search any more. Look what it came up with:

Not a result I'd commonly associate with the word "virgin".

With apologies to those of you who won’t “get” this: the person who came up in the search results is a name that is far, far away, in my mind, from the word “virgin”.

In not-completely-unrelated news, I use a program called SwiftKey X on my phone, which uses Markov chains (as I’ve described before) to intelligently suggest word completions, whole words, and entire phrases based on the language I naturally use. I had the software thoroughly parse my text messages, emails, and even this blog to help it learn my language patterns. And recently, while writing a text message to my housemate Paul, it suggested the following sentence as the content of my message:

I am a beautiful person.

I have no idea where it got the idea that that’s something I’m liable to say with any regularity. Except now that it’s appeared on my blog, it will. It’s all gone a little recursive.
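The word-level Markov chain idea that SwiftKey’s prediction is loosely built on can be illustrated in a few lines of Python. This is a deliberately minimal sketch of the general technique, not SwiftKey’s actual implementation, and the function names are my own:

```python
from collections import defaultdict

def build_chain(corpus):
    """Map each word in the training text to the list of words that
    immediately followed it, preserving duplicates so that frequency
    information survives."""
    chain = defaultdict(list)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def suggest(chain, word):
    """Suggest the most frequent follower of a word, or None if the
    word never appeared in the training text."""
    followers = chain.get(word.lower())
    if not followers:
        return None
    return max(set(followers), key=followers.count)
```

Train it on enough of your own writing and it will start echoing your habits back at you, which is presumably how a phrase like the one above ends up as a suggestion.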

Idea: mobile app that uses camera and shifts colour-balances to make colours “visible”

This self-post was originally posted to /r/ColorBlind. See more things from Dan's Reddit account.

I’m not colourblind, and I’m not really a mobile developer, so maybe there’s something I’ve overlooked, but I’ve got an idea for an app and I thought I’d run it by you guys to see what you think.

Mobile processing power is getting better and better, and we’re probably getting close to the point where we can do live video image manipulation at acceptable framerates (even 10 frames/sec would be something). So why can’t we make an app that shifts colours as seen by the camera to a particular different part of the spectrum (depending on the user’s preferences)?

For example, someone with deuteranomaly (green-weak vision, with difficulty differentiating across the red/orange/yellow/green spectrum) might configure the software to shift yellows and greens to instead be presented as purples and blues. The picture would be false, of course, but it would help distinguish between colours in order to make, for example, colour-coded maps readable.

I was thinking about how video cameras can often “see” infra-red (try pointing a remote control at a video camera and pressing the button), and present it to the viewer as white or red, when I saw a documentary with some footage of “how bees see the world”. Bees have vision of a similar breadth of spectrum to humans, but shifted well into the ultraviolet range (and away from the red end of the spectrum). In the documentary, they’d filmed some flowers using a highly ultraviolet-sensitive camera, and then they’d “shifted” the colours down the spectrum in order to make them visible to normal humans: the ultraviolets became blues, the blues became greens, and the greens became reds. Obviously this isn’t what bees actually experience, but it’s an approximation that allows us to appreciate the variety in their spectrum.

Can we make this conversion happen “live” on mobile technology? And why haven’t we done so?
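The core operation the app would need is just a per-pixel hue rotation, which Python’s standard colorsys module can illustrate. This is a sketch of the maths only, under my own naming; a live mobile version would run the same transform per pixel on the GPU (e.g. in a fragment shader) rather than in Python:

```python
import colorsys

def shift_hue(rgb, degrees):
    """Rotate a single (r, g, b) colour (0-255 per channel) around the
    hue wheel. A 180-degree shift maps yellows and greens toward blues
    and purples, as in the deuteranomaly example above."""
    r, g, b = (channel / 255.0 for channel in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + degrees / 360.0) % 1.0   # hue is stored as a 0.0-1.0 fraction
    return tuple(round(channel * 255)
                 for channel in colorsys.hsv_to_rgb(h, s, v))

# Pure yellow, shifted halfway around the wheel, becomes pure blue:
# shift_hue((255, 255, 0), 180) -> (0, 0, 255)
```

Working in HSV means saturation and brightness are untouched, so the false-colour image keeps its shading and only the hues move, which is exactly what you’d want for reading a colour-coded map.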

I have multiple GMail accounts. I’d like to see a different notification icon for each. Any suggestions?

This self-post was originally posted to /r/androidapps. See more things from Dan's Reddit account.

The first of the two apps mentioned in this article – “Gmail Notifier” – sounds perfect, but doesn’t seem to exist any more.

GMail Notifier + Widgets looks like it might do it (it’s designed to do different icons depending on labels). Does anybody have any experience with this?

Or any other suggestions? I’m running CM7.1 on an HTC Sensation, in case it matters.