Wacom drawing tablets track the name of every application that you open

This is a repost promoting content originally published elsewhere.

I don’t care whether anything materially bad will or won’t happen as a consequence of Wacom taking this data from me. I simply resent the fact that they’re doing it.

The second [objection] is that we can also come up with scenarios that involve real harms. Maybe the very existence of a program is secret or sensitive information. What if a Wacom employee suddenly starts seeing entries spring up for “Half Life 3 Test Build”? Obviously I don’t care about the secrecy of Valve’s new games, but I assume that Valve does.

We can get more subtle. I personally use Google Analytics to track visitors to my website. I do feel bad about this, but I’ve got to get my self-esteem from somewhere. Google Analytics has a “User Explorer” tool, in which you can zoom in on the activity of a specific user. Suppose that someone at Wacom “fingerprints” a target person that they know in real life by noticing that this person uses a very particular combination of applications. The Wacom employee then uses this fingerprint to find the person in the “User Explorer” tool. Finally, the Wacom employee sees that their target also uses “LivingWith: Cancer Support”.

Remember, this information is coming from a device that is essentially a mouse.
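To make it concrete just how little code this kind of telemetry takes, here’s a rough Python sketch of the general pattern the excerpt describes: watch for newly-launched processes and report each one’s name as an analytics event. The analytics property ID, the event fields, and the use of psutil are my own assumptions for illustration; this is a sketch of the technique, not Wacom’s actual driver code.

```python
# Illustrative sketch only: the general shape of "report the name of every
# application you open" telemetry. The Google Analytics property ID and the
# choice of psutil for process discovery are assumptions for the example,
# not details of any real driver.
import time
import uuid
import urllib.parse
import urllib.request

import psutil  # third-party: pip install psutil

GA_ENDPOINT = "https://www.google-analytics.com/collect"
GA_PROPERTY = "UA-00000000-1"   # hypothetical analytics property
CLIENT_ID = str(uuid.uuid4())   # real telemetry would persist this per install


def report_app(app_name: str) -> None:
    """Send one 'application opened' event via the Universal Analytics
    Measurement Protocol (v1)."""
    payload = urllib.parse.urlencode({
        "v": "1",            # protocol version
        "tid": GA_PROPERTY,  # which analytics account receives the data
        "cid": CLIENT_ID,    # pseudonymous per-install identifier
        "t": "event",
        "ec": "software",    # event category
        "ea": "app_opened",  # event action
        "el": app_name,      # event label: the application's name
    }).encode()
    urllib.request.urlopen(GA_ENDPOINT, data=payload, timeout=5)


def watch_for_new_apps(poll_seconds: float = 5.0) -> None:
    """Poll the process list and report any process name not seen before."""
    seen: set[str] = set()
    while True:
        for proc in psutil.process_iter(["name"]):
            name = proc.info["name"]
            if name and name not in seen:
                seen.add(name)
                report_app(name)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch_for_new_apps()
```

Note that the persistent per-install client ID (the cid field) is exactly what makes the “User Explorer” scenario above workable.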

Interesting deep-dive investigation into the (immoral, grey-area-illegal) data mining being done by Wacom when you install the drivers for their tablets. Horrifying, but you’ve got to remember that Wacom are unlikely to be a unique case. I had a falling-out with Razer the other year when they started bundling spyware into the drivers for their keyboards and locking existing and new customers out of advanced features unless they consented to data harvesting.

I’m becoming increasingly concerned by the normalisation of surveillance capitalism: between modern peripherals and the Internet of Things, we’re “willingly” surrendering more of our personal lives than ever before. If you haven’t seen it, I’d also thoroughly recommend Data, the latest video from Philosophy Tube (whose praises I’ve sung before).

Evaluating the GCHQ Exceptional Access Proposal

This is a repost promoting content originally published elsewhere.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications, but it also means changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability — a vulnerability — that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.
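To see why a “targeted” client-side backdoor is still a general capability, here’s a toy Python sketch in the shape of the GCHQ “ghost participant” idea: a client that, when told to, silently encrypts each message to one extra public key. The library (PyNaCl) and all of the names (EXCEPTIONAL_ACCESS_KEY, send_message, add_ghost) are my own inventions for illustration; this isn’t any real messaging product’s code.

```python
# Toy illustration of a "ghost participant" style backdoor, in the spirit of
# the proposal being criticised above. All names here are invented for the
# example and don't correspond to any real messaging product.
from nacl.public import PrivateKey, PublicKey, SealedBox

# The extra key the modified client can be told to include. Whoever holds the
# matching private key can read anything encrypted this way.
_ghost_private = PrivateKey.generate()  # held by the "exceptional access" party
EXCEPTIONAL_ACCESS_KEY: PublicKey = _ghost_private.public_key


def send_message(plaintext: str, recipients: list[PublicKey],
                 add_ghost: bool = False) -> list[bytes]:
    """Encrypt a message once per recipient. The only change the backdoor
    needs is the two lines guarded by `add_ghost`."""
    targets = list(recipients)
    if add_ghost:
        # The "targeted" part is purely procedural: nothing in the code limits
        # who this flag can be switched on for, or by whom.
        targets.append(EXCEPTIONAL_ACCESS_KEY)
    return [SealedBox(pk).encrypt(plaintext.encode()) for pk in targets]


if __name__ == "__main__":
    alice, bob = PrivateKey.generate(), PrivateKey.generate()
    ciphertexts = send_message("meet at noon",
                               [alice.public_key, bob.public_key],
                               add_ghost=True)
    # The ghost key holder can decrypt the extra copy; neither user notices,
    # because the client has also been changed not to display the extra party.
    print(SealedBox(_ghost_private).decrypt(ciphertexts[-1]).decode())
```

The sketch makes the criticism visible: the capability ships in every copy of the client, and only procedure, not mathematics, decides whose messages the flag gets flipped for.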

Lest we ever forget why security backdoors, however weaselly well-worded, are a terrible idea, we’ve got Schneier calling them out. Spooks in democratic nations the world over keep coming up with “innovative” suggestions like this one from GCHQ, but they keep solving only the narrow technical problem: key distribution, key weakening, or whatever it is they want to achieve this week. They never solve the actual underlying problem, which is that any weakness introduced into a secure system, even one created ostensibly for the benefit of the “good guys”, can and eventually will be used by the “bad guys” too.

Furthermore: any known weakness introduced into a system for the purpose of helping the “good guys” will result in the distrust of that system by the people they’re trying to catch. It’s pretty trivial for criminals, foreign agents and terrorists to switch from networks that their enemies have rooted to networks that they (presumably) haven’t, which tends to mean a drift towards open-source security systems. Ultimately, any backdoor that gets used in a country with transparent judicial processes becomes effectively public knowledge, and ceases to be useful for the “good guys” any more. Only the non-criminals suffer, in the long run.

Sigh.