Cellebrite makes software to automate physically extracting and indexing data from mobile devices. They exist within the grey – where enterprise branding joins together with the larcenous to be called “digital intelligence.” Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere. A few months ago, they announced that they added Signal support to their software.
Their products have often been linked to the persecution of imprisoned journalists and activists around the world, but less has been written about what their software actually does or how it works. Let’s take a closer look. In particular, their software is often associated with bypassing security, so let’s take some time to examine the security of their own software.
Recently Moxie, co-author of the Signal Protocol, came into possession of a Cellebrite Extraction Device (phone cracking kit used by law enforcement as well as by oppressive regimes who need to clamp down on dissidents) which “fell off a truck” near him. What an amazing coincidence! He went on to report, this week, that he’d partially reverse-engineered the system, discovering copyrighted code from Apple – that’ll go down well! – and, more-interestingly, unpatched vulnerabilities. In a demonstration video, he shows that a carefully crafted file placed on a phone could, if that phone is later scanned using a Cellebrite device, exploit these vulnerabilities to take over the forensics equipment.
Obviously this is a Bad Thing if you’re depending on that forensics kit! Not only are you now unable to demonstrate that the evidence you’re collecting is complete and accurate, because it potentially isn’t, but you’ve also got to treat your equipment as untrustworthy. This basically makes any evidence you’ve collected inadmissible in many courts.
Moxie goes on to announce a completely unrelated upcoming feature for Signal: a minority of functionally-random installations will create carefully-crafted files on their devices’ filesystem. You know, just to sit there and look pretty. No other reason:
In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software. Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files.
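For the curious, “probabilistically in low percentages based on phone number sharding” usually means nothing more exotic than deterministic bucketing. A purely hypothetical sketch – none of these names come from Signal’s actual codebase – might look something like this:

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of a probabilistic rollout keyed on phone number:
// hash the (normalised) number into one of 100 stable buckets and enrol
// only the accounts whose bucket falls below the rollout percentage.
function isInRollout(phoneNumber: string, rolloutPercent: number): boolean {
  const digest = createHash("sha256").update(phoneNumber).digest();
  const bucket = digest.readUInt16BE(0) % 100; // stable value in 0–99
  return bucket < rolloutPercent;
}

// e.g. isInRollout("+15551234567", 5) returns true for roughly 5% of numbers
```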
VPNs have long been essential online tools that provide security, freedom, and most importantly, privacy. Each day, hundreds of millions of internet users connect to a VPN to prevent their online activities from being tracked and monitored so that they can privately access web resources. In other words, the very purpose of a VPN is to prevent the type of surveillance that Google engages in on a massive and unprecedented scale.
Google knows this, and in their whitepaper discussing VPN by Google One, Google acknowledges that VPN usage is becoming mainstream and that “up to 25% of all internet users accessed a VPN within the last month of 2019.” Increasing VPN usage unfortunately poses a significant problem for Google, by making it more difficult to track users across the internet, mine their data, and target them with advertisements. In short, VPNs undermine Google’s power.
So yeah, it turns out that Google are launching a VPN service. I just checked, and it’s not available to me anyway because it’s US-only (apparently nobody explained to Google the irony of having a VPN service that’s geofenced), but that’s pretty academic because I wasn’t going to touch it with a barge pole in the first place.
Google already collect data on your browsing habits if you use their products. And I’m not just talking about Chrome, which of course continues to track you using your Google Account even after you log out and clear your cookies, and Google’s ubiquitous Web tools, but also the tracking pixels hidden on every other website thanks to Google Analytics, AdWords, reCAPTCHA, Google Fonts, and the like. Sure, you can use e.g. uMatrix to stop all of these (although I’m in need of a replacement), but that’s not a solution for, y’know, normal people. Container tabs help and you should absolutely use them, but they don’t quite go far enough. It’s a challenge.
Switch to their VPN, though, and they’re suddenly able to track all of your browsing activity, in any browser on your device. And probably many of the desktop applications you run, too, as most of them “phone home” for updates or functionality. And because it’s a paid-for VPN service, this data can be instantly linked to your real-world identity. By a company that’s demonstrated its willingness to misuse that data for its own benefit (or for the benefit of overreaching law enforcement agencies). Yeah: no deal, Google.
Perhaps the only company I’d trust less to provide a VPN service would be Facebook, because you just know they’d be doing so exclusively to undermine individual privacy. Oh wait; that’s exactly what they did. Sigh.
Block all third-party cookies, yes, by all means. But deleting all local storage (including Indexed DB, etc.) after 7 days effectively blocks any future decentralised apps using the browser (client side) as a trusted replication node in a peer-to-peer network. And that’s a huge blow to the future of privacy.
Like Aral and doubtless many others, I was initially delighted to see that Safari has beaten Chrome to the punch, blocking basically all third-party cookies through its Intelligent Tracking Prevention. I don’t even routinely use Safari (although I do block virtually all third-party and many first-party cookies using uMatrix for Firefox), but I loved this announcement because I knew that this, coupled with Google’s promise to (eventually) do the same in their browser, would make a significant impact on the profitability of surveillance capitalism on the Web. Hurrah!
But as Aral goes on to point out, Apple’s latest changes also effectively undermine people’s ability to make Progressive Web Applications that run completely offline, because their new privacy features delete all of a site’s offline storage if it’s not accessed for 7 days.
I hope that Google doesn’t follow Apple’s lead on this particular “privacy” point, although I’m sure that it’s tempting for them to do so. Offline Web applications have the potential to provide an open, simple, and secure ecosystem for the “apps” of tomorrow, and after several good steps forwards… this week we took a big step back.
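If you’re building an offline-first Web app and want to at least ask the browser not to throw your data away, the Storage API offers a best-effort escape hatch; a minimal sketch (browser support varies, and Safari may apply its 7-day rule regardless):

```typescript
// Minimal sketch: ask the browser to treat this origin's storage (IndexedDB,
// Cache Storage, etc.) as persistent so it's less likely to be evicted.
// Browsers are free to refuse, and policies differ between them.
async function requestPersistentStorage(): Promise<boolean> {
  if (!navigator.storage?.persist) return false;          // API not supported
  if (await navigator.storage.persisted()) return true;   // already persistent
  return navigator.storage.persist();                     // may prompt, may decide silently
}

requestPersistentStorage().then((granted) =>
  console.log(granted ? "Storage is persistent" : "Storage is best-effort only")
);
```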
…why would cookies ever need to work across domains? Authentication, shopping carts and all that good stuff can happen on the same domain. Third-party cookies, on the other hand, seem custom made for tracking and frankly, not much else.
Instead, then, I imagine that a Web re-invented today would treat third-party content a little like we treat CORS or how we’re beginning to treat resource types specified by Content-Security-Policy and Feature-Policy headers. That is, website owners would need to “opt-in” to which third-party domains could be trusted to provide content, perhaps subdivided into scripts and cookies. This wouldn’t prohibit trackers, but it would make their use less of an assumed-default (developers would have to truly think about the implications of what they were enabling) and more transparent: it’d be very easy for a browser to list (and optionally block, sandbox, or anonymise) the third-party trackers that could potentially target a visitor on a given site, without having to first evaluate any scripts and their sources.
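For comparison, Content-Security-Policy already works along these lines for scripts: the site owner enumerates which origins are allowed and the browser refuses everything else. A rough illustration – the header value and the trusted-cdn.example.com origin are placeholders, not recommendations – as Express-style middleware:

```typescript
import express from "express";

const app = express();

// Illustration only: opt in to a named third-party script origin and let the
// browser refuse everything else, in the spirit of the "opt-in" model above.
app.use((req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self' https://trusted-cdn.example.com"
  );
  next();
});

app.listen(3000);
```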
I was recently inspired by Dave Rupert to remove Google Analytics from this blog. For a while, there were no third-party scripts being delivered on this site at all, except through iframes (for video embedding etc., which is different anyway because there’s significantly less scope leak). Recently, I’ve been experimenting with Jetpack because I get it for free through my new employer, but I’m always looking for ways to improve how well my site “stands alone”: you can block all third-party resources and this site should still work just fine (I wonder if I can add a feature to my service worker to allow visitors to control exactly what third-party content they’re exposed to?).
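That service worker idea is plausible, for what it’s worth: a fetch handler can refuse to pass third-party requests to the network unless the visitor has approved the origin. A rough sketch – the allow-list and how it gets populated are entirely hypothetical:

```typescript
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

// Hypothetical allow-list; a real implementation might populate this from a
// preference the visitor sets somewhere on the site.
const allowedThirdPartyOrigins = new Set<string>(["https://www.youtube.com"]);

self.addEventListener("fetch", (event: FetchEvent) => {
  const requestOrigin = new URL(event.request.url).origin;
  const isThirdParty = requestOrigin !== self.location.origin;

  if (isThirdParty && !allowedThirdPartyOrigins.has(requestOrigin)) {
    // Swallow the request with an empty response rather than hitting the network.
    event.respondWith(new Response(null, { status: 204 }));
  }
  // Otherwise fall through and let the request proceed as normal.
});
```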
I’ve just cleared out my desk at the Bodleian in anticipation of my imminent departure and discovered that I’ve managed to successfully keep not only my P60s but also every payslip I’ve ever received in the 8½ years I’ve worked there. At a stretch, I might just end up requiring those for the current tax year but I can’t conceive of any reason I’ll ever need the preceding hundred or so of them, so the five-year-old and I shredded them all.
If you’ve ever wanted to watch five solid minutes of cross-cut shredding shot from an awkwardly placed mobile phone camera, this is the video for you. Everybody else can move along.
Let’s face the truth. We are in an abusive relationship with our phones.
Ask yourself the first three questions that UK non-profit Women’s Aid suggests to determine if you’re in an abusive relationship:
Has your partner tried to keep you from seeing your friends or family?
Has your partner prevented you or made it hard for you to continue or start studying, or from going to work?
Does your partner constantly check up on you or follow you?
If you substitute ‘phone’ for ‘partner’, you could answer yes to each question. And then you’ll probably blame yourself.
A fresh take in an excellent article. Bringing a feminist viewpoint to our connection to our smartphones helps to expose the fact that our relationship with these devices would easily be classified as abusive were they human. The article goes on to attempt to defuse the inevitable self-blame that comes from this realisation and to propose a more-utopian future in which our devices might work for us, rather than for the companies that provide the services for which we use them.
The Rusty Bicycle,
28 Magdalen Rd, Oxford OX4 1RB, United Kingdom.
Visited today for the first time. Discovered when I came to order that the special offers (e.g. £5 lunchtime pizzas) on the menu aren’t actually honoured unless you agree to install their “app”. Their app is appalling: currently averaging 1.7/5 on the app store, and I can see why!
Cocktails were okay. Food was pretty good, but not quite good enough to take away the sour taste left by the “app” experience. I’d visit again… but only if they moved towards honouring the special offers they advertise… WITHOUT the precondition that you agree to give all of your personal data to them first.
Big news! This site is no longer using Google Analytics and I’ve switched to a self-hosted version of brand new analytics product Fathom.
Fathom is very simple. It only tracks 4 things: Unique Visitors, Page Views, Time on Site, and Bounce Rate. It shows me a chart of page views and visitors and then gives me a break down of referrers and top performing content. That’s it. And to be quite honest, that’s about all I need from my blog analytics.
You know what, Dave: me too! I’ve been running Google Analytics since forever and Piwik/Matomo (in parallel with it) for about a year and honestly: I get more than enough of what I need from the latter. So you’ve inspired me to cut the line with Google: after all, all I was doing was selling them my friends’ data in exchange for some analytics I wasn’t really paying attention to… and I’d frankly rather not.
So: for the first time in a decade or so, there’s no Google Analytics on this site. Woop!
Recently, Google officially launched Android 9 Pie, which includes a slew of new features around digital well-being, security, and privacy. If you’ve poked around the network settings on your phone while on the beta or after updating, you may have noticed a new Private DNS Mode now supported by Android.
This new feature simplifies the process of configuring a custom secure DNS resolver on Android, meaning parties between your device and the websites you visit won’t be able to snoop on your DNS queries because they’ll be encrypted. The protocol behind this, TLS, is also responsible for the green lock icon you see in your address bar when visiting websites over HTTPS. The same technology is useful for encrypting DNS queries, ensuring they cannot be tampered with and are unintelligible to ISPs, mobile carriers, and any others in the network path between you and your DNS resolver. These new security protocols are called DNS over HTTPS, and DNS over TLS.
Bad: Android Pie makes it harder (than previous versions) to set a custom DNS server on a cellular data connection.
Good: Android Pie supports DNS-over-TLS, so that’s nice.
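As an aside, the sibling protocol mentioned above, DNS over HTTPS, is easy to poke at from ordinary application code. A quick sketch using Cloudflare’s public resolver (the endpoint and JSON format are Cloudflare’s; this is just an illustration, not an endorsement):

```typescript
// Sketch: resolve a hostname via DNS-over-HTTPS using Cloudflare's JSON API.
// The lookup travels inside ordinary HTTPS, so networks on the path see only
// a connection to the resolver, not the names being queried.
async function resolveOverHttps(name: string): Promise<string[]> {
  const response = await fetch(
    `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
    { headers: { accept: "application/dns-json" } }
  );
  const result = await response.json();
  return (result.Answer ?? []).map((record: { data: string }) => record.data);
}

resolveOverHttps("example.com").then((addresses) => console.log(addresses));
```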
Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed i…
Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.
I figured it’d been almost a day since Facebook were last in the news for privacy and ethics-related concerns (earlier this week, earlier still), so we must’ve been due more coverage. This time, it’s about Facebook’s latest tack in trying to understand the teen market that it’s failing to penetrate as well as it once did, and the fact that it’s been paying young adults and children to proxy all of their traffic through Facebook’s servers, including setting up their phones to allow Facebook to break their encryption so that it can understand how they’re using them.
Facebook plans to integrate its messaging services on Instagram, WhatsApp and Facebook Messenger.
While all three will remain stand-alone apps, at a much deeper level they will be linked so messages can travel between the different services.
Facebook told the BBC it was at the start of a “long process”.
The plan was first reported in the New York Times and is believed to be a personal project of Facebook founder Mark Zuckerberg.
Once complete, the merger would mean that a Facebook user could communicate directly with someone who only has a WhatsApp account. This is currently impossible as the applications have no common core.
The work to merge the three elements has already begun, reported the NYT, and is expected to be completed by the end of 2019 or early 2020.
Facebook-looking-dodgy in the news again this week (previously) with the news that they plan to integrate Instagram and WhatsApp into their central platform. They’re selling the upsides of this, such as that Facebook and WhatsApp users will be able to communicate with one another without switching to a different tool, but privacy advocates are understandably concerned: compared to Facebook, WhatsApp provides a reasonable level of anonymity. It also seems likely that this move may be an effort to preempt antitrust suits forcing Facebook’s property portfolio to be kept separate.
But even without those concerns, there are smaller but just-as-real, more-insidious privacy risks from this integration. With a very minor change to their terms and conditions about the use of the WhatsApp app, Facebook can start performing even more-sophisticated big-data mining on the types of interpersonal relationships that they’re known to enjoy (let’s not forget that this is the company whose app will, left unchecked, mine your mobile phone book to find friends-in-common that you have with other people, even if that friend-in-common doesn’t use Facebook!). With WhatsApp’s treasure trove of metadata, Facebook can determine who you talk to, from where, and with what frequency: by technical necessity, none of this metadata is protected by WhatsApp’s end-to-end encryption. Similarly, they can determine what “groups” you participate in. This easily supports the “shadow profiles” they maintain, which tell them far more about your life and interests than your mere Facebook profile alone does.
I for one will be watching WhatsApp with care and dropping it if it looks likely to “turn evil”. It’s not as though there aren’t (arguably better) alternatives, such as Signal (which I already use as my primary mobile text messaging system) and Riot.
An open source checklist of resources designed to improve your online privacy and security. Check things off to keep track as you go.
I’m pretty impressed with this resource. It’s a little US-centric and I would have put the suggestions into a different order, but many of the ideas on it are very good and are presented in a way that makes them accessible to a wide audience.
German chat platform Knuddels.de (“Cuddles”) has been fined €20,000 for storing user passwords in plain text (no hash at all? Come on, people, it’s 2018).
The data of Knuddels users was copied and published by malefactors in July. In September, someone emailed the company warning them that user data had been published at Pastebin (only 8,000 members affected) and Mega.nz (a much bigger breach). The company duly notified its users and the Baden-Württemberg data protection authority.
Interesting stuff: this German region’s equivalent of the ICO fined this app’s operator for failing to hash passwords, describing the passwords as personal information that was inadequately protected following their theft. That’s interesting because it sets a German, and to a lesser extent a European, precedent that plaintext passwords can be considered personal information, and therefore allows the (significant) weight of the GDPR to be applied to their mishandling.
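For reference, getting this right takes only a handful of lines; a sketch using the popular bcrypt library (any modern slow hash – argon2, scrypt – would do equally well):

```typescript
import bcrypt from "bcrypt";

// Sketch: store only a salted, deliberately-slow hash of the password, never
// the plaintext. Verification re-hashes the candidate and compares.
async function storePassword(plaintext: string): Promise<string> {
  return bcrypt.hash(plaintext, 12); // 12 = cost factor; the salt is generated for you
}

async function checkPassword(plaintext: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(plaintext, storedHash);
}
```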
Did you install EFF’s brilliant Privacy Badger or any other smart HTTP Cookie management tool? Or did you simply pick the privacy preference in your browser that ignores all third-party cookies? Did many websites you visit annoy you with permission-to-use-cookies pop-ups because of European legislation?
Guess what, it’s all been useless.
Hamburg university researchers have examined closely how web browsers implement so-called TLS session resumption and how the top million popular websites make use of that feature. They found that 80% of websites use it correctly, in a way unsuitable for tracking repeat visitors — only resuming an existing session begun within the last ten minutes.
Unfortunately though, Google is present on 80% of these websites in the form of Analytics, Fonts, or other third-party inclusions. And among the 10% of sites that do not respect reasonable resumption times, Google sticks out as one of the greediest — it allows a web browser to stay offline for over a day and still be recognized as the same web browser the next day. Considering that it is nearly impossible to surf the web without accessing some Google content, this means that Google can track all your surfing habits without any need for HTTP Cookies!
Facebook, which isn’t as pervasively present across the web, goes even further. It is enough for you to visit any website bearing a Like button every second day to allow Facebook to profile you, even if you never dreamt of logging into that service. Could it be our researchers just caught these companies with their hands deep in the cookie jar (pun intended)? For how long have they been collecting user data this way?
Somewhat conspiracy-theory-like take on an actual, real privacy issue: the fact that TLS makes tracking pretty easy even without cookies. If you thought my 301-based cookieless tracking was clever, this is cleverer. And harder to detect, to boot.
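To make the mechanism concrete: how long a resumed TLS session stays valid is just a number the server picks, so a generous lifetime quietly becomes a cookie-free identifier. A rough Node.js sketch – the certificate paths and the 24-hour figure are illustrative only:

```typescript
import { createServer } from "tls";
import { readFileSync } from "fs";

// Sketch: a TLS server whose session identifiers remain valid for 24 hours.
// Any client resuming within that window is recognisably "the same browser",
// no cookies required.
const server = createServer(
  {
    key: readFileSync("server-key.pem"),   // placeholder paths
    cert: readFileSync("server-cert.pem"),
    sessionTimeout: 60 * 60 * 24,          // seconds; most sites use ~10 minutes
  },
  (socket) => {
    socket.end(`session resumed: ${socket.isSessionReused()}\n`);
  }
);

server.listen(8443);
```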
For a philosopher, Helen Nissenbaum is a surprisingly active participant in shaping how we collect, use, and protect personal data. Nissenbaum, who earned her PhD from Stanford, is a professor of information science at Cornell Tech, New York City, where she focuses on the intersection of politics, ethics, and values in technology and digital media — the hard stuff. Her framework for understanding digital privacy has deeply influenced real-world policy.
In addition to several books and countless papers, she’s also coauthored privacy plug-ins for web browsers including TrackMeNot, AdNauseum, and Adnostic. Nissenbaum views these pieces of code as small efforts at rationalizing a marketplace where opaque consent agreements give consumers little bargaining power against data collectors as they extract as much information, and value from this information, as they can. Meanwhile, these practices offer an indefinite value proposition to consumers while compromising the integrity of digital media, social institutions, and individual security.