Second Factor Safety Net

I’m a huge fan of multifactor authentication. If you’re using it, you’re probably familiar with using an app on your phone (or receiving a text or email) in addition to a username and password when logging in to a service like your email, social network, or a bank. If you’re not using it then, well, you should be.

Two factor authentication using Google Authenticator (TOTP) to log in to a Google Account
Using an app in addition to your username and password, wherever you can, may be the single biggest step you can take to improve your personal digital security.

Ruth recently had a problem when she lost her phone and couldn’t connect to a service for which she usually used an authenticator app like the one pictured above, so I thought I’d share with you my personal strategy for managing multifactor authentication, in case it’s of any use to anybody else. After all: the issue of not-having-the-right-second-factor-to-hand has happened to me before, it’s certainly now happened to Ruth, and it’s probably something that’s happened to other people I know by now, too.

Thoroughly broken iPhone.
It could happen to anybody. What’s your authentication plan?

Here’s my strategy:

  1. Favour fewer different multifactor solutions. Instead of using e.g. text messaging for one, an app for another, a different app for a third, a hardware token for a fourth, and so on, try to find the fewest number of different solutions that work for your personal digital life. This makes backing up and maintenance easier.
    I use RFC6238/TOTP (better known as “Google Authenticator”) for almost all second factor purposes: the only exceptions are my online bank (who use a proprietary variant of RFC6238 that I’ve not finished reverse-engineering) and Steam (who use a proprietary implementation of RFC6238 with a larger character set, for some reason, in their Steam Guard app).
  2. Favour offline-ready multifactor solutions. It’s important to me to be able to log in to my email using a computer even if my mobile phone has no signal or the network is down. This, plus the fact that the bad guys have gotten the hang of intercepting second-factor text messages, means that SMS-based solutions aren’t the right solution in my opinion. Google Authenticator, Authy, FreeOTP etc. all run completely offline.
  3. Have a backup plan. Here’s the important bit. If you use your phone to authenticate, and you lose access to your phone for a period of time (broken, lost, stolen, out of battery, in use by a small child playing a game), you can’t authenticate. That’s why it’s important that you have a backup plan.
Many mobile devices.
That’s probably more backup devices than you need, but YMMV.

Some suggested backup strategies to consider (slightly biased towards TOTP):

  • Multiple devices: (Assuming you’re using TOTP or something like it) there’s nothing to stop you setting up multiple devices to access the same account. Depending on how the service you’re accessing provides the code you need to set it up, you might feel like you have to set them all up at the same time, but that’s not strictly true: there’s another way…
  • Consider setting up a backdoor: Some systems will allow you to print e.g. a set of “backup codes” and store them in a safe place for later use should you lose access to your second factor. Depending on the other strategies you employ, you should consider doing this: for most (normal) people, this could be the single safest way to retain access to your account in the event that you lose access to your second factor. Either way, you should understand the backdoors available: if your online bank’s policy is to email you replacement credentials on-demand then your online bank account’s security is only as good as your email account’s security: follow the chain to work out where the weak links are.
  • Retain a copy of the code: The code you’re given to configure your device remains valid forever: indeed, the way that it works is that the service provider retains a copy of the same code so they can generate numbers at the same time as you, and thus check that you’re generating the same numbers as them. If you keep a copy of the backup code (somewhere very safe!) you can set up any device you want, whenever you want. Personally, I keep copies of all TOTP configuration codes in my password safe (you’re using a password safe, right?).
  • Set up the infrastructure that works for you: To maximise my logging-on convenience, I have my password safe enter my TOTP numbers for me: I’m using KeeOTP for KeePass, but since 2016 LastPass users can do basically the same thing. I’ve also implemented my own TOTP client in Ruby to run on desktop computers I control (just be careful to protect the secrets file), because sometimes you just want a command-line solution. The code’s below, and I think you’ll agree that it’s simple enough that you can audit it for your own safety too.
#!/usr/bin/env ruby
require 'rubygems'
require 'rotp' # gem install rotp

# Header: show how many seconds remain before the current codes expire
# (TOTP codes roll over every 30 seconds).
printf "%-30s %3s (%02ds) %4s\n", 'Account',
                                  'Now',
                                  (30 - (Time::now.utc.to_i % 30)),
                                  'Next'
puts '-' * 47

# Each non-blank line of ~/.google-authenticator-accounts holds an
# account label followed by a space and its base32 TOTP secret.
File.read(File.expand_path('~/.google-authenticator-accounts')).
     split("\n").reject{|l| l.strip == ''}.
     each do |account|
  if account =~ /^(.+) (\w+)$/
    totp = ROTP::TOTP.new($2)
    # Print the code that's valid now and the one that will follow it.
    printf "%-30s %06s    %06s\n", $1,
                                   totp.at(Time::now.utc),
                                   totp.at(Time::now.utc + 30)
  end
end
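
For reference, the script expects ~/.google-authenticator-accounts to contain one account per line: a label (which may contain spaces), then a space, then the base32 secret. Something like this (with made-up labels and secrets, obviously):

Google me@example.com          JBSWY3DPEHPK3PXP
Fastmail                       MFRGGZDFMZTWQ2LK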

I’ve occasionally been asked whether my approach actually yields me any of the benefits of two-factor authentication. After all, people say, aren’t I weakening its benefits by storing the TOTP generation key in the same place as my usernames and passwords rather than restricting it to my mobile device? This is true, and it is weaker to do this than to keep the two separately, but it’s not true to say that all of the benefits are negated: replay attacks by an attacker who intercepts a password are mitigated by this approach, for example, and these are a far more-common vector for identity theft than the theft and decryption of password safes.

Everybody has to make their own decisions on the balance of their convenience versus their security, but for me the sweet spot comes here: in preventing many of the most-common attacks against the kinds of accounts that I use and reinforcing my existing username/strong-unique-passwords approach without preventing me from getting stuff done. You’ll have to make your own decisions, but if you take one thing away from this, let it be that there’s nothing to stop you having multiple ways to produce TOTP/Google Authenticator credentials, and you should consider doing so.

#youbroketheinternet So We Got Tracked Anyway

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.


Did you install EFF’s brilliant Privacy Badger or any other smart HTTP Cookie management tool? Or did you simply pick the privacy preference in your browser that ignores all third-party cookies? Did many websites you visit annoy you with permission-to-use-cookies pop-ups because of European legislation?

Guess what, it’s all been useless.

Hamburg university researchers have examined closely how web browsers implement so-called TLS session resumption and how the top million popular websites make use of that feature. They found that 80% of websites use it correctly, in a way unsuitable for tracking repeat visitors: they only resume an existing session begun within the last ten minutes.

Unfortunately though, Google is present on 80% of these websites in the form of Analytics, Fonts, or other third-party inclusions. And among the 10% of sites that do not respect reasonable resumption times, Google sticks out as one of the greediest: it allows a web browser to stay offline for over a day and still be recognized as the same web browser the next day. Considering that it is nearly impossible to surf the web without accessing some Google content, this means that Google can track all your surfing habits without any need for HTTP Cookies!

Facebook isn’t as pervasively present across the web, so it goes even further: it is enough for you to visit any website bearing a Like button every second day for Facebook to profile you, even if you never dreamt of logging into that service. Could it be our researchers just caught these companies with their hands deep in the cookie jar (pun intended)? For how long have they been collecting user data this way?

Somewhat conspiracy-theory-like take on an actual, real privacy issue: the fact that TLS makes tracking pretty easy even without cookies. If you thought my 301-based cookieless tracking was clever, this is cleverer. And harder to detect, to boot.
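
If you want to see the mechanism for yourself, Ruby’s OpenSSL bindings expose it directly. Here’s a minimal sketch (assuming a server negotiating TLS 1.2 or earlier; session handling changed in TLS 1.3) that performs one full handshake, stores the session, then presents it again:

require 'socket'
require 'openssl'

host = 'example.com' # any HTTPS site

# First visit: a full handshake; keep the session the server issues
# to us (via a session ID or a session ticket).
tcp = TCPSocket.new(host, 443)
ssl = OpenSSL::SSL::SSLSocket.new(tcp)
ssl.hostname = host
ssl.connect
session = ssl.session
ssl.close
tcp.close

# Second visit: present the stored session. If the server accepts it,
# it has just recognised us as a repeat visitor: no cookies required.
tcp = TCPSocket.new(host, 443)
ssl = OpenSSL::SSL::SSLSocket.new(tcp)
ssl.hostname = host
ssl.session = session
ssl.connect
puts ssl.session_reused? ? 'Resumed: the server recognised us' : 'Full handshake'
ssl.close
tcp.close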

There is no longer any such thing as Computer Security

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Remember “cybersecurity”? Mysterious hooded computer guys doing mysterious hooded computer guy .. things! Who knows what kind of naughty digital mischief they might be up to? Unfortunately, we now live in a world where this kind of digital mischief is literally rewriting the world’s history. For proof of that, you need look no further than…

A good summary of the worst of the commonplace (non-spear) phishing attacks we’re seeing these days and why 2FA is positively, absolutely what you need (in addition to a password manager) these days.

Five-Eyes Intelligence Services Choose Surveillance Over Security

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The Five Eyes — the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) — have issued a “Statement of Principles on Access to Evidence and Encryption” where they claim their needs for surveillance outweigh everyone’s needs for security and privacy. …the increasing use and sophistication of certain…

How many times must security professionals point out that there’s no such thing as a secure backdoor before governments actually listen? If you make a weakness in cryptography to make it easier for the “good guys” – your spies and law enforcement – then either (a) a foreign or enemy power will find the backdoor too, making everybody less-secure than before, or (b) people will use different cryptographic systems: ones which seem less-likely to have been backdoored.

Solving the information black hole is a challenging and important problem of our time. But backdoors surely aren’t the best solution, right?

What Cyber-War Will Look Like

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

When prompted to think about the way hackers will shape the future of great power war, we are wont to imagine grand catastrophes: F-35s grounded by onboard computer failures, Aegis BMD systems failing to launch seconds before Chinese missiles arrive, looks of shock at Space Command as American surveillance satellites start careening towards the Earth–stuff like that. This is the sort of thing that fills the opening chapters of Peter Singer and August Cole’s Ghost Fleet. [1] The catastrophes I always imagine, however, are a bit different than this. The hacking campaigns I envision would be low-key, localized, and fairly low-tech. A cyber-ops campaign does not need to disable key weapon systems to devastate the other side’s war effort. It will be enough to increase the fear and friction enemy leaders face to tip the balance of victory and defeat. Singer and company are not wrong to draw inspiration from technological change; nor are they wrong to attempt to imagine operations with few historical precedents. But that isn’t my style. When asked to ponder the shape of cyber-war, my impulse is to look first at the kind of thing hackers are doing today and ask how these tactics might be applied in a time of war.

Mark Cancian thinks like I do.

In a report Cancian wrote for the Center for Strategic and International Studies on how great powers adapt to tactical and strategic surprise, Cancian sketched out twelve “vignettes” of potential technological or strategic shocks to make his abstract points a bit more concrete. Here is how Cancian imagines an “asymmetric cyber-attack” launched by the PRC against the United States Military:

The U.S. secretary of defense had wondered this past week when the other shoe would drop. Finally, it had, though the U.S. military would be unable to respond effectively for a while.

The scope and detail of the attack, not to mention its sheer audacity, had earned the grudging respect of the secretary. Years of worry about a possible Chinese “Assassin’s Mace” (a silver bullet super-weapon capable of disabling key parts of the American military) turned out to be focused on the wrong thing.

The cyber attacks varied. Sailors stationed at the 7th Fleet’s homeport in Japan awoke one day to find their financial accounts, and those of their dependents, empty. Checking, savings, retirement funds: simply gone. The Marines based on Okinawa were under virtual siege by the populace, whose simmering resentment at their presence had boiled over after a YouTube video posted under the account of a Marine stationed there had gone viral. The video featured a dozen Marines drunkenly gang-raping two teenaged Okinawan girls. The video was vivid, the girls’ cries heart-wrenching, the cheers of the Marines sickening. And all of it fake. The National Security Agency’s initial analysis of the video had uncovered digital fingerprints showing that it was a computer-assisted lie, and could prove that the Marine’s account under which it had been posted was hacked. But the damage had been done.

There was the commanding officer of Edwards Air Force Base whose Internet browser history had been posted on the squadron’s Facebook page. His command turned on him as a pervert; his weak protestations that he had not visited most of the posted links could not counter his admission that he had, in fact, trafficked some of them. Lies mixed with the truth. Soldiers at Fort Sill were at each other’s throats thanks to a series of text messages that allegedly unearthed an adultery ring on base.

The variations elsewhere were endless. Marines suddenly owed hundreds of thousands of dollars on credit lines they had never opened; sailors received death threats on their Twitter feeds; spouses and female service members had private pictures of themselves plastered across the Internet; older service members received notifications about cancerous conditions discovered in their latest physical.

Leadership was not exempt. Under the hashtag #PACOMMUSTGO a dozen women allegedly described harassment by the commander of Pacific Command. Editorial writers demanded that, under the administration’s “zero tolerance” policy, he step aside while Congress held hearings.

There was not an American service member or dependent whose life had not been digitally turned upside down. In response, the secretary had declared “an operational pause,” directing units to stand down until things were sorted out.

Then, China had made its move, flooding the South China Sea with its conventional forces, enforcing a sea and air identification zone there, and blockading Taiwan. But the secretary could only respond weakly with a few air patrols and diversions of ships already at sea. Word was coming in through back channels that the Taiwanese government, suddenly stripped of its most ardent defender, was already considering capitulation.[2]

How is that for a cyber-attack?

Dot-dash-diss: The gentleman hacker’s 1903 lulz

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

A century ago, one of the world’s first hackers used Morse code insults to disrupt a public demo of Marconi’s wireless telegraph

Nevil Maskelyne – doing it for the lulz?

LATE one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.

Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked – and this was more than 100 years before the mischief playing out on the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?

An early example of hacking and a great metaphor for what would later become hacker-culture, found in the history of the wireless telegraph.

How Edge Follows In IE’s Security Failings

I’ve generally been pretty defensive of Microsoft Edge, the default web browser in Windows 10. Unlike its much-mocked predecessor Internet Explorer, Edge is fast, clean, modern, and boasts good standards-compliance: all of the things that Internet Explorer infamously failed at! I was genuinely surprised to see Edge fail to gain a significant market share in its first few years: it seemed to me that everyday Windows users installed other browsers (mostly Chrome, which is causing its own problems) specifically because Internet Explorer was so terrible, and that once their default browser was replaced with something moderately-good this would no longer be the case. But that’s not what’s happened. Maybe it’s because Edge’s branding is too-reminiscent of its terrible predecessor or maybe just because Windows users have grown culturally-used to the idea that the first thing they should do on a new PC is download a different browser, but whatever the reason, Edge is neglected. And for the most part, I’ve argued, that’s a shame.

Edge's minimalistic Certificate View.
I ranted at an Edge developer I met at a conference, once, about Edge’s weak TLS debugging tools that couldn’t identify an OCSP stapling issue that only affected Edge, but I thought that was the worst of its bugs… until now…

But I’ve changed my tune this week after doing some research that demonstrates that a long-standing security issue of Internet Explorer is alive and well in Edge. This particular issue, billed as a “feature” by Microsoft, is deliberately absent from virtually every other web browser.

About 5 years ago, Steve Gibson observed a special feature of EV (Extended Validation) SSL certificates used on HTTPS websites: that their extra-special “green bar”/company name feature only appears if the root CA (certificate authority) is among the browser’s default trust store for EV certificate signing. That’s a pretty-cool feature! It means that if you’re on a website where you’d expect to see a “green bar”, like Three Rings, PayPal, or HSBC, then if you don’t see the green bar one day it most-likely means that your connection is being intercepted in the kind of way I described earlier this year, and everything you see or send including passwords and credit card numbers could be at risk. This could be malicious software (or nonmalicious software: some antivirus software breaks EV certificates!) or it could be your friendly local network admin’s middlebox (you trust your IT team, right?), but either way: at least you have a chance of noticing, right?

Firefox address bars showing EV certificates of Three Rings CIC (GB), PayPal, Inc. (US), and HSBC Holdings plc (GB)
Firefox, like most browsers, shows the company name in the address bar when valid EV certificates are presented, and hides it when the validity of that certificate is put into question by e.g. network sniffing tools set up by your IT department.
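
Under the bonnet, a certificate asserts EV status via a policy OID in its certificatePolicies extension, and a browser grants the green bar only when that OID appears in its built-in list for the issuing root. Here’s a quick way to peek at that extension, as a sketch using Ruby’s OpenSSL bindings (the hostname is just an example):

require 'socket'
require 'openssl'

host = 'www.paypal.com'
tcp = TCPSocket.new(host, 443)
ssl = OpenSSL::SSL::SSLSocket.new(tcp)
ssl.hostname = host
ssl.connect

# EV status lives in the certificatePolicies extension: browsers compare
# the policy OID found here against a hard-coded list for the issuing root.
policies = ssl.peer_cert.extensions.find { |ext| ext.oid == 'certificatePolicies' }
puts policies ? policies.value : '(no certificatePolicies extension)'

ssl.close
tcp.close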

Browsers requiring that the EV certificate be signed by one of a trusted list of CAs and not allowing that list to be manipulated (short of recompiling the browser from scratch) is a great feature that – were it properly publicised and supported by good user interface design, which it isn’t – would go a long way to protecting web users from unwanted surveillance by network administrators working for their employers, Internet service providers, and governments. Great! Except Internet Explorer went and fucked it up. As Gibson reported, not only does Internet Explorer ignore the rule of not allowing administrators to override the contents of the trusted list but Microsoft even provides a tool to help them do it!

Address bars from major browsers connecting to a spoofed site, with EV certificate "green bars" showing only in Internet Explorer and Edge.
From top to bottom: Internet Explorer 11, Edge 17, Firefox 61, Chrome 68. Only Internet Explorer and Edge show the (illegitimate) certificate for “Barclays PLC”. Sorry, Barclays; I had to spoof somebody.

I decided to replicate Gibson’s experiment to confirm his results with today’s browsers: I was also interested to see whether Edge had resolved this problem in Internet Explorer. My full code and configuration can be found here. As is doubtless clear from the title of this post and the screenshot above, Edge failed the test: it exhibits exactly the same troubling behaviour as Internet Explorer.

Thanks, Microsoft.

Safari doesn't fall for it, either.
I also tried Safari (both on MacOS, above, and iOS, below) and it behaved as the other non-Microsoft browsers do (i.e. arguably more-correctly than IE or Edge).

I shan’t for a moment pretend that our current certification model is without its problems – it’s deeply flawed; more on that in a future post – but that doesn’t give anybody an excuse to get away with making it worse. When it became apparent that Internet Explorer was affected by the “feature” described above, we all collectively rolled our eyes because we didn’t expect better of everybody’s least-favourite web browser. But for Edge to inherit this deliberate-fault, despite every other browser (even those that share its certificate store) going in the opposite direction, is just insulting.

Before You Turn On Two-Factor Authentication

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Many online accounts allow you to supplement your password with a second form of identification, which can prevent some prevalent attacks. The second factors you can use to identify yourself include authenticator apps on your phone, which generate codes that change every 30 seconds, and security keys, small pieces of hardware similar in size and shape to USB drives. Since innovations that can actually improve the security of your online accounts are rare, there has been a great deal of well-deserved enthusiasm for two-factor authentication (as well as for password managers, which make it easy to use a different random password for every one of your online accounts.) These are technologies more people should be using.

However, in trying to persuade users to adopt second factors, advocates sometimes forget to disclose that all security measures have trade-offs. As second factors reduce the risk of some attacks, they also introduce new risks. One risk is that you could be locked out of your account when you lose your second factor, which may be when you need it the most. Another is that if you expect second factors to protect you from attacks that they cannot prevent, you may become more vulnerable to those attacks.

Before you require a second factor to log in to your accounts, you should understand the risks, have a recovery plan for when you lose your second factor(s), and know the tricks attackers may use to defeat two-factor authentication.

A well-examined exploration of some of the risks of employing two-factor authentication in your everyday life. I maintain that it’s still highly-worthwhile and everybody should do so, but it’s important that you know what you need to do in the event that you can’t access your two-factor device (and, ideally, have a backup solution in place): personally, I prefer TOTP (i.e. app-based) 2FA and I share my generation keys between my mobile device, my password safe (I’ll write a blog post about why this is controversial but why I think it’s a good idea anyway!), and in a console application I wrote (because selfdogfooding etc.).

Intercepting HTTPS Traffic from Android Emulator

Mostly for my own benefit, as most other guides online are outdated, here’s my set-up for intercepting TLS-encrypted communications from an emulated Android device (in Android Emulator) using Fiddler. This is useful if you want to debug, audit, reverse-engineer, or evaluate the security of an Android app. I’m using Fiddler 5.0 and Android Studio 2.3.3 (but it should work with newer versions too) to intercept connections from an Android 8 (Oreo) device using Windows. You can easily adapt this set-up to work with physical devices, and it’s not hard to adapt these instructions for other configurations either.

Intercepting an HTTPS connection to DanQ.me on a virtual Android device.

1. Configure Fiddler

Install Fiddler and run it.

Configuring Fiddler

Under Tools > Options > HTTPS, enable “Decrypt HTTPS traffic” and allow a root CA certificate to be created.

Click Actions > Export Root Certificate to Desktop to get a copy of the root CA public key.

Fiddler's Connections settings

On the Connections tab, ensure that “Allow remote computers to connect” is ticked. You’ll need to restart Fiddler after changing this and may be prompted to grant it additional permissions.

If Fiddler changed your system proxy, you can safely change this back (and it’ll simplify your output if you do because you won’t be logging your system’s connections, just the Android device’s ones). Fiddler will complain with a banner that reads “The system proxy was changed. Click to reenable capturing.” but you can ignore it.

2. Configure your Android device

Android Device Manager - New Device

Install Android Studio. Click Tools > Android > AVD Manager to get a list of virtual devices. If you haven’t created one already, create one: it’s now possible to create Android devices with Play Store support (look for the icon, as shown above), which means you can easily intercept traffic from third-party applications without doing APK-downloading hacks: this is great if you plan on working out how a closed-source application works (or what it sends when it “phones home”).

Android emulator showing network settings
In Android’s Settings > Network & Internet, disable WiFi. Then, under Mobile Network > Access Point Names > {Default access point, probably T-Mobile}, set Proxy to the local IP address of your computer and Port to 8888. Now all traffic will go over the virtual cellular data connection, which uses the proxy server you’ve configured in Fiddler.

Android network proxy settings

Drag the root CA file you exported to your desktop to your virtual Android device. This will automatically copy the file into the virtual device’s “Downloads” folder (if you’re using a physical device, copy via cable or network). In Settings > Security & Location > Encryption & Credentials > Install from SD Card, use the hamburger menu to get to the Downloads folder and select the file: you may need to set up a PIN lock on the device to do this. Look under Trusted credentials > User to confirm that it’s there, if you like.

Installing a Root CA in Android.

Test your configuration by visiting an HTTPS website: as you browse on the Android device, you’ll see the (decrypted) traffic appear in Fiddler. This works with apps other than the web browser too, of course, so if you’re reverse-engineering an API-backed application, its encryption doesn’t have to impede you.
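
You can also sanity-check the proxy from the host machine before involving Android at all. A minimal sketch, assuming Fiddler is listening on 127.0.0.1:8888 (we skip certificate verification because Ruby won’t trust Fiddler’s generated root; the request should then appear, decrypted, in Fiddler’s session list):

require 'net/http'
require 'openssl'

# Route a request through the decrypting proxy and see it in Fiddler.
res = Net::HTTP::Proxy('127.0.0.1', 8888).start(
  'danq.me', 443,
  use_ssl: true,
  verify_mode: OpenSSL::SSL::VERIFY_NONE # Fiddler's root isn't in Ruby's trust store
) { |http| http.get('/') }

puts res.code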

3. Not working? (certificate pinning)

A small but increasing number of Android apps implement some variation of built-in key pinning, like HPKP but usually implemented in the application’s code (which is fine, because most people auto-update their apps). What this does is ensure that the certificate presented by the server is signed by a certification authority from a trusted list (a trusted list that doesn’t include Fiddler’s CA!). But remember: the app is running on your device, so you’re ultimately in control – FRIDA’s bypass script “fixed” all of the apps I tried, but if it doesn’t then I’ve heard good things about Inspeckage‘s “SSL uncheck” action.
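
For context, app-level pinning often boils down to something like the following rough sketch (the host and pin value are hypothetical, not from any real app): the app hashes the server’s public key and refuses to talk unless it matches a value baked into the code, which is why a proxy’s substituted certificate gets rejected.

require 'socket'
require 'openssl'
require 'digest'

HOST   = 'api.example.com'                              # hypothetical endpoint
PINNED = 'r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E=' # hypothetical SPKI pin

tcp = TCPSocket.new(HOST, 443)
ssl = OpenSSL::SSL::SSLSocket.new(tcp)
ssl.hostname = HOST
ssl.connect

# HPKP-style pins are the SHA-256 hash of the certificate's
# SubjectPublicKeyInfo, base64-encoded.
spki = ssl.peer_cert.public_key.to_der
pin  = Digest::SHA256.base64digest(spki)
raise 'Pin mismatch: connection may be intercepted!' unless pin == PINNED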

Summary of steps

If you’re using a distinctly different configuration (different OS, physical device, etc.) or this guide has become dated, here are the fundamentals of what you’re aiming to achieve:

  1. Set up a decrypting proxy server (e.g. Fiddler, Charles, Burp, SSLSplit – note that Wireshark isn’t suitable) and export its root certificate.
  2. Import the root certificate into the certificate store of the device to intercept.
  3. Configure the device to connect via the proxy server.
  4. If using an app that implements certificate pinning, “fix” the app with FRIDA or another tool.

Quantum Key Distribution Whitepaper

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

(ncsc.gov.uk)
This white paper describes our current position on quantum key distribution (QKD). QKD is an approach to key distribution that relies on the properties of quantum mechanics to provide security.

For all the practical, business and security reasons given above, at this point in time we:

  • do not endorse QKD for any government or military applications
  • advise against replacing any existing public key solutions with QKD for commercial applications

The UK should continue its research and development of QKD systems. But this should be balanced by a growing body of practical QKD vulnerability research, and accompanied by the development of methods for quantifying and validating the security claims of real-world QKD systems. Responsible innovation should be accompanied by independent validation.

Wise words from the NCSC here: while QKD continues to depend upon conventional components that often lack battle-testing, those components may have vulnerabilities. Furthermore, current implementations of quantum cryptography fail to address the bigger and harder problems of authentication and identity – key distribution, while not perfectly solved, is still something that we understand very well… and many real-world attacks target other parts of the process (which QKD does not seek to solve).

Leak in Comic Chameleon (app API hacking)

I recently discovered a minor security vulnerability in mobile webcomic reading app Comic Chameleon, and I thought that it was interesting (and tame) enough to share as a learning example of (a) how to find security vulnerabilities in an app like this, and (b) more importantly, how to write an app like this without this kind of security vulnerability.

The nature of the vulnerability is that, for webcomics pushed directly into the platform by their authors, it’s possible to read comics (long) before they’re published. By way of proof, here’s a copy of the top-right 200 × 120 pixels of episode 54 of the (excellent) Forward Comic, which Imgur will confirm was uploaded on 2 July 2018: over three months ahead of its planned publication date.

Forward Comic 0054, due for publication in October
I’m not going to spoil this comic for you, but if you follow it then when October comes I think you’ll be pleased.

How to hack a web-backed app

Just to be clear, I didn’t set out to hack this app, but once I stumbled upon the vulnerability I wanted to make sure that I was able to collect enough information that I’d be able to explain to its author what was wrong and how to fix it. You’d be amazed how many systems I find security holes in almost-completely by accident. In fact, I’d just noticed that the application supported some webcomics that I follow but for which I hadn’t been able to find RSS feeds (and so I was selfdogfooding my own tool, RSSey, to “produce” RSS feeds for my reader by screen-scraping: not the most-elegant solution). But if this app could produce a list of issues of the comic, it must have some way of doing what I was trying to do, and I wanted to know what it was.

Comic Chameleon running on Android
Comic Chameleon brings a lot of comics into a single slick Android/iOS app. Some of them you’ll even have heard of!

The app, I figured, must “phone home” to some website – probably the app’s official website itself – to get the list of comics that it supports and details of where to get their feeds from, so I grabbed a copy of the app and started investigating. Because I figured I was probably looking for a URL, the first thing I did was to download the raw APK file (your favourite search engine can tell you how to do this), decompressed it (APK files are just ZIP files, really) and ran strings on it to search for likely-looking URLs:

Running strings on the Comic Chameleon APK contents
As predicted, there are several hard-coded addresses. And all over unencrypted HTTP, eww!
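
If you’d rather script it than eyeball strings output, the same hunt is a few lines of Ruby: a sketch using the rubyzip gem, where the filename is simply whatever you saved the APK as:

require 'zip' # gem install rubyzip: APK files are just ZIP files

urls = []
Zip::File.open('comicchameleon.apk') do |apk|
  apk.each do |entry|
    next unless entry.file?
    # Scan each file's raw bytes for printable runs that look like URLs.
    urls.concat(entry.get_input_stream.read.scan(%r{https?://[\x21-\x7e]+}n))
  end
end
puts urls.uniq.sort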

I tried visiting a few of the addresses but many of them seemed to be API endpoints that were expecting additional parameters. Probably, I figured, the strings I’d extracted were prefixes to which those parameters were attached. Rather than fuzz for the right parameters, I decided to watch what the app did: I spun up a simulated Android device using the official emulator (I could have used my own on a wireless network that I control, of course, but this was lazier) and ran my favourite packet sniffer to see what the application was requesting.

Wireshark output showing Comic Chameleon traffic.
The web addresses are even clearer, here, and include all of the parameters I need.

Now I had full web addresses with parameters. Comparing the parameters that appeared when I clicked different comics revealed that each comic in the “full list” was assigned a numeric ID which was used when requesting issues of that comic (along with an intermediate stage where the year of publication is requested).

Comic Chameleon comic list XML
Each comic is assigned an ID number, probably sequentially.

Interestingly, a number of comics were listed with the attribute s="no-show" and did not appear in the app: it looked like comics that weren’t yet being made available via the app were already being indexed and collected by its web component, and for some reason were being exposed via the XML API: presumably the developer had never considered that anybody but their app would look at the XML itself, but the thing about the Web is that if you put it on the Web, anybody can see it.

Still: at this point I assumed that I was about to find what I was looking for – some kind of machine-readable source (an RSS feed or something like one) for a webcomic or two. But when I looked at the XML API for one of those webcomics I discovered quite a bit more than I’d bargained on finding:

no-shows in the episode list produced by the web component of Comic Chameleon
Hey, what’s this? This feed includes titles for webcomics that haven’t been published yet, marked as ‘no-show’…

The first webcomic I looked at included the “official” web addresses and titles of each published comic… but also several not yet published ones. The unpublished ones were marked with s="no-show" to indicate to the app that they weren’t to be shown, but I could now see them. The “official” web addresses didn’t work for me, as I’d expected, but when I tried Comic Chameleon’s versions of the addresses, I found that I could see entire episodes of comics, up to three and a half months ahead of their expected publication date.

Whoops.

Naturally, I compiled all of my findings into an email and contacted the app developer with all of the details they’d need to fix it – in hacker terms, I’m one of the “good guys”! – but I wanted to share this particular example with you because (a) it’s not a very dangerous leak of data (a few webcomics a few weeks early and/or a way to evade a few ads isn’t going to kill anybody) and (b) it’s very illustrative of the kinds of mistakes that app developers are making a lot, these days, and it’s important to understand why so that you’re not among them. On to that in a moment.

Responsible disclosure

Because (I’d like to think) I’m one of the “good guys” in the security world, the first thing I did after the research above was to contact the author of the software. They didn’t seem to have a security.txt file, a disclosure policy, nor a profile on any of the major disclosure management sites, so I sent an email. Were the security issue more-severe, I’d have sent a preliminary email suggesting (and agreeing on a mechanism for) encrypted email, but given the low impact of this particular issue, I just explained the entire issue in the initial email: basically what you’ve read above, plus some tips on fixing the issue and an offer to help out.

"Hacking", apparently
This is what stock photo sites think “hacking” is. Well… this, pages full of green code, or hoodies.

I subscribe to the doctrine of responsible disclosure, which – in the event of more-significant vulnerabilities – means that after first contacting the developer of an insecure system and giving them time to fix it, it’s acceptable (in fact: in significant cases, it’s socially-responsible) to publish the details of the vulnerability. In this case, though, I think the whole experience makes an interesting learning example about ways in which you might begin to “black box” test an app for data leaks like this and – below – how to think about software development in a way that limits the risk of such vulnerabilities appearing in the first place.

The author of this software hasn’t given any answer to any of the emails I’ve sent over the last couple of weeks, so I’m assuming that they just plan to leave this particular leak in place. I reached out and contacted the author of Forward Comic, though, which turns out (coincidentally) to be probably the most-severely affected publication on the platform, so that he had the option of taking action before I published this blog post.

Lessons to learn

When developing an “app” (whether for the web or a desktop or mobile platform) that connects to an Internet service to collect data, here are the important things you really, really ought to do:

  1. Don’t publish any data that you don’t want the user to see.
  2. If the data isn’t for everybody, remember to authenticate the user.
  3. And for heaven’s sake use SSL, it’s not the 1990s any more.
Website message asking visitor to confirm that they're old enough.
It’s a good job that nobody on the Web would ever try to view something easily-available but which they shouldn’t, right? That’s why screens like this have always worked so well.

That first lesson’s the big one of course: if you don’t want something to be on the public Internet, don’t put it on the public Internet! The feeds I found simply shouldn’t have contained the “secret” information that they did, and the unpublished comics shouldn’t have been online at real web addresses. But aside from (or in addition to) not including these unpublished items in the data feeds, what else might our app developer have considered?

  • Encryption. There’s no excuse for not using HTTPS these days. This alone wouldn’t have prevented a deliberate effort to read the secret data, but it would help prevent it from happening accidentally (which is a risk right now), e.g. on a proxy server or while debugging something else on the same network link. It also protects the user from exposing their browsing habits (do you want everybody at that coffee shop to know what weird comics you read?) and from having content ‘injected’ (do you want the person at the next table of the coffee shop to be able to choose what you see when you ask for a comic?)
  • Authentication (app). The app could work harder to prove that it’s genuinely the app when it contacts the website. No mechanism for doing this can ever be perfect, because the user has access to the app and can theoretically reverse-engineer it to fish the entire authentication strategy out of it, but some approaches are better than others. Sending a password (e.g. over Basic Authentication) is barely better than just using a complex web address, but using a client-side certificate or an OTP algorithm would (in conjunction with encryption) foil many attackers.
  • Authentication (user). It’s a very-different model to the one currently used by the app, but requiring users to “sign up” to the service would reduce the risks and provide better mechanisms for tracking/blocking misusers, though the relative anonymity of the Internet doesn’t give this much strength and introduces various additional burdens both technical and legal upon the developer.

Fundamentally, of course, there’s nothing that an app developer can do to perfectly protect the data that is published to that app, because the app runs on a device that the user controls! That’s why the first lesson is the most important: if it shouldn’t be on the public Internet (yet), don’t put it on the public Internet.
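
To make that first lesson concrete, the publication filter belongs on the server, so that unpublished items never appear in the feed at all. A minimal sketch with hypothetical episode data (not Comic Chameleon’s actual code):

require 'time'

Episode = Struct.new(:title, :url, :published_at)

# Hypothetical store: in a real app this would be a database query,
# e.g. WHERE published_at <= NOW(), rather than a client-side
# "no-show" flag that anybody reading the feed can ignore.
episodes = [
  Episode.new('Episode 53', 'https://example.com/comic/53', Time.parse('2018-06-25')),
  Episode.new('Episode 54', 'https://example.com/comic/54', Time.parse('2018-10-16')),
]

visible = episodes.select { |e| e.published_at <= Time.now }
visible.each { |e| puts "#{e.title}: #{e.url}" }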

Hopefully there’s a lesson for you somewhere too: about how to think about app security so that you don’t make a similar mistake, or about some of the ways in which you might test the security of an application (for example, as part of an internal audit), or, if nothing else, that you should go and read Forward, because it’s pretty cool.

Further reading

7 August 2018: I’ve now written a quick explanation about how to intercept HTTPS traffic from Android apps, for those that asked.

Unbreakable smart lock devastated to discover screwdrivers exist

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.


It’s never easy to crack into a market with an innovative new product but makers of the “world’s first smart fingerprint padlock” have made one critical error: they forgot about the existence of screwdrivers.

Tapplock raised $320,000 in 2016 for their product that would allow you to use just your finger to open the “unbreakable” lock. Amazing. Things took a turn for the worse when the ship date of September came and went, and backers complained that the upstart had stopped posting any updates and wasn’t responding to emails or social media posts.

But after months of silence, the startup assured El Reg that everything was still moving forward and the delays were due to “issues with manufacturing in China.”

Fast forward 18 months and finally – finally – the $100 Tapplock is out on the market and it is… well, how do we put this kindly? Somewhat flawed.

No, Panera Bread Doesn’t Take Security Seriously

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

No, Panera Bread Doesn’t Take Security Seriously – PB – Medium (Medium)

tl;dr: In August 2017, I reported a vulnerability to Panera Bread that allowed the full name, home address, email address, food/dietary preferences, username, phone number, birthday and last four digits of a saved credit card to be accessed in bulk for any user that had ever signed up for an account. This includes my own personal data! Despite an explicit acknowledgement of the issue and a promise to fix it, Panera Bread sat on the vulnerability and, as far as I can tell, did nothing about it for eight months. When Brian Krebs publicly broke the news, other news outlets emphasized the usual “We take your security very seriously, security is a top priority for us” prepared statement from Panera Bread. Worse still, the vulnerability was not fixed at all — which means the company either misrepresented its actual security posture to the media to save face or was not competent enough to determine this fact for themselves. This post establishes a canonical timeline so subsequent reporting doesn’t get confused.

After Section 702 Reauthorization

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

After Section 702 Reauthorization - Schneier on Security (schneier.com)

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We’ve just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law. Section 702 was initially passed in 2008, as an…