This white paper describes our current position on quantum key distribution (QKD). QKD is an approach to key distribution that relies on the properties of quantum mechanics to
provide security.
…
For all the practical, business and security reasons given above, at this point in time we:
do not endorse QKD for any government or military applications
advise against replacing any existing public key solutions with QKD for commercial applications
The UK should continue its research and development of QKD systems. But this should be balanced by a growing body of practical QKD vulnerability research, and accompanied by the
development of methods for quantifying and validating the security claims of real-world QKD systems. Responsible innovation should be accompanied by independent validation.
…
Wise words from the NCSC here: while QKD continues to depend upon conventional components that often lack battle-testing, those components may have vulnerabilities. Furthermore, current
implementations of quantum cryptography fail to address the bigger and harder problems of authentication and identity – key distribution, while not perfectly solved, is still
something that we understand very well… and many real-world attacks target other parts of the process (which QKD does not seek to solve).
I recently discovered a minor security vulnerability in mobile webcomic reading app Comic Chameleon, and I thought that it was interesting
(and tame) enough to share as a learning example of (a) how to find security vulnerabilities in an app like this, and (b) more importantly, how to write an app like this
without this kind of security vulnerability.
The nature of the vulnerability is that, for webcomics pushed directly into the platform by their authors, it’s possible to read comics (long) before they’re published. By way
of proof, here’s a copy of the top-right 200 × 120 pixels of episode 54 of the (excellent) Forward
Comic, which Imgur will confirm was uploaded on 2 July 2018: over three months ahead of its planned publication date.
How to hack a web-backed app
Just to be clear, I didn’t set out to hack this app, but once I stumbled upon the vulnerability I wanted to make sure that I was able to collect enough information that I’d be
able to explain to its author what was wrong and how to fix it. You’d be amazed how many systems I find security holes in almost-completely by accident. In fact, I’d just noticed that
the application supported some webcomics that I follow but for which I hadn’t been able to find RSS feeds (and so I was self-dogfooding my own tool, RSSey, to “produce” RSS feeds for my reader by screen-scraping: not the most-elegant solution). But if this app could produce a list of issues of the comic, it
must have some way of doing what I was trying to do, and I wanted to know what it was.
The app, I figured, must “phone home” to some website – probably the app’s official website itself – to get the list of comics that it supports and details of where to get their feeds
from, so I grabbed a copy of the app and started investigating. Because I figured I was probably looking for a URL, the first thing I
did was to download the raw APK file (your favourite search engine can tell you how to do this), decompress it (APK files are just ZIP files, really) and run strings on it to search for likely-looking URLs.
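That step is easy to reproduce in a few lines of Python: an APK is just a ZIP archive, so the sketch below (the filename is a hypothetical stand-in) walks its contents and pulls out anything URL-shaped, much as strings piped through grep would:

```python
import re
import zipfile

# An APK is just a ZIP archive; scan every member's raw bytes for
# printable URL-like strings (the same idea as running `strings`).
URL_PATTERN = re.compile(rb'https?://[\x21-\x7e]+')

def find_urls(apk_path):
    urls = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            for match in URL_PATTERN.finditer(data):
                urls.add(match.group().decode('ascii', 'replace'))
    return sorted(urls)

if __name__ == '__main__':
    # 'comic_chameleon.apk' is a hypothetical filename for a local copy.
    for url in find_urls('comic_chameleon.apk'):
        print(url)
```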
I tried visiting a few of the addresses but many of them seemed to be API endpoints that were expecting additional parameters. Probably, I figured, the strings I’d extracted were
prefixes to which those parameters were attached. Rather than fuzz for the right parameters, I decided to watch what the app did: I spun up a simulated Android device using the official
emulator (I could have used my own on a wireless network that I control, of course, but this was lazier) and ran my favourite packet sniffer to
see what the application was requesting.
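The post doesn’t name the sniffer; if you were to proxy the emulator through mitmproxy (one common choice, not necessarily the author’s), a one-function addon like this sketch would log every address the app requests:

```python
# A mitmproxy addon: run with `mitmproxy -s log_urls.py` and point the
# emulator's HTTP proxy settings at the mitmproxy instance.
def request(flow):
    # pretty_url includes scheme, host, path and the query string,
    # i.e. the full addresses, parameters and all.
    print(flow.request.pretty_url)
```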
Now I had full web addresses with parameters. Comparing the parameters that appeared when I clicked different comics revealed that each comic in the “full list” was assigned a numeric
ID which was used when requesting issues of that comic (along with an intermediate stage where the year of publication is requested).
Interestingly, a number of comics were listed with the attribute s="no-show" and did not appear in the app: it looked like comics that weren’t yet being made available via
the app were already being indexed and collected by its web component, and for some reason were being exposed via the XML
API: presumably the developer had never considered that anybody but their app would look at the XML itself, but the thing about the Web is that if you put it on the Web, anybody can see it.
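That kind of inspection is easy to script. As a sketch – the endpoint and the element/attribute names other than s="no-show" are hypothetical stand-ins for the app’s real API – something like this would surface every “hidden” entry the feed exposes:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint; the app's real API address isn't reproduced here.
API_URL = 'https://example.com/api/comics.xml'

def list_hidden_items(url):
    """Fetch the XML feed and report entries flagged s="no-show"."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    # 'title' and 'url' attributes are illustrative; the point is that
    # the "hidden" flag travels with the data, in plain sight of anyone
    # who asks for the feed.
    for item in root.iter():
        if item.get('s') == 'no-show':
            print('Hidden but published:', item.get('title'), item.get('url'))

if __name__ == '__main__':
    list_hidden_items(API_URL)
```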
Still: at this point I assumed that I was about to find what I was looking for – some kind of machine-readable source (an RSS feed
or something like one) for a webcomic or two. But when I looked at the XML API for one of those webcomics I discovered quite a bit more than I’d bargained on finding:
The first webcomic I looked at included the “official” web addresses and titles of each published comic… but also several not yet published ones. The unpublished ones were
marked with s="no-show" to indicate to the app that they weren’t to be shown, but I could now see them. The “official” web addresses didn’t work for me, as I’d expected,
but when I tried Comic Chameleon’s versions of the addresses, I found that I could see entire episodes of comics, up to three and a half months ahead of their expected
publication date.
Whoops.
Naturally, I compiled all of my findings into an email and contacted the app developer with all of the details they’d need to fix it – in hacker terms, I’m one of the “good guys”! – but
I wanted to share this particular example with you because (a) it’s not a very dangerous leak of data (a few webcomics a few weeks early and/or a way to evade a few ads isn’t
going to kill anybody) and (b) it’s very illustrative of the kinds of mistakes that app developers routinely make these days, and it’s important to understand why
so that you’re not among them. On to that in a moment.
Responsible disclosure
Because (I’d like to think) I’m one of the “good guys” in the security world, the first thing I did after the research above was to contact the author of the software. They didn’t seem
to have a security.txt file, a disclosure policy, or a profile on any of the major disclosure management sites, so I sent an email. Were the
security issue more-severe, I’d have sent a preliminary email suggesting (and agreeing on a mechanism for) encrypted email, but given the low impact of this particular issue, I just
explained the entire issue in the initial email: basically what you’ve read above, plus some tips on fixing the issue and an offer to help out.
I subscribe to the doctrine of responsible disclosure, which – in the event of more-significant vulnerabilities –
means that after first contacting the developer of an insecure system and giving them time to fix it, it’s acceptable (in fact: in significant cases, it’s
socially-responsible) to publish the details of the vulnerability. In this case, though, I think the whole experience makes an interesting learning example about ways in which
you might begin to “black box” test an app for data leaks like this and – below – how to think about software development
in a way that limits the risk of such vulnerabilities appearing in the first place.
The author of this software hasn’t answered any of the emails I’ve sent over the last couple of weeks, so I’m assuming that they just plan to leave this particular leak in
place. I did contact the author of Forward Comic, though, which turns out (coincidentally) to be probably the most-severely affected publication on the platform, so that
he had the option of taking action before I published this blog post.
Lessons to learn
When developing an “app” (whether for the web or a desktop or mobile platform) that connects to an Internet service to collect data, here are the important things you really, really
ought to do:
Don’t publish any data that you don’t want the user to see.
If the data isn’t for everybody, remember to authenticate the user.
And for heaven’s sake use SSL, it’s not the 1990s any more.
That first lesson’s the big one of course: if you don’t want something to be on the public Internet, don’t put it on the public Internet! The feeds I found simply
shouldn’t have contained the “secret” information that they did, and the unpublished comics shouldn’t have been online at real web addresses. But aside from (or in addition to)
not including these unpublished items in the data feeds, what else might our app developer have considered?
Encryption. There’s no excuse for not using HTTPS these days. This alone wouldn’t have prevented a deliberate effort
to read the secret data, but it would help prevent it from happening accidentally (which is a risk right now), e.g. on a proxy server or while debugging something else on the same
network link. It also protects the user from exposing their browsing habits (do you want everybody at that coffee shop to know what weird comics you read?) and from having
content ‘injected’ (do you want the person at the next table of the coffee shop to be able to choose what you see when you ask for a comic?)
Authentication (app). The app could work harder to prove that it’s genuinely the app when it contacts the website. No mechanism for doing this can ever be perfect,
because the user has access to the app and can theoretically reverse-engineer it to fish the entire authentication strategy out of it, but some approaches are better than others.
Sending a password (e.g. over Basic Authentication) is barely better than just using a complex web address, but
using a client-side certificate or an OTP algorithm would (in conjunction
with encryption) foil many attackers; there’s a sketch of the OTP approach after this list.
Authentication (user). It’s a very-different model to the one currently used by the app, but requiring users to “sign up” to the service would reduce the risks and
provide better mechanisms for tracking/blocking misusers, though the relative anonymity of the Internet doesn’t give this much strength and introduces various additional burdens both
technical and legal upon the developer.
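To make the OTP suggestion concrete, here’s a minimal sketch of RFC 6238 TOTP in Python – the shared secret is illustrative, and, as noted above, anything baked into the app can ultimately be reverse-engineered out of it:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """RFC 6238 time-based one-time password from a base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period
    message = struct.pack('>Q', counter)
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative shared secret; both the app and the server would hold it,
# which is exactly why a determined reverse-engineer can still extract it.
print(totp('JBSWY3DPEHPK3PXP'))
```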
Fundamentally, of course, there’s nothing that an app developer can do to perfectly protect the data that is published to that app, because the app runs on a device that the user
controls! That’s why the first lesson is the most important: if it shouldn’t be on the public Internet (yet), don’t put it on the public Internet.
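To make that first lesson concrete: the fix is to filter on the server, before serialization, so that unpublished items never appear in the feed at all. A minimal sketch (hypothetical field names, not the app’s actual code):

```python
from datetime import datetime, timezone

def published_items(all_items):
    """Return only items whose publication date has passed.

    Unpublished episodes never leave the server, so there is no
    client-side "no-show" flag for a curious user to ignore.
    """
    now = datetime.now(timezone.utc)
    return [item for item in all_items if item['published_at'] <= now]

# Hypothetical usage: serialize *this* list into the feed, not the full table.
# feed_xml = render_feed(published_items(load_all_items()))
```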
Hopefully there’s a lesson for you somewhere too: about how to think about app security so that you don’t make a similar mistake, or about some of the ways in which you might test the
security of an application (for example, as part of an internal audit), or, if nothing else, that you should go and read Forward, because it’s
pretty cool.
It’s never easy to crack into a market with an innovative new product but makers of the “world’s first smart fingerprint padlock” have made one critical error: they forgot about the
existence of screwdrivers.
Tapplock raised $320,000 in 2016 for their product that would allow you to use just your finger to open the “unbreakable” lock. Amazing. Things took a turn for the worse when the ship
date of September came and went, and backers complained that the upstart had stopped posting updates and wasn’t responding to emails or social media posts.
But after months of silence, the startup assured El Reg that everything was still moving forward and the delays were due to “issues with manufacturing in China.”
Fast forward 18 months and finally – finally – the $100 Tapplock is out on the market and it is… well, how do we put this kindly? Somewhat flawed.
tl;dr: In August 2017, I reported a vulnerability to Panera Bread that allowed the full name, home address, email address, food/dietary preferences, username, phone
number, birthday and last four digits of a saved credit card to be accessed in bulk for any user that had ever signed up for an account. This
includes my own personal data! Despite an explicit acknowledgement of the issue and a promise to fix it, Panera Bread sat on the vulnerability and,
as far as I can tell, did nothing about it for eight months. When Brian Krebs publicly broke the news, other news outlets emphasized the usual “We take your security very
seriously, security is a top priority for us” prepared statement from Panera Bread. Worse still, the vulnerability was not fixed at
all — which means the company either misrepresented its actual security posture to the media to save face or was not competent enough to
determine this fact for themselves. This post establishes a canonical timeline so subsequent reporting doesn’t get confused.
For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We’ve just lost an important battle. On January 18,
President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law. Section 702 was initially passed in 2008, as an…
I've long been a proponent of Content Security Policies (CSPs). I've used them to fix mixed content warnings on this blog after Disqus made a little mistake, you'll see one adorning
Have I Been Pwned (HIBP) and I even wrote a dedicated Pluralsight course on browser security headers. I’m a…
But it’s not all roses with CSPs and that’s partly due to what browsers will and will not let you do and partly due to what the platforms running our websites will and will not let
you do. For example, this blog runs on Ghost Pro which is a managed SaaS platform. I can upload whatever theme I like, but I can’t control
many aspects of how the platform actually executes, including how it handles response headers, which is how a CSP is normally served by a site. Now I’m enormously supportive of running on managed platforms, but this is one of the
limitations of doing so. I also can’t add custom headers via Cloudflare at “the edge”: I’m serving the HSTS header from there because there’s first-class support for that in the GUI, but there’s no such support for CSP, either
specifically in the GUI or via custom response headers. This will be achievable in the future via Cloudflare Workers but, for now, headers have to come from the origin site.
However, you can add a CSP via meta tag and indeed that’s what I originally did with the upgrade-insecure-requests implementation I mentioned earlier when I fixed
the Disqus issue. However – and this is where we start getting into browser limitations – you can’t use the report-uri directive in a meta tag. Now that doesn’t matter if all the CSP
is doing is upgrading requests, but it matters a lot if you’re actually blocking content. That’s where the real value proposition of a CSP lies, too: in its ability
to block things that may have been maliciously inserted into a site. I’ve had enough experience with breaking the CSP on HIBP to know that reporting is absolutely invaluable and
indeed when I’ve not paid attention to reports in the past, it’s
literally cost me money.
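To illustrate the difference: when you do control the origin, the policy goes out as a response header, where report-uri works. A minimal sketch using Flask (an illustrative choice of framework, not the stack described above):

```python
from flask import Flask

app = Flask(__name__)

# A policy with report-uri has to be delivered as a response header;
# browsers ignore report-uri inside a <meta> tag. (The report endpoint
# below is illustrative.)
CSP = (
    "default-src 'self'; "
    "upgrade-insecure-requests; "
    "report-uri https://example.com/csp-reports"
)

@app.after_request
def add_csp(response):
    response.headers['Content-Security-Policy'] = CSP
    return response
```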
One of the most-popular WordPress plugins is Jetpack, a product of Automattic (best-known for providing the widely-used WordPress hosting service “WordPress.com“). Among Jetpack’s
features (many of which are very good) is Jetpack Protect, which adds – among other things – the possibility for a CAPTCHA to appear on your login pages. This feature is slightly worse than pointless as it makes
it harder for humans to log in but has no significant impact upon automated robots; at best, it provides a false sense of security and merely frustrates and slows down legitimate human
editors.
“Proving your humanity”, as you’re asked to do, is a task that’s significantly easier for a robot to perform than a human. Eventually, of course, all tests of this nature seem likely to fail as robots become smarter than humans
(especially as the most-popular system is specifically geared towards training robots), but that’s hardly an excuse for inventing a system
that was a failure from its inception. Jetpack’s approach is fundamentally flawed because it makes absolutely no effort to disguise the challenge in a way that humans are able to read
any-differently than robots. I’ll demonstrate that in a moment.
A while back, a colleague of mine network-enabled Jetpack Protect across a handful of websites that I occasionally need to log into, and it bugged me that it ‘broke’ my password safe’s
ability to automatically log me in. So to streamline my workflow – as well as to demonstrate quite how broken Jetpack Protect’s CAPTCHA is – I’ve written a userscript that you can install into your web browser that will
completely circumvent it, solving the maths problems on your behalf so that you don’t have to. Here’s how to use it:
Install a userscript manager into your browser if you don’t have one already: I use Tampermonkey, but it ought to work with almost any of them.
Install the userscript itself.
From now on, whenever you go to a page whose web path begins with “/wp-login.php” that contains a Jetpack Protect maths problem, the answer will be automatically calculated and
filled-in on your behalf. The usual userscript rules apply: if you don’t trust me, read the source code (there are really only five lines to check) and disable automatic updates for it
(especially as it operates across all domains), and feel free to adapt/improve however you see fit. Maybe if we can get enough people using it, Automattic will fix this
half-hearted CAPTCHA – or at least give us a switch to disable it in the first
place.
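The userscript itself is JavaScript, as userscripts must be, but the core trick is small enough to illustrate in any language. In Python (the challenge text below is illustrative, not Jetpack’s exact markup), “solving” the CAPTCHA is just a regular expression and a dictionary lookup:

```python
import re

def solve_challenge(text):
    """Pull two operands and an operator out of a plain-text maths
    challenge and compute the answer, exactly as a robot would."""
    match = re.search(r'(\d+)\s*([+\-*])\s*(\d+)', text)
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    return {'+': a + b, '-': a - b, '*': a * b}[op]

# Illustrative challenge text, not Jetpack's exact wording.
print(solve_challenge('What is 9 + 3?'))  # -> 12
```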
Update: 15 October 2018 – the latest version of Jetpack makes an insignificant change to this CAPTCHA; version 1.2 of this script (linked above) works around the change.
Interesting research: “Long-term market implications of data breaches, not,” by Russell Lange and Eric W. Burger. Abstract: This report assesses the impact disclosure of data breaches
has on the total returns and volatility of the affected companies’ stock, with a focus on the results relative to the performance of the firms’ peer industries, as represented…
Turns out you can’t trust the free market to penalise companies whose negligence permits data breaches. I am
Dan’s lack of surprise. This is, of course, why security requires regulation.
It’s been a frantic week of security scares — it seems like every day there’s a new vulnerability. It’s been a real struggle for me personally to pretend like I understand what’s
going on when asked about it by family members.
Seeing people close to me get all flustered at the prospect of being “powned” has really put things in perspective for me.
So, it is with a heavy heart that I’ve decided to come clean and tell you all how I’ve been stealing usernames, passwords and credit card numbers from your sites for the past few
years.
This site maintains a table cross-referencing the most popular “secure” messaging apps (WhatsApp, Signal, Skype etc.) against their security features, so that you can make an informed
decision.
The tl;dr is, of course, what I’ve been saying all along: use Signal! (at least until Riot is more
mature…)
In January 2016, I spent $3,000 to buy 7.4 bitcoins. At the time, it seemed an entirely worthwhile thing to do. I had recently started working as a research director at the Institute
for the Future’s Blockchain Futures Lab, and I wanted firsthand experience with bitcoin, a cryptocurrency that uses a blockchain to record transactions on its network. I had no way of
knowing that this transaction would lead to a white-knuckle scramble to avoid losing a small fortune…
Yesterday, a hacker pulled off the second biggest heist in the history of digital currencies.
Around 12:00 PST, an unknown attacker exploited a critical flaw in the Parity multi-signature wallet on the Ethereum network, draining three massive wallets of over $31,000,000 worth
of Ether in a matter of minutes. Given a couple more hours, the hacker could’ve made off with over $180,000,000 from vulnerable wallets.
Let this pledge be duly noted on the permanent record of the Internet. I don’t know if there’s an afterlife, but I’ll be finding out soon enough, and I plan to go out mad as
hell…
One of the things people often tweet to us @ncsc is examples of websites which prevent you pasting in a password. Why do websites do this? The debate has raged – with most
commentators raging about how annoying it is.
So why do organisations do this? Often no reason is given, but when one is, that reason is ‘security’. The NCSC don’t think the reasons add up. We think that stopping password
pasting (or SPP) is a bad thing that reduces security. We think customers should be allowed to paste their passwords into forms, and that
it improves security…
In early June 2014, accountants at the Lumiere Place Casino in St. Louis noticed that several of their slot machines had—just for a couple of days—gone
haywire. The government-approved software that powers such machines gives the house a fixed mathematical edge, so that casinos can be certain of how much they’ll earn over the long
haul—say, 7.129 cents for every dollar played. But on June 2 and 3, a number of Lumiere’s machines had spit out far more money than they’d consumed, despite not awarding any major
jackpots, an aberration known in industry parlance as a negative hold. Since code isn’t prone to sudden fits of madness, the only plausible explanation was that someone was cheating…