Thieves didn’t even bother with a London art gallery’s Constable landscape—and they still walked away with $3 million.
This comic is perhaps the best way to enjoy this news story, which describes the theft of £2.4 million during an unusual… let’s call it an “art heist”… in 2018. It has many of the
characteristics of the kind of heist you’re thinking about: the bad guys got the money, and nobody gets to see the art. But there’s a twist: the criminals never came anywhere near the
painting.
This theft was committed entirely in cyberspace: the victim was tricked into wiring the money to pay for the painting into the wrong account. The art buyer claims that he made
the payment in good faith, though, and that he’s not culpable because it must have been the seller’s email that was hacked. Until the dispute is resolved the painting’s not on display:
so not only do the criminals have the cash, but nobody gets to see the art either.
This is part of a series of posts on computer terminology whose popular meaning – determined by surveying my friends – has significantly
diverged from its original/technical one. Read more evolving words…
Anticipatory note: based on the traffic I already get to my blog and the keywords people search for, I imagine that some people will end up here looking to
learn “how to become a hacker”. If that’s your goal, you’re probably already asking the wrong question, but I direct you to Eric S. Raymond’s Guide/FAQ on the subject. Good luck.
Few words have seen such mutation of meaning over their lifetimes as the word “silly”. The earliest references, found in Old English, Proto-Germanic, and Old Norse and presumably having
an original root even earlier, meant “happy”. By the end of the 12th century it meant “pious”; by the end of the 13th, “pitiable” or “weak”; only in the late 16th did it come to mean
“foolish”; and its evolution continues in the present day.
But there’s little so silly as the media-driven evolution of the word “hacker” into something that’s at least a little offensive to those of us who probably would be
described as hackers. Let’s take a look.
Hacker
What people think it means
Computer criminal with access to either knowledge or tools which are (or should be) illegal.
What it originally meant
Expert, creative computer programmer; often politically inclined towards information transparency, egalitarianism, anti-authoritarianism, anarchy, and/or decentralisation of
power.
The Past
The earliest recorded uses of the word “hack” had a meaning that is unchanged to this day: to chop or cut, as you might describe hacking down an unruly bramble. There are clear links
between this and the contemporary definition, “to plod away at a repetitive task”. It’s less certain, however, how the word came to acquire the meaning it took on in the computer labs
of 1960s university campuses (the earliest references seem to come from
around April 1955).
There, the word hacker came to describe computer experts who were developing a culture of:
sharing computer resources and code (even to the extent, in extreme
cases, of breaking into systems to establish more-equal opportunity of access),
learning everything possible about humankind’s new digital frontiers (hacking to learn, not learning to hack)
discovering and advancing the limits of computers: it’s been said that the difference between a non-hacker and a hacker is that a non-hacker asks of a new gadget “what does it do?”,
while a hacker asks “what can I make it do?”
It is absolutely possible for hacking, then, to involve no lawbreaking whatsoever. Plenty of hacking involves writing (and sharing) code, reverse-engineering technology and systems you
own or to which you have legitimate access, and pushing the boundaries of what’s possible in terms of software, art, and human-computer interaction. Even among hackers with a specific
interest in computer security, there’s plenty of scope for the legal pursuit of their interests: penetration testing, security research, defensive security, auditing, vulnerability
assessment, developer education… (I didn’t say cyberwarfare because 90% of its
application is of questionable legality, but it is of course a big growth area.)
So what changed? Hackers got famous, and not for the best reasons. A big tipping point came in the early 1980s when hacking group The
414s broke into a number of high-profile computer systems, mostly by using default passwords that had never been changed. The six teenagers responsible were arrested by the
FBI, but few were charged, and those that were faced only minor offences. This was at least in part because
there weren’t yet solid laws under which to prosecute them, but also because they were cooperative, apologetic, and for the most part hadn’t caused any real harm. Mostly they’d just been
curious about what they could get access to, interested in exploring the systems they’d logged into, and keen to see how long they could remain there undetected. These remain
common motivations for many hackers to this day.
News media, though – after being excited by “hacker” ideas introduced by WarGames – rightly realised that a hacker with the
same elementary resources as these teens but with malicious intent could cause significant real-world damage. Bruce
Schneier argued last year that the danger of this may be higher today than ever before. The press ran news stories strongly associating the word “hacker” with the illegal
activities in which some hackers engage. The release of Neuromancer the following year, coupled
with an increasing awareness of and organisation by hacker groups and a number of arrests on both sides of the Atlantic only fuelled things further. By the end of the decade it was
essentially impossible for a layperson to see the word “hacker” in anything other than a negative light. Counter-arguments like The
Conscience of a Hacker (Hacker’s Manifesto) didn’t reach remotely the same audiences: and even if they had, the points they made remain hard to sympathise with for those outside of
hacker communities.
A lack of understanding about what hackers did and what motivated them made them seem mysterious and otherworldly. People came to make the same assumptions about hackers that
they do about magicians – that their abilities are the result of being privy to tightly-guarded knowledge rather than years of practice – and this
elevated them to a mythical level of threat. By the time that Kevin Mitnick was jailed in the mid-1990s, prosecutors were able to successfully persuade a judge that this “most dangerous
hacker in the world” must be kept in solitary confinement and with no access
to telephones to ensure that he couldn’t, for example, “start a nuclear war by whistling into a pay phone”. Yes, really.
The Future
Every decade’s hackers have debated whether or not the next decade’s have correctly interpreted their idea of “hacker ethics”. For me, Steven Levy’s tenets encompass them best:
Access to computers – and anything which might teach you something about the way the world works – should be unlimited and total.
All information should be free.
Mistrust authority – promote decentralization.
Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
You can create art and beauty on a computer.
Computers can change your life for the better.
Given these concepts as representative of hacker ethics, I’m convinced that hacking remains alive and well today. Hackers continue to be responsible for many of the coolest and
most-important innovations in computing, and are likely to continue to do so. Unlike many other sciences, where progress over the ages has gradually pushed innovators away from
backrooms and garages and into labs to take advantage of increasingly-precise generations of equipment, the tools of computer science are increasingly available to individuals.
More than ever before, bedroom-based hackers are able to get started on their journey with nothing more than a basic laptop or desktop computer and a stack of freely-available
open-source software and documentation. That progress may be threatened by the growth in popularity of easy-to-use (but highly locked-down) tablets and smartphones, but the barrier to
entry is still low enough that most people can pass it, and the new generation of ultra-lightweight computers like the Raspberry Pi are doing
their part to inspire the next generation of hackers, too.
That said, and as much as I personally love and identify with the term “hacker”, the hacker community has never been less in need of this overarching label. The diverse variety
of types of technologist nowadays, coupled with the infiltration of pop culture by geek culture, has inevitably diluted the term, and it’s being replaced by a multitude of others, each
describing a narrow but understandable part of the hacker mindset. You can describe yourself today as a coder, gamer, maker, biohacker, upcycler, cracker, blogger, reverse-engineer, social engineer, unconferencer, or one of dozens of other terms that more-specifically tie you to your
community. You’ll be understood, and you’ll be elegantly sidestepping the implications of criminality associated with the word “hacker”.
The original meaning of “hacker” has also been soiled from within its community: its biggest and perhaps most-famous
advocate‘s insistence upon linguistic prescriptivism came under fire just this year after he pushed for a dogmatic interpretation of the term “sexual assault” in spite of a victim’s experience.
This seems to be absolutely representative of his general attitudes towards sex, consent,
women, and appropriate professional relationships. Perhaps distancing ourselves from the old definition of the word “hacker” can go hand-in-hand with distancing ourselves from some of
the toxicity in the field of computer science?
(I’m aware that I linked at the top of this blog post to the venerable but also-problematic Eric S. Raymond; if anybody can suggest an equivalent resource by another
author I’d love to swap out the link.)
Verdict: The word “hacker” has become so broad in scope that we’ll never be able to rein it back in. It’s tainted by its associations with both criminality, on one
side, and unpleasant individuals on the other, and it’s time to accept that the popular contemporary meaning has won. Let’s find new words to define ourselves, instead.
Can we solve [the problem of supply-chain attacks] by building trustworthy systems out of untrustworthy parts?
It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of
research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are
particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.
Security is a lot harder than reliability. We don’t even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can’t trust and
that are almost certainly being subverted by governments and criminals around the world. Current security technologies are nowhere near good enough, though, to defend against these
increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it’s not going to solve our near-term problems.
…
Schneier provides a great summary of the state of play with nation-state supply-chain attacks, using the Huawei 5G controversy as a jumping-off point but with reference to the fact that
China is far from the only country willing to weaken the security and privacy of the world’s citizens in order to gain an international spying advantage. He goes on to explain what
he sees as the two broad schools of thought in providing technical solutions to this class of problem, and demonstrates that both are for the time being beyond our reach. The
excerpt above comes from his examination of the second school of thought, and it’s a pretty-compelling illustration of why this is a different class of problem from the ones we’ve
already solved in building a reliable Internet.
The reality is that your sensitive data has likely already been stolen, multiple times. Cybercriminals have your credit card information. They have your social
security number and your mother’s maiden name. They have your address and phone number. They obtained the data by hacking any one of the hundreds of companies you entrust with the
data — and you have no visibility into those companies’ security practices, and no recourse when they lose your data.
Given this, your best option is to turn your efforts toward trying to make sure that your data isn’t used against you. Enable two-factor authentication for all important
accounts whenever possible. Don’t reuse passwords for anything important — and get a password manager to remember them all.
Remember “cybersecurity”? Mysterious hooded computer guys doing mysterious hooded computer guy… things! Who knows what kind of naughty digital mischief they might be up to?
Unfortunately, we now live in a world where this kind of digital mischief is literally rewriting the world’s history. For proof of that, you need look no further than…
A good summary of the worst of the commonplace (non-spear) phishing attacks we’re seeing these days, and why 2FA – in addition to a
password manager – is positively, absolutely what you need.
The Five Eyes — the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) — have issued a “Statement of Principles on
Access to Evidence and Encryption” where they claim their needs for surveillance outweigh everyone’s needs for security and privacy. …the increasing use and sophistication of certain…
How many times must security professionals point out that there’s no such thing as a secure backdoor before governments actually listen? If you introduce a weakness into a
cryptographic system to make things easier for the “good guys” – your spies and law enforcement – then either (a) a foreign or enemy power will find the backdoor too, making
everybody less-secure than before, or (b) people will simply use different cryptographic systems: ones which seem less-likely to have been backdoored.
Solving the information black hole is a challenging and important problem of our time. But backdoors surely aren’t the best solution, right?
When prompted to think about the way hackers will shape the future of great power war, we are wont to imagine grand catastrophes: F-35s grounded by onboard computer failures, Aegis
BMD systems failing to launch seconds before Chinese missiles arrive, looks of shock at Space Command as American surveillance satellites start careening towards the Earth–stuff like
that. This is the sort of thing that fills the opening chapters of Peter Singer and August Cole’s Ghost Fleet. [1] The catastrophes
I always imagine, however, are a bit different than this. The hacking campaigns I envision would be low-key, localized, and fairly low-tech. A cyber-ops campaign does not
need to disable key weapon systems to devastate the other side’s war effort. It will be enough to increase the fear and friction enemy leaders face to tip the balance of victory and defeat. Singer and company are
not wrong to draw inspiration from technological change; nor are they wrong to attempt to imagine operations with few historical precedents. But that isn’t my style. When asked to
ponder the shape of cyber-war, my impulse is to look first at the kind of thing hackers are doing today and ask how these tactics might be applied in a time of war.
In a report for
the Center for Strategic and International Studies on how great powers adapt to tactical and strategic surprise, Cancian sketched out twelve “vignettes” of potential technological
or strategic shocks to make his abstract points a bit more concrete. Here is how he imagines an “asymmetric cyber-attack” launched by the PRC against the United States military:
The U.S. secretary of defense had wondered this past week when the other shoe would drop. Finally, it had, though the U.S. military would be unable to respond
effectively for a while.
The scope and detail of the attack, not to mention its sheer audacity, had earned the grudging respect of the secretary. Years of worry about a possible Chinese “Assassin’s Mace” –
a silver bullet super-weapon capable of disabling key parts of the American military – turned out to be focused on the wrong thing.
The cyber attacks varied. Sailors stationed at the 7th Fleet’s homeport in Japan awoke one day to find their financial accounts, and those of their dependents, empty. Checking,
savings, retirement funds: simply gone. The Marines based on Okinawa were under virtual siege by the populace, whose simmering resentment at their presence had boiled over after a
YouTube video posted under the account of a Marine stationed there had gone viral. The video featured a dozen Marines drunkenly gang-raping two teenaged Okinawan girls. The video
was vivid, the girls’ cries heart-wrenching, the cheers of Marines sickening. And all of it fake. The National Security Agency’s initial analysis of the video had uncovered digital
fingerprints showing that it was a computer-assisted lie, and could prove that the Marine’s account under which it had been posted was hacked. But the damage had been done.
There was the commanding officer of Edwards Air Force Base whose Internet browser history had been posted on the squadron’s Facebook page. His command turned on him as a pervert;
his weak protestations that he had not visited most of the posted links could not counter his admission that he had, in fact, trafficked some of them. Lies mixed with the truth.
Soldiers at Fort Sill were at each other’s throats thanks to a series of text messages that allegedly unearthed an adultery ring on base.
The variations elsewhere were endless. Marines suddenly owed hundreds of thousands of dollars on credit lines they had never opened; sailors received death threats on their Twitter
feeds; spouses and female service members had private pictures of themselves plastered across the Internet; older service members received notifications about cancerous conditions
discovered in their latest physical.
Leadership was not exempt. Under the hashtag #PACOMMUSTGO, a dozen women allegedly described harassment by the commander of Pacific Command. Editorial writers demanded that, under
the administration’s “zero tolerance” policy, he step aside while Congress held hearings.
There was not an American service member or dependent whose life had not been digitally turned upside down. In response, the secretary had declared “an operational pause,” directing
units to stand down until things were sorted out.
Then, China had made its move, flooding the South China Sea with its conventional forces, enforcing a sea and air identification zone there, and blockading Taiwan. But the secretary
could only respond weakly with a few air patrols and diversions of ships already at sea. Word was coming in through back channels that the Taiwanese government, suddenly stripped of
its most ardent defender, was already considering capitulation.[2]
A century ago, one of the world’s first hackers used Morse code insults to disrupt a public demo of Marconi’s wireless telegraph
Late one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John
Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the
Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles
away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.
Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it
changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked – and this was more than 100 years before the mischief playing out on
the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?
…
An early example of hacking and a great metaphor for what would later become hacker-culture, found in the history of the wireless telegraph.
I’ve generally been pretty defensive of Microsoft Edge, the default web browser in Windows 10. Unlike its much-mocked
predecessor Internet Explorer, Edge is fast, clean, modern, and boasts good standards-compliance: all of the things that
Internet Explorer infamously failed at! I was genuinely surprised to see Edge fail to gain a significant market share in its first few years: it seemed to me
that everyday Windows users installed other browsers (mostly Chrome, which is causing its own problems) specifically because Internet Explorer was
so terrible, and that once their default browser was replaced with something moderately-good this would no longer be the case. But that’s not what’s happened. Maybe it’s because Edge’s
branding is too-reminiscent of its terrible
predecessor or maybe just because Windows users have grown culturally-used to the idea that the first thing they should do on a new PC is download a different browser, but
whatever the reason, Edge is neglected. And for the most part, I’ve argued, that’s a shame.
But I’ve changed my tune this week after doing some research that demonstrates that a long-standing security issue of Internet Explorer is alive and well in Edge. This particular issue,
billed as a “feature” by Microsoft, is deliberately absent from virtually every other web browser.
About 5 years ago, Steve Gibson observed a special feature of EV (Extended Validation) SSL certificates used on HTTPS websites: that their
extra-special “green bar”/company name feature only appears if the root CA (certificate authority) is in the browser’s default trust store for EV certificate signing. That’s
a pretty-cool feature! It means that if you’re on a website where you’d expect to see a “green bar”, like Three Rings, PayPal, or HSBC, then if you don’t see the green bar one day it most-likely means that your
connection is being intercepted in the kind of way I described earlier this year, and everything you see or send including
passwords and credit card numbers could be at risk. This could be malicious software (or nonmalicious software: some antivirus software breaks EV certificates!) or it could be your friendly local
network admin’s middlebox (you trust your IT team, right?), but either way: at least you have a chance of noticing, right?
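If you like, you can automate that spot-check. Here’s a minimal Python sketch (standard library only; the hostname is just an example) that prints the issuer of the certificate a server presents – if a middlebox starts intercepting your connection, the issuer will stop being the CA you expect:

```python
import socket
import ssl

def certificate_issuer(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the issuer of the server's certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # getpeercert() reports the issuer as a tuple of RDN tuples; flatten it.
    return {key: value for rdn in cert["issuer"] for (key, value) in rdn}

print(certificate_issuer("www.paypal.com"))
```

This won’t tell you whether the certificate is an EV one, but a sudden change of issuer is exactly the kind of red flag the green bar is supposed to wave at you.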
Browsers requiring that the EV certificate be signed by one of a trusted list of CAs and not allowing that list to be manipulated (short of recompiling the browser from
scratch) is a great feature that – were it properly publicised and supported by good user interface design, which it isn’t – would go a long way to protecting web users from unwanted
surveillance by network administrators working for their employers, Internet service providers, and governments. Great! Except Internet Explorer went and fucked it up. As Gibson
reported, not only does Internet Explorer let administrators override the contents of the trusted EV list, but Microsoft even provides a tool to help them do it!
I decided to replicate Gibson’s experiment to confirm his results with today’s browsers: I was also interested to see whether Edge had resolved this problem in Internet Explorer. My
full code and configuration can be found here. As is doubtless clear from the title of this post and the
screenshot above, Edge failed the test: it exhibits exactly the same troubling behaviour as Internet Explorer.
Thanks, Microsoft.
I shan’t for a moment pretend that our current certification model is without its problems – it’s deeply flawed; more on that in a future post – but that doesn’t give anybody an
excuse to get away with making it worse. When it became apparent that Internet Explorer was affected by the “feature” described above, we all collectively rolled our eyes
because we didn’t expect better of everybody’s least-favourite web browser. But for Edge to inherit this deliberate fault, despite every other browser (even those that share its
certificate store) going in the opposite direction, is just insulting.
Mostly for my own benefit, as most other guides online are outdated, here’s my set-up for intercepting TLS-encrypted communications from an emulated Android device (in Android Emulator)
using Fiddler. This is useful if you want to debug, audit, reverse-engineer, or evaluate the security of an Android app. I’m using Fiddler
5.0 and Android Studio 2.3.3 (but it should work with newer versions too) to intercept connections from an Android 8 (Oreo) device
using Windows. You can easily adapt this set-up to work with physical devices, and it’s not hard to adjust these instructions for other configurations either.
1. Configure Fiddler
Install Fiddler and run it.
Under Tools > Options > HTTPS, enable “Decrypt HTTPS traffic” and allow a root CA certificate to be created.
Click Actions > Export Root Certificate to Desktop to get a copy of the root CA public key.
On the Connections tab, ensure that “Allow remote computers to connect” is ticked. You’ll need to restart Fiddler after changing this and may be prompted to grant it additional
permissions.
If Fiddler changed your system proxy, you can safely change this back (and it’ll simplify your output if you do because you won’t be logging your system’s connections, just the Android
device’s ones). Fiddler will complain with a banner that reads “The system proxy was changed. Click to reenable capturing.” but you can ignore it.
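Before you wire up the Android device, it’s worth sanity-checking that decryption works at all. A quick way to do that from the host machine – assuming Python with the requests library, and Fiddler on its default port of 8888 – is to push a request through the proxy yourself; verify=False is tolerable here only because we’re deliberately trusting Fiddler’s man-in-the-middle certificate in a lab setting:

```python
import requests

# Route a request through the local Fiddler instance; if the session shows
# up decrypted in Fiddler's capture list, HTTPS decryption is working.
proxies = {"http": "http://127.0.0.1:8888", "https": "http://127.0.0.1:8888"}
response = requests.get("https://example.com/", proxies=proxies, verify=False)
print(response.status_code)
```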
2. Configure your Android device
Install Android Studio. Click Tools > Android > AVD Manager to get a list of virtual devices. If you haven’t created one already, create one: it’s now possible to create Android
devices with Play Store support (look for the icon, as shown above), which means you can easily intercept traffic from third-party applications without doing APK-downloading hacks: this
is great if you plan on working out how a closed-source application works (or what it sends when it “phones home”).
In Android’s Settings > Network & Internet,
disable WiFi. Then, under Mobile Network > Access Point Names > {Default access point, probably T-Mobile} set Proxy to the local IP address of your computer and Port to 8888. Now
all traffic will go over the virtual cellular data connection which uses the proxy server you’ve configured in Fiddler.
Drag the root CA file you exported to your desktop to your virtual Android device. This will automatically copy the file into the virtual device’s “Downloads” folder (if you’re using a
physical device, copy via cable or network). In Settings > Security & Location > Encryption & Credentials > Install from SD Card, use the hamburger menu to get to the Downloads
folder and select the file: you may need to set up a PIN lock on the device to do this. Look under Trusted credentials > User to confirm that it’s there, if you like.
Test your configuration by visiting an HTTPS website: as you browse on the Android device, you’ll see the (decrypted) traffic appear in Fiddler. This works with apps other than the
web browser too, of course, so if you’re reverse-engineering an API-backed application, encryption doesn’t have to impede you.
3. Not working? (certificate pinning)
A small but increasing number of Android apps implement some variation of built-in key pinning, like HPKP but usually
implemented in the application’s code (which is fine, because most people auto-update their apps). What this does is ensure that the certificate presented by the server is signed by a
certification authority from a trusted list (a trusted list that doesn’t include Fiddler’s CA!). But remember: the app is running on your device, so you’re ultimately in
control – FRIDA’s bypass script “fixed” all of the apps I tried, but if it
doesn’t work for you then I’ve heard good things about Inspeckage‘s “SSL uncheck” action.
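If you’re curious what a bypass like that actually does, here’s a minimal sketch using FRIDA’s Python bindings. It isn’t FRIDA’s published script – just the classic pattern of swapping the TrustManagers passed to SSLContext.init() for a permissive one – and the package name and hook class name are placeholders:

```python
import frida

PACKAGE = "com.example.app"  # placeholder: your target app's package name

# Java-layer hook: whenever the app initialises an SSLContext, replace the
# TrustManagers it supplies with one that accepts any certificate chain.
JS = r"""
Java.perform(function () {
  var X509TrustManager = Java.use('javax.net.ssl.X509TrustManager');
  var Permissive = Java.registerClass({
    name: 'org.example.PermissiveTrustManager',
    implements: [X509TrustManager],
    methods: {
      checkClientTrusted: function (chain, authType) {},
      checkServerTrusted: function (chain, authType) {},
      getAcceptedIssuers: function () { return []; }
    }
  });
  var SSLContext = Java.use('javax.net.ssl.SSLContext');
  var init = SSLContext.init.overload(
    '[Ljavax.net.ssl.KeyManager;',
    '[Ljavax.net.ssl.TrustManager;',
    'java.security.SecureRandom');
  init.implementation = function (km, tm, random) {
    init.call(this, km, [Permissive.$new()], random);
  };
});
"""

device = frida.get_usb_device()   # the emulator appears as a USB device
session = device.attach(PACKAGE)  # the app must already be running
script = session.create_script(JS)
script.load()
input("Hook installed; press Enter to detach.\n")
```

Pinning implemented at a lower level (e.g. in native code) needs a different hook, but a Java-layer approach like this covers a majority of apps.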
Summary of steps
If you’re using a distinctly different configuration (different OS, physical device, etc.) or this guide has become dated, here are the fundamentals of what you’re aiming to achieve:
Set up a decrypting proxy server (e.g. Fiddler, Charles, Burp, SSLSplit – note that Wireshark isn’t suitable) and export its root certificate.
Import the root certificate into the certificate store of the device to intercept.
Configure the device to connect via the proxy server.
If using an app that implements certificate pinning, “fix” the app with FRIDA or another tool.
I recently discovered a minor security vulnerability in mobile webcomic reading app Comic Chameleon, and I thought that it was interesting
(and tame) enough to share as a learning example of (a) how to find security vulnerabilities in an app like this, and (b) more importantly, how to write an app like this
without this kind of security vulnerability.
The nature of the vulnerability is that, for webcomics pushed directly into the platform by their authors, it’s possible to read comics (long) before they’re published. By way
of proof, here’s a copy of the top-right 200 × 120 pixels of episode 54 of the (excellent) Forward
Comic, which Imgur will confirm was uploaded on 2 July 2018: over three months ahead of its planned publication date.
How to hack a web-backed app
Just to be clear, I didn’t set out to hack this app, but once I stumbled upon the vulnerability I wanted to make sure that I was able to collect enough information that I’d be
able to explain to its author what was wrong and how to fix it. You’d be amazed how many systems I find security holes in almost-completely by accident. In fact, I’d just noticed that
the application supported some webcomics that I follow but for which I hadn’t been able to find RSS feeds (and so I was selfdogfooding my own tool, RSSey, to “produce” RSS feeds for my reader by screen-scraping: not the most-elegant solution). But if this app could produce a list of issues of the comic, it
must have some way of doing what I was trying to do, and I wanted to know what it was.
The app, I figured, must “phone home” to some website – probably the app’s official website itself – to get the list of comics that it supports and details of where to get their feeds
from, so I grabbed a copy of the app and started investigating. Because I figured I was probably looking for a URL, the first thing I
did was to download the raw APK file (your favourite search engine can tell you how to do this), decompress it (APK files are just ZIP files, really), and run strings on it to search for
likely-looking URLs:
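If you’d rather not lean on the Unix strings tool, the same search is only a few lines of Python; the APK filename below is a stand-in for whatever you downloaded:

```python
import re
import zipfile

URL_PATTERN = re.compile(rb"https?://[\x21-\x7e]+")

# An APK is just a ZIP archive: scan every member for URL-like strings.
urls = set()
with zipfile.ZipFile("app.apk") as apk:
    for member in apk.namelist():
        urls.update(URL_PATTERN.findall(apk.read(member)))

for url in sorted(urls):
    print(url.decode("ascii", errors="replace"))
```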
I tried visiting a few of the addresses but many of them seemed to be API endpoints that were expecting additional parameters. Probably, I figured, the strings I’d extracted were
prefixes to which those parameters were attached. Rather than fuzz for the right parameters, I decided to watch what the app did: I spun up a simulated Android device using the official
emulator (I could have used my own on a wireless network that I control, of course, but this was lazier) and ran my favourite packet sniffer to
see what the application was requesting.
Now I had full web addresses with parameters. Comparing the parameters that appeared when I clicked different comics revealed that each comic in the “full list” was assigned a numeric
ID which was used when requesting issues of that comic (along with an intermediate stage where the year of publication is requested).
Interestingly, a number of comics were listed with the attribute s="no-show" and did not appear in the app: it looked like comics that weren’t yet being made available via
the app were already being indexed and collected by its web component, and for some reason were being exposed via the XML
API: presumably the developer had never considered that anybody but their app would look at the XML itself, but the thing about the Web is that if you put it on the Web, anybody can see it.
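To illustrate just how visible that “hidden” data is, here’s a sketch of listing the suppressed entries. The endpoint and element names are hypothetical stand-ins – only the s="no-show" attribute is taken from the real feed:

```python
import xml.etree.ElementTree as ET

import requests

# Hypothetical endpoint and schema, for illustration only.
response = requests.get("https://api.example.com/comics.xml")
root = ET.fromstring(response.content)
for comic in root.iter("comic"):
    if comic.get("s") == "no-show":
        print("Indexed but not yet shown in the app:", comic.get("title"))
```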
Still: at this point I assumed that I was about to find what I was looking for – some kind of machine-readable source (an RSS feed
or something like one) for a webcomic or two. But when I looked at the XML API for one of those webcomics I discovered quite a bit more than I’d bargained on finding:
The first webcomic I looked at included the “official” web addresses and titles of each published comic… but also several not yet published ones. The unpublished ones were
marked with s="no-show" to indicate to the app that they weren’t to be shown, but I could now see them. The “official” web addresses didn’t work for me, as I’d expected,
but when I tried Comic Chameleon’s versions of the addresses, I found that I could see entire episodes of comics, up to three and a half months ahead of their expected
publication date.
Whoops.
Naturally, I compiled all of my findings into an email and contacted the app developer with all of the details they’d need to fix it – in hacker terms, I’m one of the “good guys”! – but
I wanted to share this particular example with you because (a) it’s not a very dangerous leak of data (a few webcomics a few weeks early and/or a way to evade a few ads isn’t
going to kill anybody) and (b) it’s very illustrative of the kinds of mistakes that app developers make a lot these days, and it’s important to understand why
so that you’re not among them. On to that in a moment.
Responsible disclosure
Because (I’d like to think) I’m one of the “good guys” in the security world, the first thing I did after the research above was to contact the author of the software. They didn’t seem
to have a security.txt file, a disclosure policy, nor a profile on any of the major disclosure management sites, so I sent an email. Were the
security issue more-severe, I’d have sent a preliminary email suggesting (and agreeing on a mechanism for) encrypted email, but given the low impact of this particular issue, I just
explained the entire issue in the initial email: basically what you’ve read above, plus some tips on fixing the issue and an offer to help out.
I subscribe to the doctrine of responsible disclosure, which – in the event of more-significant vulnerabilities –
means that after first contacting the developer of an insecure system and giving them time to fix it, it’s acceptable (in fact: in significant cases, it’s
socially-responsible) to publish the details of the vulnerability. In this case, though, I think the whole experience makes an interesting learning example about ways in which
you might begin to “black box” test an app for data leaks like this and – below – how to think about software development
in a way that limits the risk of such vulnerabilities appearing in the first place.
The author of this software hasn’t answered any of the emails I’ve sent over the last couple of weeks, so I’m assuming that they just plan to leave this particular leak in
place. I did contact the author of Forward Comic, though, which turns out (coincidentally) to be probably the most-severely affected publication on the platform, so that
he had the option of taking action before I published this blog post.
Lessons to learn
When developing an “app” (whether for the web or a desktop or mobile platform) that connects to an Internet service to collect data, here are the important things you really, really
ought to do:
Don’t publish any data that you don’t want the user to see.
If the data isn’t for everybody, remember to authenticate the user.
And for heaven’s sake use SSL, it’s not the 1990s any more.
That first lesson’s the big one of course: if you don’t want something to be on the public Internet, don’t put it on the public Internet! The feeds I found simply
shouldn’t have contained the “secret” information that they did, and the unpublished comics shouldn’t have been online at real web addresses. But aside from (or in addition to)
not including these unpublished items in the data feeds, what else might our app developer have considered?
Encryption. There’s no excuse for not using HTTPS these days. This alone wouldn’t have prevented a deliberate effort
to read the secret data, but it would help prevent it from happening accidentally (which is a risk right now), e.g. on a proxy server or while debugging something else on the same
network link. It also protects the user from exposing their browsing habits (do you want everybody at that coffee shop to know what weird comics you read?) and from having
content ‘injected’ (do you want the person at the next table of the coffee shop to be able to choose what you see when you ask for a comic?)
Authentication (app). The app could work harder to prove that it’s genuinely the app when it contacts the website. No mechanism for doing this can ever be perfect,
because the user has access to the app and can theoretically reverse-engineer it to fish the entire authentication strategy out of it, but some approaches are better than others.
Sending a password (e.g. over Basic Authentication) is barely better than just using a complex web address, but
using a client-side certificate or an OTP algorithm would (in conjunction
with encryption) foil many attackers: see the sketch after this list.
Authentication (user). It’s a very-different model to the one currently used by the app, but requiring users to “sign up” to the service would reduce the risks and
provide better mechanisms for tracking/blocking misusers, though the relative anonymity of the Internet doesn’t give this much strength and introduces various additional burdens both
technical and legal upon the developer.
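To make the OTP suggestion concrete, here’s a minimal sketch of RFC 6238’s TOTP using only Python’s standard library; the secret here is a demo value, and in a real app it would be baked into (an obfuscated corner of) the client. It’s a sketch of the technique, not a drop-in authentication scheme:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a commonly-used demo secret
```

The server computes the same code and rejects requests that don’t carry a current one, which makes replaying a captured web address much less useful to an attacker.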
Fundamentally, of course, there’s nothing that an app developer can do to perfectly protect the data that is published to that app, because the app runs on a device that the user
controls! That’s why the first lesson is the most important: if it shouldn’t be on the public Internet (yet), don’t put it on the public Internet.
Hopefully there’s a lesson for you somewhere too: about how to think about app security so that you don’t make a similar mistake, or about some of the ways in which you might test the
security of an application (for example, as part of an internal audit), or, if nothing else, that you should go and read Forward, because it’s
pretty cool.
Nowadays, fraudulent online stock-trading schemes are common. But even before the first electric telegraph, two bankers committed the equivalent of modern-day Internet stock fraud.
…
Fabulous article from 1999 about how two bankers in 1837 hacked additional data into the fledgling telegraph system to surreptitiously (and illicitly) send messages to give
them an edge at the stock exchange. Their innovative approach is similar to modern steganographic systems that hide information in headers, metadata, or within the encoding of invisible characters.
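For a toy version of that “invisible characters” trick in Python – the cover text here is invented, and real systems are rather more subtle – you can smuggle the bits of a hidden message in zero-width characters:

```python
ZERO = "\u200b"  # zero-width space       -> bit 0
ONE = "\u200c"   # zero-width non-joiner  -> bit 1

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
    return cover + "".join(ONE if bit == "1" else ZERO for bit in bits)

def reveal(text: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("Nothing to see here.", "sell")
print(stego)          # looks like just the cover text
print(reveal(stego))  # -> "sell"
```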
tl;dr: In August 2017, I reported a vulnerability to Panera Bread that allowed the full name, home address, email address, food/dietary preferences, username, phone
number, birthday and last four digits of a saved credit card to be accessed in bulk for any user that had ever signed up for an account. This
includes my own personal data! Despite an explicit acknowledgement of the issue and a promise to fix it, Panera Bread sat on the vulnerability and,
as far as I can tell, did nothing about it for eight months. When Brian Krebs publicly broke the news, other news outlets emphasized the usual “We take your security very
seriously, security is a top priority for us” prepared statement from Panera Bread. Worse still, the vulnerability was not fixed at
all — which means the company either misrepresented its actual security posture to the media to save face or was not competent enough to
determine this fact for themselves. This post establishes a canonical timeline so subsequent reporting doesn’t get confused.
It’s been a frantic week of security scares — it seems like every day there’s a new vulnerability. It’s been a real struggle for me personally to pretend like I understand what’s
going on when asked about it by family members.
Seeing people close to me get all flustered at the prospect of being “powned” has really put things in perspective for me.
So, it is with a heavy heart that I’ve decided to come clean and tell you all how I’ve been stealing usernames, passwords and credit card numbers from your sites for the past few
years.
In January 2016, I spent $3,000 to buy 7.4 bitcoins. At the time, it seemed an entirely worthwhile thing to do. I had recently started working as a research director at the Institute
for the Future’s Blockchain Futures Lab, and I wanted firsthand experience with bitcoin, a cryptocurrency that uses a blockchain to record transactions on its network. I had no way of
knowing that this transaction would lead to a white-knuckle scramble to avoid losing a small fortune…