A century ago, one of the world’s first hackers used Morse code insults to disrupt a public demo of Marconi’s wireless telegraph
LATE one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John
Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the
Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles
away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.
Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it
changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked – and this was more than 100 years before the mischief playing out on
the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?
…
An early example of hacking and a great metaphor for what would later become hacker-culture, found in the history of the wireless telegraph.
I’ve generally been pretty defensive of Microsoft Edge, the default web browser in Windows 10. Unlike its much-mocked
predecessor Internet Explorer, Edge is fast, clean, modern, and boasts good standards-compliance: all of the things that
Internet Explorer infamously failed at! I was genuinely surprised to see Edge fail to gain a significant market share in its first few years: it seemed to me
that everyday Windows users installed other browsers (mostly Chrome, which is causing its own problems) specifically because Internet Explorer was
so terrible, and that once their default browser was replaced with something moderately-good this would no longer be the case. But that’s not what’s happened. Maybe it’s because Edge’s
branding is too-reminiscent of its terrible
predecessor or maybe just because Windows users have grown culturally-used to the idea that the first thing they should do on a new PC is download a different browser, but
whatever the reason, Edge is neglected. And for the most part, I’ve argued, that’s a shame.
I ranted at an Edge developer I met at a conference, once, about Edge’s weak TLS debugging tools that couldn’t identify an OCSP stapling issue that only affected Edge, but I thought
that was the worst of its bugs… until now…
But I’ve changed my tune this week after doing some research that demonstrates that a long-standing security issue of Internet Explorer is alive and well in Edge. This particular issue,
billed as a “feature” by Microsoft, is deliberately absent from virtually every other web browser.
About 5 years ago, Steve Gibson observed a special feature of EV (Extended Validation) SSL certificates used on HTTPS websites: that their
extra-special “green bar”/company name feature only appears if the root CA (certificate authority) is in the browser’s built-in trust store for EV certificate signing. That’s
a pretty-cool feature! It means that if you’re on a website where you’d expect to see a “green bar”, like Three Rings, PayPal, or HSBC, then if you don’t see the green bar one day it most-likely means that your
connection is being intercepted in the kind of way I described earlier this year, and everything you see or send including
passwords and credit card numbers could be at risk. This could be malicious software (or nonmalicious software: some antivirus software breaks EV certificates!) or it could be your friendly local
network admin’s middlebox (you trust your IT team, right?), but either way: at least you have a chance of noticing, right?
Firefox, like most browsers, shows the company name in the address bar when valid EV certificates are presented, and hides it when the validity of that certificate is put into
question by e.g. network sniffing tools set up by your IT department.
Browsers requiring that the EV certificate be signed by one of a trusted list of CAs and not allowing that list to be manipulated (short of recompiling the browser from
scratch) is a great feature that – were it properly publicised and supported by good user interface design, which it isn’t – would go a long way to protecting web users from unwanted
surveillance by network administrators working for their employers, Internet service providers, and governments. Great! Except Internet Explorer went and fucked it up. As Gibson
reported, not only does Internet Explorer ignore the rule of not allowing administrators to override the contents of the trusted list but Microsoft even provides a tool to help them do it!
From top to bottom: Internet Explorer 11, Edge 17, Firefox 61, Chrome 68. Only Internet Explorer and Edge show the (illegitimate) certificate for “Barclays PLC”. Sorry, Barclays; I
had to spoof somebody.
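Incidentally, if you want to see the raw material a browser works from when it makes this decision, you can pull a site’s certificate and inspect its policy OIDs: the browser only gives the EV treatment when one of those OIDs matches the EV policy it has recorded for the built-in root that signed the chain. Here’s a quick sketch using Python’s ssl module and the cryptography library; the hostname is just an example of a site that served an EV certificate at the time of writing:

```python
import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

# Fetch a server's leaf certificate and list its certificate-policy OIDs.
# A browser grants the EV "green bar" only if one of these OIDs matches the
# EV policy it has on record for the built-in root that signed the chain,
# i.e. the very list that IE/Edge (unlike other browsers) let admins extend.
hostname = "www.paypal.com"   # example only
pem = ssl.get_server_certificate((hostname, 443))
cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
policies = cert.extensions.get_extension_for_class(x509.CertificatePolicies).value
for policy in policies:
    print(policy.policy_identifier.dotted_string)
```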
I decided to replicate Gibson’s experiment to confirm his results with today’s browsers: I was also interested to see whether Edge had resolved this problem in Internet Explorer. My
full code and configuration can be found here. As is doubtless clear from the title of this post and the
screenshot above, Edge failed the test: it exhibits exactly the same troubling behaviour as Internet Explorer.
Thanks, Microsoft.
I also tried Safari (both on MacOS, above, and iOS, below) and it behaved as the other non-Microsoft browsers do (i.e. arguably more-correctly than IE or Edge).
I shan’t for a moment pretend that our current certification model is without its problems – it’s deeply flawed; more on that in a future post – but that doesn’t give anybody an
excuse to get away with making it worse. When it became apparent that Internet Explorer was affected by the “feature” described above, we all collectively rolled our eyes
because we didn’t expect better of everybody’s least-favourite web browser. But for Edge to inherit this deliberate fault, despite every other browser (even those that share its
certificate store) going in the opposite direction, is just insulting.
Many online accounts allow you to supplement your password with a second form of identification, which can prevent some prevalent attacks. The second factors you can use to identify yourself include authenticator apps on your phone, which generate codes that change every 30 seconds, and
security keys, small pieces of hardware similar in size and shape to USB drives. Since innovations that can actually improve the security of your online accounts are rare, there has
been a great deal of well-deserved enthusiasm for two-factor authentication (as well as for password managers, which make it easy to use a different random password for every one of
your online accounts). These are technologies more people should be using.
However, in trying to persuade users to adopt second factors, advocates sometimes forget to disclose that all security measures have trade-offs. While second factors reduce the
risk of some attacks, they also introduce new risks. One risk is that you could be locked out of your account when you lose your second factor, which may be when you need it the most.
Another is that if you expect second factors to protect you from attacks that they cannot prevent, you may become more vulnerable to those attacks.
Before you require a second factor to log in to your accounts, you should understand the risks, have a recovery plan for when you lose your second factor(s), and know the tricks
attackers may use to defeat two-factor authentication.
…
A well-examined exploration of some of the risks of employing two-factor authentication in your everyday life. I maintain that it’s still highly-worthwhile and everybody should do so,
but it’s important that you know what you need to do in the event that you can’t access your two-factor device (and, ideally, have a backup solution in place): personally, I prefer
TOTP (i.e. app-based) 2FA and I share my generation keys
between my mobile device, my password safe (I’ll write a blog post about why this is controversial but why I think it’s a good idea anyway!), and a console application I wrote
(because selfdogfooding etc.).
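To illustrate why sharing the generation key across multiple devices works at all: TOTP codes are derived purely from the shared secret and the current time, so anything that holds the key can produce the same six digits. Here’s a minimal sketch of the RFC 6238 algorithm (the secret below is a made-up example, not a key for any real account):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_secret: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(base32_secret.upper())
    counter = struct.pack(">Q", int(time.time()) // period)   # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret only
```

Whether the key lives in your phone, your password safe, or a console tool, the code it produces for any given 30-second window is identical: that’s what makes backups of it possible, and exactly why the key itself needs guarding.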
Mostly for my own benefit, as most other guides online are outdated, here’s my set-up for intercepting TLS-encrypted communications from an emulated Android device (in Android Emulator)
using Fiddler. This is useful if you want to debug, audit, reverse-engineer, or evaluate the security of an Android app. I’m using Fiddler
5.0 and Android Studio 2.3.3 (but it should work with newer versions too) to intercept connections from an Android 8 (Oreo) device
using Windows. You can easily adapt this set-up to work with physical devices, and it’s not hard to adapt these instructions for other configurations either.
1. Configure Fiddler
Install Fiddler and run it.
Under Tools > Options > HTTPS, enable “Decrypt HTTPS traffic” and allow a root CA certificate to be created.
Click Actions > Export Root Certificate to Desktop to get a copy of the root CA public key.
On the Connections tab, ensure that “Allow remote computers to connect” is ticked. You’ll need to restart Fiddler after changing this and may be prompted to grant it additional
permissions.
If Fiddler changed your system proxy, you can safely change this back (and it’ll simplify your output if you do because you won’t be logging your system’s connections, just the Android
device’s ones). Fiddler will complain with a banner that reads “The system proxy was changed. Click to reenable capturing.” but you can ignore it.
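Before moving on to the Android side, it’s worth sanity-checking that Fiddler really is intercepting HTTPS by routing any request from the host machine through it. This isn’t part of Fiddler’s own instructions, just a convenience check; 8888 is Fiddler’s default listening port:

```python
import requests

# Route a test request through Fiddler's proxy. verify=False skips CA checks
# here purely so we can confirm the request appears *decrypted* in Fiddler's
# session list; the Android device will trust Fiddler's root CA properly later.
proxies = {"http": "http://127.0.0.1:8888", "https": "http://127.0.0.1:8888"}
response = requests.get("https://example.com/", proxies=proxies, verify=False)
print(response.status_code)
```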
2. Configure your Android device
Install Android Studio. Click Tools > Android > AVD Manager to get a list of virtual devices. If you haven’t created one already, create one: it’s now possible to create Android
devices with Play Store support (look for the icon, as shown above), which means you can easily intercept traffic from third-party applications without doing APK-downloading hacks: this
is great if you plan on working out how a closed-source application works (or what it sends when it “phones home”).
In Android’s Settings > Network & Internet,
disable WiFi. Then, under Mobile Network > Access Point Names > {Default access point, probably T-Mobile} set Proxy to the local IP address of your computer and Port to 8888. Now
all traffic will go over the virtual cellular data connection which uses the proxy server you’ve configured in Fiddler.
Drag the root CA file you exported to your desktop to your virtual Android device. This will automatically copy the file into the virtual device’s “Downloads” folder (if you’re using a
physical device, copy via cable or network). In Settings > Security & Location > Encryption & Credentials > Install from SD Card, use the hamburger menu to get to the Downloads
folder and select the file: you may need to set up a PIN lock on the device to do this. Check under Trusted credentials > User to check that it’s there, if you like.
Test your configuration by visiting an HTTPS website: as you browse on the Android device, you’ll see the (decrypted) traffic appear in Fiddler. This also works with apps other than the
web browser, of course, so if you’re reverse-engineering an API-backed application then encryption doesn’t have to impede you.
3. Not working? (certificate pinning)
A small but increasing number of Android apps implement some variation of built-in key pinning, like HPKP but usually
implemented in the application’s code (which is fine, because most people auto-update their apps). What this does is ensure that the certificate presented by the server is signed by a
certification authority from a trusted list (a trusted list that doesn’t include Fiddler’s CA!). But remember: the app is running on your device, so you’re ultimately in
control – FRIDA’s bypass script “fixed” all of the apps I tried, but if it
doesn’t then I’ve heard good things about Inspeckage‘s “SSL uncheck” action.
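If you’re curious what such a bypass actually does, here’s a stripped-down sketch using Frida’s Python bindings that simply no-ops OkHttp’s CertificatePinner, one of the most common pinning implementations; the package name is a placeholder, and the codeshare script mentioned above covers far more cases than this:

```python
import sys
import frida

PACKAGE = "com.example.app"   # placeholder: the app you're inspecting

# Frida JavaScript that replaces CertificatePinner.check() with a no-op,
# so pinned connections accept Fiddler's certificate like any other.
SCRIPT = """
Java.perform(function () {
    var CertificatePinner = Java.use('okhttp3.CertificatePinner');
    CertificatePinner.check.overload('java.lang.String', 'java.util.List')
        .implementation = function (hostname, peerCertificates) {
            console.log('[*] Pinning check bypassed for ' + hostname);
        };
});
"""

device = frida.get_usb_device()
pid = device.spawn([PACKAGE])         # start the app suspended
session = device.attach(pid)
script = session.create_script(SCRIPT)
script.load()                         # install the hook
device.resume(pid)                    # let the app carry on running
sys.stdin.read()                      # keep the hook alive until EOF
```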
Summary of steps
If you’re using a distinctly different configuration (different OS, physical device, etc.) or this guide has become dated, here are the fundamentals of what you’re aiming to achieve:
Set up a decrypting proxy server (e.g. Fiddler, Charles, Burp, SSLSplit – note that Wireshark isn’t suitable) and export its root certificate.
Import the root certificate into the certificate store of the device to intercept.
Configure the device to connect via the proxy server.
If using an app that implements certificate pinning, “fix” the app with FRIDA or another tool.
This white paper describes our current position on quantum key distribution (QKD). QKD is an approach to key distribution that relies on the properties of quantum mechanics to
provide security.
…
For all the practical, business and security reasons given above, at this point in time we:
do not endorse QKD for any government or military applications
advise against replacing any existing public key solutions with QKD for commercial applications
The UK should continue its research and development of QKD systems. But this should be balanced by a growing body of practical QKD vulnerability research, and accompanied by the
development of methods for quantifying and validating the security claims of real-world QKD systems. Responsible innovation should be accompanied by independent validation.
…
Wise words from the NCSC here: while QKD continues to depend upon conventional components that often lack battle-testing, QKD systems may inherit their vulnerabilities. Furthermore, current
implementations of quantum cryptography fail to address the bigger and harder problems of authentication and identity – key distribution, while not perfectly solved, is still
something that we understand very well… and many real-world attacks target other parts of the process (which QKD does not seek to solve).
I recently discovered a minor security vulnerability in mobile webcomic reading app Comic Chameleon, and I thought that it was interesting
(and tame) enough to share as a learning example of (a) how to find security vulnerabilities in an app like this, and (b) more importantly, how to write an app like this
without this kind of security vulnerability.
The nature of the vulnerability is that, for webcomics pushed directly into the platform by their authors, it’s possible to read comics (long) before they’re published. By way
of proof, here’s a copy of the top-right 200 × 120 pixels of episode 54 of the (excellent) Forward
Comic, which Imgur will confirm was uploaded on 2 July 2018: over three months ahead of its planned publication date.
I’m not going to spoil this comic for you, but if you follow it then when October comes I think you’ll be pleased.
How to hack a web-backed app
Just to be clear, I didn’t set out to hack this app, but once I stumbled upon the vulnerability I wanted to make sure that I was able to collect enough information that I’d be
able to explain to its author what was wrong and how to fix it. You’d be amazed how many systems I find security holes in almost-completely by accident. In fact, I’d just noticed that
the application supported some webcomics that I follow but for which I hadn’t been able to find RSS feeds (and so I was selfdogfooding my own tool, RSSey, to “produce” RSS feeds for my reader by screen-scraping: not the most-elegant solution). But if this app could produce a list of issues of the comic, it
must have some way of doing what I was trying to do, and I wanted to know what it was.
Comic Chameleon brings a lot of comics into a single slick Android/iOS app. Some of them you’ll even have heard of!
The app, I figured, must “phone home” to some website – probably the app’s official website itself – to get the list of comics that it supports and details of where to get their feeds
from, so I grabbed a copy of the app and started investigating. Because I figured I was probably looking for a URL, the first thing I
did was to download the raw APK file (your favourite search engine can tell you how to do this), decompressed it (APK files are just ZIP files, really) and ran strings on it to search for
likely-looking URLs:
As predicted, there are several hard-coded addresses. And all over unencrypted HTTP, eww!
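If you don’t have a Unix strings binary to hand, a few lines of Python will do much the same job; the filename below is just whatever you saved the APK as:

```python
import re
import zipfile

# Roughly equivalent to `unzip` + `strings` + `grep`: scan every file inside
# the APK (which is just a ZIP archive) for anything that looks like a URL.
URL = re.compile(rb"https?://[\x21-\x7e]+")

with zipfile.ZipFile("comic-chameleon.apk") as apk:   # placeholder filename
    for name in apk.namelist():
        for match in URL.finditer(apk.read(name)):
            print(name, match.group().decode("ascii", "replace"))
```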
I tried visiting a few of the addresses but many of them seemed to be API endpoints that were expecting additional parameters. Probably, I figured, the strings I’d extracted were
prefixes to which those parameters were attached. Rather than fuzz for the right parameters, I decided to watch what the app did: I spun up a simulated Android device using the official
emulator (I could have used my own on a wireless network that I control, of course, but this was lazier) and ran my favourite packet sniffer to
see what the application was requesting.
The web addresses are even clearer, here, and include all of the parameters I need.
Now I had full web addresses with parameters. Comparing the parameters that appeared when I clicked different comics revealed that each comic in the “full list” was assigned a numeric
ID which was used when requesting issues of that comic (along with an intermediate stage where the year of publication is requested).
Each comic is assigned an ID number, probably sequentially.
Interestingly, a number of comics were listed with the attribute s="no-show" and did not appear in the app: it looked like comics that weren’t yet being made available via
the app were already being indexed and collected by its web component, and for some reason were being exposed via the XML
API: presumably the developer had never considered that anybody but their app would look at the XML itself, but the thing about the Web is that if you put it on the Web, anybody can see it.
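To make the mistake concrete: anything that can fetch that XML can also read the entries the app has been told to hide. A hypothetical sketch of the idea follows; the endpoint URL and the element/attribute names (other than s="no-show", which is real) are illustrative rather than the app’s actual schema:

```python
import xml.etree.ElementTree as ET

import requests

# Illustrative only: fetch the same feed the app fetches and pick out the
# entries flagged as hidden. The URL and tag names here are made up.
FEED_URL = "http://api.example.com/comics.xml"

root = ET.fromstring(requests.get(FEED_URL).content)
for comic in root.iter("comic"):
    if comic.get("s") == "no-show":
        print("Hidden but exposed:", comic.get("title"))
```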
Still: at this point I assumed that I was about to find what I was looking for – some kind of machine-readable source (an RSS feed
or something like one) for a webcomic or two. But when I looked at the XML API for one of those webcomics I discovered quite a bit more than I’d bargained on finding:
Hey, what’s this? This feed includes titles for webcomics that haven’t been published yet, marked as ‘no-show’…
The first webcomic I looked at included the “official” web addresses and titles of each published comic… but also several not yet published ones. The unpublished ones were
marked with s="no-show" to indicate to the app that they weren’t to be shown, but I could now see them. The “official” web addresses didn’t work for me, as I’d expected,
but when I tried Comic Chameleon’s versions of the addresses, I found that I could see entire episodes of comics, up to three and a half months ahead of their expected
publication date.
Whoops.
Naturally, I compiled all of my findings into an email and contacted the app developer with all of the details they’d need to fix it – in hacker terms, I’m one of the “good guys”! – but
I wanted to share this particular example with you because (a) it’s not a very dangerous leak of data (a few webcomics a few weeks early and/or a way to evade a few ads isn’t
going to kill anybody) and (b) it’s very illustrative of the kinds of mistakes that app developers are making a lot, these days, and it’s important to understand why
so that you’re not among them. On to that in a moment.
Responsible disclosure
Because (I’d like to think) I’m one of the “good guys” in the security world, the first thing I did after the research above was to contact the author of the software. They didn’t seem
to have a security.txt file, a disclosure policy, nor a profile on any of the major disclosure management sites, so I sent an email. Were the
security issue more-severe, I’d have sent a preliminary email suggesting (and agreeing on a mechanism for) encrypted email, but given the low impact of this particular issue, I just
explained the entire issue in the initial email: basically what you’ve read above, plus some tips on fixing the issue and an offer to help out.
This is what stock photo sites think “hacking” is. Well… this, pages full of green code, or hoodies.
I subscribe to the doctrine of responsible disclosure, which – in the event of more-significant vulnerabilities –
means that after first contacting the developer of an insecure system and giving them time to fix it, it’s acceptable (in fact: in significant cases, it’s
socially-responsible) to publish the details of the vulnerability. In this case, though, I think the whole experience makes an interesting learning example about ways in which
you might begin to “black box” test an app for data leaks like this and – below – how to think about software development
in a way that limits the risk of such vulnerabilities appearing in the first place.
The author of this software hasn’t answered any of the emails I’ve sent over the last couple of weeks, so I’m assuming that they just plan to leave this particular leak in
place. I did contact the author of Forward Comic, though, which turns out (coincidentally) to be probably the most-severely affected publication on the platform, so that
he had the option of taking action before I published this blog post.
Lessons to learn
When developing an “app” (whether for the web or a desktop or mobile platform) that connects to an Internet service to collect data, here are the important things you really, really
ought to do:
Don’t publish any data that you don’t want the user to see.
If the data isn’t for everybody, remember to authenticate the user.
And for heaven’s sake use SSL, it’s not the 1990s any more.
It’s a good job that nobody on the Web would ever try to view something easily-available but which they shouldn’t, right? That’s why screens like this have always worked so well.
That first lesson’s the big one of course: if you don’t want something to be on the public Internet, don’t put it on the public Internet! The feeds I found simply
shouldn’t have contained the “secret” information that they did, and the unpublished comics shouldn’t have been online at real web addresses. But aside from (or in addition to)
not including these unpublished items in the data feeds, what else might our app developer have considered?
Encryption. There’s no excuse for not using HTTPS these days. This alone wouldn’t have prevented a deliberate effort
to read the secret data, but it would help prevent it from happening accidentally (which is a risk right now), e.g. on a proxy server or while debugging something else on the same
network link. It also protects the user from exposing their browsing habits (do you want everybody at that coffee shop to know what weird comics you read?) and from having
content ‘injected’ (do you want the person at the next table in the coffee shop to be able to choose what you see when you ask for a comic?)
Authentication (app). The app could work harder to prove that it’s genuinely the app when it contacts the website. No mechanism for doing this can ever be perfect,
because the user has access to the app and can theoretically reverse-engineer it to fish the entire authentication strategy out of it, but some approaches are better than others.
Sending a password (e.g. over Basic Authentication) is barely better than just using a complex web address, but
using a client-side certificate or an OTP algorithm would (in conjunction
with encryption) foil many attackers; there’s a sketch of this kind of approach after this list.
Authentication (user). It’s a very-different model to the one currently used by the app, but requiring users to “sign up” to the service would reduce the risks and
provide better mechanisms for tracking/blocking misusers, though the relative anonymity of the Internet doesn’t give this much strength and introduces various additional burdens both
technical and legal upon the developer.
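To give a flavour of the “authentication (app)” point above, here’s the sort of request-signing one might add. The names and scheme are made up for illustration, and (as noted) any secret that ships inside the app can ultimately be dug out by a determined user, so this raises the bar rather than eliminating the risk:

```python
import hashlib
import hmac
import time

APP_SECRET = b"embedded-app-secret"   # hypothetical key shipped inside the app

def sign_request(path: str, params: dict) -> dict:
    """Attach a timestamp and an HMAC signature to an API request."""
    timestamp = str(int(time.time()))
    canonical = path + "?" + "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    signature = hmac.new(APP_SECRET, (canonical + timestamp).encode(),
                         hashlib.sha256).hexdigest()
    return {**params, "ts": timestamp, "sig": signature}

# The server recomputes the same HMAC and rejects mismatches or stale timestamps.
print(sign_request("/api/issues", {"comic_id": "42", "year": "2018"}))
```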
Fundamentally, of course, there’s nothing that an app developer can do to perfectly protect the data that is published to that app, because the app runs on a device that the user
controls! That’s why the first lesson is the most important: if it shouldn’t be on the public Internet (yet), don’t put it on the public Internet.
Hopefully there’s a lesson for you somewhere too: about how to think about app security so that you don’t make a similar mistake, or about some of the ways in which you might test the
security of an application (for example, as part of an internal audit), or, if nothing else, that you should go and read Forward, because it’s
pretty cool.
It’s never easy to crack into a market with an innovative new product but makers of the “world’s first smart fingerprint padlock” have made one critical error: they forgot about the
existence of screwdrivers.
Tapplock raised $320,000 in 2016 for their product that would allow you to use just your finger to open the “unbreakable” lock. Amazing. Things took a turn for the worse when the ship
date of September came and went, and backers complained that the upstart had stopped posting updates and wasn’t responding to emails or social media posts.
But after months of silence, the startup assured El Reg that everything was still moving forward and the delays were due to “issues with manufacturing in China.”
Fast forward 18 months and finally – finally – the $100 Tapplock is out on the market and it is… well, how do we put this kindly? Somewhat flawed.
tl;dr: In August 2017, I reported a vulnerability to Panera Bread that allowed the full name, home address, email address, food/dietary preferences, username, phone
number, birthday and last four digits of a saved credit card to be accessed in bulk for any user that had ever signed up for an account. This
includes my own personal data! Despite an explicit acknowledgement of the issue and a promise to fix it, Panera Bread sat on the vulnerability and,
as far as I can tell, did nothing about it for eight months. When Brian Krebs publicly broke the news, other news outlets emphasized the usual “We take your security very
seriously, security is a top priority for us” prepared statement from Panera Bread. Worse still, the vulnerability was not fixed at
all — which means the company either misrepresented its actual security posture to the media to save face or was not competent enough to
determine this fact for themselves. This post establishes a canonical timeline so subsequent reporting doesn’t get confused.
For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We’ve just lost an important battle. On January 18,
President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law. Section 702 was initially passed in 2008, as an…
I've long been a proponent of Content Security Policies (CSPs). I've used them to fix mixed content warnings on this blog after Disqus made a little mistake, you'll see one adorning
Have I Been Pwned (HIBP) and I even wrote a dedicated Pluralsight course on browser security headers. I'm a…
But it’s not all roses with CSPs and that’s partly due to what browsers will and will not let you do and partly due to what the platforms running our websites will and will not let
you do. For example, this blog runs on Ghost Pro which is a managed SaaS platform. I can upload whatever theme I like, but I can’t control
many aspects of how the platform actually executes, including how it handles response headers, which is how a CSP is normally served by a site. Now I’m enormously supportive of running on managed platforms, but this is one of the
limitations of doing so. I also can’t add custom headers via Cloudflare at “the edge”; I’m serving the HSTS header from there because there’s first-class support for that in the GUI, but there’s no support for CSP, either
in the GUI or via custom response headers. This will be achievable in the future via Cloudflare Workers but, for now, they have to come from the origin site.
However, you can add a CSP via meta tag and indeed that’s what I originally did with the upgrade-insecure-requests implementation I mentioned earlier when I fixed
the Disqus issue. However – and this is where we start getting into browser limitations – you can’t use the report-uri directive in a meta tag. Now that doesn’t matter if all the CSP
is doing is upgrading requests, but it matters a lot if you’re actually blocking content. That’s where the real value proposition of a CSP lies, too: in its ability
to block things that may have been maliciously inserted into a site. I’ve had enough experience with breaking the CSP on HIBP to know that reporting is absolutely invaluable and
indeed when I’ve not paid attention to reports in the past, it’s
literally cost me money.
One of the most-popular WordPress plugins is Jetpack, a product of Automattic (best-known for providing the widely-used WordPress hosting service “WordPress.com“). Among Jetpack’s
features (many of which are very good) is Jetpack Protect which adds – among other things – the possibility for a CAPTCHA to appear on your login pages. This feature is slightly worse than pointless as it makes
it harder for humans to log in but has no significant impact upon automated robots; at best, it provides a false sense of security and merely frustrates and slows down legitimate human
editors.
Thanks, WordPress, for slowing me down with a CAPTCHA that a robot can solve more-easily than a human.
“Proving your humanity”, as you’re asked to do, is a task that’s significantly easier for a robot to perform than a human. Eventually, of course, all tests of this nature seem likely to fail as robots become smarter than humans
(especially as the most-popular system is specifically geared towards training robots), but that’s hardly an excuse for inventing a system
that was a failure from its inception. Jetpack’s approach is fundamentally flawed because it makes absolutely no effort to disguise the challenge in a way that humans are able to read
any differently from robots. I’ll demonstrate that in a moment.
Don’t just disable this, though! Other “Protect” features make sense. If only you could disable just the one that doesn’t…
A while back, a colleague of mine network-enabled Jetpack Protect across a handful of websites that I occasionally need to log into, and it bugged me that it ‘broke’ my password safe’s
ability to automatically log me in. So to streamline my workflow – as well as to demonstrate quite how broken Jetpack Protect’s CAPTCHA is – I’ve written a userscript that you can install into your web browser that will
completely circumvent it, solving the maths problems on your behalf so that you don’t have to. Here’s how to use it:
Install a userscript manager into your browser if you don’t have one already: I use Tampermonkey, but it ought to work with almost any of
them.
From now on, whenever you go to a page whose web path begins with “/wp-login.php” that contains a Jetpack Protect maths problem, the answer will be automatically calculated and
filled-in on your behalf. The usual userscript rules apply: if you don’t trust me, read the source code (there are really only five lines to check) and disable automatic updates for it
(especially as it operates across all domains), and feel free to adapt/improve however you see fit. Maybe if we can get enough people using it Automattic will fix this
half-hearted CAPTCHA – or at least give us a switch to disable it in the first
place.
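For the avoidance of doubt about how little this CAPTCHA asks of a “robot”, here’s the heart of the trick in a few lines of Python; the userscript does the equivalent in JavaScript in your browser, and the challenge text below is just an example of the format:

```python
import re

def solve_challenge(label: str) -> int:
    """Pull the two numbers and the operator out of the challenge and do the sum."""
    a, op, b = re.search(r"(\d+)\s*([+*-])\s*(\d+)", label).groups()
    return {"+": int(a) + int(b), "-": int(a) - int(b), "*": int(a) * int(b)}[op]

print(solve_challenge("Prove your humanity: 9 + 3 ="))   # prints 12
```

If a regular expression and a dictionary lookup can “prove humanity”, the test isn’t proving anything at all.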
Update: 15 October 2018 – the latest version of Jetpack makes an insignificant change to this CAPTCHA; version 1.2 of this script (linked above) works around the change.
Interesting research: “Long-term market implications of data breaches, not,” by Russell Lange and Eric W. Burger. Abstract: This report assesses the impact disclosure of data breaches
has on the total returns and volatility of the affected companies’ stock, with a focus on the results relative to the performance of the firms’ peer industries, as represented…
Turns out you can’t trust the free market to penalise companies whose negligence permits data breaches. I am
Dan’s lack of surprise. This is, of course, why security requires regulation.
It’s been a frantic week of security scares — it seems like every day there’s a new vulnerability. It’s been a real struggle for me personally to pretend like I understand what’s
going on when asked about it by family members.
Seeing people close to me get all flustered at the prospect of being “powned” has really put things in perspective for me.
So, it is with a heavy heart that I’ve decided to come clean and tell you all how I’ve been stealing usernames, passwords and credit card numbers from your sites for the past few
years.
This site maintains a table cross-referencing the most popular “secure” messaging apps (WhatsApp, Signal, Skype etc.) against their security features, so that you can make an informed
decision.
The tl;dr is, of course, what I’ve been saying all along: use Signal! (at least until Riot is more
mature…)
In January 2016, I spent $3,000 to buy 7.4 bitcoins. At the time, it seemed an entirely worthwhile thing to do. I had recently started working as a research director at the Institute
for the Future’s Blockchain Futures Lab, and I wanted firsthand experience with bitcoin, a cryptocurrency that uses a blockchain to record transactions on its network. I had no way of
knowing that this transaction would lead to a white-knuckle scramble to avoid losing a small fortune…