Quantum Computing and Cryptography

This article is a repost promoting content originally published elsewhere.

Quantum computing is a new way of computing — one that could allow humankind to perform computations that are simply impossible using today’s computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would…

A moderately-simple explanation of why symmetric cryptography is probably (or can probably be made, where it’s not) safe from our future quantum computer overlords, but asymmetric (split-key) cryptography probably isn’t. On the journey of developing the theory of computation, are we passing through within our lifetimes the short-but-inevitable bubble during which split-key cryptography is computationally viable? If so, what will our post-split-key cryptographic future look like? Interesting to think about.
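The symmetric-vs-split-key divide comes down to which quantum algorithm applies: Grover's search gives only a quadratic speedup against symmetric ciphers, while Shor's algorithm breaks the factoring and discrete-log problems underpinning split-key cryptography outright. A back-of-envelope sketch of the Grover half (a rough model, not a security proof):

```python
# Grover's algorithm finds a needle among N haystacks in ~sqrt(N) quantum
# steps, so brute-forcing a k-bit symmetric key takes ~2**(k/2) steps
# instead of ~2**k. In effect the key's strength halves, measured in bits -
# which is why "just double your key length" is the usual symmetric fix.

def effective_bits_vs_quantum(key_bits: int) -> float:
    """Effective symmetric strength against a Grover-equipped attacker."""
    return key_bits / 2  # sqrt(2**k) == 2**(k / 2)

# AES-128 drops to a borderline 64-bit work factor...
assert effective_bits_vs_quantum(128) == 64
# ...while AES-256 still leaves a comfortable 128 bits.
assert effective_bits_vs_quantum(256) == 128
```

No such doubling trick rescues split-key cryptography, because Shor's speedup is exponential rather than quadratic: hence the search for entirely new post-quantum algorithms.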

Intermediary Protocols and Google Duplex

There’s a story that young network engineers are sometimes told to help them understand network stacks and/or the OSI model, and it goes something like this:

You overhear a conversation between two scientists on the subject of some topic relevant to their field of interest. But as you listen more-closely, you realise that the scientists aren’t in the same place at all but are talking to one another over the telephone (presumably on speakerphone, given that you can hear them both, I guess). As you pay more attention still, you realise that it isn’t the scientists on the phone call at all but their translators: each scientist speaks to their translator in the scientist’s own language, and the translators are translating what they say into a neutral language shared with the other translator, who translates it into the language spoken by the other scientist. Ultimately, the two scientists are communicating with one another, but they’re doing so via a “stack” at their end which only needs to be conceptually the same as the “stack” at the other end as far up as the step-below-them (the “first link” in their communication, with the translator). Below this point, they’re entrusting the lower protocols (the languages, the telephone system, etc.), in which they have no interest, to handle the nitty-gritty on their behalf.

The OSI model reflected using the "scientists conversation" metaphor. Based upon original art by Yuki Fujimura, used under a Creative Commons License.
The two scientists are able to communicate with one another, but that communication is not direct.

This kind of delegation to shared intermediary protocols is common in networking and telecommunications. The reason relates to opportunity cost, or – for those of you who are Discworld fans – the Sam Vimes’ “Boots” Theory. Obviously an efficiency could be gained here if all scientists learned a lingua franca, a universal shared second language for their purposes… but most-often, we’re looking for a short-term solution to solve a problem today, and the short-term solution is to find a work-around that fits with what we’ve already got: in the case above, that’s translators who share a common language. For any given pair of people communicating, it’s more-efficient to use a translator, even though solving the global problem might be better accomplished by a universal second language (perhaps Esperanto, for valid if Eurocentric reasons!).
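The scientists-and-translators story is, in miniature, how a protocol stack works: each layer wraps the message on the way down, and its peer on the far side unwraps it on the way up, so only adjacent layers ever need to agree on anything. A toy sketch (the layer names are illustrative, not a real networking API):

```python
# A toy model of a layered stack: each layer wraps the payload on the way
# down, and its peer on the receiving side peels the wrappers off in
# reverse order. The endpoints never see - or care about - the layers
# beneath them.

LAYERS = ["scientist", "translator", "telephone"]  # top to bottom

def send(message: str) -> str:
    # Descend the sender's stack, each layer adding its own "header".
    for layer in LAYERS:
        message = f"{layer}({message})"
    return message  # what actually crosses the wire

def receive(signal: str) -> str:
    # Ascend the receiver's stack, unwrapping outermost-first.
    for layer in reversed(LAYERS):
        prefix, suffix = f"{layer}(", ")"
        assert signal.startswith(prefix) and signal.endswith(suffix)
        signal = signal[len(prefix):-len(suffix)]
    return signal

wire = send("the experiment worked")
# wire == "telephone(translator(scientist(the experiment worked)))"
assert receive(wire) == "the experiment worked"
```

The important property is that swapping out any one layer (a different neutral language, a different telephone network) leaves every other layer untouched.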

1950s illustration of "driverless cars of the future". The car follows a series of electronic markers down the middle of the highway.
In the 1950s and 1960s, the concept of a self-driving car was already well-established… but the proposed mechanism for action was quite different to that which we see today.

The phenomenon isn’t limited to communications, though. Consider self-driving cars. If you look back to autonomous vehicle designs of the 1950s (because yes, we’ve been talking about how cool self-driving cars would be for a long, long time), they’re distinctly different from the ideas we see today. Futurism of the 1950s focussed on adapting the roads themselves to make them more-suitable for self-driving vehicles, typically by implanting magnets or electronics into the road surface itself or by installing radio beacons alongside highways to allow the car to understand its position and surroundings. The modern approach, on the other hand, sees self-driving cars use LiDAR and/or digital cameras to survey their surroundings and complex computer hardware to interpret the data.

This difference isn’t just a matter of the available technology (although technological developments certainly inspired the new approach): it’s a fundamentally-different outlook! Early proposals for self-driving cars aimed to overhaul the infrastructure of the road network: a “big solution” on the scale of teaching everybody a shared second language. But nowadays we instead say “let’s leave the roads as they are and teach cars to understand them in the same way that people do.” The “big solution” is too big, too hard, and asking everybody to chip in a little towards outfitting every road with a standardised machine-readable marking is a harder idea to swallow than just asking each person who wants to become an early adopter of self-driving technology to pay a lot to implement a more-complex solution that works on the roads we already have.

LiDAR unit on a Google Self-Driving Car
In real life, these things spin much faster.

This week, Google showed off Duplex, a technology that they claim can perform the same kind of delegated-integration for our existing telephone lives. Let’s ignore for a moment the fact that this is clearly going to be overhyped and focus on the theoretical potential of this technology, which (even if it’s not truly possible today) is probably inevitable as chatbot technology improves: what does this mean for us? Instead of calling up the hairdresser to make an appointment, Google claim, you’ll be able to ask Google Assistant to do it for you. The robot will call the hairdresser and make an appointment on your behalf, presumably being mindful of your availability (which it knows, thanks to your calendar) and travel distance. Effectively, Google Assistant becomes your personal concierge, making all of those boring phone calls so that you don’t have to. Personally, I’d be more than happy to outsource to a computer every time I’ve had to sit in a telephone queue, giving the machine a summary of my query and asking it to start relaying it to the human agent at the other end while I make my way back to the phone. There are obviously ethical considerations here too: I don’t like being hounded by robot callers and so I wouldn’t want to inflict that upon service providers… and I genuinely don’t know if it’s better or worse if they can’t tell whether they’re talking to a machine or not.

Process of having Google Assistant order a pizza, by phone, on your behalf.
I, for one, welcome our pizza-ordering overlords.

But ignoring the technology and the hype and the ethics, there’s still another question that this kind of technology raises for me: what will our society look like when this kind of technology is widely-available? As chatbots become increasingly human-like, smarter, and cheaper, what kinds of ways can we expect to interact with them and with one another? By the time I’m able to ask my digital concierge to order me a pizza (safe in the knowledge that it knows what I like and will ask me if it’s unsure, has my credit card details, and is happy to make decisions about special offers on my behalf where it has a high degree of confidence), we’ll probably already be at a point at which my local takeaway also has a chatbot on-staff, answering queries by Internet and telephone. So in the end, my chatbot will talk to their chatbot… in English… and work it out between the two of them.

Let that sink in for a moment: because we’ve a tendency to solve small problems often rather than big problems rarely and we’ve an affinity for backwards-compatibility, we will probably reach the point within the lifetimes of people alive today that a human might ask a chatbot to call another chatbot: a colossally-inefficient way to exchange information built by instalments on that which came before. If you’re still sceptical that the technology could evolve this way, I’d urge you to take a look at how the technologies underpinning the Internet work and you’ll see that this is exactly the kind of evolution we already see in our communications technology: everything gets stacked on top of a popular existing protocol, even if it’s not-quite the right tool for the job, because it makes one fewer problem to solve today.

Hacky solutions on top of hacky solutions work: the most believable thing about Max Headroom’s appearance in Ready Player One (the book, not the film: the latter presumably couldn’t get the rights to the character) as a digital assistant was the versatility of his conversational interface.

A man and a woman look at a laptop screen in a cafe/bar.
“See? My laptop says we should hook up.”

By the time we’re talking about a “digital concierge” that knows you better than anyone, there’s no reason that it couldn’t be acting on your behalf in other matters. Perhaps in the future your assistant, imbued with intimate knowledge about your needs and interests and empowered to negotiate on your behalf, will be sent out on virtual “dates” with other people’s assistants! If it and the other assistant agree that their owners would probably get along, it’ll suggest that you and the other human meet in the real world. Or you could have your virtual assistant go job-hunting for you, keeping an eye out for positions you might be interested in and applying on your behalf… after contacting the employer to ask the kinds of questions that it anticipates that you’d like to know: about compensation, work/life balance, training and advancement opportunities, or whatever it thinks matter to you.

We quickly find ourselves colliding with ethical questions again, of course: is it okay that those who have access to more-sophisticated digital assistants will have an advantage? Should a robot be required to identify itself as a robot when acting on behalf of a human? I don’t have the answers.

But one thing I think we can say, based on our history of putting hacky solutions atop our existing ways of working and the direction in which digital assistants are headed, is that voice interfaces are going to dominate chatbot development for a while… even where the machines end up talking to one another!

The Web began dying in 2014, here’s how

This article is a repost promoting content originally published elsewhere.

Before the year 2014, there were many people using Google, Facebook, and Amazon. Today, there are still many people using services from those three tech giants (respectively, GOOG, FB, AMZN). Not much has changed, and quite literally the user interface and features on those sites have remained mostly untouched. However, the underlying dynamics of power on the Web have drastically changed, and those three companies are at the center of a fundamental transformation of the Web.

It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.

Internet activity itself hasn’t slowed down. It maintains a steady growth, both in amount of users and amount of websites…

Living In The Future

Eurovision Night 2012.
Eurovision Night 2012. In a moment of surreal awesomeness, Matt R holds a mirror up to the webcam in order to show Gareth the collection of whisky that’s just outside of his field of vision.

Sometimes it’s really like we’re living in the future. Exciting new technologies keep appearing, and people just keep… using them as if they’d always been there. If tomorrow we perfected the jetpack, the flying car, and the silver jumpsuit, I’ll bet that nobody would think twice about it.

Recently, I’ve had two occasions to use Google+ Hangouts, and I’ve been incredibly impressed.

The first was at Eurovision Night 2012, which was quite a while ago now. Adam did a particularly spectacular job of putting together some wonderful pre-Eurovision entertainments, which were synched-up between our two houses. Meanwhile, he and I (and Rory and Gareth and occasionally other people) linked up our webcams and spare screens via a Google+ hangout, and… it worked.

It just worked. Now I know that the technology behind this isn’t new: back in 2004, I upgraded the Troma Night set-up in Aberystwyth to add a second webcam to the Troma Night live feed. But that was one-way, and we didn’t do sound (for lack of bandwidth and concerns about accidental piracy of the soundtracks to the movies we were watching, of all things, rather than for any particularly good reason). But it really did “just work”, and we were able to wave at each other and chat to each other and – mostly – just “share in the moment” of enjoying the Eurovision Song Contest together, just like we would have in person when we lived in the same town.

At the weekend, I was originally supposed to be in Lancashire, hanging out with my family, but owing to a series of unfortunate disasters (by the way; I’m walking with a stick right now – but that’s not interesting enough to be worth blogging about), I was stuck in Oxford. Despite torrential rain where I was, Preston was quite sunny, and my family decided to have a barbeque.

A Google+ Hangout with my family and I.
I join a Google+ hangout at my (late) father’s house, where the rest of my family are having a barbeque.

I was invited… via Google+. They didn’t have Internet access, so they used a mobile dongle plugged into a laptop. I connected in from my desktop computer and then – later – from my mobile phone. So yes, this was at times a genuine mobile-to-mobile multi-party video conference, and it was simple enough that my mother was able to set it up by herself.

Like I said: living in the future.

On This Day in 2040

Looking Forward

On this day in 2040 I first managed to get my Internet Time Portal working. It’s been a long time coming, but my efforts have finally paid off. The trick was just to run The Wayback Machine in reverse, which just required the integration of my flux capacitor with the webserver. Thankfully, Apache 5’s plugin architecture’s made it pretty easy, but I’ve already talked about how time-travel/webserver integration works back in my blog posts at the end of 2039, so I won’t bore you with them all again.

Despite what I said about password security back in 2011, I haven’t actually changed the password for my blog in 28 years, so it was the obvious target for my first reverse-websurfing experiment. That’s why past-me will be surprised to find this article posted to his blog, now that I’ve connected back in time and posted it. And I know he’ll be surprised, because I was.

In fact, it was probably this moment – this surprising moment back in April 2012 – that first made me realise that reverse-chronological web access was possible. That’s why I spent most of the next three decades cracking the secret and finally working out a way to send information back in time through the Internet.

Looking Back

There’s so much potential for this new technology. I’m hoping that soon the technology will evolve to the point where I’ll even be able to use ancient and outdated Internet protocols like “Facebook” (remember that fad?) to actually communicate directly with the people of the early 21st century. Just think of what we can learn from them!

After the second coming of Jesus in 2028 resulted in the deletion of the mysterious “Video R” from the entire Internet, as well as from the minds of everybody on Earth, we’ve only been able to speculate what this mysterious media contained. Whatever it was, it was something so offensive to our Lord and Saviour that He saw fit to wipe it from the face of the Earth… but you can’t help but be curious, can you? Of course, those of you reading this back in 2012 can still see the video, you lucky lucky guys.

The possibilities are limitless. As soon as I’ve finished making this post, I’ll be trying to make contact with the past version of myself and see if past-Dan is capable of looking up this Wikipedia article for me: for some reason I can’t get access to it now, in 2040…

This blog post is part of the On This Day series, in which Dan periodically looks back on years yet to come.

The Back Button

How did I get here?

While lying in bed, unwell and off work, last month, I found myself surfing (on my new phone) to the Wikipedia page on torsion springs. And that’s when I found myself wondering – how did I get here?

Thankfully, there’s always the back button: famously the second most-used bit of your web browser’s user interface. So… how did I come to be reading about torsion springs?

An anniversary clock, using a torsion pendulum, so-named because it only needs winding once a year.
  • I got there from reading about torsion pendulum clocks. My grandmother used to have one of these (an “anniversary clock”, like the one above), and I remember that I used to always enjoy watching the balls spin when I was a child.
  • I’d followed a link from the article about the Atmos clock, a type of torsion pendulum clock that uses minute variations in atmospheric temperature and pressure to power the winder and which, in ideal circumstances, will never need winding.
  • Before that, I’d been reading about the Beverly Clock, a classic timepiece that’s another example of an atmospheric-pressure-clock. It’s been running for almost 150 years despite having never been wound.
  • This was an example of another long-running experiment given on the page about the Oxford Electric Bell, which is perhaps the world’s longest-running scientific experiment. Built in 1840, it uses a pair of electrostatic batteries to continuously ring a bell.
The Oxford Electric Bell experiment. Its batteries have lasted for over 160 years, but I have to charge my mobile most nights: what gives, science?
  • I got to the Oxford Electric Bell from another long-running experiment – the one acknowledged as the world’s longest-running by the Guinness Book of Records – the University of Queensland Pitch Drop Experiment. Running since 1927, this experiment demonstrates that pitch is not solid but a high-viscosity fluid. A sample of room-temperature pitch in a funnel forms a droplet about once a decade.
  • Earlier, I was learning about the difference between the different substances we call tar. Traditionally, tar is derived by baking pine wood and roots into charcoal, and collecting the runoff, but we also use the word “tar” to describe coal tar (a byproduct of coke production) and bitumen (viscous, sticky crude oil).
  • I took the initiative to learn about those differences after reading about the name “Jack Tar“, an Empire-era slang term for a sailor in the Merchant Navy or Royal Navy…
  • …which in turn was linked from the similar article about “Tommy Atkins“, a term for a British infantryman (particularly in the First World War), which has an interesting history…
  • …to which I got from the “Doughboy” article. The Doughboys were members of the American Expeditionary Force during the First World War.
R.U.R. - "Private Robot" - loads an artillery piece.
  • Finally, I got to that first Wikipedia article when, while reading an article on The Paleofuture Blog, I wondered about the etymology of the term “doughboy”, and began this whole link-clicking adventure.

It’s fascinating to work out “how you got here” after an extended exploration of a site like Wikipedia (or TV Tropes, or Changing Minds, or Uncyclopedia – and there goes your weekend…). Thank you, Back Button.
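The back button’s behaviour is, at heart, a stack: each navigation pushes the page you’re leaving, and pressing “back” pops the most recent one off. A minimal sketch, retracing the trail above (page names abbreviated for illustration):

```python
# The back button as a stack: visiting a page pushes the one you're
# leaving; "back" pops the most recently pushed page and returns to it.

class History:
    def __init__(self, start: str):
        self.current = start
        self.back_stack: list[str] = []

    def visit(self, url: str) -> None:
        self.back_stack.append(self.current)  # remember where we came from
        self.current = url

    def back(self) -> str:
        if self.back_stack:  # nothing happens on an empty stack
            self.current = self.back_stack.pop()
        return self.current

h = History("Paleofuture Blog")
for page in ["Doughboy", "Tommy Atkins", "Jack Tar", "Tar",
             "Pitch drop experiment", "Oxford Electric Bell",
             "Beverly Clock", "Atmos clock",
             "Torsion pendulum clock", "Torsion spring"]:
    h.visit(page)

assert h.back() == "Torsion pendulum clock"  # one step back up the trail
```

(Real browsers also keep a forward stack, which pressing “back” pushes onto, but the principle is the same.)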

I just wish I had a Back Button in my head so that I could “wind back” my wandering thought processes. How did I end up thinking about the salt content of airline food, exactly?

New Look, New Protocols

Oh yeah: I changed the look-and-feel of scatmania.org the other week, in case you hadn’t noticed. It’s become a sort-of-traditional January activity for me, these years, to redesign the theme of my blog at this point in the year.

This year’s colours are black, white, greys, and red, and you’ll note also that serifed fonts are centre-stage again, appearing pretty-much-universally throughout the site for the first time since 2004. Yes, I know that it’s heavier and darker than previous versions of the site: but it’s been getting fluffier and lighter year on year for ages, now, and I thought it was time to take a turn. You know: like the economy did.

This site as it appeared in 2010. All those bright blues are far too invigorating: time for some nice, sombre, serious colours.

Aside from other cosmetic changes, it’s also now written using several of the new technologies of HTML5 (I may put the shiny new logo on it, at some point). So apologies to those of you running archaic and non-standards-compliant browsers (I’m looking at you, Internet Explorer 6 users) if it doesn’t look quite right, but really: when your browser is more than half as old as the web itself, it’s time to upgrade.

I’ve also got my site running over IPv6 – the next generation Internet protocol – for those of you who care about those sorts of things. If you don’t know why IPv6 is important and “a big thing”, then here’s a simple explanation. Right now you’re probably viewing the IPv4 version: but if you’re using an IPv6-capable Internet connection, you might be viewing the IPv6 version. You’re not missing out, either way: the site looks identical: but this is just my tiny contribution towards building the Internet of tomorrow.

(if you really want to, you can go to ipv6.scatmania.org to see the IPv6 version – but it’ll only work if your Internet Service Provider is on the ball and has set you up with an IPv6 address!)
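If you’re curious whether a given site (or your own connection) is IPv6-ready, the check is a one-liner in most languages: ask the resolver for IPv6 (AAAA) addresses and see whether any come back. A sketch using Python’s standard library (the hostname is whichever site you want to test):

```python
import socket

def has_ipv6(hostname: str) -> bool:
    """True if the resolver returns any IPv6 (AAAA) addresses for hostname."""
    try:
        return len(socket.getaddrinfo(hostname, 80, socket.AF_INET6)) > 0
    except socket.gaierror:  # no AAAA records, or the name doesn't resolve
        return False

# e.g. has_ipv6("ipv6.scatmania.org") - True only if the name has
# published IPv6 addresses.
```

Note that actually reaching the site over IPv6 additionally requires your ISP to route IPv6 traffic, which is a separate question from whether the name resolves.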

Working From Home

The phone rings. It’s clear to me by the sound it makes and by the image on its display that this is a business call.

“Good morning, SmartData; Dan speaking,” I say.

The caller identifies themselves, and asks to speak to Alex, another SmartData employee. I look to my right to see if Alex is available (presumably if he was, he’d have answered the call before it had been forwarded to me). This is possible because of the two-way webcam feed on the monitor beside me.

“I’m afraid Alex isn’t in yet,” I begin, bringing up my co-worker’s schedule on the screen in front of me, to determine what he’s up to, “He’ll be in at about 10:30 this morning. Can I get him to call you back?”

Not for a second did it occur to the caller that I wasn’t sat right there in the office, looking over at Alex’s chair and a physical calendar. Of course, I’m actually hundreds of miles away, in my study in Oxford. Most of our clients – even those whom I deal with directly – don’t know that I’m no longer based out of SmartData’s marina-side offices. Why would they need to? Just about everything I can do from the office I can do from my own home. Aside from sorting the mail on a morning and taking part in the occasional fire drill, everything I’d regularly do from Aberystwyth I can do from here.


Back when I was young, I remember reading a book once which talked about advances in technology and had wonderful pictures of what life would be like in the future. This wasn’t a dreamland of silver jumpsuits and jetpacks; everything they talked about in this book was rooted in the trends that we were already beginning to see. Published in the early 80s, it predicted a microcomputer in every home and portable communicators that everybody would have, which could be used to send messages or talk to anybody else, all before the 21st century. Give or take, that’s all come to pass. I forget what the title of the book was, but I remember enjoying it as a child because it seemed so believable, so real. I guess it inspired a hopeful futurism in me.

But it also made another prediction: that with this rise in telecommunications technologies and modern microcomputers (remember when we still routinely called them that?), we’d see a great leap in the scope for teleworking: office workers no longer going to a place of work, but remotely “dialling in” to a server farm in a distant telecentre. Later, it predicted, with advances in robotics, specialist workers like surgeons would be able to operate remotely too: eventually, through mechanisation of factories, even manual labourers would begin to be replaced by work-at-home operators sat behind dumb terminals.

To play on a cliché: where’s my damn flying car?

By now, I thought that about a quarter of us would be working from home full-time or most of the time, with many more – especially in my field, where technology comes naturally – working from home occasionally. Instead, what have we got? Somewhere in the region of one in fifty, and that includes the idiots who’ve fallen for the “Make £££ working from home” scams that do the rounds every once in a while and haven’t yet realised that they’re not going to make any £, let alone £££.

At first, I thought that this was due to all of the traditionally-cited reasons: companies that don’t trust their employees, managers who can’t think about results-based assessment rather than presence-based assessment, old-school thinking, and not wanting to be accused of favouritism by allowing some parts of their work force to telework while others can’t. In some parts of the world, and some fields, we’ve actually seen a decrease in teleworking over recent years: what’s all that about?

I’m sure that the concerns listed above are still critical factors for many companies, but I’ve realised that there could be another, more-recent fear that’s now preventing the uptake of teleworking in many companies. That fear is one that affects everybody – both the teleworkers and their comrades in the offices – and it’s something that more and more managers are becoming aware of: the fear of outsourcing.

After all, if a company’s employees can do their work from home, then they can do it from anywhere. With a little extra work on technical infrastructure and a liberal attitude to meetings, the managers can work from anywhere, too. So why stop at working from home? Once you’ve demonstrated that your area of work can be done without coming in to the office, then you’re half-way to demonstrating that it can be done from Mumbai or Chennai, for a fraction of the price… and that’s something that’s a growing fear for many kinds of technical workers in the Western world.

Our offices are a security blanket: we’re clinging on to them because we like to pretend that they’ll protect us; that they’re something special and magical that we can offer our clients that the “New World” call centres and software houses in India and China can’t offer them. I’m not sure that a security blanket that allows us to say “we have a local presence” will mean as much in ten years time as it does today.

In the meantime, I’m still enjoying working from home. It’s a little lonely, sometimes – on days when JTA isn’t around, which are going to become more common when he starts his new job –  but the instant messenger and Internet telephony tools we use make it feel a little like I’m actually in the office, and that’s a pretty good trade-off in exchange for being able to turn up at work in my underwear, if I like.

Digital Sounds For Quiet Cars – I Totally Predicted This

The Economist has a story about a bill going through US Congress about the noise (or lack thereof) made by electric and some hybrid cars. For years, I’ve pretty much predicted this development. Only I meant it in a tongue-in-cheek way.

“Cars are getting quieter and quieter,” I’ve been heard to say, “And electric and hybrid cars promise to be quieter still. I’ll bet that someday, people will realise that these quiet cars are actually more dangerous than traditional, noisy cars with internal combustion engines, and at that point laws will be passed requiring cars to make a noise.”

“There’s already legislation that requires indicators to make a ‘tick-tock’ sound, since we did away with the relays that used to make the sound we associate with indicators. Cheap cars tend to make a shitty-sounding, very-obviously-synthesised sound. So, we can assume that cheap cars in the future will make the cheapest-sounding ‘engine’ sounds. You’ll hear them coming with a uniform ‘brum-brum-brum-brum-brum’ sound, or a grating ‘bzzzzzzzzzzzzzzz’.”

“But cars are more than a means of transport; they’re a status symbol, and we already see people tricking out their wheels with glowy lights and things that make their exhaust pipes louder and fake spoilers. And as time goes on, the technology to make higher-quality synthesised sounds will make it into the cheap, chavvy cars. And what’ll happen when the cheap, chavvy cars get sold, with sophisticated built-in synthesisers? The same thing that happened when the cheap mobile phones became capable of playing sophisticated audio formats: custom ringtones.”

“Someday, within my lifetime, somebody will be run over by a car that sounds like Crazy Frog. And it will be both sad and hilarious in equal measure.”

Something we didn’t see coming a decade ago.