An Unusual Workday

Some days, my day job doesn’t seem like a job that a real person would have at all. It seems like something out of a sitcom. Today, I have:

  • Worn a bear mask in the office (panda in my case; seen below alongside my head of department, in a grizzly mask).
    Bears in the office
  • Chatted about popular TV shows that happen to contain libraries, for inclusion in a future podcast series.
  • Experimented with Web-based augmented reality as a possible mechanism for digital exhibition content delivery. (Seen this thing from Google Arts & Culture? If you don’t have an AR-capable device to hand, here’s a video of what it’s like.)
    Virtual Reality at the Bodleian
  • Implemented a proof-of-concept XSS payload targeting a CMS (as a teaching tool, showing how a series of minor security vulnerabilities can cascade into one huge one).
  • Gotten my ‘flu jab.

Not every day is like this. But sometimes, just sometimes, one can be.

20 Years Of Blogging

As of next week, I’ll have been blogging for 20 years, or about 54% of my life. How did that happen?

Castle of the Four Winds in early 1999.
I’d been “blogging” – not that we called it that, yet – since late 1998, but my original collection of content-mangling Perl scripts wasn’t all that. More history…

The mid-1990s were a very different time for the World Wide Web (yes, we still called it that, and sometimes we even described its use as “surfing”). Going “on the Internet” was a calculated and deliberate action requiring tying up your phone line, minutes of “connecting” along with all of the associated screeching sounds if you hadn’t turned off your modem’s loudspeaker, and you’d typically be paying twice for the experience: both a monthly fee to your ISP for the service and a per-minute charge to your phone company for the call.

It was into this environment that in 1994 I published my first web pages: as far as I know, nothing remains of them now. It wasn’t until 1998 that I signed up for an account with UserActive (whose website looks almost the same today as it did then), who offered economical subdomain hosting with shell and CGI support, and launched “Castle of the Four Winds”, a set of vanity pages that included my first blog.

Except I didn’t call it a “blog”, of course, because it wasn’t until the following year that Peter Merholz invented the word (he also commemorated 20 years of blogging, this year). I didn’t even call it a “weblog”, because that word was still relatively new and I wasn’t hip enough to be around people who said it, yet. It was self-described as an “online diary”, a name which only served to reinforce the notion that I was writing principally for myself. In fact, it wasn’t until mid-1999 that I discovered that it was being more-widely read than just by me and my circle of friends when I attracted a stalker who travelled across the UK to try to “surprise” me by turning up at places she expected to find me, based on what I’d written online… which was exactly as creepy as it sounds.

AvAngel.com, my second vanity site, as seen in 2001
AvAngel.com

While the world began to panic that the coming millennium was going to break all of the computers, I migrated Castle of the Four Winds’ content into AvAngel.com, a joint vanity site venture with my friend Andy. Aside from its additional content (purity tests, funny stuff, risqué e-cards), what we hosted was mostly the same old stuff, and I continued to write snippets about my life in what was now quite-clearly a “blog-like” format, with the most-recent posts at the top and separate pages for content too old for the front page. Looking back, there’s still a certain naivety to these posts which exemplifies the youth of the Web. For example, posts routinely referenced my friends by their email addresses, because spam had yet to become a big enough problem for anybody to much mind having their email address on a public webpage somewhere, and because email addresses still carried with them a feeling of anonymity that ceased to be the case when we started using them for important things.

Technologically-speaking, too, this was a simpler time. Neither Javascript nor CSS support was widespread (nor consistently-standardised) enough to rely upon for anything other than the simplest progressive enhancement unless you were willing to “pick a side” in what we’d subsequently call the first browser war and put one of those appalling “best viewed in Internet Explorer” or “best viewed in Netscape Navigator” banners on your site. I’ve always been a believer in a universal web (and my primary browser at the time was Opera, anyway, as it mostly-remained until Opera went wrong in 2013), and I didn’t have the energy to write everything twice, so our cool/dynamic functionality came mostly from back-end (e.g. Perl, PHP) technologies.

Meanwhile, during my initial months as a student in Aberystwyth, I wrote a series of emails to friends back home entitled “Cool And Interesting Thing Of The Day To Do At The University Of Wales, Aberystwyth”, and put copies of each onto my student webspace; I’ve since recovered these and integrated them into my unified blog.

The first version of Scatmania.org.
Scatmania.org

In 2002 I’d bought the domain name scatmania.org – a reference to my university halls of residence nickname “Scatman Dan”; I genuinely didn’t consider the possibility that the name might be considered scatological until later on. As I wanted to continue my blogging at an address that felt like it was solely mine (AvAngel.com having been originally shared with a friend, although in practice over time it became associated only with me), this seemed like a good domain upon which to relaunch. And so, in mid-2003, powered by a short-lived and ill-fated blogging engine called Flip, I did exactly that. WordPress, to which I’d subsequently migrate, hadn’t been invented yet and it wasn’t clear whether its predecessor, b2/cafelog, would survive the troubles its author was experiencing.

From this point on, any web address for any post made to my blog still works to this day, despite multiple technological and infrastructural changes to my blog (and some domain name shenanigans!) in the meantime. I’d come to be a big believer in the mantra that cool URIs don’t change: something that, as far as possible, I’ve committed to trying to uphold in my blogging, my archiving, and my paid work ever since. I’m moderately confident that the extant links on the web that point to my earlier posts are all under my control, so they can be (and in most cases have been) fixed already; I’m pretty close to having all my permalink URIs be “cool”, for now. You might hit a short chain of redirects, but you’ll get to where you’re going.

And everything was fine, until one day in 2004 when it wasn’t. The server hosting scatmania.org died in a very bad way, and because my backup strategy was woefully inadequate, I lost a lot of content. I’ve recovered quite a lot of it and put it back in-place, but some is probably gone forever.

Scatmania.org version 2 – now with actual web design
One of the longest-lived web designs for scatmania.org paid homage to the original, but with more “blue” and a WordPress backing.

The resurrected site was powered by WordPress, and this was the first time that live database queries had been used to power my blog. Occasionally, these days, when talking to younger, cooler developers, I’m tempted to follow the hip trend of reimplementing my blog as a static site, compiling a stack of host-anywhere HTML files based upon whatever-structure-I-like at the “backend”… but then I remember that I basically did that already for six years and I’m far happier with my web presence today. I’ve nothing against static site systems (I’m quite partial to Middleman, myself, although I’m also fond of Hugo) but they’re not right for this site, right now.

IndieAuth hadn’t been invented yet, but I was quite keen on the ideals of OpenID (I still am, really), and so I implemented what was probably the first viable “install-anywhere” implementation of OpenID for WordPress – you can see part of it functioning in the top-right of the screenshot above, where my (copious, at that time) LiveJournal-using friends were encouraged to sign in to my blog using their LiveJournal identity. Nowadays, the majority of the WordPress plugins I use are ones I’ve written myself: my blog is powered by a CMS that’s more “mine” than not!

Scatmania.org in 2006
I no longer have the images that made my 2006 redesign look even remotely attractive, so here it is mocked-up with block colours instead.

Over the course of the first decade of my blogging, a few trends had become apparent in my technical choices. For example:

  • I’ve always self-hosted my blog, rather than relying on a “blog as a service” or siloed social media platform like WordPress.com, Blogger, or LiveJournal.
  • I’ve preferred an approach of storing the “master” copy of my content on my own site and then (sometimes) syndicating it elsewhere: for example, for the benefit of my friends who during their University years maintained a LiveJournal, for many years I had my blog cross-post to a LiveJournal account (and backfeed copies of comments back to my site).
  • I’ve favoured web standards that provided maximum interoperability (e.g. RSS with full content) and longevity (serving HTML pages from permanent URLs, adding “extra” functionality via progressive enhancement so as to ensure that content still functioned e.g. without Javascript, with CSS disabled, or as the specifications evolved, etc.).

These were deliberate choices, but they didn’t require much consideration: growing up with a Web far less-sophisticated than today’s (e.g. truly stateless prior to the advent of HTTP cookies) and seeing the chaos caused during the first browser war and the period of stagnation that followed, these choices seemed intuitive.

(Perhaps it’s not so much of a coincidence that I’ve found myself working at a library: maybe I’ve secretly been a hobbyist archivist all along!)

Third major design reboot of scatmania.org
That body font is plain old Verdana, you know: I’ve always felt that it (plus full justification) was the right choice for this particular design, even though I regret other parts of it (like the brightness!).

As you’d expect from a blog covering a period from somebody’s teen years through to their late thirties, there’ve been significant changes in the kinds of content I’ve posted (and the tone with which I’ve done so) over the years, too. If you dip into 2003, for example, you’ll see the results of quiz memes and unqualified daily minutiae alongside actual considered content. Go back further, to early 1999, and it is (at best) meaningless wittering about the day-to-day life of a teenage student. It took until around 2009/2010 before I actually started focussing on writing content that specifically might be enjoyable for others to read (even where that content was frankly silly) and only far more-recently-still that I’ve committed to the “mostly technical stuff, occasional bits of ‘life’ stuff” focus that I have today.

I say “committed”, but of course I’m fully aware that whatever this blog is now, it’ll doubtless be something somewhat different if I’m still writing it in another two decades…

Graph showing my blog posts per month
2014 may have included my most-prolific month of blogging, but 2003-2005 saw the most-consistent high-volume of content.

Once I reached the 2010s I started actually taking the time to think about the design of my blog and its meaning. Conceptually, all of my content is data-driven: database tables full of different “kinds” of content and associated metadata, and that’s pretty-much ideal – it provides a strong separation between content and presentation and makes it possible to make significant design changes with less work than might otherwise be expected. I’ve also always generally favoured a separation of concerns in web development and so I’m not a fan of CSS design methodologies that encourage class names describing how things should appear, like Atomic CSS. Even where it results in a performance hit, I’d far rather use CSS classes to describe what things are or represent. The single biggest problem with this approach, to my mind, is that it violates the DRY principle… but that’s something that your CSS preprocessor’s there to fix for you, isn’t it?

But despite this philosophical outlook on the appropriate gap between content and presentation, it took until about 2010 before I actually attached any real significance to the presentation at all! Until this point, I’d considered myself to have been more of a back-end than a front-end engineer, and felt that the most-important thing was to get the content out there via an appropriate medium. After all, a site without content isn’t a site at all, but a site without design is (or at least should be) still intelligible thanks to browser defaults! Remember, again, that I started web development at a time when stylesheets didn’t exist at all.

My previous implementations of my blog design had used simple designs, often adapted from open-source templates, in an effort to get them deployed as quickly as possible and move on to the next task, but now, I felt, it was time to do a little more.

Scatmania.org in 2010
My 2010 relaunch put far more focus on the graphical design elements of my blog as well as providing a fully responsive design based on (then-new) CSS media queries. Alongside my focus on separation of concerns in web development, I’m also quite opinionated on the idea that a responsive design has almost always been a superior solution to having a separate “mobile site”.

For a few years, I was producing a new theme once per year. I experimented with different colours, fonts, and layouts, and decided (after some ad-hoc A/B testing) that my audience was better-served by a “front” page than by being dropped directly into my blog archives as had previously been the case. Highlighting the latest few posts – and especially the very latest – along with other recent content increased the number of posts that a visitor would be likely to engage with in a single visit. I’ve always presumed that the reason for this is that regular (but non-subscribing) readers are more-likely to be able to work out what they have and haven’t read already from summary text than from trying to decipher an entire post: possibly because my blogging had (has!) become rather verbose.

Scatmania.org until early 2012
My 2011 design, in hindsight, said more about my mood and state-of-mind at the time than it did about artistic choices: what’s with all the black backgrounds and seriffed fonts? Is this a funeral parlour?

I went through a bit of a lull in blogging: I’ve joked that I spent more time on my 2010 and 2011 designs than I did on the sum total of the content that was published in between the pair of them (which isn’t true… at least, not quite!). In the month I left Aberystwyth for Oxford, for example, I was doing all kinds of exciting and new things… and yet I only wrote a total of two blog posts.

With RSS waning in popularity – which I can’t understand: RSS is amazing! – I began to crosspost to social networks like Twitter and Google+ (although no longer to Google+, following the news of its imminent demise) to help those readers who prefer to get their content via these media, but because I wasn’t producing much content, it probably didn’t make a significant difference anyway: the chance of a regular reader “missing” something must have been remarkably slim.

Scatmania.org in 2012
The 2012 design featured “CSS peekaboo”: a transformation that caused my head to “hide” from you behind the search bar if your cursor got too close. Ruth, I hear, spent far too long playing with just this feature.

Nobody calls me “Scatman Dan” any more, and hasn’t for a long, long time. Given that my name is already awesome and unique all by itself (having changed to be so during the era in which scatmania.org was my primary personal domain name), it felt like I had the opportunity to rebrand.

I moved my blog to a new domain, DanQ.me (which is nice and short, too) and came up with a new collection of colours, fonts, and layout choices that I felt better-reflected my identity… and the fact that my blog was becoming less a place to record the mundane details of my daily life and more a place where I talk about (principally-web) technology, security, and GPS games… and just occasionally about other topics like breadmaking and books. Also, it gave me a chance to get on top of the current trend in web design for big, clean, empty spaces, square corners, and using pictures as the hook to a story.

Second design of DanQ.me, 2016
The second design of my blog after moving to DanQ.me showed-off posts with big pictures, framed by lots of white-space.

I’ve been working harder this last year or two to re-integrate (in a PESOS-like way) into my blog content that I’ve published elsewhere, mostly geocaching logs and geohashing expedition records, and I’ve also done so retroactively, so in addition to my first blog article on the subject of geocaching, you can read my first ever cache log without switching to a different site or relying upon the continued existence and accessibility of that site. I’ve been working at being increasingly mindful of where my content is siloed outside of my control and reclaiming it by hosting it here, on my blog.

Particular areas in which I produce content elsewhere, but would like to at least maintain a copy here – and would ideally publish here first and syndicate elsewhere, though I appreciate that this is difficult – are:

  • GPS games like geocaching and geohashing – I’ve mostly got this under control, but could enjoy streamlining the process or pushing towards POSSE
  • Reddit, where I’ve written tens of thousands of words under a variety of accounts, but I don’t really pay attention to the site any more
  • I left Facebook in 2011 but I still have a backup of what was on my “Wall” at that point, which I could look into reintegrating into my blog
  • I share a lot of the source code I write via my GitHub account, but I’m painfully aware that this is yet-another-silo that I ought to learn not to depend upon (and it ought to be simple enough to mirror my repos on my own site!)
  • I’ve got a reasonable number of videos on two YouTube channels which are online by Google’s good graces (and potential for advertising revenue); for a handful of technical reasons they’re a bit of a pain to self-host, but perhaps my blog could act as a secondary source to my own video content
  • I write business reviews on Google Maps which I should probably look into recovering from the hivemind and hosting here… in fact, I’ve probably written plenty of reviews on other sites, too, like Amazon for example…
  • On two previous occasions I’ve maintained an online photo gallery; I might someday resurrect the concept, at least for the photos that used to be published on them
  • I’ve dabbled before on a handful of other, often weirder, social networks like Scuttlebutt (which has a genius concept, by the way) and Ello, and ought to check whether there’s anything “original” on there that I should reintegrate
  • Going way, way back, there are a good number of usenet postings I’ve made over the last twenty-something years that I could reclaim, if I can find them…

(if you’re asking why I’m inclined to do all of these things: here’s why)

Current iteration of DanQ.me
This looks familiar.

20 years and around 717,000 words worth of blogging down, it’s interesting to look back and see how things have changed: in my life, on the Web, and in the world in general. I’ve seen many friends’ blogs come and go: most often, they move into a new phase of their life, don’t feel like what they wrote before reflects them today, and so they delete them… which is fine, of course: it’s their content! But for me it’s always felt wrong to do so, for two reasons: firstly, it feels false to do so given that once something’s been put on the Web, it might well be online forever – you can’t put the genie back in the bottle! And secondly: for me, it’s valuable to own everything I wrote before. Even the cringeworthy things I wrote as a teenager who thought they knew everything and the antagonistic stuff I wrote in my early 20s that I clearly wouldn’t stand by today are part of my history, and hiding that would be a disservice to myself.

The 17-year-old who wrote my first blog posts two decades ago this month fully expected that the things he wrote would be online forever, and I don’t intend to take that away from him. I’m sure that when I write a post in October 2038 looking back on the next two decades, I’ll roll my eyes at myself today, too, but for me: that’s part of the joy of a long-running personal blog. It’s like a diary, but with a sense of accountability. It’s a space on the web that’s “mine” into which I can dump pretty-much whatever I like.

I love it: I’ve been blogging for over half of my life, and if I can get back to you in 2031 and tell you that I’ve by-then been doing so for two-thirds of my life, that would be a win.


IndieWebCamp Oxford

This weekend, I attended part of Oxford’s first ever IndieWebCamp! As a long (long, long) time proponent of IndieWeb philosophy (since long before anybody said “IndieWeb”, at least) I’ve got my personal web presence pretty-well sorted out. Still, I loved the idea of attending and pushing some of my own tools even further: after all, a personal website isn’t “finished” until its owner says it is! One of the things I ended up hacking on was pretty-predictable: enhancements to my recently-open-sourced geocaching PESOS tools… but the other’s worth sharing too, I think.

Hacking and learning at IndieWebCamp Oxford
Some of IndieWebCamp Oxford’s attendees share knowledge and hack code together.

I’ve recently been playing with WebVR – for my day job at the Bodleian, I swear! – and I was looking for an excuse to try to expand some of what I’d learned into my personal blog, too. Given that I’ve recently acquired a Ricoh Theta V I thought that this’d be the perfect opportunity to add WebVR-powered panoramas to this site. My goals were:

  • Entirely self-hosted; no external third-party dependencies
  • Must degrade gracefully: even if you’re using an older browser or don’t have Javascript enabled, it should at least show the original image
  • In plain-old browsers, should support mouse (or touch) control to pan the scene
  • Where accelerometers are available (e.g. mobiles), “magic window” support to allow twist-to-explore
  • And where “true” VR hardware (Cardboard, Vive, Rift etc.) with WebVR support is available, allow one-click use of that
IndieWebCamp Oxford attendees at the pub
It wouldn’t be a geeky hacky camp thingy if it didn’t finish at a bar.

Hopefully the images above are working for you and are “interactive”. Try click-and-dragging on them (or tilt your device), try fullscreen mode, and/or try WebVR mode if you’ve got hardware that supports it. The mechanism of operation is slightly hacky but pretty simple: here’s how it works:

  1. The image is inserted into the page as normal but with an extra CSS class of “vr360” and a data attribute pointing to the full-resolution image, e.g.:
    <img class="vr360" src="/uploads/2018/09/R0010005_20180922182210-1024x512.jpg" alt="IndieWebCamp Oxford attendees at the pub" width="640" height="320" data-vr360="/uploads/2018/09/R0010005_20180922182210.jpg" />
  2. Some Javascript swaps-out images with this class for an iframe of the same size, showing a special page and passing the image filename after the hash, e.g.:
    for(const vr360 of document.querySelectorAll('.vr360')){
      // Fall back to 100% where a width/height isn't specified (needed because
      // of some quirks with Dan's lazy-loader):
      let width = parseInt(vr360.width);
      let height = parseInt(vr360.height);
      if(!width) width = '100%';
      if(!height) height = '100%';
      vr360.outerHTML = `<iframe src="/q23-content/themes/q18/vr360/#${vr360.dataset.vr360}" width="${width}" height="${height}" class="aligncenter vr360-frame" style="min-width: 340px; min-height: 340px;"></iframe>`;
    }
  3. The iframe page loads this Javascript file. This loads three.js (to make 3D things easy) and WebVR-polyfill (to fix browser quirks). Finally (scroll to the bottom of the code), it creates a camera in the centre of a sphere, loads the image specified in the hash, flips it, and paints it onto the inside surface of the sphere, sets up controls, and turns the user loose on it. That’s all there is to it!
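
If you’re curious what that last step boils down to, here’s a minimal sketch of the camera-inside-a-sphere technique (an illustration only – it assumes three.js and its OrbitControls addon are already loaded, and omits the WebVR-polyfill wiring and VR-mode button that my real script adds):

    // Minimal sketch of the "camera inside a textured sphere" approach described
    // above; not the production code, which also wires in WebVR support.
    const imageUrl = decodeURIComponent(location.hash.substring(1)); // image passed after the #

    const scene    = new THREE.Scene();
    const camera   = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1100);
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    // A big sphere, turned inside-out so that its texture faces the camera:
    const geometry = new THREE.SphereGeometry(500, 60, 40);
    geometry.scale(-1, 1, 1); // the "flip" mentioned above

    const material = new THREE.MeshBasicMaterial({ map: new THREE.TextureLoader().load(imageUrl) });
    scene.add(new THREE.Mesh(geometry, material));

    // Mouse/touch panning; "magic window" and headset modes swap in other controls.
    const controls = new THREE.OrbitControls(camera, renderer.domElement);
    camera.position.set(0, 0, 0.1); // just off-centre, so there's something to orbit around

    (function animate() {
      requestAnimationFrame(animate);
      controls.update();
      renderer.render(scene, camera);
    })();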

You’re welcome to any of my code if you’d like a drop-in approach to hosting panoramic photographs on your own personal site. My solution’s pretty extensible if you want e.g. interactive hotspots or contextual overlays – in fact, that – plus an easy route to editing the content for less-technical users – is pretty-much exactly what I’m working on for my day job at the moment.

Importing Geocaching Logs into WordPress

Background

As an occasional geocacher and geohasher, I’m encouraged to post logs describing my adventures, and each major provider wants me to post my logs into their silo (see e.g. my logs on geocaching.com, on opencache.uk, and on the geohashing wiki). But as a believer in the ideals behind the IndieWeb (since long before anybody said “IndieWeb”), I’m opposed to keeping the only copy of content that I produce in an environment controlled by somebody else (why?).

How do I reconcile this?

Wrist-mounted GPS in the snow.
Just another hundred metres to the cache, then it’s time to freeze my ass back to base.

What I’d prefer would be to be able to write my logs here, on my own blog, and for my content to be syndicated via some process into the logging systems of the various silo sites I prefer. This approach is called POSSE – Publish on Own Site, Syndicate Elsewhere. In addition to the widely-described benefits of this syndication strategy, such a system would also make it possible for me to:

  • write single posts amalgamating multiple locations (e.g. a geohashing expedition that included geocache finds) or,
  • write single posts that represent the same location published on multiple silos (e.g. a visit to a geocache published on two different listing sites [e.g. 1, 2])

Applying such a tool would require some work, as different silos have different acceptable content rules (geocaching.com, for example, effectively forbids mention of the existence of other geocache listing sites), but that’d theoretically be workable.

POSSE would involve posts being made first to my blog and then converted via some process into logs in each relevant silo
The ideal solution would be POSSE-based.

Unfortunately, content rules aren’t the only factor making PESOS – writing content into each silo and then copying it to my blog – preferable to POSSE. There’s also:

  • Not all of the silos offer suitable (published) APIs, and where they do, the APIs are all distinctly different.
  • Geocaching.com specifically forbids the use of unapproved automated robots to access the site (and almost certainly wouldn’t approve the kind of tool that would be ideal).
  • The siloed services are well-supported by official and third-party apps with medium-specific logic which make them the best existing way to produce logs.
PESOS would mean that posts were made "the usual way" to the silos and then a process duplicates them onto my blog
A PESOS-based solution is far easier to implement, in this case.

Needless to say: as much as I’d have loved to POSSE my geo* logs, PESOS will do.

Implementation

My implementation is a WordPress plugin which does two things. The first is that it provides a Javascript bookmarklet and an accompanying dynamically-generated Javascript file (the former loads the latter) served from my blog’s domain. That Javascript file contains a reference to every log already published to my blog, so that the Javascript code can deliberately omit these logs from any import. When executed on a log listing page like those linked above, it copies all of the details of that log into a form which submits them back to my blog, where it’s received by the second part of the plugin.
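
The bookmarklet half of that pattern is simple enough to be worth illustrating; here’s a rough sketch (collapsed to a single line when actually installed as a bookmarklet – and note that the URL below is made up for the example, not the plugin’s real endpoint):

    javascript:(function(){
      var s = document.createElement('script');
      /* Cache-bust so the dynamically-generated file – which knows which logs
         have already been imported – is always fetched fresh. Hypothetical URL. */
      s.src = 'https://blog.example.com/wp-content/plugins/geo-importer/import.js?t=' + Date.now();
      document.body.appendChild(s);
    })();

The injected script then scrapes the log’s details out of the listing page’s DOM and submits them back to the blog for the plugin’s second half to handle.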

Geocache logs to WordPress importer seen running on geocaching.com
The import controls appear in a new, right-most column (GCVote is also visible running in my browser).

The second part of the plugin takes this data and creates a new draft post. My plugin is pretty opinionated on this part because it’s geared strongly towards my use-case, so if you want to use it yourself you’ll probably want to tweak the code a little (e.g. it applies specific tags and names metadata fields a particular way).

Import plugin running on OpenCache.uk
When run on OpenCache.uk effectively the same interface is presented, even though the underlying mechanisms and data locations are different.

It’s not fully-automated and it’s not POSSE, but it’s “good enough” and it’s enabled me to synchronise all of my cache logs to my blog. I’ve plans to extend it to support other GPS game services to streamline my de-siloisation even further.

And of course, I’ve open-sourced the whole thing. If it’s any use to you (probably in an adapted form), it’s all yours.


How Edge Follows In IE’s Security Failings

I’ve generally been pretty defensive of Microsoft Edge, the default web browser in Windows 10. Unlike its much-mocked predecessor Internet Explorer, Edge is fast, clean, modern, and boasts good standards-compliance: all of the things that Internet Explorer infamously failed at! I was genuinely surprised to see Edge fail to gain a significant market share in its first few years: it seemed to me that everyday Windows users installed other browsers (mostly Chrome, which is causing its own problems) specifically because Internet Explorer was so terrible, and that once their default browser was replaced with something moderately-good this would no longer be the case. But that’s not what’s happened. Maybe it’s because Edge’s branding is too-reminiscent of its terrible predecessor or maybe just because Windows users have grown culturally-used to the idea that the first thing they should do on a new PC is download a different browser, but whatever the reason, Edge is neglected. And for the most part, I’ve argued, that’s a shame.

Edge's minimalistic Certificate View.
I ranted at an Edge developer I met at a conference, once, about Edge’s weak TLS debugging tools that couldn’t identify an OCSP stapling issue that only affected Edge, but I thought that was the worst of its bugs… until now…

But I’ve changed my tune this week after doing some research that demonstrates that a long-standing security issue of Internet Explorer is alive and well in Edge. This particular issue, billed as a “feature” by Microsoft, is deliberately absent from virtually every other web browser.

About 5 years ago, Steve Gibson observed a special feature of EV (Extended Validation) SSL certificates used on HTTPS websites: that their extra-special “green bar”/company name feature only appears if the root CA (certificate authority) is among the browser’s default trust store for EV certificate signing. That’s a pretty-cool feature! It means that if you’re on a website where you’d expect to see a “green bar”, like Three Rings, PayPal, or HSBC, then if you don’t see the green bar one day it most-likely means that your connection is being intercepted in the kind of way I described earlier this year, and everything you see or send including passwords and credit card numbers could be at risk. This could be malicious software (or nonmalicious software: some antivirus software breaks EV certificates!) or it could be your friendly local network admin’s middlebox (you trust your IT team, right?), but either way: at least you have a chance of noticing, right?

Firefox address bars showing EV certificates of Three Rings CIC (GB), PayPal, Inc. (US), and HSBC Holdings plc (GB)
Firefox, like most browsers, shows the company name in the address bar when valid EV certificates are presented, and hides it when the validity of that certificate is put into question by e.g. network sniffing tools set up by your IT department.

Browsers requiring that the EV certificate be signed by one of a trusted list of CAs and not allowing that list to be manipulated (short of recompiling the browser from scratch) is a great feature that – were it properly publicised and supported by good user interface design, which it isn’t – would go a long way to protecting web users from unwanted surveillance by network administrators working for their employers, Internet service providers, and governments. Great! Except Internet Explorer went and fucked it up. As Gibson reported, not only does Internet Explorer ignore the rule of not allowing administrators to override the contents of the trusted list but Microsoft even provides a tool to help them do it!

Address bars from major browsers connecting to a spoofed site, with EV certificate "green bars" showing only in Internet Explorer and Edge.
From top to bottom: Internet Explorer 11, Edge 17, Firefox 61, Chrome 68. Only Internet Explorer and Edge show the (illegitimate) certificate for “Barclays PLC”. Sorry, Barclays; I had to spoof somebody.

I decided to replicate Gibson’s experiment to confirm his results with today’s browsers: I was also interested to see whether Edge had resolved this problem in Internet Explorer. My full code and configuration can be found here. As is doubtless clear from the title of this post and the screenshot above, Edge failed the test: it exhibits exactly the same troubling behaviour as Internet Explorer.

Thanks, Microsoft.

Safari doesn't fall for it, either.
I also tried Safari (both on MacOS, above, and iOS, below) and it behaved as the other non-Microsoft browsers do (i.e. arguably more-correctly than IE or Edge).

I shan’t for a moment pretend that our current certification model is without its problems – it’s deeply flawed; more on that in a future post – but that doesn’t give anybody an excuse to get away with making it worse. When it became apparent that Internet Explorer was affected by the “feature” described above, we all collectively rolled our eyes because we didn’t expect better of everybody’s least-favourite web browser. But for Edge to inherit this deliberate fault, despite every other browser (even those that share its certificate store) going in the opposite direction, is just insulting.


The Mystery Button-Box

Earlier this year I found a mystery cable. But today, I’ve got an even bigger mystery. What the hell is this?

Box with two keyhole-shaped "buttons" and a power cord.
It’s a… thing?

I found it in a meeting room at work, tucked away in a corner. Aside from the power cord, there are no obvious interfaces to it.

"Buttons" on the button-box.
There are two keyhole-shaped “buttons” which can be pressed down about 2cm and which spring back up (except when they jam, but I think they’re not supposed to).

My best guess is that it’s some kind of induction-based charger. I imagine some kind of device – a radio microphone or walkie-talkie, perhaps – that can be pushed into these holes, with the sprung “button” just there to close the hole when it’s not in use. But the box is old, based on the style of plug, cable, and general griminess of the hardware… not to mention that it’s got a stack of PAT test stickers going back at least 11 years.

Bottom of the mystery button-box.
No real markings anywhere on it: there’s a small hole in the (metal) base and PAT test stickers.

I’ve plugged it in and tried “pressing” the buttons but it doesn’t appear to do anything, which supports my “induction charger” hypothesis. But what does it charge? I must know!

Edit: The only Electrak I can find makes lighting control systems. Could it be something to do with lighting control? I can’t find anything that looks like this on their website, though.

Edit 2: Ah-hah! Looks like it’s some weird kind of electrical socket. You put the key-shaped plug in and “twist”.

Electrak plug
The plugs apparently look something like this, although I can’t find any here.

Edit 3: Hang on a minute… the most-recent PAT test sticker indicates that it was tested in… November 2019. Now my working hypothesis is that this is some kind of power supply system for a time machine we haven’t yet built. I’ve asked a number of colleagues what it’s for (i.e. what plugs into it) and nobody seems to have a clue.


How Much of My Graticule is Covered With Water?

I’m a moderately-keen geohasher, as you might be aware if you follow my geohashing logs or you saw that video of me going ‘hashing earlier this month.

For those that don’t know, the skinny version is this: in May 2008 an XKCD comic was published proposing (or at least joking about) a new game with a name reminiscent of geocaching. To play the game, participants use a mathematical hashing function on the current date and the most recent Dow Jones Industrial Average opening value to generate sets of random coordinates around the globe and then try to find their way to them, hopefully experiencing adventures along the way. The nature of stock markets and hashing functions means that the coordinates for any given day are effectively random and impossible to predict (far) in advance, so it’s sometimes described as a spontaneous adventure generator.

XKCD comic #426, "Geohashing"
The XKCD comic that started it all.

Recently, I found myself wondering about how much of a disadvantage players are at if they live in very “wet” graticules. Residents of the Channel Islands graticule (49 -2), for example, are confined to two land masses surrounded entirely by water. And while it’s true that water hashpoints can be visited if you’re determined enough, it’s still got to be considered to be playing at a disadvantage compared to those of us lucky ones in landlocked graticules like mine (51 -1).

And because I’m me and so can’t comfortably leave a question unanswered, I wrote a program to try to answer it! It’s among the hackiest, dirtiest software solutions I’ve ever written, so if it works for you then it’s a flipping miracle. What it does is:

  1. Determines which OpenStreetMap tiles (the image files served to your browser when you use OpenStreetMap) cover the graticule in question, and downloads them.
  2. Extracts information about the colour of each pixel in each tile.
  3. Counts the proportion of “water blue” pixels to other pixels (this isn’t perfect, because it trips over things like ferry lines on the map as being “not water”, especially at low zoom-levels).
Extreme zoom-in on Worcester College Lake, on OpenStreetMap.
Some parts of Worcester College Lake are identified as “not water” on account of the text overlay.

I mentioned it was hacky, right?
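
If you fancy reimplementing it less-hackily, the interesting bit is just the standard “slippy map” arithmetic for working out which tiles cover a graticule, plus a naive pixel count. Here’s a rough sketch of that idea – the zoom level and OSM’s “water blue” colour are assumptions on my part, and my actual script shells out to wget and ImageMagick rather than decoding PNGs in Node:

    // Rough sketch only: the zoom level and OSM "water blue" (#aad3df) are assumptions.
    const { PNG } = require('pngjs'); // npm install pngjs

    const zoom = 12;

    // Standard "slippy map" tile arithmetic:
    const lon2tileX = (lon, z) => Math.floor((lon + 180) / 360 * 2 ** z);
    const lat2tileY = (lat, z) => Math.floor(
      (1 - Math.log(Math.tan(lat * Math.PI / 180) + 1 / Math.cos(lat * Math.PI / 180)) / Math.PI) / 2 * 2 ** z
    );

    // Proportion of pixels matching the water colour in one downloaded tile:
    function waterProportion(pngBuffer) {
      const png = PNG.sync.read(pngBuffer);
      let water = 0;
      for (let i = 0; i < png.data.length; i += 4) {
        if (png.data[i] === 0xaa && png.data[i + 1] === 0xd3 && png.data[i + 2] === 0xdf) water++;
      }
      return water / (png.data.length / 4);
    }

    // e.g. my graticule, 51 -1, spans latitudes 51..52 N and longitudes 1..2 W
    // (approximately: the edge tiles straddle the graticule boundary):
    const [latSouth, latNorth, lonWest, lonEast] = [51, 52, -2, -1];
    console.log(`x tiles ${lon2tileX(lonWest, zoom)}-${lon2tileX(lonEast, zoom)},`,
                `y tiles ${lat2tileY(latNorth, zoom)}-${lat2tileY(latSouth, zoom)}`);
    // Each tile in that range can then be fetched from tile.openstreetmap.org/{z}/{x}/{y}.png
    // (mind the tile usage policy!) and fed through waterProportion().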

You can try it for yourself, if you’d like. You’ll need NodeJS, wget, wc, and ImageMagick – all pretty standard or easy-to-get things on a typical Linux box. Run with node geohash-pcwater.js 51 -1, where 51 -1 is the identifier for the graticule you’re interested in. And in case you’re interested – the Swindon graticule (where I live) is about 0.68% water, but the Channel Islands graticule is closer to 93.13% water. That’s no small disadvantage: sorry, Channel Islands geohashers!

Update 2018-08-22: discovered some prior art that takes a somewhat-similar approach.


Intercepting HTTPS Traffic from Android Emulator

Mostly for my own benefit, as most other guides online are outdated, here’s my set-up for intercepting TLS-encrypted communications from an emulated Android device (in Android Emulator) using Fiddler. This is useful if you want to debug, audit, reverse-engineer, or evaluate the security of an Android app. I’m using Fiddler 5.0 and Android Studio 2.3.3 (but it should work with newer versions too) to intercept connections from an Android 8 (Oreo) device using Windows. You can easily adapt this set-up to work with physical devices, and it’s not hard to adapt these instructions for other configurations either.

Intercepting a HTTPS connection to DanQ.me on a virtual Android device.

1. Configure Fiddler

Install Fiddler and run it.

Configuring Fiddler

Under Tools > Options > HTTPS, enable “Decrypt HTTPS traffic” and allow a root CA certificate to be created.

Click Actions > Export Root Certificate to Desktop to get a copy of the root CA public key.

Fiddler's Connections settings

On the Connections tab, ensure that “Allow remote computers to connect” is ticked. You’ll need to restart Fiddler after changing this and may be prompted to grant it additional permissions.

If Fiddler changed your system proxy, you can safely change this back (and it’ll simplify your output if you do because you won’t be logging your system’s connections, just the Android device’s ones). Fiddler will complain with a banner that reads “The system proxy was changed. Click to reenable capturing.” but you can ignore it.

2. Configure your Android device

Android Device Manager - New Device

Install Android Studio. Click Tools > Android > AVD Manager to get a list of virtual devices. If you haven’t created one already, create one: it’s now possible to create Android devices with Play Store support (look for the icon, as shown above), which means you can easily intercept traffic from third-party applications without doing APK-downloading hacks: this is great if you plan on working out how a closed-source application works (or what it sends when it “phones home”).

Android emulator showing network settings

In Android’s Settings > Network & Internet, disable WiFi. Then, under Mobile Network > Access Point Names > {Default access point, probably T-Mobile} set Proxy to the local IP address of your computer and Port to 8888. Now all traffic will go over the virtual cellular data connection which uses the proxy server you’ve configured in Fiddler.

Android network proxy settings

Drag the root CA file you exported to your desktop to your virtual Android device. This will automatically copy the file into the virtual device’s “Downloads” folder (if you’re using a physical device, copy via cable or network). In Settings > Security & Location > Encryption & Credentials > Install from SD Card, use the hamburger menu to get to the Downloads folder and select the file: you may need to set up a PIN lock on the device to do this. Check under Trusted credentials > User to check that it’s there, if you like.

Installing a Root CA in Android.

Test your configuration by visiting a HTTPS website: as you browse on the Android device, you’ll see the (decrypted) traffic appear in Fiddler. This also works with apps other than the web browser, of course, so if you’re reverse-engineering an API-backed application, its encryption doesn’t have to impede you.

3. Not working? (certificate pinning)

A small but increasing number of Android apps implement some variation of built-in key pinning, like HPKP but usually implemented in the application’s code (which is fine, because most people auto-update their apps). What this does is ensure that the certificate presented by the server is signed by a certification authority from a trusted list (a trusted list that doesn’t include Fiddler’s CA!). But remember: the app is running on your device, so you’re ultimately in control – FRIDA’s bypass script “fixed” all of the apps I tried, but if it doesn’t then I’ve heard good things about Inspeckage’s “SSL uncheck” action.
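
Those bypass scripts are, pleasingly, just Javascript that FRIDA injects into the app’s process. To give a flavour of the technique, here’s a minimal sketch for an app that pins via OkHttp3 – the class and overload names vary between HTTP libraries and versions, which is why the community “universal unpinning” scripts hook many more implementations than this:

    // Minimal certificate-unpinning sketch for apps that use OkHttp3, loaded into
    // the target app via the frida command-line tool. Real-world bypass scripts
    // hook many more libraries and framework classes than this one.
    Java.perform(function () {
      var CertificatePinner = Java.use('okhttp3.CertificatePinner');
      CertificatePinner.check.overload('java.lang.String', 'java.util.List').implementation =
        function (hostname, peerCertificates) {
          console.log('[+] Skipping certificate pin check for ' + hostname);
          return; // don't throw, and the pin check "passes"
        };
    });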

Summary of steps

If you’re using a distinctly different configuration (different OS, physical device, etc.) or this guide has become dated, here’s the fundamentals of what you’re aiming to achieve:

  1. Set up a decrypting proxy server (e.g. Fiddler, Charles, Burp, SSLSplit – note that Wireshark isn’t suitable) and export its root certificate.
  2. Import the root certificate into the certificate store of the device to intercept.
  3. Configure the device to connect via the proxy server.
  4. If using an app that implements certificate pinning, “fix” the app with FRIDA or another tool.

Challenge Robin

Do you remember Challenge Anneka? It aired during the late 1980s and early 1990s and basically involved TV presenter Anneka Rice being dropped off somewhere “random” and being challenged to find and help people in a sort-of treasure-trail activity: a game show, but with only one competitor, where the prizes went to community projects and charities. No? Doesn’t matter, it’s just what I was thinking about.

Robin Varley in Oxford
“Hey, I’ve got an idea,” said Robin, shortly before we stole his phone and wallet and dumped him in the countryside.

Ruth‘s brother Robin is doing a project this year that he calls 52 Reflect (you may recall I shared his inaugural post) which sees him leaving London to visit a different place every weekend, hike around, and take some photos. This last weekend, though, he hadn’t made any plans, so he came up to Oxford and asked us to decide where he went: we were to pick a place between 10 and 15 miles away, blindfold him, and drop him off there to see if he could find his way home. Naturally he’d need to be deprived of a means of navigation or communication, so we took his phone, and to increase the challenge we also took his wallet, leaving him with only a tenner in case he needed to buy a packet of crisps or something.

Baby John tries on a blindfold.
I had a friendly assistant test out a variety of blindfolds for me: this wasn’t the one we eventually used.

After much secretive discussion, we eventually settled on N 51° 50.898′, W 001° 28.987′: a footpath through a field in the nothingness to the West of Finstock, a village near the only-slightly-larger town of Charlbury. Then the next morning we bundled Robin into a car (with a blindfold on), drove him out to near the spot, walked him the rest of the way (we’d been careful to pick somewhere we believed we could walk a blindfolded person to safely), and ran quietly away while he counted to 120 and took off his blindfold.

Field near Finstock
We selected a location based on a combination of its distance, natural beauty, and anticipated difficulty in determining the “right” direction to walk in after being abandoned. Unfortunately it rained on the day itself, so the beauty was somewhat muted.

I also slipped a “logging only” GPS receiver into his backpack so that we’d be able, after the fact, to extract data about his journey – distance, speeds, route, etc. And so when he turned up soaking wet at our door some hours later we could look at the path he took at the same time as he told us the story of his adventure. (If you’re of such an inclination, you can download the GPX file.)

Map showing Robin's route
His route might not have been the most direct, but he DID manage to hitch a lift for the first leg.

For the full story of his adventure, go read Robin’s piece about it (the blog posts of his other adventures are pretty good too). Robin’s expressed an interest in doing something similar – or even crazier – in future, so you might be hearing more of this kind of thing.


Leak in Comic Chameleon (app API hacking)

I recently discovered a minor security vulnerability in mobile webcomic reading app Comic Chameleon, and I thought that it was interesting (and tame) enough to share as a learning example of (a) how to find security vulnerabilities in an app like this, and (b) more importantly, how to write an app like this without this kind of security vulnerability.

The nature of the vulnerability is that, for webcomics pushed directly into the platform by their authors, it’s possible to read comics (long) before they’re published. By way of proof, here’s a copy of the top-right 200 × 120 pixels of episode 54 of the (excellent) Forward Comic, which Imgur will confirm was uploaded on 2 July 2018: over three months ahead of its planned publication date.

Forward Comic 0054, due for publication in October
I’m not going to spoil this comic for you, but if you follow it then when October comes I think you’ll be pleased.

How to hack a web-backed app

Just to be clear, I didn’t set out to hack this app, but once I stumbled upon the vulnerability I wanted to make sure that I was able to collect enough information that I’d be able to explain to its author what was wrong and how to fix it. You’d be amazed how many systems I find security holes in almost-completely by accident. In fact, I’d just noticed that the application supported some webcomics that I follow but for which I hadn’t been able to find RSS feeds (and so I was selfdogfooding my own tool, RSSey, to “produce” RSS feeds for my reader by screen-scraping: not the most-elegant solution). But if this app could produce a list of issues of the comic, it must have some way of doing what I was trying to do, and I wanted to know what it was.

Comic Chameleon running on Android
Comic Chameleon brings a lot of comics into a single slick Android/iOS app. Some of them you’ll even have heard of!

The app, I figured, must “phone home” to some website – probably the app’s official website itself – to get the list of comics that it supports and details of where to get their feeds from, so I grabbed a copy of the app and started investigating. Because I figured I was probably looking for a URL, the first thing I did was to download the raw APK file (your favourite search engine can tell you how to do this), decompressed it (APK files are just ZIP files, really) and ran strings on it to search for likely-looking URLs:

Running strings on the Comic Chameleon APK contents
As predicted, there are several hard-coded addresses. And all over unencrypted HTTP, eww!

I tried visiting a few of the addresses but many of them seemed to be API endpoints that were expecting additional parameters. Probably, I figured, the strings I’d extracted were prefixes to which those parameters were attached. Rather than fuzz for the right parameters, I decided to watch what the app did: I spun up a simulated Android device using the official emulator (I could have used my own on a wireless network that I control, of course, but this was lazier) and ran my favourite packet sniffer to see what the application was requesting.

Wireshark output showing Comic Chameleon traffic.
The web addresses are even clearer, here, and include all of the parameters I need.

Now I had full web addresses with parameters. Comparing the parameters that appeared when I clicked different comics revealed that each comic in the “full list” was assigned a numeric ID which was used when requesting issues of that comic (along with an intermediate stage where the year of publication is requested).

Comic Chameleon comic list XML
Each comic is assigned an ID number, probably sequentially.

Interestingly, a number of comics were listed with the attribute s="no-show" and did not appear in the app: it looked like comics that weren’t yet being made available via the app were already being indexed and collected by its web component, and for some reason were being exposed via the XML API: presumably the developer had never considered that anybody but their app would look at the XML itself, but the thing about the Web is that if you put it on the Web, anybody can see it.

Still: at this point I assumed that I was about to find what I was looking for – some kind of machine-readable source (an RSS feed or something like one) for a webcomic or two. But when I looked at the XML API for one of those webcomics I discovered quite a bit more than I’d bargained on finding:

no-shows in the episode list produced by the web component of Comic Chameleon
Hey, what’s this? This feed includes titles for webcomics that haven’t been published yet, marked as ‘no-show’…

The first webcomic I looked at included the “official” web addresses and titles of each published comic… but also several not yet published ones. The unpublished ones were marked with s="no-show" to indicate to the app that they weren’t to be shown, but I could now see them. The “official” web addresses didn’t work for me, as I’d expected, but when I tried Comic Chameleon’s versions of the addresses, I found that I could see entire episodes of comics, up to three and a half months ahead of their expected publication date.

Whoops.

Naturally, I compiled all of my findings into an email and contacted the app developer with all of the details they’d need to fix it – in hacker terms, I’m one of the “good guys”! – but I wanted to share this particular example with you because (a) it’s not a very dangerous leak of data (a few webcomics a few weeks early and/or a way to evade a few ads isn’t going to kill anybody) and (b) it’s very illustrative of the kinds of mistakes that app developers are making a lot, these days, and it’s important to understand why so that you’re not among them. On to that in a moment.

Responsible disclosure

Because (I’d like to think) I’m one of the “good guys” in the security world, the first thing I did after the research above was to contact the author of the software. They didn’t seem to have a security.txt file, a disclosure policy, nor a profile on any of the major disclosure management sites, so I sent an email. Were the security issue more-severe, I’d have sent a preliminary email suggesting (and agreeing on a mechanism for) encrypted email, but given the low impact of this particular issue, I just explained the entire issue in the initial email: basically what you’ve read above, plus some tips on fixing the issue and an offer to help out.

"Hacking", apparently
This is what stock photo sites think “hacking” is. Well… this, pages full of green code, or hoodies.

I subscribe to the doctrine of responsible disclosure, which – in the event of more-significant vulnerabilities – means that after first contacting the developer of an insecure system and giving them time to fix it, it’s acceptable (in fact: in significant cases, it’s socially-responsible) to publish the details of the vulnerability. In this case, though, I think the whole experience makes an interesting learning example about ways in which you might begin to “black box” test an app for data leaks like this and – below – how to think about software development in a way that limits the risk of such vulnerabilities appearing in the first place.

The author of this software hasn’t given any answer to any of the emails I’ve sent over the last couple of weeks, so I’m assuming that they just plan to leave this particular leak in place. I reached out and contacted the author of Forward Comic, though, which turns out (coincidentally) to be probably the most-severely affected publication on the platform, so that he had the option of taking action before I published this blog post.

Lessons to learn

When developing an “app” (whether for the web or a desktop or mobile platform) that connects to an Internet service to collect data, here are the important things you really, really ought to do:

  1. Don’t publish any data that you don’t want the user to see.
  2. If the data isn’t for everybody, remember to authenticate the user.
  3. And for heaven’s sake use SSL, it’s not the 1990s any more.
Website message asking visitor to confirm that they're old enough.
It’s a good job that nobody on the Web would ever try to view something easily-available but which they shouldn’t, right? That’s why screens like this have always worked so well.

That first lesson’s the big one of course: if you don’t want something to be on the public Internet, don’t put it on the public Internet! The feeds I found simply shouldn’t have contained the “secret” information that they did, and the unpublished comics shouldn’t have been online at real web addresses. But aside from (or in addition to) not including these unpublished items in the data feeds, what else might our app developer have considered?

  • Encryption. There’s no excuse for not using HTTPS these days. This alone wouldn’t have prevented a deliberate effort to read the secret data, but it would help prevent it from happening accidentally (which is a risk right now), e.g. on a proxy server or while debugging something else on the same network link. It also protects the user from exposing their browsing habits (do you want everybody at that coffee shop to know what weird comics you read?) and from having content ‘injected’ (do you want the person at the next table in the coffee shop to be able to choose what you see when you ask for a comic?).
  • Authentication (app). The app could work harder to prove that it’s genuinely the app when it contacts the website. No mechanism for doing this can ever be perfect, because the user has access to the app and can theoretically reverse-engineer it to fish the entire authentication strategy out of it, but some approaches are better than others. Sending a password (e.g. over Basic Authentication) is barely better than just using a complex web address, but using a client-side certificate or an OTP algorithm would (in conjunction with encryption) foil many attackers.
  • Authentication (user). It’s a very-different model to the one currently used by the app, but requiring users to “sign up” to the service would reduce the risks and provide better mechanisms for tracking/blocking misusers, though the relative anonymity of the Internet doesn’t give this much strength and introduces various additional burdens both technical and legal upon the developer.
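
As promised, here’s a rough sketch of the one-time-password idea – Python standard library only, and the shared secret, header name, and URL are all made up for illustration – showing how an app might generate a time-based code to send alongside each (HTTPS!) request so that the server can check the caller looks like the genuine app:

import hashlib
import hmac
import struct
import time

SHARED_SECRET = b"baked-into-the-app-binary"  # a determined user can still dig this out!

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238-style TOTP: HMAC the current time-step, truncate, keep N digits."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The app would attach the code to each request, e.g. with the requests library:
#   requests.get("https://comics.example.com/feed",
#                headers={"X-App-OTP": totp(SHARED_SECRET)})
# and the server recomputes the same code (allowing a time-step either side)
# before deciding whether to answer.

It’s not bulletproof – nothing that runs on the user’s own device can be – but it raises the bar well above “anyone who can guess the URL”.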

Fundamentally, of course, there’s nothing that an app developer can do to perfectly protect the data that is published to that app, because the app runs on a device that the user controls! That’s why the first lesson is the most important: if it shouldn’t be on the public Internet (yet), don’t put it on the public Internet.

Hopefully there’s a lesson for you somewhere too: about how to think about app security so that you don’t make a similar mistake, or about some of the ways in which you might test the security of an application (for example, as part of an internal audit), or, if nothing else, that you should go and read Forward, because it’s pretty cool.

Further reading

7 August 2018: I’ve now written a quick explanation about how to intercept HTTPS traffic from Android apps, for those that asked.


Oxford’s Long-Lost Zoo and Wild Wolves

It’s been a while since I last hid geocache containers and it felt like it was time I gave something back to the community, especially as the “village” I live in has a lower cache density than it deserves (conversely, Oxford City Centre is chock-full of uninspiring magnetic nanos – although it’s improving – and saturated with puzzle caches that ultimately require a trek well outside the ring road). I’ve never been a heavyweight score-counting ‘cacher, but I’ve always had a soft spot for nice containers as large as their hiding place will permit, coupled with well thought-out pieces of local interest, and that’s the kind of cache I wanted to add to my local area.

Annabel helps hunt for a place to hide a small clip-lock box (with attached chain).
Plus, my second-smallest caching-buddy was keen on getting involved with hiding containers rather than just finding them for me.

So imagine my joy when I discovered a little-known piece of history about my village: that for a few years in the 1930s, we used to have a zoo! And I’m not talking about something on the scale of that place with the meerkats that we used to go to: I’m talking about a proper zoo with lions and tigers and bears (oh my!). Attractions like Rosie the elephant and Hanno the lion would get mentioned in the local newspapers at every excuse, and a special bus service connected Oxford city centre to the entrance to the zoo, just outside the (then much-smaller) village of Kidlington.

Entrance to Oxford Zoo
I’ve stood at the spot from which this photo was taken, and I couldn’t recognise it. A new boulevard, houses, a police station and a leisure centre dominate the view today.

Taking advantage of my readers’ card at the Bodleian Library, I was able to find newspapers and books and piece together the history of this short-lived place. Of particular interest were the unusual events of January 1937, when three wolves escaped from the zoo and caused chaos in the surrounding village and farms for several days. In a tale that sounds almost like a Marvel Comic origin story, the third wolf was eventually shot by local press photographer Johnny Johnson who chased the animal down on a borrowed bicycle.

Graph of the wild wolf population of Oxfordshire
Wild wolves in Oxfordshire were driven to extinction in the 16th century, but made a tiny comeback for a few days in the 1930s.

This formed the essence of our new geocache series: we planned four caches –

  1. Oxford’s Long-Lost Zoo (GC7Q96B / OK0456), representing the zoo and hidden at a corner of what used to be the grounds
  2. Oxford’s Wild Wolf One (GC7Q9E6 / OK0457), representing the first escaped wolf and hidden near to a garden it jumped into
  3. Oxford’s Wild Wolf Two (GC7Q9FF / OK0458), representing the second escaped wolf and hidden near to where it was shot by a farmer and his son
  4. Oxford’s Wild Wolf Three – not yet placed, but we’re planning a multicache series that follows places that the third wolf might have travelled through during its extended escape (the third wolf managed to stay at large for long enough to allegedly kill 13 sheep)
Decorated ammo can cache
Sticking to my aim of larger, higher-quality caches, the “zoo” cache is a decorated ammo can filled with toy animals.

Soon after the first three caches went live they were found by a local ‘cacher whose hides I’ve enjoyed before. She had nice things to say about the series, which is a good sign that we’re thinking in the right kind of direction. The bobbin – who’s taken a bit of an interest in local history this month and now keeps asking about the ages of buildings and where roads used to go and things – is continuing to help me set out places to hide the parts of the final cache in the series, Oxford’s Wild Wolf Three, so further excitement no-doubt awaits.


Intermediary Protocols and Google Duplex

There’s a story that young network engineers are sometimes told to help them understand network stacks and/or the OSI model, and it goes something like this:

You overhear a conversation between two scientists on the subject of some topic relevant to their field of interest. But as you listen more-closely, you realise that the scientists aren’t in the same place at all but are talking to one another over the telephone (presumably on speakerphone, given that you can hear them both). As you pay more attention still, you realise that it isn’t the scientists on the phone call at all but their translators: each scientist speaks to their translator in the scientist’s own language, and the translators translate what they say into a neutral language shared with the other translator, who translates it into the language spoken by the other scientist. Ultimately, the two scientists are communicating with one another, but they’re doing so via a “stack” at their end which only needs to be conceptually the same as the “stack” at the other end from the level immediately below them (the “first link” in their communication, with the translator) downwards. From that point down, they’re entrusting the lower protocols (the languages, the telephone system, etc.), in which they have no interest, to handle the nitty-gritty on their behalf.

The OSI model reflected using the "scientists conversation" metaphor. Based upon original art by Yuki Fujimura, used under a Creative Commons License.
The two scientists are able to communicate with one another, but that communication is not direct.
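
If you prefer code to analogies, here’s a toy sketch of the same idea in Python (the layer names and the “envelope” format are entirely made up, and it skips the actual translation): each layer wraps the message on the way down, its opposite number at the far end unwraps it on the way up, and nobody needs to understand anything more than the layer immediately beneath them.

import re

SENDER_STACK = ["Scientist (speaking French)", "Translator (Esperanto)", "Telephone"]
RECEIVER_STACK = ["Scientist (speaking Japanese)", "Translator (Esperanto)", "Telephone"]

def transmit(message, stack):
    # Going "down" the sender's stack: each layer adds its own envelope before
    # delegating to the layer below it.
    for layer in stack:
        message = f"<{layer}>{message}</{layer}>"
    return message  # this is what actually crosses the "wire"

def deliver(wire_data, stack):
    # Going "up" the receiver's stack: each layer strips the outermost envelope
    # (added by its peer at the far end) and passes the contents upwards,
    # never looking any deeper than that.
    message = wire_data
    for _layer in reversed(stack):
        message = re.sub(r"^<[^>]*>(.*)</[^>]*>$", r"\1", message, flags=re.DOTALL)
    return message

print(deliver(transmit("The results look significant!", SENDER_STACK), RECEIVER_STACK))
# -> The results look significant!

Swap a layer out – a different telephone network, a different pair of translators – and as long as its peer at the other end changes to match, neither scientist needs to know or care.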

This kind of delegation to shared intermediary protocols is common in networking and telecommunications. The reason relates to opportunity cost, or – for those of you who are Discworld fans – the Sam Vimes’ “Boots” Theory. Obviously an efficiency could be gained here if all scientists learned a lingua franca, a universal shared second language for their purposes… but most-often, we’re looking for a short-term solution to solve a problem today, and the short-term solution is to find a work-around that fits with what we’ve already got: in the case above, that’s translators who share a common language. For any given pair of people communicating, it’s more-efficient to use a translator, even though solving the global problem might be better accomplished by a universal second language (perhaps Esperanto, for valid if Eurocentric reasons!).

1950s illustration of "driverless cars of the future". The car follows a series of electronic markers down the middle of the highway.
In the 1950s and 1960s, the concept of a self-driving car was already well-established… but the proposed mechanism for action was quite different to that which we see today.

The phenomenon isn’t limited to communications, though. Consider self-driving cars. If you look back to autonomous vehicle designs of the 1950s (because yes, we’ve been talking about how cool self-driving cars would be for a long, long time), they’re distinctly different from the ideas we see today. Futurism of the 1950s focussed on adapting the roads themselves to make them more-suitable for self-driving vehicles, typically by implanting magnets or electronics into the road surface itself or by installing radio beacons alongside highways to allow the car to understand its position and surroundings. The modern approach, on the other hand, sees self-driving cars use LiDAR and/or digital cameras to survey their surroundings and complex computer hardware to interpret the data.

This difference isn’t just a matter of the available technology (although technological developments certainly inspired the new approach): it’s a fundamentally-different outlook! Early proposals for self-driving cars aimed to overhaul the infrastructure of the road network: a “big solution” on the scale of teaching everybody a shared second language. But nowadays we instead say “let’s leave the roads as they are and teach cars to understand them in the same way that people do.” The “big solution” is too big, too hard, and asking everybody to chip in a little towards outfitting every road with a standardised machine-readable marking is a harder idea to swallow than just asking each person who wants to become an early adopter of self-driving technology to pay a lot to implement a more-complex solution that works on the roads we already have.

LiDAR unit on a Google Self-Driving Car
In real life, these things spin much faster.

This week, Google showed off Duplex, a technology that they claim can perform the same kind of delegated-integration for our existing telephone lives. Let’s ignore for a moment the fact that this is clearly going to be overhyped and focus on the theoretical potential of this technology, which (even if it’s not truly possible today) is probably inevitable as chatbot technology improves: what does this mean for us? Instead of calling up the hairdresser to make an appointment, Google claim, you’ll be able to ask Google Assistant to do it for you. The robot will call the hairdresser and make an appointment on your behalf, presumably being mindful of your availability (which it knows, thanks to your calendar) and travel distance. Effectively, Google Assistant becomes your personal concierge, making all of those boring phone calls so that you don’t have to. Personally, I’d be more than happy to outsource to a computer every time I’ve had to sit in a telephone queue: I could give the machine a summary of my query and ask it to start relaying that to the human agent at the other end while I make my way back to the phone. There are obviously ethical considerations here too: I don’t like being hounded by robot callers and so I wouldn’t want to inflict that upon service providers… and I genuinely don’t know if it’s better or worse if they can’t tell whether they’re talking to a machine or not.

Process of having Google Assistant order a pizza, by phone, on your behalf.
I, for one, welcome our pizza-ordering overlords.

But ignoring the technology and the hype and the ethics, there’s still another question that this kind of technology raises for me: what will our society look like when this kind of technology is widely-available? As chatbots become increasingly human-like, smarter, and cheaper, what kinds of ways can we expect to interact with them and with one another? By the time I’m able to ask my digital concierge to order me a pizza (safe in the knowledge that it knows what I like and will ask me if it’s unsure, has my credit card details, and is happy to make decisions about special offers on my behalf where it has a high degree of confidence), we’ll probably already be at a point at which my local takeaway also has a chatbot on-staff, answering queries by Internet and telephone. So in the end, my chatbot will talk to their chatbot… in English… and work it out between the two of them.

Let that sink in for a moment: because we’ve a tendency to solve small problems often rather than big problems rarely and we’ve an affinity for backwards-compatibility, we will probably reach the point within the lifetimes of people alive today that a human might ask a chatbot to call another chatbot: a colossally-inefficient way to exchange information built by instalments on that which came before. If you’re still sceptical that the technology could evolve this way, I’d urge you to take a look at how the technologies underpinning the Internet work and you’ll see that this is exactly the kind of evolution we already see in our communications technology: everything gets stacked on top of a popular existing protocol, even if it’s not-quite the right tool for the job, because it makes one fewer problem to solve today.

Hacky solutions on top of hacky solutions work: the most believable thing about Max Headroom’s appearance in Ready Player One (the book, not the film: the latter presumably couldn’t get the rights to the character) as a digital assistant was the versatility of his conversational interface.

A man and a woman look at a laptop screen in a cafe/bar.
“See? My laptop says we should hook up.”

By the time we’re talking about a “digital concierge” that knows you better than anyone, there’s no reason that it couldn’t be acting on your behalf in other matters. Perhaps in the future your assistant, imbued with intimate knowledge about your needs and interests and empowered to negotiate on your behalf, will be sent out on virtual “dates” with other people’s assistants! Only if it and the other assistant agree that their owners would probably get along will it suggest that you and the other human meet in the real world. Or you could have your virtual assistant go job-hunting for you, keeping an eye out for positions you might be interested in and applying on your behalf… after contacting the employer to ask the kinds of questions that it anticipates you’d like answered: about compensation, work/life balance, training and advancement opportunities, or whatever it thinks matters to you.

We quickly find ourselves colliding with ethical questions again, of course: is it okay that those who have access to more-sophisticated digital assistants will have an advantage? Should a robot be required to identify itself as a robot when acting on behalf of a human? I don’t have the answers.

But one thing I think we can say, based on our history of putting hacky solutions atop our existing ways of working and the direction in which digital assistants are headed, is that voice interfaces are going to dominate chatbot development for a while yet… even where the machines end up talking to one another!


The Mystery Cable

While rooting through our attic, Ruth‘s brother Owen just found a mystery cable. It almost certainly belongs to me (virtually all of the cables in the house, especially the unusual ones, do), but this one is a mystery to me.

6-of-13-pin plug, believed to be proprietary.
End #1 of the cable is a 13-pin male serial connection with 6 connected pins, spring-loaded. It seems to be designed to screw in at one end. The screw is worn.

The more I look at it, the more I feel like I’m slowly going mad, as if the cable is some kind of beast from the Lovecraftian Cable Dimension which mortal minds were not meant to comprehend. It’s got three “ends” and is clearly some kind of signal combining (or separating) cable, but it doesn’t look like anything I’ve ever seen before (and don’t forget, I probably own it).

End #2 of the cable: a microphone, perhaps?
On the other side of the split, end #2 of the cable terminates in a fine metal mesh (perhaps concealing a microphone, small speaker, or temperature/humidity sensor). It has a “push-to-talk” style clicker switch and a “tie clip” on it.

Every time I look at it I have a new idea of what it could be. Some kind of digital dictaphone or radio mic connector? Part of a telephone headset? Weather monitoring hardware? A set of converters between two strange and unusual pieces of hardware? But no matter what I come up with, something doesn’t add up. Why only 6 pins? Why the strange screw-in connector? Why the clicker switch? Why the tie clips? Why “split” the output (let alone have cables of different lengths)?

End #3: an earpiece, maybe?
End #3 looks like a fibreoptic audio terminator. Or perhaps a part of an earpiece. It, too, has a “tie clip” on (do I clip it to… my ear?)

In case it helps, I’ve made a video of it. You’ll note that I use the word “thingy” more times than might perhaps be justified, but I’ve been puzzling over this one for a while:

Can you help? Can you identify this mystery cable? Prize for the correct answer!


The Better Bundo Book

Today, I received my long-awaited copy of A Day in the Life of Marlon Bundo, a book inspired by the US Vice President’s family pet, not to be confused with Marlon Bundo’s A Day in the Life of the Vice President, which it satirises. In case you’ve been living under a rock: the family of US Vice President Mike Pence have a pet rabbit called Marlon Bundo (and who doesn’t appreciate some punmanship in their pet’s name), and they wrote the latter book in an attempt to explain, through the eyes of Marlon Bundo, what the Vice President does. And then John Oliver, who’s become a bit of a master of doing nice things in a dickish way, released the former a few hours earlier and subsequently thoroughly outsold the Pence book.

A Day in the Life of Marlon Bundo
I wasn’t fast enough to get an order in on the first (hugely-oversubscribed) print run and had to wait on both the reprint plus international shipping.

This self-proclaimed “better Bundo book” tells a different (educational and relevant) story: in it, Marlon Bundo falls in love with another boy rabbit but their desire to get married is hampered by the animals’ leader, the Stink Bug, who proclaims that “boy rabbits can’t marry boy rabbits; boy rabbits have to marry girl rabbits!” With the help of the other animals, the rabbits vote out the Stink Bug, get married, and go on a lovely bunnymoon… a cheery and uplifting story and, of course, a distinctly trollish way to piss off the (clearly anti-LGBT) Mike Pence. This evening, I decided to offer it as a bedtime story to our little bookworm. At four years old, she’s of an age at which the highly-heteronormative narratives of the media to which she’s exposed might be only-just beginning to sink in, so I figured this was a perfect vehicle to talk about difference, diversity, and discrimination. Starting school later this year means that she’s getting closer to the point where she may go from realising that her family is somewhat unusually-shaped to discovering that some people might think that “unusual” means “wrong”, so this is also a possible step towards thinking about her own place in the world and what other people make of it.

Ruth reading with Annabel and John.
Our little bookworm, along with bookworm-junior and their mother.

Her initial verdict was that it was “sweet”, and that she was glad that the Stink Bug was vanquished and that Marlon and Wesley got to live together happily-ever-after. I explained that while the story was made-up, a lot of what it was talking about was something that really happens in this world: that some people think that boys should not marry boys and that girls should not marry girls, even if they love them, and that sometimes, if those people get to be In Charge, they can stop others from marrying the people they love. I mentioned that in our country we were fortunate enough that boys can marry boys and girls can marry girls, if they want to, but that there are places where that’s not allowed (and there are even some people who think it shouldn’t be allowed here!). And then I asked her what she thought.

“They’re like the stinky Stink Bug.”

That’ll do.


On This Day In 2005

I normally reserve my “on this day” posts to look back at my own archived content, but once in a while I get a moment of nostalgia for something of somebody else’s that “fell off the web”. And so I bring you something you probably haven’t seen in over a decade: Paul and Jon‘s Birmingham Egg.

Paul's lunch on this day 13 years ago: Birmingham Egg with Naan Bread
Is this honestly so different from the kind of crap that most of our circle of friends ate in 2005?

It was a simpler time: a time when YouTube was a new “fringe” site (which is probably why I don’t have a surviving copy of the original video) and not yet owned by Google, before Facebook was universally-available, and when original Web content remained decentralised (maybe we’re moving back in that direction, but I wouldn’t count on it…). And only a few days after issue 175 of the b3ta newsletter wrote:

* BIRMINGHAM EGG - Take 5 scotch eggs, cut in
  half and cover in masala sauce. Place in
  Balti dish and serve with naan and/or chips. 
  We'll send a b3ta t-shirt to anyone who cooks
  this up, eats it and makes a lovely little
  photo log / write up of their adventure.
Birmingham Egg preparation
Sure, this looks like the kind of thing that seems like a good idea when you’re a student.

Clearly-inspired, Paul said “Guess what we’re doing on Sunday?” and sure enough, he delivered. On this day 13 years ago and with the help of Jon, Liz, Siân, and Andy R, Paul whipped up the dish and presented his findings to the Internet: the original page is long-gone, but I’ve resurrected it for posterity. I don’t know if he ever got his promised free t-shirt, but he earned it: the page went briefly viral and brought joy to the world before being forgotten the following week when we all started arguing about whether 9 Songs was a good film or not.

It was a simpler time, when, having fewer responsibilities, we were able to do things like this “for the lulz”. But more than that, it was still at the tail-end of the era in which individuals putting absurd shit online was still a legitimate art form on the Web. Somewhere along the way, the Web got serious and siloed. It’s not all a bad thing, but it does mean that we’re publishing less weirdness than we were back then.
