As I mentioned last year, for several years I’ve collected pretty complete historic location data from GPSr devices I carry with me everywhere, which I collate in a personal μlogger server.
Going back further, I’ve got somewhat-spotty data going back a decade, thanks mostly to the fact that I didn’t get around to opting-out of Google’s location tracking until only a few years ago (this data is now
also housed in μlogger). More-recently, I’ve also started getting tracklogs from my smartwatch, so I’m managing to collate more personal
location data than ever before.
The blob around my house, plus some of the most common routes I take to e.g. walk or cycle the children to school.
A handful of my favourite local walking and cycling routes, some of which stand out very well: e.g. the “loop” just below the big blob represents a walk around the lake at Dix Pit;
the blob on its right is the Devil’s Quoits, a stone circle and henge that I thought were sufficiently interesting that
I made a virtual geocache out of them.
The most common highways I spend time on: two roads into Witney, the road into and around Eynsham, and routes to places in Woodstock and North Oxford where the kids have often had
classes/activities.
I’ve unsurprisingly spent very little time in Oxford City Centre, but when I have it’s most often been at the Westgate Shopping Centre,
on the roof of which is one of the kids’ favourite restaurants (and which we’ve been able to go to again as Covid restrictions have lifted, not least thanks to their outdoor seating!).
One to eight years ago
Let’s go back to the 7 years prior, when I lived in Kidlington. This paints a different picture:
For the seven years I lived in Kidlington I moved around a lot more than I have since: each hotspot tells a story, and some tell a few.
This heatmap highlights some of the ways in which my life was quite different. For example:
Most of my time was spent in my village, but it was a lot larger than the hamlet I live in now and this shows in the size of my local “blob”. It’s also possible to pick out common
destinations like the kids’ nursery and (later) school, the parks, and the routes to e.g. ballet classes, music classes, and other kid-focussed hotspots.
I worked at the Bodleian from early 2011 until late in 2019, and so I spent a lot of time in
Oxford City Centre and cycling up and down the roads connecting my home to my workplace: Banbury Road glows the brightest, but I spent some time on Woodstock Road too.
For some of this period I still volunteered with Samaritans in Oxford, and their branch – among other volunteering hotspots
– shows up among my movements. Even without zooming in it’s also possible to make out individual venues I visited: pubs, a cinema, woodland and riverside walks, swimming pools etc.
Less-happily, it’s also obvious from the map that I spent a significant amount of time at the John Radcliffe Hospital, an unpleasant reminder of some challenging times from that
chapter of our lives.
The data’s visibly “spottier” here, mostly because I built the heatmap only out of the spatial data over the time period, and not over the full tracklogs (i.e. the map doesn’t
concern itself with the movement between two sampled points, even where that movement is very-guessable), and some of the data comes from less-frequently-sampled sources like Google.
Eight to ten years ago
Let’s go back further:
Back when I lived in Kennington I moved around a lot less than I would come to later on (although again, the spottiness of the data makes that look more-significant than it is).
Before 2011, and before we bought our first house, I spent a couple of years living in Kennington, to the South of Oxford. Looking at
this heatmap, you’ll see:
I travelled a lot less. At the time, I didn’t have easy access to a car and – not having started my counselling qualification yet – I
didn’t even rent one to drive around very often. You can see my commute up the cyclepath through Hinksey into the City Centre, and you can even make out the outline of Oxford’s Covered
Market (where I’d often take my lunch) and a building in Osney Mead where I’d often deliver training courses.
Sometimes I’d commute along Abingdon Road, for a change; it’s a thinner line.
My volunteering at Samaritans stands out more-clearly, as do specific venues inside Oxford: bars, theatres, and cinemas – it’s the kind of heatmap that screams “this person doesn’t
have kids; they can do whatever they like!”
Every map tells a story
I really love maps, and I love the fact that these heatmaps are capable of painting a picture of me and what my life was like in each of these three distinct chapters of my life over
the last decade. I also really love that I’m able to collect and use all of the personal data that makes this possible, because it’s also proven useful in answering questions like “How
many times did I visit Preston in 2012?”, “Where was this photo taken?”, or “What was the name of that place we had lunch when we got lost during our holiday in Devon?”.
There’s so much value in personal geodata (that’s why unscrupulous companies will try so hard to steal it from you!), but sometimes all you want to do is use it to draw pretty heatmaps.
And that’s cool, too.
How these maps were generated
I have a μlogger instance with the relevant positional data in it. I’ve automated my process, but if you’d like to try it yourself, the essence of it is as follows:
First, write some SQL to extract all of the position data you need. I round off the latitude and longitude to 5 decimal places to help “cluster” dots for frequency-summing, and I raise
the frequency to the power of 3 to help make a clear gradient in my heatmap by making hotspots exponentially-brighter the more popular they are:
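A minimal sketch of such a query – assuming a μlogger-like positions table with latitude and longitude columns; adjust the table, column, and alias names to suit your own schema and whatever field names your heatmap layer expects – might be:

SELECT ROUND(latitude, 5) AS lat,
       ROUND(longitude, 5) AS lng,
       POW(COUNT(*), 3) AS `count`
FROM positions
GROUP BY lat, lng;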
This data needs converting to JSON. I was using Ruby’s mysql2 gem to
fetch the data, so I only needed a .to_json call to do the conversion – like this:
require 'mysql2'
require 'json'
db = Mysql2::Client.new(host: ENV['DB_HOST'], username: ENV['DB_USERNAME'], password: ENV['DB_PASSWORD'], database: ENV['DB_DATABASE'])
db.query(sql).to_a.to_json # `sql` holds the query from the previous step
Approximately following this guide and leveraging my Mapbox
subscription for the base map, I then just needed to include leaflet.js, heatmap.js, and leaflet-heatmap.js before writing some JavaScript code
like this:
document.body.innerHTML = '<div id="map"></div>';
let map = L.map('map').setView([51.76, -1.40], 10);

// add the base layer to the map
L.tileLayer('https://api.mapbox.com/styles/v1/{id}/tiles/{z}/{x}/{y}?access_token={accessToken}', {
  maxZoom: 18,
  id: 'itsdanq/ckslkmiid8q7j17ocziio7t46', // this is the style I defined for my map, using Mapbox
  tileSize: 512,
  zoomOffset: -1,
  accessToken: '...' // put your access token here if you need one!
}).addTo(map);

// fetch the heatmap JSON and render the heatmap
fetch('heat.json').then(r => r.json()).then(json => {
  let heatmapLayer = new HeatmapOverlay({
    "radius": parseFloat(document.querySelector('#radius').value), // radius comes from a control on the page (not shown above)
    "scaleRadius": true,
    "useLocalExtrema": true,
  });
  heatmapLayer.setData({ data: json });
  heatmapLayer.addTo(map);
});
Back in 2005 I reblogged a Flash-based interactive advert I’d discovered via del.icio.us. And if that sentence wasn’t early-naughties enough for you, buckle up…
This screenshot isn’t from the original site but from my homage to it. More on that later.
At the end of 2004, Unilever brand Axe (Lynx here in the UK)
continued their strategy of marketing their
deodorant as magically transforming young men into hyper-attractive sex gods. This is, of course, an endless battle, pitting increasingly sexually-charged advertisements against the
fundamental experience of their product, which smells distinctly like locker rooms and school discos. To launch 2005’s new fragrance Feather, they teamed up with London-based
design agency Dare Digital to create a game at domain AxeFeather.com (long since occupied by domain squatters).
In the game, the player’s mouse pointer becomes a feather which they can use to tickle an attractive young woman lying on a bed. The woman’s movements – which vary based on where she’s
tickled – have been captured in digital video. This was aggressively compressed using the then-new H.263-ish
Sorenson Spark codec to make a download just-about small enough to be tolerable for people still on dial-up Internet access (which was still almost as popular as broadband). The ad became a viral hit. I can’t tell you whether it paid for itself in sales, but it
must have paid for itself in brand awareness: on Valentine’s Day 2005 it felt like it was all the Internet wanted to talk about.
I suspect its success also did wonders for the career of its creative consultant Olivier Rabenschlag, who left Dare a few years
later, hopped around Silicon Valley for a bit, then landed himself a job as Head of Creative (now Chief Creative Officer) with Google. Kudos.
Why?
I told you about the site 16 years ago: why am I telling you again? Because this site, which made
headlines at the time, is gone.
And not just a little bit gone, like a television ad no longer broadcast but which might still exist on YouTube somewhere (and here it is – you’re welcome for the earworm). The website went down in 2009, and because it was implemented in Flash the content
was locked away in a compiled, proprietary format, which has ceased to be meaningfully usable on the modern web.
The parts of AxeFeather.com’s code that are openly readable don’t help much, but I love this comment, which carries the scent of the adolescent web in the same way that Lynx deodorant
carries the scent of an adolescent human.
The ad was pioneering. Flash had only recently gained video support (this would be used the following year for the first version of YouTube), and it had so far been used mostly for
non-interactive linear video. This ad was groundbreaking… but now it’s disappeared like so much other Flash work. And for all that Flash might have been bad for the web,
it’s an important part of our
digital history [recommended reading].
Third-party Flash emulation is imperfect. I tried to make Axe Feather work in Ruffle and got… an empty bed? What is this, a metaphor for being a
lonely nerd?
So on a whim… I decided to see if I could recreate the ad.
Call it lockdown fever if you like, because it’s certainly not the work of a sane mind to attempt to resurrect a 16-year-old Internet advertisement. But that’s what I did.
How?
My plan: to reverse-engineer the digital assets (video, audio, cursor etc.) out of the original Flash file, and use them to construct a moderately-faithful recreation of the ad,
suitable for use on the modern web. My version must:
Work in any modern browser, without Flash of course.
Indicate how much of the video content you’ve seen, because we live in an era of completionists who want to know they’ve seen it all.
Depend on no third-party frameworks/libraries: just vanilla HTML, CSS, and JavaScript.
Let’s get started.
Reverse-engineering
At this point I noticed that the videos had no audio tracks: the giggling and other sound effects must be stored separately.
I grabbed the compiled .swf file from archive.org and ran it through
SWFExtract and an online decompiler: neither was individually able to extract
all of the assets, but together they gave me a full set. I ran the .flv files through Handbrake to get myself a set of
.mp4 files instead.
In what appears to have been an exercise in size optimisation, the original authors cropped the videos differently depending on how much space was needed (e.g. if the subject
stretched her arms above her head, more space would be required). Clearly, some re-alignment would be needed.
Seeing that the extracted video files were clearly designed to be carefully-positioned on a static background, and not all in the exact same position, I decided to make my job easier by
combining them all together, and including the background layer (the picture of the bed) as a single video. Integrating the background with the subject meant that I was able to use
video editing software to tweak the position, which I imagined would be much easier than doing so in code. Combining all of the video clips into a single file provides compression
benefits as well as making it easier to encourage a browser to precache the entire video to begin with.
My design called for three “layers” above my web page: the video, a transparent (and usually hidden) canvas showing the hit areas for debugging purposes, and the feather-shaped
cursor.
The longest clip was a little over 6 seconds long, so I split my timeline into blocks of 7 seconds, padding each clip with a freeze-frame of its final image to make each exactly 7
seconds long. This meant that calculating the position in the finished video to which I wanted to jump was as simple as multiplying the (0-indexed) clip number by 7 and seeking to that
position. The additional “frozen” frames acted as a safety buffer in case my JavaScript code was delayed by a few milliseconds in jumping to the “next” block.
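In code, that jump is a one-liner; something like this sketch (the helper function and variable names here are illustrative rather than lifted from the finished script):

const BLOCK_LENGTH = 7; // every clip is padded out to exactly 7 seconds

function jumpToClip(video, clipNumber) {
  // seek to the start of the (0-indexed) clip's block in the combined video
  video.currentTime = clipNumber * BLOCK_LENGTH;
}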
I used onion-skinning to help “line up” the actress with herself as I composited her onto the bed in a single unified video of 7-second blocks.
An additional challenge was that in the original binary, the audio files were stored separately from the video clips… and slightly longer than them! A little experimentation revealed
that the ends of each clip lined up, presumably something to do with how Flash preloads and synchronises media streams. Luckily for me, the audio clips were numbered such that
they mostly mapped to the order in which the videos appeared.
Once I had a video file suitable for use on the web (you can watch the entire clip here, if you really want to), it was time to
write some code.
It feels slightly wasteful that over 50% of the resulting video clip is a freeze-frame, but modern video compression algorithms like H.264 reduce the impact considerably and the
resulting video file is about the same size as its more-optimised predecessor.
Regular old engineering
The theory was simple: web page, video, loop the first seven seconds until you click on it, then animate the cursor (a feather) and jump to another seven-second block before jumping
back or, in some cases, on to a completely new seven-second block. Simple!
Of course, any serious web development is always a little more complex than you first anticipate.
I extracted from the .swf 34 distinct animated clips, which I numbered 0 through 33. Clips 6 and 30 appeared to be duplicates of others. Clips 0 and 33 are the two “idling” states
from which interaction can lead to other states. Note that my interpretation of the order and relationship of animation sequences differs from the original.
For example: nowadays, putting a video on a web page is as easy as a <video> tag. But, in an effort to prevent background web pages from annoying you with unexpected
audio, modern browsers won’t let a video play sound unless user interaction is the reason that the video starts playing (or unmutes, if it was playing-but-muted to
begin with). Broadly-speaking, that means that a definitive user action like a “click” event has to be in the call stack when your code makes the video play/unmute.
But changing the .currentTime of a video to force it into a loop: that’s fine! So I set the video to autoplay muted on page load, with a script to make it loop
within its first seven-second block. The actress doesn’t make any sound in block 0 (position A) anyway, so I can unmute the video when the user interacts with a hotspot.
For best performance, I used window.requestAnimationFrame to synchronise my non-interactive events (video loops, virtual cursor repositioning). This posed a slight problem
in that animation frames wouldn’t be triggered if the tab was moved to the background: the video would play through each seven-second block and into the next! Fortunately the
visibilitychange event came to the rescue and I was able to pause the video when it wasn’t being actively watched.
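Put together, the looping and pausing logic might look something like this sketch (the variable names are illustrative, not lifted from my actual code):

const video = document.querySelector('video');
let currentBlock = 0; // which 7-second block we're currently looping within

function loop() {
  // if playback has run past the end of the current block, rewind to its start
  if (video.currentTime >= (currentBlock + 1) * 7) {
    video.currentTime = currentBlock * 7;
  }
  window.requestAnimationFrame(loop);
}
window.requestAnimationFrame(loop);

// pause when the tab isn't being watched, so the video can't "escape" its block
document.addEventListener('visibilitychange', () => {
  document.hidden ? video.pause() : video.play();
});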
I originally hoped to use the cursor: CSS directive to make the “feather” cursor, but there’d be no nice way to
animate it. Comet Cursor may have been able to use animated GIFs
as cursors back in 1997 (when it wasn’t busy selling all your personal information to advertisers, back when that kind of thing used to attract widespread controversy), but modern
browsers don’t… presumably because it would be super annoying. They also don’t all respect cursor: none, so I used the old trick of using cursor: url(null.png),
none (where null.png is an almost-entirely transparent 1×1 pixel image) to hide the original cursor, then position an image dynamically. I
use getBoundingClientRect() to allow the video to resize dynamically in CSS and convert coordinates on it represented
as percentages into actual pixel values and vice-versa: this allows it to react responsively to any screen size without breakpoints or excessive code.
Once I’d gone that far I was able to drop the GIF idea entirely and use a CSS animation for the “tickling” motion.
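The coordinate conversion mentioned above is only a couple of lines each way; roughly like this sketch (the function names are mine, for illustration):

// convert a point expressed as percentages of the video's area into page pixels
function percentToPixels(video, xPercent, yPercent) {
  const rect = video.getBoundingClientRect();
  return {
    x: rect.left + rect.width * (xPercent / 100),
    y: rect.top + rect.height * (yPercent / 100)
  };
}

// ...and back again, e.g. to test whether the pointer is over a hotspot
function pixelsToPercent(video, x, y) {
  const rect = video.getBoundingClientRect();
  return {
    x: ((x - rect.left) / rect.width) * 100,
    y: ((y - rect.top) / rect.height) * 100
  };
}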
The hotspot overlay was added as a debugging feature but I left it in the final version. Hold the space bar to highlight hit areas.
I added a transparent <canvas> element on top of the <video> on which the hit areas are dynamically drawn to help me test the “hotspots” and tweak
their position. I briefly considered implementing a visual tool to help me draw the hotspots, but figured it wasn’t quite worth the time it would take.
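Drawing the hotspots onto that debugging canvas is simple enough; here’s a sketch of the idea (the percentage-based hotspot format is an assumption for illustration, not my exact data structure):

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function drawHotspots(hotspots) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'rgba(255, 0, 0, 0.3)'; // translucent, so the video shows through
  for (const spot of hotspots) {
    // each hotspot is described as percentages of the video area
    ctx.fillRect(
      canvas.width * (spot.x / 100),
      canvas.height * (spot.y / 100),
      canvas.width * (spot.width / 100),
      canvas.height * (spot.height / 100)
    );
  }
}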
As I implemented more and more of the game, I remembered one feature from the original that I’d missed: the “blowaway”. If you trigger block 31 – a result of tickling the woman’s nose –
she’ll blow your cursor off the screen. It’s particularly fun because it subverts the player’s expectations of their user interface: once you’ve got past the surprise of your
cursor being a feather, you quickly settle into it moving like a regular cursor… but then control’s stolen from you and the cursor vanishes! (Well, I thought it was cool… 16 years ago.)
Sometimes tickling her nose will make her blow your feather off the screen. That’ll show you.
Back in February my friend Katie shared with me an already four-year-old piece of interactive fiction, Bus Station: Unbound, that I’d somehow managed to miss the first time around. In the five months since then I’ve
periodically revisited and played through it and finally gotten around to writing a review:
All of the haunting majesty of its subject, and a must-read-thrice plot
Perhaps it helps to be as intimately familiar with Preston Bus Station – in many ways, the subject of the piece – as the protagonist. This work lovingly and faithfully depicts the
space and the architecture in a way that’s hauntingly familiar to anybody who knows it personally: right down to the shape of the rubberised tiles near the phone booths, the
forbidding shadows of the underpass, and the buildings that can be surveyed from its roof.
But even without such a deep recognition of the space… which, ultimately, soon comes to diverge from reality and take on a different – darker, otherworldly – feel… there’s a magic
to the writing of this story. The reader is teased with just enough backstory to provide a compelling narrative without breaking the first-person illusion. No matter how
many times you play (and I’ve played quite a few!), you’ll be left with a hole of unanswered questions, and you’ll need to be comfortable with that to get the most out of the story,
but that in itself is an important part of the adventure. This is a story of a young person who doesn’t – who can’t – know everything that they need to bring them comfort
in the (literally and figuratively) cold and disquieting world that surrounds them, and it’s a world that’s presented with a touching and tragic beauty.
Through multiple playthroughs – or rewinds, which it took me a while to notice were an option! – you’ll find yourself teased with more and more of the story. There are a few
frankly-unfair moments where an unsatisfactory ending comes with little or no warning, and a handful of places where it feels like your choices are insignificant to the story, but
these are few and far between. Altogether this is among the better pieces of hypertext fiction I’ve enjoyed, and I’d recommend that you give it a try (even if you don’t share the
love-hate relationship with Preston Bus Station that is so common among those who spent much of their youth sitting in it).
It’s no secret that I spent a significant proportion of my youth waiting for or changing buses at (the remarkable) Preston
Bus Station, and that doubtless biases my enjoyment of this game by tingeing it with nostalgia. But I maintain that it’s a well-written piece of hypertext interactive fiction with a
rich, developed world. You can play it starting from here, and you should. It looks like the story’s
accompanying images died somewhere along the way, but you can flick through them all here and get a feel for the shadowy,
brutalist, imposing place.
It turns out that Renault’s target customer base in Brazil do, too. Presumably it was a way bigger deal over there than it was here, because this new car ad feels like it could
genuinely be a trailer for a live-action reboot of the series. And now I want to watch it.
(I do have some questions, though. Like: Diana was only 14 years old when she and her friends were transported to the Realm of Dungeons and Dragons… so when did she learn to drive? Am I
supposed to believe that she just rolled a natural 20 on that driving check? And where does Sheila go when she turns invisible so that Bobby doesn’t end up sitting on her
transparent-lap? And how does the car’s navigation computer work: are we to believe that there’s a GNSS network in
the skies above the Realm? The Internet must know!)
tl;dr: TRRTL.COM is my reimplementation of a Logo on-screen turtle as a CoffeeScript-backed web application
For many children growing up in the 1970s and 1980s, their first exposure to computer programming may have come in the form of Logo, a general-purpose educational programming language best-known for its “turtle graphics” capabilities. By issuing
commands to an on-screen – or, if they were really lucky, robotic – cursor known as a turtle, the student could draw lines and curves all over the screen (or in the case of
robotic turtles: a large sheet of paper on the floor).
Back in the day, screens were monochrome and turtles were wired. What a way to live.
While our eldest and I were experimenting with programming (because, well…) a small robotic toy of hers, inspired by a book, it occurred to me that this was an experience that she might miss out on. That’s fine, of course: she doesn’t have to find the same joy
in playing with Logo on an Amstrad CPC or a BBC Micro that I did… but I’d like her to be able to have the option. In fact, I figured, there’s probably a whole generation of folks who
played with Logo in their childhood but haven’t really had the opportunity to use something as an adult that gives the same kind of satisfaction. And that’s the kind of thing I can fix.
Don’t interrupt a programmer when she’s “in the flow”.
TRRTL.COM is my attempt to produce a modern, web-based (progressive,
offline-first) re-imagining of Logo. It uses CoffeeScript as its base language because it provides all of the power of JavaScript but supports a
syntax that’s more-similar to that of traditional Logo implementations (with e.g. optional semicolons and unparenthesised parameters).
Turtles can be surprisingly fast. Snails, less-so.
If you’ve not used Logo before, give it a go. Try typing simple commands like forward 100 (steps), right 90 (degrees), and so
on and you’ll find it’s a bit like an etch-a-sketch. Click the “help” icon in the corner for more commands (and shorter forms of them) as well as instructions on writing longer programs
and sharing your work with the world.
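A longer program might, for example, draw a square by repeating those two commands; in TRRTL’s CoffeeScript-flavoured syntax that could look something like this sketch:

for i in [1..4]
  forward 100
  right 90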
Users can share their creations with the world, and then optionally expand upon them. Click the image to carry on where I left off, here.
And of course the whole thing is open source in the most permissive way imaginable, so if you’re of an
inclination to do your own experiments with <canvas>, Progressive Web Apps, and the like, you’re welcome to borrow from me. Or if anybody wants to tag-team on making
a version that uses the Web Bluetooth API to talk to a robotic turtle or to use WebRTC to make LAN “multiplayer” turtle art, I’m totally game for that.
My volunteering and academic workload for the rest of this year is likely to reduce the amount of random/weird stuff I put online, so it might get boring here for a while. Hope this
tides you over in the meantime.
Unless they happened to bump into each other at QParty, the first time Ruth and JTA met my school friend Gary was at my dad’s funeral. Gary had seen mention of the death in the local paper and came to the wake. About 30 seconds later, Gary and I were reminiscing, exchanging anecdotes about our misspent youths, when
suddenly JTA blurted out: “Oh my God… you’re Sc… Sc-gary?”
Ever since then, my internal monologue has referred to Gary by the new nickname “Scgary”, but to understand why requires a little bit of history…
While one end of the hall in which we held my dad’s wake turned into an impromptu conference of public transport professionals, I was at the other end, talking to my friends.
Despite having been close for over a decade, Gary and I drifted apart somewhat after I moved to Aberystwyth in 1999, especially as I became more and more deeply involved with volunteering at Aberystwyth Nightline and the
resulting change in my social circle, which was soon 90% made up of fellow volunteers (ultimately resulting in JTA’s “What,
Everyone?” moment). We still kept in touch, but our once more-intense relationship – which started in a primary school playground! – was put on a backburner as we tackled the next
big things in our lives.
This is what the recruitment page on the Aberystwyth Nightline website looked like after I’d improved it. The Web was younger, then.
Something I was always particularly interested in, both at Nightline and at the helplines I volunteered with subsequently, was training. At Nightline, I proposed and pushed forward a
reimplementation of their traditional training programme that put a far greater focus on experience and practical skills and less on topical presentations. My experience as a trainee
and as a helpline volunteer had given me an appreciation of the fundamentals of listening and I wanted future
trainees to be able to benefit from this by giving them less time talking about listening and more time practising listening.
Nightline training wasn’t always like this, I promise. Well: except for the flipchart covered in brainstorming; that was pretty universal.
The primary mechanism by which helplines facilitate such practical training is through roleplaying. A trainer will pretend to be a caller and will talk to a trainee, after which the
pair (along with any other trainers or trainees who are observing) will debrief and talk about how it went. The only problem with switching wholesale to a roleplay/skills-driven
approach to training at Aberystwyth Nightline, as I saw it, was the approach that was historically taken to the generation of roleplay material, which favoured the use of anonymised
adaptations of real or imagined calls.
Roleplay scenarios must be realistic (so that they simulate the experience of genuine calls with sufficient accuracy that they are meaningful) but they must also be
effective (at promoting the growth of the skills that are needed to best-support callers). Those two criteria often come into conflict in roleplay scenarios: a caller who sits
in near-silence for 20 minutes may well be realistic, but there’s a limit to how much you can learn from sitting in silence; a roleplay which tests every facet of a trainee’s practical
knowledge provides efficiency, but does not reflect the content of any call that has ever really happened.
I spent a lot of my undergraduate degree in this poky little concrete box (most of it before the redecoration photographed above), and damned if I wasn’t going to share what I’d
learned from the experience.
I spent some time outlining the characteristics of best-practice roleplays and providing guidelines to help “train the trainers”. These included ideas, some of which were (then) a
little radical, like:
A roleplay should be based upon a character, not a story: if the trainer knows how the call is going to end, this constrains the opportunity for the
trainee to explore the space and experiment with listening concepts. A roleplay is necessarily improvisational: get into your character, let go of your preconceptions.
Avoid using emotionally-charged experiences from your own life: use your own experience, certainly, but put your own emotional baggage aside. Not only is it unfair to
your trainee (they’re not your therapist!) but it can be a can of worms in its own right – I’ve seen a (great) trainee help a trainer to make a personal breakthrough for which they
were perhaps not yet ready.
Don’t be afraid to make mistakes: you’re not infallible, and you neither need to be nor to present yourself as a perfect example of a volunteer. Be willing to learn
from the trainees (I’ve definitely made use of things I’ve learned from trainees in real calls I’ve taken at Samaritans) and create a space in which you can collectively discuss how
roleplays went, rather than simply critiquing them.
I might have inadvertently introduced other skills practice to take place during the breaks in Nightline training: several trainees learned to juggle under my instruction, or were
shown the basics of lock picking…
In order to demonstrate the concepts I was promoting, I wrote and demonstrated a significant number of sample roleplay ideas, many of which I (or others) would then go on to flesh-out
into full roleplays at training sessions. One of these for which I became well-known was entitled My Friend Scott.
The caller in this roleplay presents with suicidal ideation fuelled by feelings of guilt and loneliness following the accidental death, about six months prior, of his best friend Scott,
for which he feels responsible. Scott had been the caller’s best friend since childhood, and he’s fixated on the adventures that they’d had together. He clearly has a huge admiration
for his dead friend, bordering on infatuation, and blames himself not only for the death but for the resulting fracturing of their shared friendship group and his subsequent isolation.
(We’re close to getting back to the “Scgary story”, I promise. Hang in here.)
Gary, circa 1998, at the door to my mother’s house. Unlike Scott, Gary didn’t die “six months ago”-from-whenever. Hurray!
When I would perform this roleplay as the caller, I’d routinely flesh out Scott and the caller’s backstory with anecdotes from my own childhood and early-adulthood: it seemed important
to be able to fill in these kinds of details in order to demonstrate how important Scott was to the caller’s life. Things that I really did with any of several of my childhood
friends found their way, with or without embellishment, into the roleplay, like:
Building a raft on the local duck pond and paddling out to an island, only to have the raft disintegrate and have to swim back
An effort to dye a friend’s hair bright red which didn’t produce a terribly satisfactory result but did stain many parts of a bathroom
Camping in the garden, dragging out a desktop computer and extension cable to fully replicate the “in the wild” experience
Flooding my mother’s garden (which at that time was a long slope on clay soil) in order to make a muddy waterslide
Generating fake credit card numbers to facilitate repeated month-long free trials of an ISP‘s services
Riding on the bonnet of a friend’s first car, hanging on to the windscreen wipers, eventually (unsurprisingly) falling off and getting run over
That time Scott Gary and I tried to dye his hair red but mostly dyed what felt like everything else in the world.
Of course: none of the new Nightliners I trained knew which, if any, of these stories were real – that was never a part of the experience. But many were real, or had a morsel of truth.
And a reasonable number of them – four of those in the list above – were things that Gary and I had done together in our youth.
JTA’s surprise came from that strange feeling that occurs when two very different parts of your life that you thought were completely separate suddenly and unexpectedly collide with one another
(I’m familiar with it). The anecdote that Gary had just shared about our teen years was one that exactly mirrored something
he’d heard me say during the My Friend Scott roleplay, and it briefly crashed his brain. Suddenly, this was Scott standing in front of him, and he’d been able to get
far enough through his sentence to begin saying that name (“Sc…”) before the crash stopped him in his tracks and he finished off with “…gary”.
Scott Gary always had a certain charm with young women. Who were these two and what were they doing in my bedroom? I don’t know, but if there’s an answer, then
Scott Gary is the answer.
I’m not sure whether or not Gary realises that, in my house at least, he’s to this day been called “Scgary”.
I bumped into him, completely by chance, while visiting my family in Preston this weekend. That reminded me that I’d long planned to tell this story: the story of Scgary, the imaginary
person who exists only in the minds of the tiny intersection of people who’ve both (a) met my friend Gary and know about some of the crazy shit we got up to together when we were young
and foolish and (b) trained as a volunteer at Aberystwyth Nightline during the window between me overhauling how training was provided and ceasing to be involved with the training
programme (as far as I’m aware, nobody is performing My Friend Scott in my absence, but it’s possible…).
That time Scott Gary (drunk) hooked up with my (even more drunk) then crush at my (drunken) 18th birthday party.
Gary asked me to give him a shout and meet up for a beer next time I’m in his neck of the woods, but it only occurred to me after I said goodbye that I’ve no idea what the best way to
reach him is, these days. Like many children of the 80s, I’ve still got the landline phone numbers memorised of all of my childhood friends, but even if that number is still
valid, it’d be his parents’ house!
I guess that I’ll let the Internet do the work for me: perhaps if I write this, here, he’ll find it, somehow. Hi, Scgary!
As of next week, I’ll have been blogging for 20 years, or about 54% of my life. How did that happen?
I’d been “blogging” – not that we called it that, yet – since late 1998, but my original collection of content-mangling Perl scripts wasn’t all that. More history…
The mid-1990s were a very different time for the World Wide Web (yes, we still called it that, and sometimes we even described its use as “surfing”). Going “on the Internet” was a
calculated and deliberate action requiring tying up your phone line, minutes of “connecting” along with all of the associated screeching sounds if you hadn’t
turned off your modem’s loudspeaker, and you’d typically be paying twice for the experience: both a monthly fee to your ISP for the service and a per-minute charge to your phone company for the call.
It was into this environment that in 1994 I published my first web pages: as far as I know, nothing remains of them now. It wasn’t until 1998 that I signed up for an account with UserActive (whose website looks almost the same today as it did then), who offered economical subdomain hosting with shell and CGI support, and launched “Castle of the Four Winds”, a set of vanity pages that
included my first blog.
Except I didn’t call it a “blog”, of course, because it wasn’t until the following year that Peter Merholz invented the word (he also commemorated 20 years of blogging, this year). I didn’t even call it a “weblog”, because
that word was still relatively new and I wasn’t hip enough to be around people who said it, yet. It was self-described as an “online diary”, a name which only served to
reinforce the notion that I was writing principally for myself. In fact, it wasn’t until mid-1999 that I discovered that it was being more-widely read than just by me and my
circle of friends when I attracted a stalker who travelled across the UK to try to “surprise” me by turning up at places she expected to
find me, based on what I’d written online… which was exactly as creepy as it sounds.
AvAngel.com
While the world began to panic that the coming millennium was going to break all of the computers, I migrated Castle of the Four Winds’ content into AvAngel.com, a joint vanity site
venture with my friend Andy. Aside from its additional content (purity tests, funny stuff, risqué e-cards), what we hosted was mostly the same old stuff, and I continued to write
snippets about my life in what was now quite-clearly a “blog-like” format, with the most-recent posts at the top and separate pages for content too old for the front page. Looking back,
there’s still a certain naivety to these posts which exemplify the youth of the Web. For example, posts routinely referenced my friends by their email
addresses, because spam was yet to become a big enough problem that people didn’t much mind if you put their email address on a public webpage somewhere, and because email
addresses still carried with them a feeling of anonymity that ceased to be the case when we started using them for important things.
Technologically-speaking, too, this was a simpler time. Neither Javascript nor CSS support was widespread (nor
consistently-standardised) enough to rely upon for anything other than the simplest progressive enhancement unless you were willing to “pick a side” in what we’d subsequently call the
first browser war and put one of those appalling “best viewed in Internet Explorer” or “best viewed in Netscape Navigator” banners on your site. I’ve always been a believer in a
universal web (and my primary browser at the time was Opera, anyway, as it mostly-remained until Opera went wrong in 2013), and I didn’t have the energy to write everything twice, so our cool/dynamic
functionality came mostly from back-end (e.g. Perl, PHP) technologies.
Meanwhile, during my initial months as a student in Aberystwyth, I wrote a series of emails to friends back home entitled “Cool And Interesting Thing Of The Day To Do At The University
Of Wales, Aberystwyth”, and put copies of each onto my student webspace; I’ve since recovered these and integrated them into my unified blog.
Scatmania.org
In 2002 I’d bought the domain name scatmania.org – a reference to my university halls of residence nickname “Scatman Dan”; I genuinely didn’t consider the possibility that the name
might be considered scatological until later on. As I wanted to continue my blogging at an address that felt like it was solely mine (AvAngel.com having been originally shared with a
friend, although in practice over time it became associated only with me), this seemed like a good domain upon which to relaunch. And so, in
mid-2003 and powered by a short-lived and ill-fated blogging engine called Flip I did exactly that. WordPress, to which I’d subsequently migrate, hadn’t been invented yet and it wasn’t clear whether its predecessor,
b2/cafelog, would survive the troubles its author was experiencing.
From this point on, any web address for any post made to my blog still works to this day, despite multiple technological and infrastructural changes to my blog (and
some domain name shenanigans!) in the meantime. I’d come to be a big believer in the mantra that cool URIs don’t change: something that, as far as possible, I’ve committed to trying to uphold in my blogging, my archiving, and my paid work since
then. I’m moderately confident that all extant links on the web that point to earlier posts are under my control, so they can be (and in most cases have been) fixed
already, so I’m pretty close to having all my permalink URIs be “cool”, for now. You might hit a short chain of redirects,
but you’ll get to where you’re going.
And everything was fine, until one day in 2004 when it wasn’t. The server hosting scatmania.org died in a very bad way, and because
my backup strategy was woefully inadequate, I lost a lot of content. I’ve recovered quite a lot of it and put it back in-place, but some is probably gone forever.
One of the longest-lived web designs for scatmania.org paid homage to the original, but with more “blue” and a WordPress backing.
The resurrected site was powered by WordPress, and this was the first time that live database queries had been used to power my blog. Occasionally,
these days, when talking to younger, cooler developers, I’m tempted to follow the hip trend of reimplementing my blog as a static site, compiling a stack of host-anywhere HTML files based upon whatever-structure-I-like at the “backend”… but then I remember that I basically did that already for six
years and I’m far happier with my web presence today. I’ve nothing against static site systems (I’m quite partial to Middleman, myself,
although I’m also fond of Hugo) but they’re not right for this site, right now.
IndieAuth hadn’t been invented yet, but I was quite keen on the ideals of OpenID (I still am, really), and
so I implemented what was probably the first viable “install-anywhere” implementation of OpenID for WordPress – you can see part of it
functioning in the top-right of the screenshot above, where my (copious, at that time) LiveJournal-using friends were encouraged to sign in to my blog using their LiveJournal identity.
Nowadays, the majority of the WordPress plugins I use are ones I’ve written myself: my blog is powered by a CMS that’s more
“mine” than not!
I no longer have the images that made my 2006 redesign look even remotely attractive, so here it is mocked-up with block colours instead.
Over the course of the first decade of my blogging, a few trends had become apparent in my technical choices. For example:
I’ve always self-hosted my blog, rather than relying on a “blog as a service” or siloed social media platform like WordPress.com, Blogger, or LiveJournal.
I’ve preferred an approach of storing the “master” copy of my content on my own site and then (sometimes) syndicating it elsewhere: for
example, for the benefit of my friends who during their University years maintained a LiveJournal, for many years I had my blog cross-post to a LiveJournal account (and backfeed copies of comments back to my site).
I’ve favoured web standards that provided maximum interoperability (e.g. RSS with full content)
and longevity (serving HTML pages from permanent URLs, adding
“extra” functionality via progressive enhancement so as to ensure that content functioned e.g. without Javascript, with CSS
disabled, or as specifications evolved, etc.).
These were deliberate choices, but they didn’t require much consideration: growing up with a Web far less-sophisticated than today’s (e.g. truly stateless prior to the advent of
HTTP cookies) and seeing the chaos caused during the first browser war and the period of stagnation that followed, these choices seemed intuitive.
That body font is plain old Verdana, you know: I’ve always felt that it (plus full justification) was the right choice for this particular design, even though I regret other parts of
it (like the brightness!).
As you’d expect from a blog covering a period from somebody’s teen years through to their late thirties, there’ve been significant changes in the kinds of content I’ve posted (and the
tone with which I’ve done so) over the years, too. If you dip into 2003, for example, you’ll see the results of quiz memes and
unqualified daily minutiae alongside actual considered content. Go back
further, to early 1999, and it is (at best) meaningless wittering about the day-to-day life of a teenage student. It took until around
2009/2010 before I actually started focussing on writing content that specifically might be enjoyable for others to read (even where
that content was frankly silly) and only far more-recently-still that I’ve committed to the “mostly technical stuff, occasional bits of ‘life’ stuff” focus that I have today.
I say “committed”, but of course I’m fully aware that whatever this blog is now, it’ll doubtless be something somewhat different if I’m still writing it in another two decades…
2014 may have included my most-prolific month of blogging, but 2003-2005 saw the most-consistent high-volume of content.
Once I reached the 2010s I started actually taking the time to think about the design of my blog and its meaning. Conceptually, all of my content is data-driven: database tables full of
different “kinds” of content and associated metadata, and that’s pretty-much ideal – it provides a strong separation between content and presentation and makes it possible to make
significant design changes with less work than might otherwise be expected. I’ve also always generally favoured a separation of concerns in web development and so I’m not a fan
of CSS design methodologies that encourage class names describing how things should appear, like Atomic CSS. Even where it results
in a performance hit, I’d far rather use CSS classes to describe what things are or represent. The single biggest
problem with this approach, to my mind, is that it violates the DRY principle… but that’s something that your CSS preprocessor’s there to fix for you, isn’t it?
But despite this philosophical outlook on the appropriate gap between content and presentation, it took until about 2010 before I actually attached any real significance to the
presentation at all! Until this point, I’d considered myself to have been more of a back-end than a front-end engineer, and felt that the most-important thing was to get the
content out there via an appropriate medium. After all, a site without content isn’t a site at all, but a site without design is (or at least should be) still intelligible
thanks to browser defaults! Remember, again, that I started web development at a time when stylesheets didn’t exist at all.
My previous implementations of my blog design had used simple designs, often adapted from open-source templates, in an effort to get them deployed as quickly as possible and move on to
the next task, but now, I felt, it was time to do a little more.
My 2010 relaunch put far more focus on the graphical design elements of my blog as well as providing a fully responsive design based on (then-new) CSS media queries. Alongside my
focus on separation of concerns in web development, I’m also quite opinionated on the idea that a responsive design has almost always been a superior solution to having a separate
“mobile site”.
For a few years, I was producing a new theme once per year. I experimented with different colours, fonts, and layouts, and decided (after some ad-hoc A/B testing) that my audience was
better-served by a “front” page than by being dropped directly into my blog archives as had previously been the case. Highlighting the latest few – and especially the very-latest – post
and other recent content increased the number of posts that a visitor would be likely to engage with in a single visit. I’ve always presumed that the reason for this is that regular
(but non-subscribing) readers are more-likely to be able to work out what they have and haven’t read already from summary text than from trying to decipher an entire post: possibly
because my blogging had (has!) become rather verbose.
My 2011 design, in hindsight, said more about my mood and state-of-mind at the time than it did about artistic choices: what’s with all the black backgrounds and seriffed fonts? Is
this a funeral parlour?
I went through a bit of a lull in blogging: I’ve joked that I spent more time on my 2010 and 2011 designs than I did on the sum total of the content that was published in between the
pair of them (which isn’t true… at least, not quite!). In the month I left Aberystwyth for Oxford, for example, I was doing all kinds of exciting and new things…
and yet I only wrote a total of two blog posts.
With RSS waning in popularity – which I can’t understand: RSS is amazing! – I began to crosspost to
social networks like Twitter and Google+ (although no longer to Google+, following the news of its imminent demise) to help those readers who prefer to get their content via these
media, but because I wasn’t producing much content, it probably didn’t make a significant difference anyway: the chance of a regular reader “missing” something must have been remarkably
slim.
The 2012 design featured “CSS peekaboo”: a transformation that caused my head to “hide” from you behind the search bar if your cursor got too close. Ruth, I hear, spent far too long playing with just this feature.
Nobody calls me “Scatman Dan” any more, and hadn’t for a long, long time. Given that my name is already awesome and unique
all by itself (having changed to be so during the era in which scatmania.org was my primary personal domain name), it felt like I had the opportunity to rebrand.
I moved my blog to a new domain, DanQ.me (which is nice and short, too) and came up with a new collection of colours, fonts, and layout choices that I felt better-reflected my identity…
and the fact that my blog was becoming less a place to record the mundane details of my daily life and more a place where I talk about (principally-web)
technology, security, and GPS games… and just occasionally about other topics like breadmaking and books. Also, it gave me a chance to get on top of the current trend in web design for big, clean, empty spaces, square corners, and using pictures
as the hook to a story.
The second design of my blog after moving to DanQ.me showed-off posts with big pictures, framed by lots of white-space.
I’ve been working harder this last year or two to re-integrate (in a PESOS-like way) into my blog content that I’ve published elsewhere, mostly geocaching logs and
geohashing expedition records, and I’ve also done so retroactively, so in addition to my first blog article on the subject
of geocaching, you can read my first ever cache log without switching to a different site or relying upon the
continued existence and accessibility of that site. I’ve been working at being increasingly mindful of where my content is siloed outside of my control and reclaiming it by hosting it
here, on my blog.
Particular areas in which I produce content elsewhere but would like to at-least maintain a copy here, and would ideally publish here first and syndicate elsewhere, although I
appreciate that this is difficult, are:
Reddit, where I’ve written tens of thousands of words under a variety of accounts, but I don’t really pay attention to the site any more
I left Facebook in 2011 but I still have a backup of what was on my “Wall” at that point, which I could look into reintegrating into my
blog
I share a lot of the source code I write via my GitHub account, but I’m painfully aware that this is yet-another-silo that I ought to learn
not to depend upon (and it ought to be simple enough to mirror my repos on my own site!)
I’ve got a reasonable number of videos on two YouTube channels which are online by Google’s good graces (and potential for advertising
revenue); for a handful of technical reasons they’re a bit of a pain to self-host, but perhaps my blog could act as a secondary source to my own video content
I write business reviews on Google Maps which I should probably look into recovering from the hivemind and hosting here… in fact, I’ve
probably written plenty of reviews on other sites, too, like Amazon for example…
On two previous occasions I’ve maintained an online photo gallery; I might someday resurrect the concept, at least for the photos that used to be published on them
I’ve dabbled on a handful of other, often weirder, social networks before like Scuttlebutt (which has a genius concept, by the way) and
Ello, and ought to check if there’s anything “original” on there I should reintegrate
Going way, way back, there are a good number of usenet postings I’ve made over the last twenty-something years that I could reclaim, if I can find them…
(if you’re asking why I’m inclined to do all of these things: here’s why)
This looks familiar.
20 years and around 717,000 words worth of blogging down, it’s interesting to look back and see how things have changed: in my life, on the Web, and in the world in general. I’ve seen
many friends’ blogs come and go: they move into a new phase of their life and don’t feel like what they wrote before reflects them today, most often, and so they delete them… which is
fine, of course: it’s their content! But for me it’s always felt wrong to do so, for two reasons: firstly, it feels false to do so given that once something’s been put on the Web, it
might well be online forever – you can’t put the genie back in the bottle! And secondly: for me, it’s valuable to own everything I wrote before. Even the cringeworthy things I
wrote as a teenager who thought they knew everything, and the antagonistic stuff I wrote in my early 20s that I clearly wouldn’t stand by today, are part of my history, and
hiding that would be a disservice to myself.
The 17-year-old who wrote my first blog posts two decades ago this month fully expected that the things he wrote would be online forever, and I don’t intend to take that away from him.
I’m sure that when I write a post in October 2038 looking back on the next two decades, I’ll roll my eyes at myself today, too, but for me: that’s part of the joy of a
long-running personal blog. It’s like a diary, but with a sense of accountability. It’s a space on the web that’s “mine” into which I can dump pretty-much whatever I like.
I love it: I’ve been blogging for over half of my life, and if I can get back to you in 2031 and tell you that I’ve by-then been doing so for two-thirds of my life, that would be a win.
On this day in 2004… Troma Night XXI took place at The Flat. Six people were in attendance: Claire, Paul, Kit, Bryn, (Strokey) Adam and I and, unusually – remember that the digital cameras in phones were still appalling – I took pictures of everybody who showed up.
Cue exclamations of “didn’t we all look young”, etc.
Troma Night was, of course, our weekly film night back in Aberystwyth (the RockMonkey wiki once described it as “fun”). Originally launched as a one-off and then a maybe-a-few-off event with a theme of watching films produced (or
later: distributed) by Troma Entertainment, it quickly became a regular event with a remit to watch “all of the best and the worst films ever made”.
Expanding into MST3K, the IMDb “bottom 250”, and once in a while a good film, we eventually spent
somewhere over 300 nights on this activity (you can relive our 300th, if you like!)
and somehow managed to retain a modicum of sanity.
Copious quantities of alcohol might have been part of our survival strategy, as evidenced by these pictures from Troma Night V and Troma Night VI.
Troma Night XXI was among those captured by the Troma Night Webcam, streamed out to the Internet in 1-megapixel, 4 frames per second glory (when it worked).
In addition to running for over 300 weeks, Troma Night became, for many of us, a central facet of our social lives. The original attendees were all volunteers at Aberystwyth Nightline, but we were later joined by their friends, lovers, housemates… and by Liz‘s dates (who after meeting all of her friends, we usually never saw again). We quickly developed our own traditions and ideas, such as:
Pizzas like the Alec Special – a Hollywood Special (ham, pepperoni, beef, mushrooms, green peppers, onions,
sweetcorn) but without the onions and with pineapple substituted in instead – and the Pepperoni Feast particularly enjoyed by our resident vegetarian,
For those who – like me – insist that our regular Hollywood Pizza got greasier over the years, these photos from Troma Night VI.5 are pretty damning. Maybe it’s just that our tastes
changed.
Paul spontaneously throwing a sponge out of the window to mark the beginning of the evening’s activities,
Alec bringing exactly one more can of Grolsch than he’s capable of drinking and leaving the remainder in the fridge to be consumed by Kit at the start of the subsequent event,
A fight over the best (or in some cases only) seats in Claire’s and my various small (and cluttered) homes: we once got 21 people into the living room at The Flat, but it wasn’t
exactly pleasant,
Becoming such a regular customer of Hollywood Pizza that they once phoned us when we hadn’t placed an order in a timely fashion, on another occasion turned up
with somebody else’s order because it “looked like the kind of thing we usually ordered”, and at least one time were persuaded to deliver the pizza directly up to the living room
and to each recipient’s lap (you can’t get much better delivery service than that).
Decisions about how Claire and I would lay out our furniture were eventually influenced directly by maximising the efficiency of our seating plan. This picture, from Troma Night IV,
makes it seem quite spacious and relaxed compared to later nights.
And I still enjoy the occasional awful film. I finally got around to watching Sharknado the other month, and my RiffTrax account’s library grows year on year. One of my reward card accounts is still under the name of Mr. Troma Knight. So I suppose that Troma Night
lives on in some of the regulars, even if we don’t make ourselves suffer of a weekend in quite the
same ways as we once did.
The year was 1995, and CompuServe’s online service cost $4.95 per hour. Yet thousands of people logged into this virtual world daily.
WorldsAway was born 20 years ago, when Fujitsu Cultural Technologies, a subsidiary of Japanese electronics giant Fujitsu, released this online experiment in multiplayer communities.
It debuted as part of the CompuServe online service in September, 1995. Users needed a special client to connect; once online, they could chat with others while represented onscreen
as a graphical avatar.
I was already a veteran of BBSes (I even started my own), Prodigy, CompuServe, and the Internet when I saw an advertisement for WorldsAway in CompuServe magazine (one of my favorite
magazines at the time). It promised a technicolor online world where you could be anything you wanted, and share a virtual city with people all over the globe. I signed up to receive
the client software CD. Right after its launch in September, I was up and running in the new world. It blew my young mind.
Back in the early 2000s when I was suffering from insomnia I used to sit up and watch all kinds of trashy late-night TV on the UK’s (then new) Channel 5. There was one show that I got
hooked on and tuned in to religiously, simply because its presentation was so bizarre. That show was outTHERE (IMDb) (Wikipedia). I’d love to find some episodes of it: I’m happy to pay for DVDs or watch episodes online or whatever, but I just can’t find
any. Anywhere. Seriously: it’s like the entire show has vanished.
I read a book as a child, probably in the early 1990s, whose story stuck with me but which I haven’t been able to find since. The plot goes thusly: a child plays a semi text-based
video game in which he controls a character (represented by an asterisk), but it later becomes apparent that the character he’s controlling is real and self-aware. He’s an alien, or
something similar, and he needs help… and that’s most of what I remember, but I can’t be the only one who read it, right?
Recently, I’ve reduced my hours working at the Bodleian in order to be able to spend more time working on
Three Rings and engaging in other bits of freelance work… and to increase my
flexibility so that I can be available for childcare and to generally make things more-convenient for the other Greendalians and me. Unfortunately, on only the second day of this new working arrangement Nena (which I built in 2008) had her power supply blow up, which sort-of threw a spanner into the works. This, along with a scary recent
hard drive failure in JTA’s computer, I took as being a sign from the Universe that it was time to build myself a new
PC to replace Toni, my primary box, and relegate Toni to be the new Nena. It was time to build: Cosmo.
This episode of Basic Instructions, which came out disturbingly close to the construction of Cosmo, somewhat parallels my experience. Click for full comic.
Given that I had a little cash to burn, I decided that it must finally be time to fulfil a couple of long-standing dreams I’ve had – things I’ve wanted to do when building my last two
or three computers, but never been able to justify the expense. And so I set out to build my new “dream computer”: a beast of a machine which would present me with some fresh
engineering challenges during construction. Key features that I wanted to include were:
Liquid cooling
Most computers are air-cooled: the “hot” components like the processor and graphics chipset are covered with a heatsink (which works just like the fins on a motorcycle engine: drawing
heat away through contact with cool air) and, generally, a fan (to improve airflow over the heatsink and thus increase cooling). Air cooling, though, is inefficient (the transfer of
heat from components to air isn’t very fast) and noisy (“hot”-running air-cooled computers are annoyingly loud), and so in my last few PC builds I’ve drifted towards using cooler and
quieter components, such as processors that are overpowered for what they’ll actually be asked to do (like Tiffany2, who’s virtually silent) and all-in-one liquid coolers for my CPUs (like these ones, from CoolerMaster – note that these still have a
fan, but the use of a radiator means that the fan can be large, slow, and quiet, unlike conventional CPU fans which spin quickly and make noise).
The “business end” of the cooling system of a typical air-cooled graphics card. That grey sticky bit on the copper square touches the processor, and the entire rest of the system is
about dissipating the heat produced there.
But I’ve always had this dream that I’d one day build a true, complete, custom water-cooled system: taking a pump and a reservoir and a radiator and cutting pipe to fit it all around
the “hot” components in my case. The pumps and fans of water-cooled systems make them marginally louder than the quietest of fan-driven, air-cooled computers… but are far more
efficient, drawing a massive amount of heat away from the components and therefore making it possible to pack more-powerful components closer together and overclock them to speeds
undreamed of by their manufacturers. A liquid cooling solution was clearly going to be on the list.
Multi-GPU
And how to best make use of that massive cooling potential? By putting an extra graphics card in! The demands of modern 3D games mean that if you want to run at the highest resolutions,
quality settings, and frame rates, you need a high-end graphics card. And if you want to go further still (personally: I love to be able to run Bioshock Infinite, Far Cry 3, or Call
Of Duty: Ghosts at a massive “ultra-widescreen”, wrap-around resolution of 5760×1080 – that’s triple the number of pixels found on your 1080p HDTV), well: you’re going to
want several high-end graphics cards.
Even though the capability to run graphics cards in tandem, pooling resources, has existed since the 1990s, it’s only within the last decade that it’s become truly meaningful: and
even now, it’s still almost-exclusively the domain of the enthusiast.
Both ATI/AMD’s Radeon and Nvidia’s GeForce series of chipsets are capable of running in tandem, triple, or quadruple
configurations (so long as your motherboard and power supply hold up, and assuming that you’ve got the means to keep them all cool, of course!), and as a result all of my last few PC
builds have deliberately been “ready” for me to add a second graphics card down the line if I decided I needed some extra “oomph” (though I’ve always ended up with a new computer by
that point instead), but this would be the first time I’d actually design the computer to be multi-GPU from the outset.
SSD/RAID 1+0 Combo
Toni featured a combination of a solid-state drive (flash
memory, like you get in pendrives, but faster) instead of a conventional hard drive, to boot from, and a pair of 2TB “traditional” hard drives, all connected through the
perfectly-adequate SATA 2
interface. Using an SSD for the operating system meant that the machine booted up ludicrously quickly, and this was something I wanted to maintain, so clearly the next step was a
larger, faster, SATA 3 SSD for Cosmo.
This is your computer. This is your computer on RAID.
Anybody who’s messed about with computer hardware for as long as I have has seen a hard drive break down at least once, and JTA’s recent malfunction of that type reminded me that even
with good backups, the downtime resulting from such a component fault is pretty frustrating. This, plus the desire to squeeze as much speed as possible out of conventional hard drives,
made me opt for a RAID 1+0 (or “RAID
10”). I’d tie together four 2TB hard drives to act as a single 4TB disk, providing a dramatic boost in redundancy (one, or possibly even two drives can be completely destroyed without
any data loss) and speed (reading data that’s duplicated across two disks is faster because the computer can be effectively “reading ahead” with the other disk; and writing data to
multiple disks is no slower because the drives work at the same time).
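To illustrate why that redundancy claim holds, here’s a toy Ruby sketch of the RAID 1+0 layout – nothing like how a real array works (that happens at the block level, in the controller or the operating system), and the drive names are invented, but it shows the mirrored-pairs-with-striping logic:

```ruby
# Toy model of the RAID 1+0 layout described above: four drives arranged as two
# mirrored pairs, with data striped across the pairs. (Purely illustrative --
# a real array does this at the block level; the drive names are made up.)
MIRROR_PAIRS = [%i[drive_a drive_b], %i[drive_c drive_d]]

# The array's data survives so long as every mirrored pair still contains at
# least one working drive.
def data_survives?(failed_drives)
  MIRROR_PAIRS.all? { |pair| (pair - failed_drives).any? }
end

puts data_survives?([:drive_a])            # => true:  drive_b still holds a copy
puts data_survives?([:drive_a, :drive_c])  # => true:  one failure in each pair
puts data_survives?([:drive_a, :drive_b])  # => false: a whole mirror pair is gone
```

In other words, the only failures that cost data are the ones that wipe out both halves of the same mirror at once.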
A few other bits of awesome
Over my last few PC builds, I’ve acquired a taste for a handful of nice-to-haves which are gradually becoming luxuries I can’t do without. My first screwless case
was Duality, back in the early 2000s, and I’d forgotten how much easier it was to simply clip hard drives to rails until I built Nena years later into a
cheap case that just wasn’t the same thing.
If you were at, for example, Troma Night IV, on 17th May 2003, you’ll have seen Duality: she’s the huge blue thing on the right.
Another thing I’ve come to love and wonder how I ever did without is modular power supplies. Instead of having a box with a huge bundle of cables sticking out of it, these are just a
box… the cables come separately, and you only use the ones you need, which takes up a lot less space in your case and makes the whole process a lot tidier. How did it take
us so long to invent these things?
Needless to say, the planning stage of building Cosmo was the easy and stress-free bit. I shall tell you about the exciting time I had actually putting her together
– and the lessons learned! – later. Watch this space, and all that!
Hot on the heels of our long weekend in Jersey, and right after the live deployment of Three
Rings’ Milestone: Krypton, came
another trip away: I’ve spent very little time in Oxford, lately! This time around, though, it was an experimental new activity that we’ve inserted into the Three Rings
calendar: Dev Training.
We rented a secluded cottage to which we could whisk away our prospective new developers. Our thinking was that, by removing the day-to-day distractions of work and home, we could fully
immerse them in coding.
The format wasn’t unfamiliar: something that we’ve done before, to great success, is to take our dedicated volunteer programmers away on a “Code Week”: getting everybody together in one
place, on one network, and working 10-14 hour days, hammering out code to help streamline charity rota management. Sort-of like a LAN party, except instead of games, we do
work. The principle of Code Week is to turn volunteer developers, for a short and intense burst, into machines that turn sugar into software. If you get enough talented people
around enough computers, with enough snacks, you can make miracles happen.
I’m not certain that the driveway was really equipped for the number of cars we brought. But I don’t get on terribly well with laptops, so clearly I was going to bring a desktop
computer. And a second desktop computer, just in case. And that takes up a lot of seat space.
In recent years, Three Rings has expanded significantly. The test team has exploded; the support team now has to have a rota of their own in order to keep track of who’s
working when; and – at long last – the development team was growing, too. New developers, we decided, needed an intensive session of hands-on training before they’d be set loose on
real, production code… so we took the principles of Code Week, and turned it into a boot camp for our new volunteers!
New developers Rich, Chris, and Mike set up their development environments. Owing to the complexity of the system, this can be a long part of the course (or, at least, it feels that
way!).
Recruiting new developers has always been hard for us, for a couple of reasons. The first reason is that we’ve always exclusively recruited from people who use the system. The thinking
is that if you’re already a volunteer at, say, a helpline or a community library or a fireboat-turned-floating-museum or any of the other organisations that use Three Rings, then you already understand why what we
do is important and valuable, and why volunteer work is the key to making it all happen. That’s the bit of volunteering that’s hardest to ‘teach’, so the thinking is that by making it a
prerequisite, we’re always moving in the right direction – putting volunteering first in our minds. But unfortunately, the pool of people who can program computers to a satisfactory
standard is already pretty slim (and the crossover between geeks and volunteers is, perhaps, not so large as you might like)… this makes recruitment for the development
team pretty hard.
Turfed out of the Ops Centre and into the living room, JTA works on important tasks like publicity, future posts on the Three Rings blog, and ensuring that we all remember to eat at
some point.
A second difficulty is that Three Rings is a hard project to get involved with, as a newbie. Development conventions that have shifted over the years, a mess of inter-related (though
thankfully not inter-dependent) components, and a sprawling codebase make getting started as a developer more than a little intimidating. Couple that with all of the things our
developers need to know and understand before they get started (MVC, RoR, TDD, HTML, CSS, SQL, DiD… and that’s just the acronyms!), and you’ve got a learning curve that’s close to vertical. Our efforts to integrate
new developers without a formal training program had met with limited success, because almost nobody already has the exact set of skills we’re looking for: that’s how we knew it was
time to make Dev Training Weekend a reality.
Conveniently, there was a pub literally just out the gate from the back garden of the cottage, which proved incredibly useful when we (finally) downed tools and went out for a drink.
We’d recruited three new potential developers: Mike, Rich, and Chris. As fits our pattern, all are current or former volunteers from organisations that use Three Rings. One of them had
been part of our hard-working support team for a long time, and the other two were more-new to Three Rings in general. Ruth and I ran a series of workshops covering Ruby, Rails, Test-Driven Development, Security, and so on, alternated with stretches of supervised
“hands-on” programming, tackling genuine Three Rings bugs and feature requests. We felt that it was important that the new developers got the experience of making a real difference,
right from the start: by the second or third day, they’d all made commits against the trunk (under the careful review of a senior developer, of course).
Mike demonstrates test-driven development, down at the local pub: 1. touch cat 2. assert cat.purring? When the test fails, of course, the debugging challenge begins: is the problem
with the test, the touch, or the cat?
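For anybody who hasn’t seen the red/green cycle in action, here’s roughly what that joke might look like as a runnable Ruby test, using Minitest (this isn’t Three Rings code, and the Cat class exists purely for the gag):

```ruby
require "minitest/autorun"

# The cat under test. In proper TDD fashion the assertion below would be written
# first and allowed to fail (no Cat class, let alone a purring one), and this is
# just enough code to turn the test green.
class Cat
  def touch
    @purring = true
  end

  def purring?
    !!@purring
  end
end

class CatTest < Minitest::Test
  def test_touching_the_cat_makes_it_purr
    cat = Cat.new
    cat.touch
    assert cat.purring?, "is the problem with the test, the touch, or the cat?"
  end
end
```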
We were quite pleased to discover that all three of them took a particular interest early on in different parts of the system. Of course, we made sure that each got a full and
well-rounded education, but we found that they were all most-interested in different areas of the system (Comms, Stats, Rota, etc.), and different layers of development (database,
business logic, user interface, etc.). It’s nice to see people enthused about the system, and it’s infectious: talking with some of these new developers about what
they’d like to contribute has really helped to inspire me to take a fresh look at some of the bits that I’m responsible for, too.
Chris drip-feeds us fragments of his life in computing and in volunteering; and praises Ruby for being easier, at least, than programming using punchcards.
It was great to be able to do this in person. The Three Rings team – now about a dozen of us in the core team, with several dozen more among our testers – is increasingly geographically
disparate, and rather than face-to-face communication we spend a lot of our time talking to each other via instant messengers, email, and through the comments and commit-messages of our
ticketing and source control systems! But there’s nothing quite like being able to spend a (long, hard) day sat side-by-side with a fellow coder, cracking through some infernal bug or
another and talking about what you’re doing (and what you expect to achieve with it) as you go.
Chris, Mike and Rich discuss some aspect or another of Three Rings development.
I didn’t personally get as much code written as I’d have liked. But I was pleased to have been able to support three new developers, who’ll go on to collectively achieve more than I
ever will. It’s strange to look back at the early 2000s, when it was just me writing Three
Rings (and Kit testing/documenting most of it: or, at least, distracting me with facts about Hawaii while I was trying to write
the original Wiki feature!). Nowadays Three Rings is a bigger (and more-important) system than ever before, supporting tens of thousands of volunteers at hundreds of voluntary
organisations spanning five time zones.
I’ve said before how much
it blows my mind that what began in my bedroom over a decade ago has become so critical, and has done so much good for so many people. And it’s still true today: every time I think
about it, it sends my head spinning. If that’s what it’s done in the last ten years, what’ll it do in the next ten?