Not convinced that my route through the 2m nettles to reach the host was optimal, but I made it through to this decent sized cache relatively unscathed. TFTC!
Found without difficulty but at W 001°14’16.7886″, N 51°40’12.5960″: the cache description provides a good clue though so it was easy to get back on track. TFTC
Skipped this one as I’m still playing catchup to the rest of my group and wasn’t interested in a long search. Many previous finders note that the coordinates are significantly off but I
couldn’t see anybody posting alternates. Maybe next time I’m down here!
After a brief overshoot – too excited to finally be catching up to Ruth and the rest of my squad! – doubled back to find this easy location.
Cache was lying on the floor, which I assume isn’t the right hiding place, so I returned it to the V. While running from the last cache I dropped my writing implement somewhere, so
have photographed the (almost pristine!) logbook as proof that I actually found it. This has been my favourite of this mini-series so far; FP
awarded for the enjoyable container theming if nothing else!
No difficulty with this QEF; title clue plus a little botany had me stop in exactly the right place. And the tall nettles weren’t any risk to me
with long trousers! TFTC.
No luck here. Couldn’t spare more than a little while to hunt without Ruth and the rest of the Thames Path Source-to-Mouth party getting too
far ahead but spent that time getting thoroughly nettled. They’re fierce around here! Maybe another time.
Found after a brief search while accompanying my partner Ruth on the latest leg of her effort to (by instalments) walk the entire length of
the Thames from its source to its mouth. Initially looked in the wrong place but once I was willing to brave the nettles and hack my way off the footpath the cache location became
obvious. A bit damp, but serviceable. TFTC!
Another QEF down this stretch of the Thames Path. Having to jog to catch up with my party now! Cache container camouflage slightly damaged but still
plenty good for concealment so long as it’s stored the right way up! TFTC, nice one!
I used to pay for VaultPress. Nowadays I get it for free as one of the many awesome perks of my job. But I’d probably still pay for it
because it’s a lifesaver.
Like my occasional video content, this isn’t designed to replace any of my blogging: it’s just a different medium for those that might prefer it.
For some stories, I guess that audio might be a better way to find out what I’ve been thinking about. Just like how the vlog version of my post about
my favourite video game Easter Egg might be preferable because video as a medium is better suited to demonstrating a computer game, perhaps
audio’s the right medium for some of the things I write about, too?
But as often as not, it’s just a continuation of my efforts to explore different media over which a WordPress blog can be delivered2.
Also, y’know, my ongoing effort to do what I’m bad at in the hope that I might get better at a wider diversity of skills.
How?
Let’s start by understanding what a “podcast” actually is. It is, in essence, just an RSS feed (something you might have heard me talk about before…) with audio enclosures – basically, “attachments” – on each item. The idea was spearheaded by Dave Winer back in 2001 as a
way of subscribing to rich media like audio or videos in such a way that slow Internet connections could pre-download content so you didn’t have to wait for it to buffer.3
Here’s what I had to do to add podcasting capability to my theme:
The tag
I use a post tag, dancast, to represent posts with accompanying podcast content4.
This way, I can add all the podcast-specific metadata only if the user requests the feed of that tag, and leave my regular feeds untampered-with. This means that you don’t
get the podcast enclosures in the regular subscription; that might not be what everybody would want, but it suits me to serve podcasts only to people who explicitly ask for
them.
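For illustration, a check along those lines might look something like this minimal sketch (the helper name is my own, not lifted from the actual open-sourced code):

```php
<?php
// Sketch: true only when the request is for the feed of the nominated tag,
// e.g. /tag/dancast/feed/ – every other feed is left untouched.
function is_podcast_feed(): bool {
	return is_feed() && is_tag( 'dancast' );
}
```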
Okay, onto the code (which I’ve open-sourced over here). I use a series of standard WordPress hooks to add the functionality I need; there’s a minimal sketch of the approach after the list below. The important bits are:
rss2_item – to add the <enclosure>, <itunes:duration>, <itunes:image>, and
<itunes:explicit> elements to the feed, when requesting a feed with my nominated tag. Only <enclosure> is strictly required, but appeasing Apple
Podcasts is worthwhile too. These are lifted directly from the post metadata.
the_excerpt_rss – I have another piece of post metadata in which I can add a description of the podcast (in practice, a list of chapter times); this hook
swaps out the existing excerpt for my custom one in podcast feeds.
rss_enclosure – some podcast syndication platforms and players can’t cope with RSS feeds in which an item has multiple enclosures, so as a
safety precaution I strip out any enclosures that WordPress has already added (e.g. the featured image).
the_content_feed – my RSS feed usually contains the full text of every post, because I don’t like feeds that try to force you to go to the
original web page5
and I don’t want to impose that on others. But for the podcast feed, the text content of the post is somewhat redundant so I drop it.
rss2_ns – of critical importance of course is adding the relevant namespaces to your feed’s root <rss> element. I use the itunes namespace, which provides the widest compatibility for specifying metadata, but I also use the
newer podcast namespace, which has growing compatibility and provides some modern features, most of which I don’t
use except specifying a license. There’s no harm in supporting both.
rss2_head – here’s where I put in the metadata for the podcast as a whole: license, category, type, and so on. Some of these fields are
effectively essential for best support.
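To give a flavour of how those hooks fit together, here’s a hedged sketch of three of them, reusing the is_podcast_feed() helper from above. It’s not the actual open-sourced code, and the post meta keys (podcast_url, podcast_bytes, podcast_duration) are illustrative placeholders:

```php
<?php
// Declare the itunes and podcast namespaces on the <rss> element.
add_action( 'rss2_ns', function () {
	echo 'xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" ';
	echo 'xmlns:podcast="https://podcastindex.org/namespace/1.0" ';
} );

// Add the audio enclosure and iTunes metadata to each item, but only in
// the podcast tag's feed.
add_action( 'rss2_item', function () {
	if ( ! is_podcast_feed() ) return;
	$id       = get_the_ID();
	$url      = get_post_meta( $id, 'podcast_url', true );      // placeholder meta key
	$bytes    = get_post_meta( $id, 'podcast_bytes', true );    // placeholder meta key
	$duration = get_post_meta( $id, 'podcast_duration', true ); // placeholder meta key
	if ( ! $url ) return;
	printf( "<enclosure url=\"%s\" length=\"%s\" type=\"audio/mpeg\" />\n",
		esc_url( $url ), esc_attr( $bytes ) );
	printf( "<itunes:duration>%s</itunes:duration>\n", esc_html( $duration ) );
} );

// Strip any enclosures WordPress added by itself (e.g. the featured image),
// because some players choke on items with multiple enclosures.
add_filter( 'rss_enclosure', function ( $enclosure ) {
	return is_podcast_feed() ? '' : $enclosure;
} );
```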
You’re welcome, of course, to lift any or all of the code for your own purposes. WordPress makes a perfectly reasonable platform for podcasting-alongside-blogging, in my experience.
What?
Finally, there’s the question of what to podcast about.
My intention is to use podcasting as an alternative medium to my traditional blog posts. But not every blog post is suitable for conversion into a podcast! Ones that rely on images
(like my post about dithering) aren’t a great choice. Ones that have lots of code that you might like to copy-and-paste are especially unsuitable.
Also: sometimes I just can’t be bothered. It’s already some level of effort to write a blog post; it’s like an extra 25% effort on top of that to record, edit, and upload a podcast
version of it.
That’s not nothing, so I’ve tended to reserve podcasts for blog posts that I think have a sort-of eccentric “general interest” vibe to them. When I learn something new and feel the need
to write a thousand words about it… that’s the kind of content that makes it into a podcast episode.
Which is why I’ve been calling the endeavour “a podcast nobody asked for, about things only Dan Q cares about”. I’m capable of getting nerdsniped
easily and can quickly find my way down a rabbit hole of learning. My podcast is, I guess, just a way of sharing my passion for trivial deep dives with the rest of the world.
My episodes are probably shorter than most podcasts: my longest so far is around fifteen minutes, but my shortest is only two and a half minutes and most are about seven. They’re meant
to be a bite-size alternative to reading a post for people who prefer to put things in their ears than into their eyes.
Anyway: if you’re not listening already, you can subscribe from here or in your favourite podcasting app. Or you can just follow my blog as normal
and look for a streamable copy of podcasts at the top of selected posts (like this one!).
2 As well as Web-based non-textual content like audio (podcasts) and video (vlogs), my blog is wholly or partially available over a variety of more-exotic protocols: did you find me yet on Gemini (gemini://danq.me/), Spartan (spartan://danq.me/), Gopher (gopher://danq.me/), and even Finger
(finger://danq.me/, or run e.g. finger blog@danq.me from your command line)? Most of these are powered by my very own tool CapsulePress, and I’m itching to try a few more… how about a WordPress blog that’s accessible over FTP, NNTP, or DNS? I’m not even kidding when I say
I’ve got ideas for these…
3 Nowadays, we have specialised media decoder co-processors which reduce the size of media
files. But more-importantly, today’s high-speed always-on Internet connections mean that you probably rarely need to make a conscious choice between streaming or downloading.
4 I actually intended to change the tag to podcast when I went live,
but then I forgot, and now I can’t be bothered to change it. It’s only for my convenience, after all!
Last month I implemented an alternative mode to view this website “like it’s 1999”, complete with cursor trails, 88×31 buttons, tables for
layout1,
tiled backgrounds, and even a (fake) hit counter.
One thing I’d have liked to do for 1999 Mode, but didn’t get around to, was to make the images look like it was the 90s, too.
Back then, many Web users only had graphics hardware capable of displaying 256 distinct colours. Across different platforms and operating systems, they weren’t even necessarily
the same 256 colours2!
But the early Web agreed on a 216-colour palette that all those 8-bit systems could at least approximate pretty well.
I had an idea that I could make my images look “216-colour”-ish by using CSS to apply an SVG filter, but didn’t implement it.
But Spencer, a long-running source of excellent blog comments, stepped up and wrote an SVG
filter for me! I’ve tweaked 1999 Mode already to use it… and I’ve just got to say it’s excellent: huge thanks, Spencer!
The filter coerces colours to their nearest colour in the “Web safe” palette, resulting in things like this:
Plenty of pictures genuinely looked like that on the Web of the 1990s, especially if you happened to be using a computer only capable of 8-bit colour to view a page built by
somebody who hadn’t realised that not everybody would experience 24-bit colour like they did3.
Dithering
But not all images in the “Web safe” palette looked like this, because savvy web developers knew to dither their images when converting them to a limited palette.
Let’s have another go:
Dithering introduces random noise to media4
in order to reduce the likelihood that a “block” will all be rounded to the same value. Instead, in our picture, a block of what would otherwise be the same colour ends up being rounded
to maybe half a dozen different colours, clustered together such that the ratio in a given part of the picture is, on average, a better approximation of the correct
colour.
The result is analogous to how halftone printing – the aesthetic of old comics and newspapers, with different-sized dots made from
a few colours of ink – produces the illusion of a continuous gradient of colour so long as you look at it from far-enough away.
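To make the mechanics concrete, here’s a rough sketch of one classic approach, Floyd–Steinberg error diffusion, quantising to the Web-safe palette using PHP’s GD extension. It’s my illustration – not Spencer’s filter, and not necessarily what my 90s tools did: each channel is rounded to the nearest multiple of 51 (the Web-safe channel values), and the rounding error is pushed onto neighbouring pixels so that each region averages out to roughly the right colour.

```php
<?php
// Rough sketch: Floyd–Steinberg error diffusion down to the 216-colour
// "Web safe" palette. Web-safe channel values are multiples of 51
// (0x00, 0x33, ... 0xFF).
function web_safe_channel( float $v ): int {
	$v = max( 0.0, min( 255.0, $v ) );      // clamp any accumulated error
	return (int) ( round( $v / 51 ) * 51 ); // nearest Web-safe value
}

function dither_to_web_safe( string $in, string $out ): void {
	$img = imagecreatefromjpeg( $in );
	$w   = imagesx( $img );
	$h   = imagesy( $img );

	// Float buffer of [r, g, b] per pixel, so quantisation errors can accumulate.
	$buf = [];
	for ( $y = 0; $y < $h; $y++ ) {
		for ( $x = 0; $x < $w; $x++ ) {
			$rgb = imagecolorat( $img, $x, $y );
			$buf[ $y ][ $x ] = [ ( $rgb >> 16 ) & 0xFF, ( $rgb >> 8 ) & 0xFF, $rgb & 0xFF ];
		}
	}

	for ( $y = 0; $y < $h; $y++ ) {
		for ( $x = 0; $x < $w; $x++ ) {
			foreach ( [ 0, 1, 2 ] as $c ) {
				$old = $buf[ $y ][ $x ][ $c ];
				$new = web_safe_channel( $old );
				$err = $old - $new;
				$buf[ $y ][ $x ][ $c ] = $new;
				// Push the rounding error onto unvisited neighbours (weights 7/16, 3/16, 5/16, 1/16).
				if ( $x + 1 < $w )                  $buf[ $y ][ $x + 1 ][ $c ]     += $err * 7 / 16;
				if ( $y + 1 < $h && $x > 0 )        $buf[ $y + 1 ][ $x - 1 ][ $c ] += $err * 3 / 16;
				if ( $y + 1 < $h )                  $buf[ $y + 1 ][ $x ][ $c ]     += $err * 5 / 16;
				if ( $y + 1 < $h && $x + 1 < $w )   $buf[ $y + 1 ][ $x + 1 ][ $c ] += $err * 1 / 16;
			}
			[ $r, $g, $b ] = $buf[ $y ][ $x ];
			imagesetpixel( $img, $x, $y, imagecolorallocate( $img, (int) $r, (int) $g, (int) $b ) );
		}
	}
	imagepng( $img, $out );
	// Example use: dither_to_web_safe( 'photo.jpg', 'photo-web-safe.png' );
}
```

The error-diffusion weights are what give dithered images their characteristic speckle: every pixel is slightly wrong, but each neighbourhood is about right on average.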
The other year I read a spectacular article by Surma that explained in a very-approachable way
how and why different dithering algorithms produce the results they do. If you’ve any interest whatsoever in a deep dive or just want to know what blue noise is and why you
should care, I’d highly recommend it.
You used to see digital dithering everywhere, but nowadays it’s so rare that it leaps out as a revolutionary aesthetic when, for example, it gets used in
a video game.
All of which is to say that: I really appreciate Spencer’s work to make my “1999 Mode” impose a 216-colour palette on images. But while it’s closer to the truth, it still doesn’t
quite reflect what my website would’ve looked like in the 1990s because I made extensive use of dithering when I saved my images in Web safe palettes5.
Why did I take the time to dither my images, back in the day? Because doing the hard work once, as a creator of graphical Web pages, saves time and computation (and can look
better!), compared to making every single Web visitor’s browser do it every single time.
Which, now I think about it, is a lesson that’s still true today (I’m talking to you, developers who send a tonne of JavaScript and ask my browser to generate the HTML for you
rather than just sending me the HTML in the first place!).
Footnotes
1 Actually, my “1999 mode” doesn’t use tables for layout; it pretty much only applies a
CSS overlay, but it’s deliberately designed to look a lot like my blog did in 1999, which did use tables for layout. For those too young to remember: back before CSS
gave us the ability to lay out content in diverse ways, it was commonplace to use a table – often with the borders and cell-padding reduced to zero – to achieve things that today
would be simple, like putting a menu down the edge of a page or an image alongside some text content. Using tables for non-tabular data causes problems, though: not only is
it hard to make a usable responsive website with them, it also reduces the control you have over the order of the content, which upsets some kinds of accessibility
technologies. Oh, and it’s semantically-invalid, of course, to describe something as a table if it’s not.
2 Perhaps as few as 22 colours were defined the same across all
widespread colour-capable Web systems. At first that sounds bad. Then you remember that 4-bit (16 colour) palettes used to look perfectly fine in 90s videogames. But then you
realise that the specific 22 “very safe” colours are pretty shit and useless for rendering anything that isn’t composed of black, white, bright red, and maybe one of a few
greeny-yellows. Ugh. For your amusement, here’s a copy of the image rendered using only the “very safe” 22 colours.
3 Spencer’s SVG filter does pretty-much the same thing as a computer might if asked to
render a 24-bit colour image using only 8-bit colour. Simply “rounding” each pixel’s colour to the nearest available colour is a fast operation, even on older hardware and with larger
images.
4 Note that I didn’t say “images”: dithering is also used to produce the same “more
natural” feel for audio, too, when reducing its bitrate (i.e. reducing the number of finite states into which the waveform can be quantised for digitisation), for example.
5 I’m aware that my footnotes are capable of nerdsniping Spencer, so by writing this
there’s a risk that he’ll, y’know, find a way to express a dithering algorithm as an SVG filter too. Which I suspect isn’t possible, but who knows! 😅
We still didn’t feel up to a repeat of the bigger summer party we held the year before last, but we love our Abnib buddies, so put the call out to say: hey, come on over, bring a tent (or be willing to crash on a sofa bed) if you want to stay over; we’ll let the
kids run themselves ragged with a water fight and cricket and football and other garden games, then put them in front of a film or two while we hang out and drink and play board games
or something.
The entire plan was deliberately low-effort. Drinks? We had a local brewery drop us off a couple of kegs, and encouraged people to BYOB. Food? We
threw a stack of pre-assembled snacks onto a table, and later in the day I rotated a dozen or so chilled pizzas through the oven. Entertainments? Give the kids a pile of toys and the
adults one another’s company.
We didn’t even do more than the bare minimum of tidying up the place before people arrived. Washing-up done? No major trip hazards on the floor? That’s plenty good enough!
I found myself recalling our university days, when low-effort ad-hoc socialising seemed… easy. We lived close together and we had uncomplicated schedules, which combined to make it socially-acceptable to “just turn up” into one another’s lives and spaces. Many were the times that people would descend upon Claire’s and my house in anticipation that there’d probably be a film night later, for example1.
I remember one occasion a couple of decades ago, chilling with friends2. Somebody – possibly Liz
– commented that it’d be great if in the years to come our kids would be able to be friends with one another. I was reminded of it when our eldest asked me, of our weekend
guests, “why are all of your friends’ children so great?”
What pleased me in particular was how relatively-effortless it was for us all to slip back into casually spending time together. With a group of folks who have, for the most part, all
known each other for over two decades, even not seeing one another in-person for a couple of years didn’t make a significant dent on our ability to find joy in each other’s company.
Plus, being composed of such laid-back folks, it didn’t feel awkward that we had, let’s face it, half-arsed the party. Minimal effort was the order of the day, but the flipside of that
was that the value-for-effort coefficient was pretty-well optimised3.
A delightful weekend that I was glad to be part of.
3 I’m pretty sure that if I’d have used the term “value-for-effort coefficient”
at the party, though, then it’d have immediately sucked 100% of the fun out of the room.
Runners will talk about how much they enjoy the feeling of wind in their hair. Boxers won’t shut up about the grace and art of their profession. Even soccer players can be moved to
wax poetical about how enjoyable it is to be part of a truly great game.
But all golfers ever talk about is how little golf they hope to play. A typical pre-match interview will go something like this:
Some guy in a blazer: Great to have you here with us, what are your goals for the first round this morning.
Golfer: Well today I hope to play as little golf as possible. Mathematically speaking the course could be done in 18 shots but that is probably physically
impossible. But ideally as close to 18 as I can get. Any additional golf is bad.
Blazer: What is your strategy for avoiding the golf.
Golfer: I have a guy who follows me around to help share the burden of all this damn golf. He is going to help me out by suggesting ways to avoid playing any more
golf than we have to. Of course, I pay him but his real motivation is to bring this sorry excuse for a pastime to the speediest conclusion.
Blazer: Better you than me, but good luck out there.
Why must a blog comment be text? Why could it not be… a drawing?1
I started hacking about and playing with a few ideas and now, on selected posts including this one, you can draw me a comment instead of typing one.
I opened the feature, experimentally (in a post available only to RSS subscribers2) the
other week, but now you get a go! Also, I’ve open-sourced the whole thing, in case you want to pick it apart.
What are you waiting for: scroll down, and draw me a comment!
Footnotes
1 I totally know the reasons that a blog comment shouldn’t be a drawing; I’m not
completely oblivious. Firstly, it’s less-expressive: words are versatile and you can do a lot with them. Secondly, it’s higher-bandwidth: images take up more space, take longer to
transmit, and that effect compounds when – like me – you’re tracking animation data too. But the single biggest reason, and I can’t stress this enough, is… the
penises. If you invite people to draw pictures on your blog, you’re gonna see a lot of penises. Short penises, long penises, fat penises, thin penises. Penises of every shape
and size. Some erect and some flaccid. Some intact and some circumcised. Some with hairy balls and some shaved. Many of them urinating or ejaculating. Maybe even a few with smiley
faces. And short of some kind of image-categorisation AI thing, you can’t realistically run an anti-spam tool to detect hand-drawn penises.
2 I’ve copied a few of my favourites of their drawings below. Don’t forget to subscribe if you want early access to any weird shit I make.