Reply to: Mobile writing, part 2

Clayton Errington wrote:

After a first attempt at mobile blogging, I found a process that works better for my work flow.

Throughout the day, I have ideas and need to write them down. This could be a coding process, a thought to remember, the start of a blog post, and more. I love a good notes app. I’ve gone through quite a few and use a few for different things. Lately it’s been Simplenote.

As I took part in Bloganuary and began what’ll hopefully become a fifth consecutive year of 100 Days To Offload, I started to hate my approach to mobile blogging and seek something better, too. My blog’s on WordPress, but it’s so highly-customised that I can’t meaningfully use any of the standard apps, and I find the mobile interface too slow and clunky to use over anything less than a great Internet connection… which – living out in the sticks – I don’t routinely have when I’m out and about. So my blogging almost-exclusively takes place at my desktop or laptop.

But your experience of using a notetaking app is reasonably inspiring. I’m almost never away from a “real” computer for more than a day, so there’s no reason I can’t simply write into such an app, let it sync, and copy-paste into a blog post (and make any tweaks) when I’m sitting at a proper keyboard! I’m using Obsidian for notetaking, and it Syncthing’s to my other computers, so I should absolutely be leveraging that. I already have an Obsidian folder full of “blog post ideas”… why shouldn’t I just write blog posts there?

Thanks for the inspiration!

Reply to The ethics of syndicating comments using WebMentions

In his blog post “The ethics of syndicating comments using WebMentions”, Terence Eden said:

I want to see what people are writing in public about my posts. I also want to direct people to the conversations which are happening elsewhere on the web. But people – quite rightly – might not want their content permanently stored by my site.

So I think I have a few options.

  1. Do nothing. My site; my rules. If you don’t want me to grab your hot takes, don’t post them in public. (Feels a bit rude, TBQH.)
  2. Be reactive. If someone asks me to remove their content, do so. (But, of course, how will they know I’ve made a copy?)
  3. Stop syndicating comments. (I don’t wanna!)
  4. Replace the verbatim comments with a link saying “Fred mentioned this article on Twitter”. (A bit of a disruptive experience for readers.)
  5. Use oEmbed to capture the user’s comment and dynamically load it from the 3rd party site. That would update automatically if the user changes their name or deleted the comment. (A massive faff to set up.)

Terence describes a problem that I’ve wrestled with myself. If somebody comments directly on my blog using the form at the bottom of a post, that’s a pretty strong indicator of them giving their consent for their comment to be published at the bottom of that post (at my discretion). If somebody publicly replies somewhere my post is syndicated, that’s less-obvious, but still pretty clear. If somebody merely mentions my post publicly, writing their own post and linking to mine… that’s a real fuzzy area.

I take a minimal approach: capturing their full content only if it’s short, and otherwise extracting a snippet that contains the bit that mentioned my content. I think that works great. But Terence points out an important follow-up: what if the commenter deletes that content?

My approach so far has always been a reactive one – the second in Terence’s list – and I think it’s a morally-acceptable stance for a personal blogger. But I’m not sure it scales. I find myself asking: what if a news outlet did this, taking my self-published feedback to their story and publishing it on their site, even if I later amended, retracted, or deleted it on my own? If somebody’s making money out of my content, that feels different: I’ve always been clear that what I write on my blog is permissively-licensed, but that permissiveness is conditional on my content not being used commercially.

Perhaps down the line this can be solved technologically: something machine-readable akin to the <link rel="license" ...> tag could state an author’s preference for how their content is syndicated by third parties they’ve mentioned, answering questions like:

  • Can you quote me, or just link to me? Who do these rules apply to? (Should we be attaching metadata to individual links?)
  • Should you inform me that you’ve done so, and if so: how (WebMention, etc.)?
  • If you (or your site) observe that my content has disappeared or changed for an extended time, should that be taken as revocation of consent to syndicate it?
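Nothing machine-readable like that exists yet, so any syntax is pure speculation on my part: every rel value and attribute in this sketch is hypothetical, but it gives a flavour of what such metadata might look like:

<!-- Hypothetical site-wide policy, discoverable from a page's <head>:
     "if I mention you, here's how you may syndicate what I wrote" -->
<link rel="syndication-policy" href="/syndication-policy">

<!-- Hypothetical per-link metadata, for rules that apply to one mention only: -->
<a href="https://example.com/your-post"
   data-syndication="snippet-only"
   data-syndication-notify="webmention">your post</a>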

Right now, the relevant technologies are not well-established enough to even begin this kind of work, but if a modern interconnected federated web of personal websites takes off, it’s the kind of question we might one day have to answer.

For now my gut feeling is that option #2 (reactive moderation of syndicated comments) is ethically-sufficient for personal websites. But I’ll be watching the feedback Terence (who probably gets many more readers than I) receives in case my gut doesn’t represent the majority!

Reply to Decentralization and verification

In “Decentralization and verification”, Derek Kedziora said:

…Mastodon by its very nature as a decentralized service can’t verify accounts.

We’d still need some trusted third party to do offline verifications and host them in a centralized repository.

Let’s not sell Mastodon short here. The service you compare it to – Twitter – solves this problem… but only if you trust Twitter as an authority on the identity of people. Mastodon also solves the problem, but it puts the trust in a different place: domain names and account pages.

If you want to “verify” yourself on Mastodon, you can use a rel="me" link from a page or domain you control. It looks like this:

Screenshot showing @dan@danq.me's Mastodon account as the verified owner of website DanQ.me.
The tick is green, not blue, but I can’t imagine anybody complains.
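Under the hood, the verification is nothing fancier than a pair of links, roughly like this (the instance name here is illustrative):

<!-- Somewhere on the website you want to claim: -->
<a rel="me" href="https://an.instance.example/@dan">Me on Mastodon</a>

<!-- And in your Mastodon profile's metadata, the link back: -->
<a rel="me" href="https://danq.me/">danq.me</a>

When you add a link to your profile, your Mastodon server fetches the page and, if it finds a rel="me" link pointing back at your profile, awards the tick.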

A great thing about this form of verification is you don’t have to trust my server (and you probably shouldn’t): you can check it for yourself to ensure that the listed website really does state that this is the official Mastodon account of “me”.

You can argue this just moves the problem further down the road – instead of trusting a corporation that has shown it’s not above selling the rights to your identity, you have to trust that a website is legitimate – and you’d be right. But in my case, for example, you can use years of history, archive.org, cross-links etc. to verify that the domain is “me”, and from that you can confirm the legitimacy of my Mastodon account. Anybody who can spoof multiple decades of my history and maintain that lie for a decade of independent web archiving probably deserves to be able to pretend to be me!

There are lots of other distributed methods too: web-of-trust systems, signed keys, even SSL certificates would be a potential solution. Looking again at my profile, you’ll see that I list the fingerprint of my GPG key, which you can compare to ones in public directories (which are co-signed by other people). This way you’d know that if you sent an encrypted DM to my Mastodon inbox it could only be decrypted if I were legitimately me. Or I could post a message signed with that key to prove my identity, insofar as my web-of-trust meets your satisfaction.

If gov.uk’s page about 10 Downing Street had profile pages for cabinet members with rel="me" links to their social profiles I’d be more-likely to trust the legitimacy of those social profiles than I would if they had a centralised verification such as a Twitter “blue tick”.

Fediverse identity verification isn’t as hard a problem to solve as Derek implies, and indeed it’s already partially-solved. Not having a single point of authority is less convenient, sure, but it also protects you from some of the more-insidious identity problems that systems like Twitter’s have.


How to date a recording using background electrical noise

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Matching a target ENF series with a section of a reference series

We’re going to use ENF matching to answer the question “here’s a recording, when was it (probably) taken?” I say “probably” because all that ENF matching can give us is a statistical best guess, not a guarantee. Mains hum isn’t always present on recordings, and even when it is, our target recording’s ENF can still match with the wrong section of the reference database by statistical misfortune.

Still, even though all ENF matching gives us is a guess, it’s usually a good one. The longer the recording, the more reliable the estimate; in the academic papers that I’ve read 10 minutes is typically given as a lower bound for getting a decent match.

To make our guess, we’ll need to:

  1. Extract the target recording’s ENF values over time
  2. Find a database of reference ENF values, taken directly from the electrical grid serving the area where the recording was made
  3. Find the section of the reference ENF series that best matches the target. This section is our best guess for when the target recording was taken

We’ll start at the top.

About a year after Tom Scott did a video summarising how deviation over time (and location!) of the background electrical “hum” produced by AC power can act as a forensic marker on audio recordings, Robert Heaton’s produced an excellent deep-dive into how you can play with it for yourself, including some pretty neat code.

I remember first learning about this technique a few years ago during my master’s in digital forensics, and my first thought was about how it might be effectively faked. Faking the time of recording of some audio after the fact (as well as removing the markers) is challenging, mostly because you’ve got to ensure you pick up on the harmonics of the frequencies, but it seems to me that faking it at time-of-recording ought to be reasonably easy: at least, so long as you’re already equipped with a mechanism to protect against recording legitimate electrical hum (isolated quiet-room, etc.):

Taking a known historical hum-pattern, it ought to be reasonably easy to produce a DC-to-AC converter (obviously you want to be running off a DC circuit to begin with, e.g. from batteries, so you don’t pick up legitimate hum) that regulates the hum frequency in a way that matches the historical pattern. Sure, you could simply produce the correct “noise”, but doing it this way helps ensure that the noise behaves appropriately under the widest range of conditions. I almost want to build such a device, perhaps out of an existing portable transformer (they come in big battery packs nowadays, providing a two-for-one!) but of course: who has the time? Plus, if you’d ever seen my soldering skills you’d know why I shouldn’t be allowed to work on anything like this.
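As an aside: if you’re wondering what the matching itself boils down to, it can be as simple as a sliding-window comparison. Here’s a toy sketch – mine, not Robert’s; his post has real, battle-tested code – which assumes you’ve already extracted one ENF reading per second from both the recording and the grid’s reference log:

// Toy sketch: slide the target ENF series along the reference series
// and find the offset with the smallest total squared deviation.
function bestMatchOffset(target, reference) {
  let best = { offset: 0, error: Infinity };
  for (let offset = 0; offset + target.length <= reference.length; offset++) {
    let error = 0;
    for (let i = 0; i < target.length; i++) {
      error += (target[i] - reference[offset + i]) ** 2;
    }
    if (error < best.error) best = { offset, error };
  }
  return best.offset; // seconds into the reference: our guess at when the recording began
}

Real implementations are cleverer than this – normalising the series, tolerating gaps and noise – but that’s the core idea.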


Reply to OpenID for WP

This weekend I was experimentally reimplementing how my blog displays comments. For testing I needed to find an old post with both trackbacks and pingbacks on it. I found my post that you linked, here, and was delighted to be reminded that despite both of our blogs changing domain name (from photomatt.net to ma.tt and from blog.scatmania.org to danq.me, respectively), all the links back and forth still work perfectly because clearly we share an appropriate dedication to the principle that Cool URIs Don’t Change, and set up our redirects accordingly. 🙌

Incidentally, this was about the point in time at which I first thought to myself “hey, I like what Matt’s doing with this Automattic thing; I should work there someday”. It took me like a decade to a decade-and-a-half to get around to applying, though… 😅

Anyway: thanks for keeping your URIs cool so I could enjoy this trip down memory lane (and debug an experimental wp_list_comments callback!).

Reply to An Accidentally ‘Anti-January’ January

Siobhan said:

I fell off the blogging bandwagon just as everyone else was hopping onto it.

I was just thinking the same! It felt like everybody and their dog did Bloganuary last month, meanwhile I went in the opposite direction!

Historically, there’s been an annual dip in my blogging around February/March (followed by a summer surge!), but in recent years it feels like that hiatus has shifted to January (I haven’t run the numbers yet to be sure, though).

I don’t think that’s necessarily a problem, for me at least. I write for myself first, others afterwards, and so it follows that if I blog when it feels right then an ebb-and-flow to my frequency ought to be a natural consequence. But it still interests me that I have this regular dip, and I wonder if it affects the quality of my writing in any way. I feel the pressure, for example, for post-hiatus blogging to have more impact the longer it’s been since I last posted! Like: “it’s been so long, the next thing I publish has to be awesome, right?”, as if my half-dozen regular readers are under the assumption that I’m always cooking-up something and the longer it’s been, the better it’s going to be.

Reply to “I was permanently banned by Facebook”

Donncha Ó Caoimh said:

My Facebook account was permanently banned on Wednesday along with all the people who take care of the Cork Skeptics page. We’re still not sure why but it might have something to do with the Facebook algorithm used to detect far-right conspiracy groups.

If you have a Facebook account you should download your information too because it could happen to you too, even though you did nothing wrong. Go here and click the “Create File” button now.

Yeah, I know you won’t do it but you really should.

Great advice.

After I got banned from Facebook in 2011 (for using a “fake name”, which is actually my real name) I took a similar line of thinking: I can’t trust Facebook (or Twitter, or Instagram, or whoever else) to be responsible custodians of my content, so I shan’t. Now, virtually all content I create is hosted on my WordPress-powered blog, at my own domain, first and foremost… and syndicated copies may appear on various social media.

In a very few instances I go the other way around, producing content in silos and then copying it back to my blog: e.g. my geocaching/geohashing expeditions are posted first to their respective sites (because it’s easiest and most-practical to do that using their apps, especially “in the field”), but then they get imported into my blog using a custom plugin. If any of these sites closes, deletes my data, adds paid tiers I’m not happy with, or just bans me from my own account… I’m still set.

Backing up all your social content is a good strategy. Owning it all to begin with is an even better one, IMHO. See also: Indieweb.

Reply to @ADumbGreyHam

Adam Graham tweeted:

@scatmandan I’ve just been in a meeting with with some people who were saying some very positive things about the legacy of work your dad (I think) did with buses in the North East

That seems likely. Conversely: if people are still talking about my work 7½ years after I die it’ll probably be to say “who wrote this bit of legacy code this way and what were they thinking?”

CC @TASPartnership @bornvulcan @TheGodzillaGirl

Reply to: Indelible

Bryn wrote:

The surgery had happened at a weirdly transitional point in my life. Only a few days earlier I’d performed improv on stage for the first time (see “Yes And” and “Memory Lane”), I’d changed jobs and was contemplating another move. The scar from the surgery seemed to be part of that too, and I had an idle thought to have a tattoo done on the scar as a permanent reminder to myself not to let work swallow my life up again.

Every time I consider getting a tattoo, I’m stopped by the fact that I’m sufficiently indecisive about what I’d get and where. Somehow, a tattoo would represent a sort of irreversible permanence that I feel is difficult for me to commit to. (I fully accept that this may seem a strange sentiment to many, coming from a sterilised-without-breeding man – I didn’t say I was consistent!)

But to add personalisation to a scar, especially one with a personal meaning and message: that I can really get behind. Unfortunately my only likely-permanent scars don’t have any messages behind them more-significant than, for example, “don’t let Dan play with knives”. Is it possible to get a tattoo on top of an emotional scar instead?

Reply to: A modern font loading strategy with the vanilla JS FontFaceSet.load() method

Chris Ferdinandi‘s daily tip for yesterday addressed a problem familiar to Web developers using custom fonts (i.e. basically all of them):

In many browsers, if a custom typeface is declared but hasn’t finished downloading and parsing yet, browsers will leave space for the text but not render it until the file is ready.

This is often called a Flash Of Invisible Text (or FOIT).

In a now slightly outdated article, Ilya Grigorik, a web performance engineer at Google, reports:

29% of page loads on Chrome for Android displayed blank text: the user agent knew the text it needed to paint, but was blocked from doing so due to the unavailable font resource. In the median case the blank text time was ~350 ms, ~750 ms for the 75th percentile, and a scary ~2300 ms for the 95th.

To make matters worse, some mobile browsers never timeout a failed font file, and therefore never show text in a fallback typeface if the custom one fails to load. You get nothing at all.

Let’s talk about how to fix that.

Chris is right…

He’s right that the FOIT is annoying, and he’s right that for most text (and especially body text) the best result would be if a fallback system font was used immediately and swapped-out for the designer’s preferred font as soon as it becomes available: this maximises usability, especially on slower devices and connections. His solution is this:

  1. Set the font to a fallback font initially.
  2. Set the font to the preferred font once a CSS class is applied to a root element.
  3. Use Javascript to apply that CSS class when FontFaceSet.load() indicates that the font is available, and (via a cookie) for as long as the font file is expected to remain in the browser cache.
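In outline – and this is my sketch of the technique, not Chris’s exact code – it looks something like this:

// A sketch of the approach described above (not Chris's exact code):
if ('fonts' in document) {
  if (document.cookie.indexOf('fontsLoaded=1') !== -1) {
    // We're betting that the font file is still in the browser cache:
    document.documentElement.classList.add('fonts-loaded');
  } else {
    document.fonts.load('1em Raleway').then(function () {
      document.documentElement.classList.add('fonts-loaded');
      // The cookie's lifetime stands in for the font file's cache lifetime:
      document.cookie = 'fontsLoaded=1; max-age=604800';
    });
  }
}

…paired with CSS that only applies the preferred font-family when the fonts-loaded class is present on the root element.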

This approach is not without its problems. It requires Javascript (users for whom Javascript fails for some reason won’t see the font at all, but may still have to download the font file!), it conflates cookie lifetime with cache lifetime (the two can be cleared independently, cookies can sometimes be synchronised across devices that don’t necessarily share caches, etc.), and it uses Javascript techniques that don’t work in some browsers: Edge and Internet Explorer are both capable of showing custom web fonts, but both will use the fallback font unless either (a) further Javascript is added (which Chris doesn’t supply) or (b) browser detection and/or conditional comments are used to trigger different behaviour in these browsers (which is icky).

…but he’s also wrong…

If only there was a better way to prevent the FOIT. One which degrades gracefully in older browsers, doesn’t require Javascript, doesn’t make assumptions about user cookie/cache configuration, and ideally involves a lot less code. It turns out, there is!

The font-display CSS directive exists to solve this exact issue [MDN]. Here’s what it looks like being used to solve the problem Chris presents (example taken from my blog’s CSS!):

@font-face{
  font-family:"Raleway";
  font-style:normal;
  font-weight:400;
  src: local("Raleway"),
       local("Raleway-Regular"),
       url(/wp-content/themes/q18/fonts/raleway-v11-latin-regular.woff2) format("woff2"),
       url(/wp-content/themes/q18/fonts/raleway-v11-latin-regular.woff) format("woff");
  font-display:swap;
}

Setting font-display: swap in the @font-face block tells the browser to use fallback fonts in place of this font while it loads. That’s probably exactly what you want for text fonts and especially body text; it means that the user sees the text as soon as possible and it’s swapped-out for the preferred font the moment it becomes available: no Javascript necessary! Conversely, font-display: block is a better choice for icon fonts where you want to force the browser to wait as long as possible for the font file to load (because any content rendered using it makes no sense otherwise).
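For an icon font, that might look like this (a purely illustrative example – the font name and path are made up):

@font-face{
  font-family:"MyIcons";
  src: url(/fonts/my-icons.woff2) format("woff2");
  font-display:block; /* hold out for the real glyphs: fallback characters would be gibberish */
}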

font-display works out-of-the-box with Chrome, Firefox, and Safari and with the next version of Edge; older versions of Edge and Internet Explorer will simply fall-back to their default behaviour (FOIT where-necessary) – this is a progressive enhancement technique. But instead of a couple of dozen lines of Javascript, it’s a single line of CSS.

The only downside is that Google Web Fonts won’t add this directive, so you’ll need to self-host your font files (which is really easy, by the way: there’s a tool that’ll show you how). You should consider doing this anyway, of course: CDNs introduce a number of problems and no longer provide the relative performance benefits they used to. So self-host your fonts, add font-display: swap, and enjoy the most-lightweight and well-standardised approach possible to combatting the FOIT.

Goodbye Google Analytics, Hello Fathom

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Big news! This site is no longer using Google Analytics and I’ve switched to a self-hosted version of brand new analytics product Fathom.

Fathom Analytics dashboard

 

Fathom is very simple. It only tracks 4 things: Unique Visitors, Page Views, Time on Site, and Bounce Rate. It shows me a chart of page views and visitors and then gives me a break down of referrers and top performing content. That’s it. And to be quite honest, that’s about all I need from my blog analytics.

You know what, Dave: me too! I’ve been running Google Analytics since forever and Piwik/Matomo (in parallel with it) for about a year and honestly: I get more than enough of what I need from the latter. So you’ve inspired me to cut the line with Google: after all, all I was doing was selling them my friends’ data in exchange for some analytics I wasn’t really paying attention to… and I’d frankly rather not.

So: for the first time in a decade or so, there’s no Google Analytics on this site. Woop!

Update 2023-12-13: I eventually went further still and dropped all analytics, even self-hosted variants, and it feels great.

Reply #12885

a kind of synth bagpipe tweeted:

“the local part of my email address is just the letter ‘d’ by itself, which means microsoft has imposed the interesting restriction that my randomly generated password cannot contain the letter ‘d’”

@notrevenant @PWtoostrong you wouldn’t believe how often I’m told I’m not allowed to use the letter “Q” in passwords, because “my password mustn’t contain my surname”. O_o

Reply to Hardware Issue – when did hard drive space get like this?

Nicholas Avenell (Aquarionics) wrote:

(My first hard drive for the Amiga 600 was second hand from my dad’s old laptop. It was SIXTY MEGABYTES. It held DOZENS of games. I would need over EIGHT HUNDRED of those drives to hold a 50Gb World of Warcraft install).

I remember my first hard drive. It was 40MB, and that felt flipping MASSIVE because I’d previously, like most people, been using floppy disks of no larger than 1.44MB. My second hard drive was 105MB and it felt like a huge step-up; I ripped my first MP3s onto that drive, and didn’t care for a moment that they each consumed 2-3% of the available space (and took about 15 minutes each to encode).

Nowadays I look at my general-purpose home desktop’s 12TB RAID array and I think to myself… yeah, but it’s over half full… probably time to plan for the next upgrade. What happened‽ Somewhere along the line hard drive space became like mobile phone battery level before it: something where you start to worry if you have less than half left. I don’t know how we got here and I’m not sure I’m happy about it, but suffice to say: technology today is nuts.

What is a “Bag of Choppy”?

dobegood posted to Reddit:

My mum just used the phrase “we’d be in for a bucket of custard.” I’m not sure if this is a real phrase or she’s having a stroke. Please advise.

My mother once described her new cat, which was very skinny when she got it, as looking “like a bag of choppy”. I asked her what “choppy” was and she admitted that she didn’t actually know – all she knew is that her mother used to describe things that looked like that cat did as looking “like a bag of choppy”. We tried to look it up or work it out but we couldn’t get to the bottom of it – what is “choppy” except for something that comes in a bag and looks like a scrawny cat? – and so we decided to ask her mother, who was then still alive.

My gran was a proper Hartlepudlian, the kind that you could genuinely imagine hanging a monkey, and she was full of old phrases that defied definition. If you ate with your mouth open she’d tell you off for “clacking”, babies were “baens”, and we were always told not to sit on cold doorsteps or else we’d get “chin cough” (I’ve no idea to this day what chin cough is; all I know about it is the mechanism of transmission, and I’m skeptical of that!). Anyway, on our next visit to the North East we resolved to ask my grandmother what exactly was a “bag of choppy”.

Turns out she didn’t know either; it was just a phrase her mother had used.

tl;dr: what is a “bag of choppy”?