Reply to short note on emoji text alternative variations

In an article about the accessibility of emoji characters to screen readers, Steve Faulkner said:

Note that there are differences in how they are described in some cases:

  • “grinning face” is also “beaming face”
  • “beaming face” is also a “smiling face”
  • “open mouth” is described by JAWS/Narrator but not by NVDA/VoiceOver
  • “big eyes” are described by NVDA/VoiceOver but not by JAWS/Narrator
  • “cold sweat” is “sweat” and also “sweat drop”

The differences don’t matter to me (but I am just one and not the intended consumer), as I usually experience just the symbol. Reading the text descriptions is useful though as quite often I have no idea what the symbols are meant to represent. It is also true that emojis take on different meanings in different contexts and to different people. For example I thought 🤙 meant “no worries” but its description is “call me hand”, what do I know 🤷

Steve Faulkner

What Steve observes is representative of the two sides of emoji’s biggest problem, which are

  1. that when people use them for their figurative meaning, there’s a chance that they have a different interpretation than others do (this is, of course, a risk with any communication, although the effect is perhaps more-pronounced when abbreviating1), and
  2. when people use them for the literal image they show, that image can appear differently to different people: consider the inevitable confusion that arises from the fact that vendors had so universally changed the “gun” emoji to look like a water pistol that the Unicode Consortium changed its official description – which is likely to be used by screen readers – to “water pistol”… and then, earlier this year, Twitter changed it back to looking like a firearm. 🤦

But the thing Steve’s post really left me thinking about was a moment from Season 13, Episode 1 of Would I Lie To You? (still available on iPlayer!), during which blind comedian Chris McCausland described how the screen reader on his phone processes emoji:

Chris McCausland on Would I Lie To You
My phone talks, so it reads everything out. And just to give you an insight, even the emojis… if you use an emoji it tells you what the emoji is… and the smiley face – the main smiley face – specifically for blind people… that one is called “smiling face with normal eyes”. I don’t know if I’m expected to use the smiling face with sunglasses?

I don’t know if it’s true that Chris’s phone actually describes the generic smileys as having “normal eyes”, but it certainly makes for a fantastic gag.

Footnotes

1 I remember an occasion where a generational divide resulted in a hilarious difference of interpretation of a common acronym, for example. My friend Ash, like most people of their generation, understood “LOL” to mean “laughing out loud”, i.e. an expression of humour. Their dad still used it in the previous sense of “lots of love”. And so there was a moment of shock and confusion when Ash’s dad, fondly recalling their recently-deceased mother, sent Ash a text message saying something like: “Thought of your mum today. I miss her. LOL.”.


If you’ve ever found yourself missing the “good old days” of the web, what is it that you miss?

Jason Weill said:

Molly White said:

If you’ve ever found yourself missing the “good old days” of the #web, what is it that you miss? (Interpret “it” broadly: specific websites? types of activities? feelings? etc.) And approximately when were those good old days?

No wrong answers — I’m working on an article and wanted to get some outside thoughts.

I miss the era of personal web sites started out of genuine admiration for something, rather than out of a desire to farm a few advertising pennies

This. You wanted to identify a song? Type some of the lyrics into a search engine and hope that somebody transcribed the same lyrics onto their fansite. You needed to know a fact? Better hope some guru had taken the time to share it, or it’d be time for a trip to the library.

Not having information instantly easy to find meant that you really treasured your online discoveries. You’d bookmark the best sites on whatever topics you cared about and feel no awkwardness about emailing a fellow netizen (or signing their guestbook) to tell them about a resource they might like. And then you’d check back, manually, from time to time to see what was new.

The young Web was still magical and powerful, but the effort-to-payoff ratio was steeper, and that made you appreciate your own and other people’s efforts more.

Reply to: Mobile writing, part 2

Clayton Errington wrote:

After a first attempt at mobile blogging, I found a process that works better for my work flow.

Throughout the day, I have ideas and need to write them down. This could be a coding process, a thought to remember, the start of a blog post, and more. I love a good notes app. I’ve gone through quite a few and use a few for different things. Lately it’s been Simplenote.

As I took part in Bloganuary and began what’ll hopefully become a fifth consecutive year of 100 Days To Offload, I started to hate my approach to mobile blogging and seek something better, too. My blog’s on WordPress, but it’s so highly-customised that I can’t meaningfully use any of the standard apps, and I find the mobile interface too slow and clunky to use over anything less than a great Internet connection… which – living out in the sticks – I don’t routinely have when I’m out and about. So my blogging almost-exclusively takes place at my desktop or laptop.

But your experience of using a notetaking app is reasonably inspiring. I’m almost never away from a “real” computer for more than a day, so there’s no reason I can’t simply write into such an app, let it sync, and copy-paste into a blog post (and make any tweaks) when I’m sitting at a proper keyboard! I’m using Obsidian for notetaking, and it Syncthing’s to my other computers, so I should absolutely be leveraging that. I already have an Obsidian folder full of “blog post ideas”… why shouldn’t I just write blog posts there?

Thanks for the inspiration!

Reply to The ethics of syndicating comments using WebMentions

In his blog post “The ethics of syndicating comments using WebMentions”, Terence Eden said:

I want to see what people are writing in public about my posts. I also want to direct people to the conversations which are happening elsewhere on the web. But people – quite rightly – might not want their content permanently stored by my site.

So I think I have a few options.

  1. Do nothing. My site; my rules. If you don’t want me to grab your hot takes, don’t post them in public. (Feels a bit rude, TBQH.)
  2. Be reactive. If someone asks me to remove their content, do so. (But, of course, how will they know I’ve made a copy?)
  3. Stop syndicating comments. (I don’t wanna!)
  4. Replace the verbatim comments with a link saying “Fred mentioned this article on Twitter”. (A bit of a disruptive experience for readers.)
  5. Use oEmbed to capture the user’s comment and dynamically load it from the 3rd party site. That would update automatically if the user changes their name or deleted the comment. (A massive faff to set up.)

Terence describes a problem that I’ve wrestled with myself. If somebody comments directly on my blog using the form at the bottom of a post, that’s a pretty strong indicator of them giving their consent for their comment to be published at the bottom of that post (at my discretion). If somebody publicly replies somewhere my post is syndicated, that’s less-obvious, but still pretty clear. If somebody merely mentions my post publicly, writing their own post and linking to mine… that’s a real fuzzy area.

I take a minimal approach: capturing a commenter’s full content only if it’s short, and otherwise extracting a snippet containing the bit that mentions my content. I think that works great. But Terence points out an important follow-up: what if the commenter deletes that content?

My approach so far has always been a reactive one – the second in Terence’s list – and I think it’s a morally-acceptable stance for a personal blogger. But I’m not sure it scales. I find myself asking: what if a news outlet did this, taking my self-published feedback to their story and publishing it on their site, even if I later amended, retracted, or deleted it on my own? If somebody’s making money out of my content, that feels different: I’ve always been clear that what I write on my blog is permissively-licensed, but that permissiveness is based on the prohibition of commercial use of my content.

Perhaps down the line this can be solved technologically: something machine-readable, akin to the <link rel="license" ...> tag, could state an author’s preference for how their content is syndicated by third parties they’ve mentioned (a hypothetical sketch follows the list), answering questions like:

  • Can you quote me, or just link to me? Who do these rules apply to? (Should we be attaching metadata to individual links?)
  • Should you inform me that you’ve done so, and if so: how (WebMention, etc.)?
  • If you (or your site) observe that my content has disappeared or changed for an extended time, should that be taken as revocation of consent to syndicate it?
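
To make this concrete, here’s a purely hypothetical sketch of what such machine-readable markup might look like – the rel value, attribute name, and policy tokens are all my invention, not any existing standard:

<!-- Hypothetical: a site-wide syndication policy, by analogy with rel="license": -->
<link rel="syndication-policy" href="/syndication-policy">

<!-- Hypothetical: a per-post policy attached to the mentioning article: -->
<article class="h-entry"
         data-syndication="quote-snippet notify-by-webmention honour-deletion">
  ...
</article>

A WebMention receiver could consult such a policy before deciding whether to store a verbatim copy, a snippet, or a bare link, and whether to periodically re-check that the quoted content still exists.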

Right now, the relevant technologies are not well-established enough to even begin this kind of work, but if a modern interconnected federated web of personal websites takes off, it’s the kind of question we might one day have to answer.

For now my gut feeling is that option #2 (reactive moderation of syndicated comments) is ethically-sufficient for personal websites. But I’ll be watching the feedback Terence (who probably gets many more readers than I) receives in case my gut doesn’t represent the majority!

Reply to Decentralization and verification

In “Decentralization and verification”, Derek Kedziora said:

…Mastodon by its very nature as a decentralized service can’t verify accounts.

We’d still need some trusted third party to do offline verifications and host them in a centralized repository.

Let’s not sell Mastodon short here. The service you compare it to – Twitter – solves this problem… but only if you trust Twitter as an authority on the identity of people. Mastodon also solves the problem, but it puts the trust in a different place: domain names and account pages.

If you want to “verify” yourself on Mastodon, you can use a rel="me" link from a page or domain you control. It looks like this:

Screenshot showing @dan@danq.me's Mastodon account as the verified owner of website DanQ.me.
The tick is green, not blue, but I can’t imagine anybody complains.

A great thing about this form of verification is you don’t have to trust my server (and you probably shouldn’t): you can check it for yourself to ensure that the listed website really does state that this is the official Mastodon account of “me”.
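
For the curious, the markup is just an ordinary hyperlink (or link element) carrying rel="me"; a minimal example, using a placeholder handle and instance rather than my real ones:

<!-- Somewhere visible on your homepage: -->
<a rel="me" href="https://mastodon.example/@alice">Me on Mastodon</a>

<!-- Or, invisibly, in the page's <head>: -->
<link rel="me" href="https://mastodon.example/@alice">

When you add that page’s URL to your Mastodon profile metadata, your instance fetches it, and if it finds a rel="me" link pointing back at your profile, you get the green tick.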

You can argue this just moves the problem further down the road – instead of trusting a corporation that has shown it’s not above selling the rights to your identity, you have to trust that a website is legitimate – and you’d be right. But in my case, for example, you can use years of history, archive.org, cross-links etc. to verify that the domain is “me”, and from that you can confirm the legitimacy of my Mastodon account. Anybody who can spoof multiple decades of my history and maintain that lie for a decade of independent web archiving probably deserves to be able to pretend to be me!

There are lots of other distributed methods too: web-of-trust systems, signed keys, even SSL certificates would be a potential solution. Looking again at my profile, you’ll see that I list the fingerprint of my GPG key, which you can compare to ones in public directories (which are co-signed by other people). This way you’d know that if you sent an encrypted DM to my Mastodon inbox it could only be decrypted if I were legitimately me. Or I could post a message signed with that key to prove my identity, insofar as my web-of-trust meets your satisfaction.

If gov.uk’s page about 10 Downing Street had profile pages for cabinet members with rel="me" links to their social profiles, I’d be more-likely to trust the legitimacy of those social profiles than I would if they had a centralised verification such as a Twitter “blue tick”.

Fediverse identity verification isn’t as hard a problem to solve as Derek implies, and indeed it’s already partially-solved. Not having a single point of authority is less convenient, sure, but it also protects you from some of the more-insidious identity problems that systems like Twitter’s have.


How to date a recording using background electrical noise

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Matching a target ENF series with a section of a reference series

We’re going to use ENF matching to answer the question “here’s a recording, when was it (probably) taken?” I say “probably” because all that ENF matching can give us is a statistical best guess, not a guarantee. Mains hum isn’t always present on recordings, and even when it is, our target recording’s ENF can still match with the wrong section of the reference database by statistical misfortune.

Still, even though all ENF matching gives us is a guess, it’s usually a good one. The longer the recording, the more reliable the estimate; in the academic papers that I’ve read 10 minutes is typically given as a lower bound for getting a decent match.

To make our guess, we’ll need to:

  1. Extract the target recording’s ENF values over time
  2. Find a database of reference ENF values, taken directly from the electrical grid serving the area where the recording was made
  3. Find the section of the reference ENF series that best matches the target. This section is our best guess for when the target recording was taken

We’ll start at the top.

About a year after Tom Scott did a video summarising how deviation over time (and location!) of the background electrical “hum” produced by AC power can act as a forensic marker on audio recordings, Robert Heaton’s produced an excellent deep-dive into how you can play with it for yourself, including some pretty neat code.

I remember first learning about this technique a few years ago during my master’s in digital forensics, and my first thought was about how it might be effectively faked. Faking the time of recording of some audio after the fact (as well as removing the markers) is challenging, mostly because you’ve got to ensure you pick up on the harmonics of the frequencies, but it seems to me that faking it at time-of-recording ought to be reasonably easy: at least, so long as you’re already equipped with a mechanism to protect against recording legitimate electrical hum (isolated quiet-room, etc.):

Taking a known historical hum-pattern, it ought to be reasonably easy to produce a DC-to-AC converter (obviously you want to be running off a DC circuit to begin with, e.g. from batteries, so you don’t pick up legitimate hum) that regulates the hum frequency in a way that matches the historical pattern. Sure, you could simply produce the correct “noise”, but doing it this way helps ensure that the noise behaves appropriately under the widest range of conditions. I almost want to build such a device, perhaps out of an existing portable inverter (they come in big battery packs nowadays, providing a two-for-one!) but of course: who has the time? Plus, if you’d ever seen my soldering skills you’d know why I shouldn’t be allowed to work on anything like this.
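
If you wanted to play with the waveform half of that idea in software first, a minimal sketch might look like this (my own illustrative JavaScript, not a tested tool; a convincing fake would also need the harmonics – 100 Hz, 150 Hz and so on for a 50 Hz grid – plus realistic amplitude variation):

// Given a per-second list of historical mains frequencies, generate samples
// of a "hum" whose frequency tracks that pattern. Accumulating phase across
// frequency changes avoids audible clicks at the one-second boundaries.
const SAMPLE_RATE = 44100;

function synthesiseHum(frequenciesPerSecond, amplitude = 0.05) {
  const samples = new Float32Array(frequenciesPerSecond.length * SAMPLE_RATE);
  let phase = 0, i = 0;
  for (const frequency of frequenciesPerSecond) {
    const phaseStep = (2 * Math.PI * frequency) / SAMPLE_RATE;
    for (let s = 0; s < SAMPLE_RATE; s++) {
      samples[i++] = amplitude * Math.sin(phase);
      phase += phaseStep;
    }
  }
  return samples;
}

// A minute of hum wobbling around 50 Hz, roughly as a UK grid recording might:
const frequencies = Array.from({ length: 60 }, () => 49.95 + 0.1 * Math.random());
const hum = synthesiseHum(frequencies);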


Reply to OpenID for WP

This weekend I was experimentally reimplementing how my blog displays comments. For testing I needed to find an old post with both trackbacks and pingbacks on it. I found my post that you linked, here, and was delighted to be reminded that despite both of our blogs changing domain name (from photomatt.net to ma.tt and from blog.scatmania.org to danq.me, respectively), all the links back and forth still work perfectly because clearly we share an appropriate dedication to the principle that Cool URIs Don’t Change, and set up our redirects accordingly. 🙌

Incidentally, this was about the point in time at which I first thought to myself “hey, I like what Matt’s doing with this Automattic thing; I should work there someday”. It took me like a decade to a decade-and-a-half to get around to applying, though… 😅

Anyway: thanks for keeping your URIs cool so I could enjoy this trip down memory lane (and debug an experimental wp_list_comments callback!).

Reply to An Accidentally ‘Anti-January’ January

Siobhan said:

I fell off the blogging bandwagon just as everyone else was hopping onto it.

I was just thinking the same! It felt like everybody and their dog did Bloganuary last month; meanwhile, I went in the opposite direction!

Historically, there’s been an annual dip in my blogging around February/March (followed by a summer surge!), but in recent years it feels like that hiatus has shifted to January (I haven’t run the numbers yet to be sure, though).

I don’t think that’s necessarily a problem, for me at least. I write for myself first, others afterwards, and so it follows that if I blog when it feels right then an ebb-and-flow to my frequency ought to be a natural consequence. But it still interests me that I have this regular dip, and I wonder if it affects the quality of my writing in any way. I feel the pressure, for example, for post-hiatus blogging to have more impact the longer it’s been since I last posted! Like: “it’s been so long, the next thing I publish has to be awesome, right?”, as if my half-dozen regular readers are under the assumption that I’m always cooking-up something and the longer it’s been, the better it’s going to be.

Reply to “I was permanently banned by Facebook”

Donncha Ó Caoimh said:

My Facebook account was permanently banned on Wednesday along with all the people who take care of the Cork Skeptics page. We’re still not sure why but it might have something to do with the Facebook algorithm used to detect far-right conspiracy groups.

If you have a Facebook account you should download your information too because it could happen to you too, even though you did nothing wrong. Go here and click the “Create File” button now.

Yeah, I know you won’t do it but you really should.

Great advice.

After I got banned from Facebook in 2011 (for using a “fake name”, which is actually my real name) I took a similar line of thinking: I can’t trust Facebook (or Twitter, or Instagram, or whoever else) to be responsible custodians of my content, so I shan’t. Now, virtually all content I create is hosted on my WordPress-powered blog, at my own domain, first and foremost… and syndicated copies may appear on various social media.

In a very few instances I go the other way around, producing content in silos and then copying it back to my blog: e.g. my geocaching/geohashing expeditions are posted first to their respective sites (because it’s easiest and most-practical to do that using their apps, especially “in the field”), but then they get imported into my blog using a custom plugin. If any of these sites closes, deletes my data, adds paid tiers I’m not happy with, or just bans me from my own account… I’m still set.

Backing up all your social content is a good strategy. Owning it all to begin with is an even better one, IMHO. See also: Indieweb.

Reply to @ADumbGreyHam

Adam Graham tweeted:

@scatmandan I’ve just been in a meeting with some people who were saying some very positive things about the legacy of work your dad (I think) did with buses in the North East

That seems likely. Conversely: if people are still talking about my work 7½ years after I die it’ll probably be to say “who wrote this bit of legacy code this way and what were they thinking?”

CC @TASPartnership @bornvulcan @TheGodzillaGirl

Reply to: Indelible

Bryn wrote:

The surgery had happened at a weirdly transitional point in my life. Only a few days earlier I’d performed improv on stage for the first time (see “Yes And” and “Memory Lane“), I’d changed jobs and was contemplating another move. The scar from the surgery seemed to be part of that too, and I had an idle thought to have a tattoo done on the scar as a permanent reminder to myself not to let work swallow my life up again.

Every time I consider getting a tattoo, I’m stopped by the fact that I’m sufficiently indecisive about what I’d get and where. Somehow, a tattoo would represent a sort of irreversible permanence that I feel is difficult for me to commit to. (I fully accept that this may seem a strange sentiment to many, coming from a sterilised-without-breeding man – I didn’t say I was consistent!)

But to add personalisation to a scar, especially one with a personal meaning and message: that I can really get behind. Unfortunately my only likely-permanent scars don’t have any messages behind them more-significant than, for example, “don’t let Dan play with knives”. Is it possible to get a tattoo on top of an emotional scar instead?

Reply to: A modern font loading strategy with the vanilla JS FontFaceSet.load() method

Chris Ferdinandi’s daily tip for yesterday addressed a problem familiar to Web developers using custom fonts (i.e. basically all of them):

In many browsers, if a custom typeface is declared but hasn’t finished downloading and parsing yet, browsers will leave space for the text but not render it until the file is ready.

This is often called a Flash Of Invisible Text (or FOIT).

In a now slightly outdated article, Ilya Grigorik, a web performance engineer at Google, reports:

29% of page loads on Chrome for Android displayed blank text: the user agent knew the text it needed to paint, but was blocked from doing so due to the unavailable font resource. In the median case the blank text time was ~350 ms, ~750 ms for the 75th percentile, and a scary ~2300 ms for the 95th.

To make matters worse, some mobile browsers never timeout a failed font file, and therefore never show text in a fallback typeface if the custom one fails to load. You get nothing at all.

Let’s talk about how to fix that.

Chris is right…

He’s right that the FOIT is annoying, and he’s right that for most text (and especially body text) the best result would be if a fallback system font were used immediately and swapped-out for the designer’s preferred font as soon as it becomes available: this maximises usability, especially on slower devices and connections. His solution (sketched in code after the list) is this:

  1. Set the font to a fallback font initially.
  2. Set the font to the preferred font once a CSS class is applied to a root element.
  3. Use Javascript to apply that CSS class either when FontFaceSet.load() indicates that the font is available, or (via a cookie) whenever the font file is expected to still be in the browser cache.
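
In code, that approach comes out something like the following minimal sketch (my own reading of the description, not Chris’s actual implementation; the font name matches the CSS example later in this post, and the cookie lifetime is a guess):

// Assumes the stylesheet only applies the custom font when <html>
// carries the "fonts-loaded" class.
if ("fonts" in document) {
  if (document.cookie.indexOf("fontsLoaded=true") !== -1) {
    // The font was fetched recently, so it's probably still cached:
    // apply the class immediately rather than waiting.
    document.documentElement.classList.add("fonts-loaded");
  } else {
    document.fonts.load("1em Raleway").then(function () {
      document.documentElement.classList.add("fonts-loaded");
      document.cookie = "fontsLoaded=true; max-age=604800"; // one week
    });
  }
}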

This approach is not without its problems. It requires Javascript (users for whom Javascript fails for some reason won’t see the font at all, but may still have to download the font file!), conflates cookie lifetime with cache lifetime (the two can be cleared independently, cookies can sometimes be synchronised across devices that don’t necessarily share caches, etc.), and uses Javascript techniques that don’t work in some browsers (Edge and Internet Explorer are both capable of showing custom web fonts, but both will use the fallback font unless either (a) further Javascript is added, which Chris doesn’t supply, or (b) browser detection and/or conditional comments are used to trigger different behaviour in these browsers, which is icky).

…but he’s also wrong…

If only there was a better way to prevent the FOIT. One which degrades gracefully in older browsers, doesn’t require Javascript, doesn’t make assumptions about user cookie/cache configuration, and ideally involves a lot less code. It turns out, there is!

The font-display CSS directive exists to solve this exact issue [MDN]. Here’s what it looks like being used to solve the problem Chris presents (example taken from my blog’s CSS!):

@font-face{
  font-family:"Raleway";
  font-style:normal;
  font-weight:400;
  src: local("Raleway"),
       local("Raleway-Regular"),
       url(/wp-content/themes/q18/fonts/raleway-v11-latin-regular.woff2) format("woff2"),
       url(/wp-content/themes/q18/fonts/raleway-v11-latin-regular.woff) format("woff");
  font-display:swap; /* show fallback text immediately; swap in Raleway when it loads */
}

Setting font-display: swap in the @font-face block tells the browser to use fallback fonts in place of this font while it loads. That’s probably exactly what you want for text fonts and especially body text; it means that the user sees the text as soon as possible and it’s swapped-out for the preferred font the moment it becomes available: no Javascript necessary! Conversely, font-display: block is a better choice for icon fonts where you want to force the browser to wait as long as possible for the font file to load (because any content rendered using it makes no sense otherwise).

font-display works out-of-the-box with Chrome, Firefox, and Safari and with the next version of Edge; older versions of Edge and Internet Explorer will simply fall-back to their default behaviour (FOIT where-necessary) – this is a progressive enhancement technique. But instead of a couple of dozen lines of Javascript, it’s a single line of CSS.

The only downside is that Google Web Fonts won’t add this directive, so you’ll need to self-host your font files (which is really easy, by the way: there’s a tool that’ll show you how). You should consider doing this anyway, of course: CDNs introduce a number of problems and no longer provide the relative performance benefits they used to. So self-host your fonts, add font-display: swap, and enjoy the most-lightweight and well-standardised approach possible to combatting the FOIT.

Goodbye Google Analytics, Hello Fathom

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Big news! This site is no longer using Google Analytics and I’ve switched to a self-hosted version of brand new analytics product Fathom.

Fathom Analytics dashboard

Fathom is very simple. It only tracks 4 things: Unique Visitors, Page Views, Time on Site, and Bounce Rate. It shows me a chart of page views and visitors and then gives me a break down of referrers and top performing content. That’s it. And to be quite honest, that’s about all I need from my blog analytics.

You know what, Dave: me too! I’ve been running Google Analytics since forever and Piwik/Matomo (in parallel with it) for about a year and honestly: I get more than enough of what I need from the latter. So you’ve inspired me to cut the line with Google: after all, all I was doing was selling them my friends’ data in exchange for some analytics I wasn’t really paying attention to… and I’d frankly rather not.

So: for the first time in a decade or so, there’s no Google Analytics on this site. Woop!

Update 2023-12-13: I eventually went further still and dropped all analytics, even self-hosted variants, and it feels great.