The Race to Win Staten Island

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Possibly CGP Grey’s best video yet: starting with the usual lighthearted and slightly silly look into an interesting piece of history, it quickly diverges from a straightforward path through the Forest of All Knowledge (remember No Flag Northern Ireland?) and becomes an epic adventure into fact-checking, healthy scepticism, and demonstrable information literacy. Speaking admittedly as somebody who genuinely loved the Summer of Grey Tesla Road Trip series of vlogs, this more-human-than-average Grey adventure is well worth watching to the end.

Water Science

This afternoon, the kids and I helped with some citizen science as part of the Thames WaterBlitz, a collaborative effort to sample water quality of the rivers, canals, and ponds of the Thames Valley to produce valuable data for the researchers of today and tomorrow.

Annabel fetches water from the canal.
Our sampling point was by bridge 228 on the Oxford Canal: the first job was fetching water.

My two little science assistants didn’t need any encouragement to get out of the house and into the sunshine and were eager to go. I didn’t even have to pull out my trump card of pointing out that there were fruiting brambles along the length of the canal. As I observed in a vlog last year, it’s usually pretty easy to motivate the tykes with a little foraging.

Annabel and John assess the colour of the canal water.
Some collaboration was undertaken to reach a consensus on the colour of the sample.

The EarthWatch Institute had provided all the chemicals and instructions we needed by post, as well as a mobile app with which to record our results (or paper forms, if we preferred). Right after lunch, we watched their instructional video and set out to the sampling site. We’d scouted out a handful of sites, including some on the River Cherwell as it snakes through Kidlington, but for this, our first water-watch expedition, we figured we’d err on the safe side and target only a single site. We chose this one both because it’s close to home and because a citizen scientist had sampled here in a previous year, improving the comparability of the results year-on-year.

Chemistry experiments on the banks of the Oxford Canal.
Lots of nitrates, as indicated by the colour of the left tube. Very few phosphates, as indicated by the lack of colour in the right (although when this photo was taken it still had a minute and a half of its processing time to go, and would darken a tiny bit yet).

Plus, this expedition provided the opportunity to continue to foster the 5-year-old’s growing interest in science, which I’ve long tried to encourage (we launched bottle rockets the other month!).

Annabel compares her sample to the colour chart.
Which colour most-resembles the colour of our reagent?

Our results are now online, and we’re already looking forward to seeing the overall results pattern (as well as taking part in next year’s WaterBlitz!).


Enter The D&DDJ

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

One way I’ve found to enhance my nights as Dungeon Master is to call on experiences as an amateur musician and fan, to ramp up the intensity and sense of fantasy with playlists of tunes from the history of composed and recorded music.

I realised that this might be something I was OK at when I saw our party’s rogue lost in imagination and stabbing to the beat of a bit of Shostakovich.

Over the months some of the collections I’ve curated have picked up a few followers on Spotify and upvotes on Reddit but I thought it was time to put more effort in and start writing about it.

The opening post from Lute the Bodies, a new blog by my friend Alec. It promises an exploration of enhancing tabletop roleplaying with music, which is awesome: I’ve occasionally been known to spend longer picking out the music for a given roleplaying event than I have on planning the roleplaying activities themselves! Looking forward to seeing where this goes…

Elf Chalkboard Puzzle

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I write the integers 1-9999 (inclusive) on a huge chalkboard. Each number is written once.

During the night the board is visited by a series of naughty math elves (it’s a thing!)

Each elf approaches the board, selects two numbers at random, erases them, and replaces them with a new number that is the absolute difference of the two numbers erased.

This vandalism continues all night until there is just one number remaining.

I return to the board the next morning and find the single number on the board. The question is: Is this remaining number odd or even?

Elf Chalkboard

A fun, lightweight maths puzzle for your amusement. I was able to find the right answer pretty quickly by spotting the pattern; it took me longer to find the words to adequately explain the pattern.
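
If you want to check your answer (or just watch the elves at work), here’s a minimal simulation sketch in TypeScript. It’s my own illustration, not part of the original puzzle, and the closing comment hints at the invariant behind the pattern without spelling out the proof.

```typescript
// Simulate one night of elf vandalism: start with 1..9999 and repeatedly
// replace two randomly-chosen numbers with their absolute difference.
function elfNight(limit: number = 9999): number {
  const board: number[] = Array.from({ length: limit }, (_, i) => i + 1);
  while (board.length > 1) {
    const [a] = board.splice(Math.floor(Math.random() * board.length), 1);
    const [b] = board.splice(Math.floor(Math.random() * board.length), 1);
    board.push(Math.abs(a - b)); // the elf's replacement number
  }
  return board[0];
}

// Every run agrees on odd-vs-even, because |a - b| always has the same
// parity as a + b, so the parity of the board's total never changes.
for (let run = 0; run < 5; run++) {
  console.log(elfNight() % 2 === 0 ? "even" : "odd");
}
```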

Using the Web for Exhibition Interactives

As part of preparing to leave the Bodleian, I’ve been revisiting a lot of the documentation I’ve written over the last eight years. It occurred to me that I’ve never written publicly about how the Bodleian’s digital signage/interactives actually work; there are possible lessons to learn.

The Bodleian‘s digital signage is perhaps more-diverse, both in terms of technology and audience, than that of most organisations. We’ve got signs in areas that are exclusively reader-facing to help students and academics find what they’re looking for, signs in publicly accessible rooms that advertise and educate, and signs in gallery spaces upon which we try to present engaging and often-interactive content to support exhibitions.

Test digital signage screen showing My Little Pony: Friendship is Magic
Getting an extra touchscreen for the office for prototyping/user testing purposes was great, even when it wasn’t showing MLP: FiM.

Throughout those three spheres, we’ve routinely delivered a diversity of content (let’s just ignore the countdown clock, for now…). Traditional directional signage, advertisements, games, digital exhibitions, interpretation, feedback surveys…

In the vast majority of cases – and this is where the Bodleian’s been unusual (though certainly not unique) among cultural sector institutions – we’ve created those in-house rather than outsourcing them.

Dan sings into a screwdriver handle alongside three touchscreens.
Using off-the-shelf technology also allows the Bodleian to in-house much of their hardware maintenance, as a secondary part of other job roles. Singing into your screwdriver remains optional, though.

To do this economically – the volume of work on interactive signage is inconsistent throughout the year – we needed to align the skills required with skills used elsewhere in the organisation. To do this, we use the web as our medium! Collectively, the Bodleian’s Digital Communications team already had at least some experience in programming, web design, graphic design, research, user testing, copyediting etc.: the essential toolkit for web application development.

Pong on the Heritage Window (video wall) and a Tolkien-themed quiz on an i-Board (upright touchscreen)
Whether you were playing Pong on the video wall at the back or testing your Middle-earth knowledge on the touchscreen at the front… behind the scenes you were interacting with a web page I wrote.

By shifting our digital signage platform to lean heavily on web technologies, we were able to leverage talented people we already had to produce things that we might otherwise have had to outsource. This, in turn, means that more exhibitions and displays get digital enhancement, on a shorter turnaround.

It also means that there’s a tighter integration between exhibition content and content for web and social media: it’s easier for us to re-use content across multiple platforms. Sometimes we’ve even made our digital interactives, or adapted versions of them, available directly online, allowing our exhibitions to reach people who can’t get to our physical spaces at all.

Kiosk interface for an Ada Lovelace display in 2015.
Because we’re able to produce our own content on-demand, even our smaller, shorter-duration displays can have hands-on digital interactives associated with them.

On to the technology! We’re using a real mixture of tech: when it’s donated or reclaimed from previous projects (and when the bidding and acquisition processes are, well… as you’d expect at the University of Oxford), you learn not to say no to freebies. Our fleet includes:

  • Samsung Android tablets with freestanding kiosk frames. We run the excellent-value Kiosk Browser Lockdown app on these, which loads on boot and prevents access to anything but a specified website.
  • Onelan NTBs connected to a mixture of touch and non-touch screens, wall-mounted or in kiosk frames. We use Onelan’s standard digital signage features as well as – for interactive content – their built-in touch-capable web browser.
  • Dell PCs of the standard variety supplied by University IT services, connected to wall-mounted touch screens, running Google Chrome in Kiosk Mode. More on this below.
Chrome Dev Tools showing the Sheldonian Bust "Thinking 3D" exhibit
The browsers’ responsive simulators are invaluable when we’re targeting signage at five (!) different resolutions.

When you’re developing content for a very small number of browsers and a limited set of screen sizes, you quickly learn to throw a lot of “best practice” web development out of the window. You’ll never come across a text browser or screen reader, so alt-text doesn’t matter. You’ll never have to rescale responsively, so you might as well absolutely-position almost everything. The devices are all your own, so you never need to ask permission to store cookies. And because you control the platform, you can get away with making configuration tweaks to e.g. allow autoplaying videos with audio. Coming from a conventional web developer background, producing digital signage content feels incredibly lazy.

Shakespeare's Dead
Helping your users see your interactive as “app-like” rather than “web-like” encourages them to feel comfortable engaging with it in ways uncharacteristic of web pages. In our Shakespeare’s Dead interactive, for example, we started the experience in the middle of a long horizontally-scrolling “page”, which might feel very unusual in a conventional browser.
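
By way of illustration, starting a visitor in the middle of a long horizontally-scrolling page only takes a few lines of script. This is a hedged sketch of the general idea rather than the actual Shakespeare’s Dead code, and the .timeline selector is a made-up example:

```typescript
// Hypothetical sketch: centre a long horizontally-scrolling container when
// the interactive loads, so visitors can explore in either direction.
window.addEventListener("load", () => {
  const timeline = document.querySelector<HTMLElement>(".timeline");
  if (!timeline) return;
  // Put the midpoint of the content in the middle of the viewport.
  timeline.scrollLeft = (timeline.scrollWidth - timeline.clientWidth) / 2;
});
```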

This is the “techy bit”. Skip it?

Using Chrome to run digital signage requires, in the Bodleian’s case, a couple of configuration tweaks and the right command-line switches. We use:

  • chrome://flags/#overscroll-history-navigation – disabling this prevents users from triggering “back”/”forward” by swiping with two fingers
  • chrome://flags/#pull-to-refresh – disabling this prevents the user from triggering a “refresh” by scrolling up beyond the top of the page (this only happens on some kinds of devices)
  • chrome://flags/#system-keyboard-lock – we don’t use attached keyboards, but if you do, you might want to set this flag so you can use the keyboard.lock() API to intercept e.g. ALT+F4 so users can’t escape the application (there’s a rough sketch of this after this list)
  • running on startup with e.g. chrome --kiosk --noerrdialogs --allow-file-access-from-files --disable-touch-drag-drop --incognito https://example.com/some/url
    • Kiosk mode makes the browser run fullscreen and prevents e.g. opening additional tabs, giving an instant “app-like” experience. As we don’t have keyboards attached to our digital signage, this also prevents visitors from closing Chrome.
    • Turning off error dialogs reduces the risk that an error will result in an unsightly message to the user.
    • Enabling “file access from files” allows content hosted at file:// addresses to access content at other file:// addresses, which makes it possible to write “offline” sites (sometimes useful where we’re serving large videos or on previous occasions when WiFi has been shaky) that can still take advantage of features like the Fetch API.
    • Unless you need drag-and-drop, it’s simpler to disable it; this prevents a user long-press-and-dragging an image around the screen.
    • Incognito mode ensures that the browser doesn’t remember what site was showing last time it ran; our computers often end up switched off at the wall at the end of the day, and without this the browser will offer to load the site it had open last time, when it runs.
    • We usually host our interactives directly on the web, at “secret” addresses, and this is generally preferable to us as we can more-easily make on-the-fly adjustments to content (plus it makes it easier to hook up analytic tools).
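
For completeness, here’s roughly what the keyboard.lock() trick mentioned above could look like. It’s only a sketch under assumptions – the list of captured keys is illustrative, and the lock only takes effect while the page is fullscreen with the relevant flag set – not the Bodleian’s actual configuration:

```typescript
// Hypothetical sketch: if a keyboard is attached, ask Chrome to capture the
// keys a visitor might use to escape the kiosk. The lock only takes effect
// while the page is fullscreen, and needs the #system-keyboard-lock flag
// described above.
async function lockKioskKeys(): Promise<void> {
  const keyboard = (navigator as any).keyboard;
  if (keyboard && typeof keyboard.lock === "function") {
    // Capture Alt, F4 and Escape so the page sees them rather than the OS.
    await keyboard.lock(["AltLeft", "AltRight", "F4", "Escape"]);
  }
}

window.addEventListener("load", () => {
  lockKioskKeys();
});
```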

"Talking Maps" exhibition panel showing bioproductivity map.
Be sure to test the capabilities of your hardware! Our Onelan NTBs, unlike your desktop PCs, can’t handle multitouch input, which affects the design of our user interfaces for these devices.

Meanwhile, in the application’s CSS code, we set * { user-select: none; } to prevent the user from highlighting text by selecting it with their finger. We also make heavy use of absolutely-sized/positioned, overflow: hidden blocks to ensure that scrollbars never appear, and CSS animations to make content feel dynamic and to draw attention to particular elements.

Annabel explores the Talking Maps digital interactives.
There’s no substitute for good testing. And there’s no stress-testing quite like letting a 5-year-old loose on your work.

Altogether, this approach gives the Bodleian the capability to produce engaging interactive content at low cost and using the existing skills of their digital and exhibitions teams. It’s not an approach that would work for every cultural institution: in particular, some of the Bodleian’s sister institutions already outsource the technical parts of their web work, and so don’t have the expertise in-house to share with a web-powered digital signage solution.

Shadows Out Of Time on a touchscreen
After a few minor CSS tweaks to make the buttons finger-friendly, our Halloween game Shadows Out Of Time, which I’d already made web-friendly, was touchscreen-ready too. I wonder if they’ll get this one out again, this Halloween?

But for those museums that can fit into this model – or can adapt to do so in future – using the web to produce interactive digital content and digital signage is a highly cost-effective way to engage with visitors, even (or especially!) when dealing with short-lived and/or rotating displays.

It’s also been among my favourite parts of my job at the Bod these last 8½ years, and I’m sure I’ll miss it!


Motorbiking to Scotland

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

This adventure took a lot of planning. It’s 350 miles from where I live to Glasgow. I have a Honda CG 125cc, and my maximum range in one day is around 200 miles – if I have the full day for travelling, which I wouldn’t have most days. I figured if I was going to have a road trip, I’d have to make stop-offs at various parts of the UK, to break it up. This actually worked out really well, as there are lots of parts of the UK that I wanted to visit.

After booking the series of hotel rooms, I started to think about the actual riding. It was two weeks before the trip. I didn’t have enough thermals, or a bike suit that was protective enough. I also didn’t have a way of storing luggage on my bike, or keeping it dry (and two laptops would be in the bags). There was also an issue with the chain on my bike that needed fixing. Not exactly a trivial to-do list! So the next two weeks turned into a bit of an eBay and Amazon frenzy, with a trip down to see my dad in Kent to get the bike chain fixed, and rummage around for my old waterproofs in my grandparents’ attic. It was pretty close: the final item arrived the day before the trip. I got ridiculously lucky on eBay with my new, more visible, better padded, comfy bike suit though, which I love to bits. In hindsight, more time for all of this would have been helpful!

My friend Bev wrote about their motorcycling adventure up and down the UK; it’s pretty awesome.

Forward #100 – Gamification

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Forward Comic issue 100, frame 1

(It should be noted that I have now made 100 Forward strips. I’m not saying that I’ve earned myself a blowjob, but I’m also not not saying that.)

Sure: I think you’ve earned a blowjob, Tailsteak. I think we might need a spin-off of Patreon where you can offer content creators sexual favours in exchange for their work…

(In other news, I’ve said before that you should read Forward, and now’s a great time to start. You can catch up, don’t worry.)

Which Face is Real?

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

But while we’ve learned to distrust user names and text more generally, pictures are different. You can’t synthesize a picture out of nothing, we assume; a picture had to be of someone. Sure, a scammer could appropriate someone else’s picture, but doing so is a risky strategy in a world with Google reverse search and so forth. So we tend to trust pictures. A business profile with a picture obviously belongs to someone. A match on a dating site may turn out to be 10 pounds heavier or 10 years older than when a picture was taken, but if there’s a picture, the person obviously exists.

No longer. New adversarial machine learning algorithms allow people to rapidly generate synthetic ‘photographs’ of people who have never existed. Already faces of this sort are being used in espionage.

Computers are good, but your visual processing systems are even better. If you know what to look for, you can spot these fakes at a single glance — at least for the time being. The hardware and software used to generate them will continue to improve, and it may be only a few years until humans fall behind in the arms race between forgery and detection.

Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance.

I was at a conference last month where research was presented which concluded pretty solidly that the mechanisms used to make “deepfakes” meant that it was probably impossible to create artificial intelligence that can learn to distinguish between real and fake pictures of humans. Simply put, this is because the way we make such images is with generative adversarial networks, an AI technique which thrives upon having an effective discriminator component, and any research into differentiating between real and fake images feeds the capability of the next generation of discriminators!

Instead, then, the best medium-term defence against deepfakes is training humans to be able to identify them, and that’s what this website aims to do. I was pleased that I did very well on my first attempt (I sort-of knew what to look for already, based on a basic understanding of the underlying technologies) but I was also pleased that I was able to learn to do better with the aid of the authors’ tips. Nice.

Proposal to allow specifying a text snippet in a URL fragment

This is a repost promoting content originally published elsewhere. See more things Dan's reposted.

To enable users to easily navigate to specific content in a web page, we propose adding support for specifying a text snippet in the URL. When navigating to such a URL, the browser will find the first instance of the text snippet in the page and bring it into view.

Web standards currently specify support for scrolling to anchor elements with name attributes, as well as DOM elements with ids, when navigating to a fragment. While named anchors and elements with ids enable scrolling to limited specific parts of web pages, not all documents make use of these elements, and not all parts of pages are addressable by named anchors or elements with ids.

Current Status

This feature is currently implemented as an experimental feature in Chrome 74.0.3706.0 and newer. It is not yet shipped to users by default. Users who wish to experiment with it can use chrome://flags#enable-text-fragment-anchor. The implementation is incomplete and doesn’t necessarily match the specification in this document.

tl;dr

Allow specifying text to scroll and highlight in the URL:

https://example.com##targetText=prefix-,startText,endText,-suffix

Using this syntax

##targetText=[prefix-,]textStart[,textEnd][,-suffix]

              context  |-------match-----|  context

(Square brackets indicate an optional parameter)

This is a feature that I’ve wished that the Web had on many, many occasions. I’m sure you’ve needed it before, too: you’ve wanted to give somebody the URL of (or link to) a particular part of a page but there’s been no appropriately-placed anchor to latch on to. Being able to select part of the text on the page and just copy that after a ## in the address bar would be so much simpler.
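
As a rough illustration of that workflow, here’s a hedged sketch that builds a link to whatever text is currently selected on the page, using the proposal’s ##targetText= syntax from above; it isn’t part of the proposal itself, just a toy:

```typescript
// Hypothetical sketch: build a shareable link to the currently-selected text
// using the proposed ##targetText= syntax.
function linkToSelection(): string | null {
  const snippet = window.getSelection()?.toString().trim();
  if (!snippet) return null;
  const base = location.href.split("#")[0];
  // Percent-encode the snippet so spaces and commas (the prefix/suffix
  // delimiters in the proposed syntax) survive the trip.
  return `${base}##targetText=${encodeURIComponent(snippet)}`;
}

console.log(linkToSelection());
```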

Chrome's experimental fragment text link targeting
Naturally, I tried this experimental feature out on this very web page; it worked pretty nicely!

Chrome’s implementation is somewhat conservative, requiring a prefix of ##targetText= (this minimises the risk of collision with other applications which store/pass data via hashes), but it’s still pretty full-featured, with support for prefixes and suffixes to the text to-be-selected. I quite like it, but of course it needs running down the standards track before it can be relied upon as anything other than a progressive enhancement.

I do wonder, though, whether this will be met with resistance by ad/subscription-supported content creators as a new example of the deep linking they seem to hate so much.

(with thanks to Jeremy Keith for sharing this)


Dan Q found GC5MYFN 01 It’s a Bugs Life x2

This checkin to GC5MYFN 01 It's a Bugs Life x2 reflects a geocaching.com log entry. See more of Dan's cache logs.

Some fellow volunteers and I are on an “away weekend” in the forest; this morning before our first meeting I led a quick expedition of both established and first-timer geocachers around a few of the local caches.

Passed another couple of ‘cachers on the way from GC840TN, but it sounded like they’d been having less luck than us this morning. Coordinates spot on; dropped me right on top of the cache and I was familiar with this kind of container so I picked it up as soon as I got there – quick and easy find, and our last for the morning! TFTC.

Dan Q found GC840TN Georassic Diplodocus

This checkin to GC840TN Georassic Diplodocus reflects a geocaching.com log entry. See more of Dan's cache logs.

Some fellow volunteers and I are on an “away weekend” in the forest; this morning before our first meeting I led a quick expedition of both established and first-timer geocachers around a few of the local caches.

Coordinates didn’t put us very close, perhaps because of tree cover interfering with the GPSr, so we needed to decipher the hint. The hint was good, though, and I went straight to the dino’s hiding place, trampled past a few fresh nettles, and retrieved it. Excellent caches; we loved these!

Dan Q found GC5MYFW 02 It’s a Bugs Life x2

This checkin to GC5MYFW 02 It's a Bugs Life x2 reflects a geocaching.com log entry. See more of Dan's cache logs.

Some fellow volunteers and I are on an “away weekend” in the forest; this morning before our first meeting I led a quick expedition of both established and first-timer geocachers around a few of the local caches.

Geoff pounced right onto this one, stunning some of our less-experienced ‘cachers who’d never considered the possibility of a container like this!