Geohashing expedition 2020-09-09 51 -1

This checkin to geohash 2020-09-09 51 -1 reflects a geohashing expedition. See more of Dan's hash logs.

Location

Edge of a field bounded by Letcombe Brook, over the A338 from Landmead Solar Farm.

Participants

Plans

We’re discussing the possibility of a Subdivision geohash achievement for people who’ve reached every “X in a Y”, and Fippe pointed out that I’m only a hash in the Vale of White Horse away from being able to claim such an achievement for Oxfordshire’s regions. And then this hashpoint appears right in the Vale of White Horse: it’s like it’s an omen!

Technically it’s a workday so this might have to be a lunchtime expedition, but I think that might be workable. I’ve got an electric vehicle with a hundred-and-something miles’ worth of batteries in the tank and it looks like there might be a lay-by near the hashpoint (with a geocache in it!): I can drive down there at lunchtime, walk carefully back up the main road, and try to get to the hashpoint!

Expedition

I worked hard to clear an hour of my day to take a trip, then jumped in my (new) electric car and set off towards the hashpoint. As I passed Newbridge I briefly considered stopping and checking up on my geocache there but feeling pressed for time I decided to push on. I parked in the lay-by where GC5XHJG is apparently hidden but couldn’t find it: I didn’t search for long because the farmer in the adjacent field was watching me with suspicion and I figured that anyway I could hunt for it on the way back.

Walking along the A338 was treacherous! There are no paths, only a verge covered in thick grass and spiky plants, and a significant number of the larger vehicles (and virtually all of the motorbikes) didn’t seem to be obeying the 60mph speed limit!

Reaching the gate, I crawled under (reckoning that it’s probably there to stop vehicles and not humans) and wandered along the lane. I saw a red kite and a heron doing their thing before I reached the bridge, crossed Letcombe Brook, and followed the edge of the field. Stuffing my face with blackberries as I went, it wasn’t long before I reached the hashpoint on one edge of the field.

I took a short-cut back before realising that this would put me in the wrong place to leave a “The Internet Was Here” sign, so I doubled back to place it on the gate I’d crawled under. Then I returned to the lay-by, where another car had just pulled up (right over the GZ of the geocache I’d hoped to find!) and didn’t seem to be going anywhere. Sadly I couldn’t wait around all day – I had work to do! – so I went home, following the satnav in the car on a route that resulted in a figure-of-eight tracklog.

Tracklog

My GPS keeps a tracklog. Here you go:

Geohashing expedition 2020-09-09 51 -1 tracklog map

Video

You can also watch it at:

Photos

360° panoramic VR photo of the 2020-09-09 51 -1 geohashpoint


Boneworks

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

I know only a small percentage of you use VR and to everyone else I might as well be telling you how spiffy the handrails are up in this ivory tower, but for what it’s worth, Boneworks is the first game in a while to make me think VR might be getting somewhere. It’s not there yet. The physics is full of little niggles, as you might expect from a game trying to juggle so much. The major issue with the climbing is that only your hands and head can be moved, and your in-game legs just flop around getting in the way of things like two stubborn trails of cum dangling off your mum’s chin, but forget all that.

Speaking of VR, Yahtzee’s still playing with it and thinks it’s improving, which is high praise. So there’s hope yet.

I really need to dig my heavyweight gear out of the attic, but I’m waiting until we (eventually) move house. And I absolutely agree with Yahtzee’s observation about the value of VR games in which you can sit down, sometimes.

Difference Between VR and AR

This weekend, my sister Sarah challenged me to define the difference between Virtual Reality and Augmented Reality. And the more I talked about the differences between them, the more I realised that I don’t have a concrete definition, and I don’t think that anybody else does either.

A man wearing a VR headset while writing on a whiteboard.
AR: the man sees simulated reality and, probably, the whiteboard.
VR: the man sees simulated reality only, which may or may not include a whiteboard.
Either way: what the hell is he doing?

After all: from a technical perspective, any fully-immersive AR system – for example a hypothetical future version of the Microsoft Hololens that solves the current edition’s FOV problems – is a theoretical superset of any current-generation VR system. That AR augments the reality you can genuinely see, rather than replacing it entirely, becomes irrelevant if that AR system could superimpose a virtual environment covering your entire view. So “AR only covers part of your vision, VR covers all of it” is not a reliable definition of the difference.

Spectrum showing AR view of the Louvre on the left (with 5% of the view occluded by UI) and VR view of a 3D model of the Louvre on the right (with 100% of the view occluded). The middle of the spectrum remains undefined.
Defining the difference by how much of your view is occluded by machine-generated images fails to pin down where the boundary between AR and VR lies. 5%? 50%? 99%?

This isn’t a new conundrum. Way back in 1994, when the Sega VR-1 was our idea of cutting edge, Milgram et al. developed a series of metaphorical spectra to describe the relationship between different kinds of “mixed reality” systems. The core difference, they argue, is whether the computer-generated content represents a “world” in itself (VR) or just an “overlay” (AR).

But that’s unsatisfying for the same reason as above. The HTC Vive headset can be configured to use its front-facing camera(s) to fade seamlessly from the game world to the real world as the player gets close to the boundaries of their play space. This is a safety feature, but it doesn’t have to be: there’s no reason that an HTC Vive couldn’t be adapted to function as what Milgram would describe as a “class 4” device, which is functionally the same as a headset-mounted AR device. So what’s the difference?

You might argue that the difference between AR and VR is content-based: that is, it’s the thing that you’re expected to focus on that dictates which is which. If you’re expected to look at the “real world”, it’s an augmentation, and if not then it’s a virtualisation. But that approach fails to describe Google’s tech demo of putting artefacts in your living room via augmented reality (which I’ve written about before), because your focus is expected to be on the artefact rather than the “real world” around it. The real world only exists to help with the interpretation of scale: it’s not what the experience is about, and your countertop is as valid a real-world target as the Louvre. Google doesn’t care.

Milgram et al. (1994)'s Table 1: Some major differences between classes of Mixed Reality (MR) displays, showing seven classes: 1. monitor-based video with CG overlays, 2. headset-based video with CG overlays, 3. headset-based see-through display with CG overlays, 4. headset-based see-through video with CG overlays, 5. monitor/CG world with video overlays, 6. headset-based CG world with video overlays, and 7. CG-based world with real object intervention.
Categories 3 and 4 would probably be used to describe most contemporary AR; categories 6 and 7 to describe contemporary VR.

Many researchers echo Milgram’s idea that AR becomes VR at the point where the computer-generated content completely covers your vision.

But even if we accept this explanation, the definition gets muddied by the wider field of “extended reality” (XR). Originally an umbrella term to cover both AR and VR (and “MR”, if you believe that’s a separate and independent thing), XR gets used to describe interactive experiences that cover other senses, too. If I play a VR game with real-world “props” that I can pick up and move around, but that appear differently in my vision, am I not “augmenting” reality? Is my experience, therefore, more or less “VR” than if the interactive objects exist only on my screen? What about if – as in a recent VR escape room I attended – the experience is enhanced by fans to simulate the movement of air around you? What about smell? (You know already that somebody’s working on bridging virtual reality with Smell-O-Vision.)

Dan and Robin concluding a VR Escape Room.
Not sure if you’re real or if you’re dreaming? Don’t ask an XR researcher; they don’t know either.

Increasingly, then, I’m beginning to feel that XR itself is a spectrum, and a pretty woolly one. Just as it’s hard to specify in a concrete way where the boundary exists between being asleep and being awake, it’s hard to mark where “our” reality gives way to the virtual and vice-versa.

It’s based upon the addition of information to our senses, by a computer, and there can be more (as in fully-immersive VR) or less (as in the subtle application of AR) of it… but the edges are very fuzzy. I guess that the spectrum of the visual experience of XR might look a little like this:

Comic showing different levels of "XR-ness", from the most to the least: The Matrix has you now, fully immersive VR, headset AR, "Magic Window" AR, conventional videogaming, HUDs, using google maps while walking, imagining playing Tetris, reality.

Honestly, I don’t know any more. But I don’t think my sister does either.


Note #11210

Notes from #musetech18 presentations (with a strong “collaboration” theme). Note that these are “live notes” first-and-foremost for my own use and so are probably full of typos. Sorry.

Matt Locke (StoryThings, @matlocke):

  • Over the last 100 years, proportional total advertising revenue has been stolen from newspapers by radio, then television: scheduled media that is experienced simultaneously. But we see a recent drift in “patterns of attention” towards the Internet. (Schedulers, not producers, hold the power in radio/television.)
  • The new attention “spectrum” includes things that aren’t “20-60 minutes” (which has historically been dominated by TV) nor “1-3 hours” (which has been film), but now there are shorter and longer forms of popular medium, from tweets and blog posts (very short) to livestreams and binging (very long). To gather the full spectrum of attention, we need to span these spectra.
  • Rhythm is the traditions and patterns of how work is done in your industry, sector, platforms and supply chains. You need to understand this to be most-effective (but this is hard to see from the inside: newcomers are helpful). In broadcast television as a medium, the schedules dictate the rhythms… in traditional print publishing, the major book festivals and “blockbuster release” cycles dominate the rhythm.
  • Then how do we collaborate with organisations not in our sector (i.e. with different rhythms)? There are several approaches, but think about the rhythmic impact.

Lizzy Bullock (English Heritage, @lizzybethness):

  • g.co/englishheritage
  • Partnered with Google Arts & Heritage; Google’s first single-partner project and also their first project with a multi-site organisation.
  • This kind of tech can be used to increase access (e.g. street view of closed sites) and also support curatorial/research aims (e.g. ultra-high-resolution photography).
  • Aside from the tech access, working with a big company like Google provides basically “free” PR. In combination, these benefits boost reach.
  • Learnings: prepare to work hard and fast, multi-site projects are a logistical nightmare, you will need help, stay organised and get recordkeeping/planning in place early, be aware that there’ll be things you can’t control (e.g. off-brand PR produced by the partner), don’t be afraid to stand your ground where you know your content better.
  • Decide what success looks like at the outset, with all relevant stakeholders involved, so that you can stay on course. Make sure the project is integrated into contributors’ work streams.

Daria Cybulska (Wikimedia UK, @DCybulska):

  • Collaborative work via Wikimedians-in-residence not only provides a boost to open content but involves engagement with staff and opens further partnership opportunities.
  • Your audience is already using Wikipedia: reaching out via Wikipedia provides new ways to engage with them – see it as a medium as well as a platform.
  • Wikimedians-in-residence, being “external”, are great motivators to agitate processes and promote healthy change in your organisation.

Creative Collaborations ([1] Kate Noble @kateinoble, Ina Pruegel @3today, [2] Joanna Salter, [3] Michal Cudrnak, Johnathan Prior):

  • Digital making (learning about technology through making with it) can link museums with “maker culture”. Cambridge museums (Zoology, Fitzwilliam) used a “Maker in Residence” programme and promoted “family workshops” and worked with primary schools. Staff learned-as-they-went and delivered training that they’d just done themselves (which fits maker culture thinking). Unexpected outcomes included interest from staff and discovery of “hidden” resources around the museums, and the provision of valuable role models to participants. Tips: find allies, be ambitious and playful, and take risks.
  • National Maritime Museum Greenwich/National Maritime Museum – “re.think” aimed to engage public with emotive topics and physically-interactive exhibits. Digital wing allowed leaving of connections/memories, voting on hot issues, etc. This leads to a model in which visitors are actively engaged in shaping the future display (and interpretation) of exhibitions. Stefanie Posavec appointed as a data artist in residence.
  • SoundWalk Strazky at Slovak National Gallery: audio-geography soundwalks as an immersive experiential exhibition; can be done relatively cheaply, at the basic end. Telling fictional stories (based on reality) can help engage visitors with content (in this case, recreating scenes from artists’ lives). Interlingual challenges. Delivery via Phonegap app which provides map and audio at “spots”; with a simple design that discourages staring-at-the-screen (only use digital to improve access to content!).

Lightning talks:

  • Maritime Museum Greenwich: wanted to find out how people engage with objects – we added both a museum interpretation and a community message to each object. Highly-observational testing helped see how hundreds of people engage with content. Lesson: curators are not good judges of how their stuff will be received; audience ownership is amazing. Be reactive. Visitors don’t mind being testers of super-rough paper-based designs.
  • Nordic Museum / Swedish National Heritage Board explored Generous Interfaces: show first, don’t ask, rich overviews, interobject relationships, encourage exploration etc. (Whitelaw, 2012). Open data + open source + design sprints (with coding in between) + lots of testing = a collaborative process. Use testing to decide between sorting OR filtering; not both! As a bonus, generous interfaces encourage finding of data errors. bit.ly/2CNsNna
  • IWM on the centenary of WWI: thinking about continuing the crowdsourcing begun by the IWM’s original mission. Millions of assets have been created by users. Highly-collaborative mechanism to explore, contribute to, and share a data space.
  • Lauren Bassam (@lswbassam) on LGBT History and co-opting of Instagram as an archival space: Instagram is an unconventional archival source, but provides a few benefits in collaboration and engagement management, and serves as a viable platform for stories that are hard to tell using the collections in conventional archives. A suitably-engaged community can take pride in their accuracy and their research cred, whether or not you strictly approve of their use of the term “archivist”. With closed stacks, we sometimes forget how important engagement, touch, exploration and play can be.
  • Owen Gower (@owentg) from Dr. Jenner’s House Museum and Garden: they received EU REVEAL funding to look at VR as an engagement tool. Their game is for PSVR and has a commercial release. The objects that interested the game designers the most weren’t necessarily those which the curators might have chosen. Don’t let your designers get carried away and fill the game with e.g. zombies. But work with them, and your designers can help you find not only new ways to tell stories, but new stories you didn’t know you could tell. Don’t be afraid to use cheap/student developers!
  • Rebecca Kahm @rebamex from Pelagios Commons (@Pelagiosproject): the problem with linked data is that it’s hard to show its value to end users (or even show museums “what you can do” with it). Coins have great linked data, in collections. Peripleo was used to implement a sort-of “reverse Indiana Jones”: players try to recover information to find where an artefact belongs.
  • Jon Pratty: There are lots of useful services (Flickr, Storify etc.) and many are free (which is great)… but this produces problems for us in terms of the long-term life of our online content, not to mention the ethical issues with using services whose business model is built on trading personal data of our users. [Editor’s note: everything being talked about here is the stuff that the Indieweb movement have been working on for some time!] We need to de-siloise and de-centralise our content and services. redecentralize.org? responsibledata.io?

In-House Collaboration and the State of the Sector:

  • Rosie Cardiff @RosieCardiff, Serpentine Galleries on Mobile Tours. Delivered as a web application via a captive WiFi hotspot. Technical challenges were significant for a relatively small digital team, and there was some apprehension among frontline staff. As a result of these and other problems, the mobile tours were underused. Ideas to overcome barriers: report successes and feedback, reuse content cross-channel, fix bugs ASAP, invite dialogue. Interesting that they’ve gained print guides off the back of the digital. Learn lessons and relaunch.
  • Sarah Younaf @sarahyounas, Tyne & Wear Museums. Digital’s job is to ask the questions the museum wouldn’t normally ask, i.e. experimentation (with a human-centric bias). Digital is quietly, by its nature, “given permission” to take risks. Consider establishing relationships with (and inviting-in) people who will/want to do “mashups” or find alternative uses for your content; get those conversations going about collections access. Experimental Try-New-Things afternoons had value but this didn’t directly translate into ideas-from-the-bottom, perhaps as a result of a lack of confidence, a requirement for fully-formed ideas, or a heavy form in the application process for investment in new initiatives. Remember you can’t change everyone, but find champions and encourage participation!
  • Kati Price @katiprice on Structuring for Digital Success in GLAM. Study showed that technical leadership and digital management/analysis are rated as vital, yet they’re also underrepresented. Ambitions routinely outstrip budgets. Assumptions about what digital teams “look” like from an org-chart perspective don’t cover the full diversity: digital teams look very different from one another! Forrester Research’s model of Digital Maturity seems to be the closest measure of digital maturity in GLAM institutions, but has flaws (mostly relating to its focus on the commercial sector). What’s interesting is that digital maturity seems to correlate with structure, from least to most mature: decentralised, then centralised, then hub-and-spoke, then holistic.
  • Jennifer Wexler, Daniel Pett, Chiara Bonacchi on Diversifying Museum Audiences through Participation and stuff. Crowdsourcing boring data entry tasks is sometimes easier than asking staff to do it, amazingly. For success, make sure you get institutional buy-in and get press on board. Also: make sure that the resulting data is open so everybody can explore it. Crowdsourcing is not implicitly democratising, but it leads to the production of data that can be. 3D prints (made from 3D cutouts generated by crowdsourcing) are a useful accessibility feature for bringing a collection to blind or partially-sighted visitors, for example. Think about your audiences: kids might love your hip VR, but if their parents hate it then you still need a way to engage with them!

Geohashing expedition 2018-10-16 52 -0

This checkin to geohash 2018-10-16 52 -0 reflects a geohashing expedition. See more of Dan's hash logs.

Location

Alongside a lane that runs through the Quinton Green Business Park, South of the village of Quinton.

Participants

Plans

I’ve got an exam in Milton Keynes in the afternoon, so it’d be only a minor diversion for me to come and try to visit this roadside hashpoint. I hope to be there about 10:30.

Expedition

Failed to turn on the tracklogger on my GPS, but I remembered to get photos at least. This was a quick and easy run, although I did get accosted by a local who saw me hanging around near the wind farm and putting up a sign… I think that after the controversy these epic windmills caused he might have thought that I was putting up a planning notice to erect some more or something. Once I explained what I was doing he seemed happy enough.

Used my new 360° full-panoramic camera to take a picture at the hashpoint; I’ll put a VR-ready version on my website and link it here when I get the chance.

Panoramic 360° VR-ready wraparound of the hashpoint

Photos


Note #10601

Oh, go on, one more 360° photo: view from @NuffieldCollege’s (library) tower, ten storeys above the #Oxford castle quarter.

View from Nuffield College library tower.

Note #10596

Walking around @NuffieldCollege at lunchtime: love this long pond.

Note #10594

Lunchtime at @NuffieldCollege gave me another excuse to snap some 360° photos; this one’s part of the quad.

Nuffield College quad

Note #10589

The @bodleianlibs Comms Team Away Day at @NuffieldCollege provides yet another excuse to play with 360° photography.

IndieWebCamp Oxford

This weekend, I attended part of Oxford’s first ever IndieWebCamp! As a long (long, long) time proponent of IndieWeb philosophy (since long before anybody said “IndieWeb”, at least) I’ve got my personal web presence pretty-well sorted out. Still, I loved the idea of attending and pushing some of my own tools even further: after all, a personal website isn’t “finished” until its owner says it is! One of the things I ended up hacking on was pretty-predictable: enhancements to my recently-open-sourced geocaching PESOS tools… but the other’s worth sharing too, I think.

Hacking and learning at IndieWebCamp Oxford
Some of IndieWebCamp Oxford’s attendees share knowledge and hack code together.

I’ve recently been playing with WebVR – for my day job at the Bodleian, I swear! – and I was looking for an excuse to try to expand some of what I’d learned into my personal blog, too. Given that I’ve recently acquired a Ricoh Theta V I thought that this’d be the perfect opportunity to add WebVR-powered panoramas to this site. My goals were:

  • Entirely self-hosted; no external third-party dependencies
  • Must degrade gracefully: even if you’re using an older browser, don’t have Javascript enabled, etc., it should at least show the original image
  • In plain-old browsers, must support mouse (or touch) control to pan the scene
  • Where accelerometers are available (e.g. mobiles), “magic window” support to allow twist-to-explore
  • And where “true” VR hardware (Cardboard, Vive, Rift etc.) with WebVR support is available, allow one-click use of that
IndieWebCamp Oxford attendees at the pub
It wouldn’t be a geeky hacky camp thingy if it didn’t finish at a bar.

Hopefully the images above are working for you and are “interactive”. Try click-and-dragging on them (or tilt your device), try fullscreen mode, and/or try WebVR mode if you’ve got hardware that supports it. The mechanism of operation is slightly hacky but pretty simple. Here’s how it works:

  1. The image is inserted into the page as normal but with an extra CSS class of “vr360” and a data attribute pointing to the full-resolution image, e.g.:
    <img class="vr360" src="/uploads/2018/09/R0010005_20180922182210-1024x512.jpg" alt="IndieWebCamp Oxford attendees at the pub" width="640" height="320" data-vr360="/uploads/2018/09/R0010005_20180922182210.jpg" />
  2. Some Javascript swaps-out images with this class for an iframe of the same size, showing a special page and passing the image filename after the hash, e.g.:
    for(const vr360 of document.querySelectorAll('.vr360')){
      let width = parseInt(vr360.width);   // "let", not "const": the fallbacks
      let height = parseInt(vr360.height); // below may need to overwrite these
      if(width == 0) width = '100%'; // Fallback for where width/height not specified,
      if(height == 0) height = '100%'; // needed because of some quirks with Dan's lazy-loader
      vr360.outerHTML = `<iframe src="/q23-content/themes/q18/vr360/#${vr360.dataset.vr360}" width="${width}" height="${height}" class="aligncenter vr360-frame" style="min-width: 340px; min-height: 340px;"></iframe>`;
    }
  3. The iframe page loads this Javascript file. This loads three.js (to make 3D things easy) and WebVR-polyfill (to fix browser quirks). Finally (scroll to the bottom of the code), it creates a camera in the centre of a sphere, loads the image specified in the hash, flips it, and paints it onto the inside surface of the sphere, sets up controls, and turns the user loose on it. That’s all there is to it! (There’s a sketch of this step just below the list.)
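
If you’re curious what that third step boils down to, here’s a minimal sketch. It follows the standard three.js equirectangular-panorama pattern rather than being a copy-paste of my actual file, and it leaves out the controls and WebVR-polyfill wiring for brevity:

    // Assumes three.js is already loaded globally as THREE.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // A big sphere, turned inside-out so the texture is painted onto its
    // inner surface; the camera sits at its centre looking outwards.
    const geometry = new THREE.SphereGeometry(500, 60, 40);
    geometry.scale(-1, 1, 1); // flip so we see the texture from inside

    // The panorama's filename arrives after the # in the iframe's URL,
    // e.g. /q23-content/themes/q18/vr360/#/uploads/2018/09/photo.jpg
    const texture = new THREE.TextureLoader().load(location.hash.substring(1));
    scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));

    (function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    })();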

You’re welcome to any of my code if you’d like a drop-in approach to hosting panoramic photographs on your own personal site. My solution’s pretty extensible if you want e.g. interactive hotspots or contextual overlays – in fact, that – plus an easy route to editing the content for less-technical users – is pretty-much exactly what I’m working on for my day job at the moment.
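
For example, if you wanted to experiment with hotspots, one plausible approach (hypothetical – this isn’t what I’ve built, and the names here are illustrative) is to place small meshes on the inside of the same sphere from the sketch above and hit-test clicks against them with a raycaster:

    // Extends the sketch above: scene, camera and renderer already exist.
    // "hotspots" is assumed to be an array of THREE.Mesh objects that have
    // been positioned just inside the sphere's surface.
    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();
    renderer.domElement.addEventListener('click', (event) => {
      // Convert the click to normalised device coordinates (-1 to +1).
      pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
      pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;
      raycaster.setFromCamera(pointer, camera);
      const hits = raycaster.intersectObjects(hotspots);
      if(hits.length > 0) hits[0].object.userData.onActivate(); // e.g. show an overlay
    });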

Daydream VR hands-on: Google’s dumb VR headset is actually very clever

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Of all the products announced today at Google’s massive event, the Daydream View might be the best seller. At only $79, Daydream packs a “good enough” controller and VR headset into a single box, allowing anyone with a brand new phone (for now only a brand new Google phone) to experience virtual reality. The Daydream opens up the Gear VR concept to the entire Android ecosystem, with future Android devices expected to support the standard…