…
Wait, it isn’t supposed to be pronounced “clan via Pokemon Go garage city of cocoa”? I’ve been saying it wrong my whole life! 😅
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
…
“The grid needs new electricity sources to support AI technologies,” said Michael Terrell, senior director for energy and climate at Google.
“This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone.”
The deal with Google “is important to accelerate the commercialisation of advanced nuclear energy by demonstrating the technical and market viability of a solution critical to decarbonising power grids,” said Kairos executive Jeff Olson.
…
Sigh.
First, something lighthearted-if-it-wasn’t-sad. Google’s AI is, of course, the thing that comes up with gems like this:
But here’s the thing: the optimist in me wants to believe that when the current fad for LLMs passes, we might – if we’re lucky – come out the other side with some fringe benefits in the form of technological advancements.

Western nations have, in general, been under-investing in new nuclear technologies2, instead continuing to operate ageing second-generation reactors for longer and longer timescales3 while flip-flopping over whether or not to construct a new fleet. It sickens me to say so, but if investment by tech companies is what’s needed to unlock the next generation of power plants, and those plants can keep running after LLMs have had their day and gone back to being a primarily academic consideration… then that’s fine by me.
Of course, it’s easy to find plenty of much more-pessimistic viewpoints too. The other week, I had a dream in which we determined the most-likely identity of the “great filter”: a hypothetical resolution to the Fermi paradox which posits that we don’t see evidence of extraterrestrial life because there’s some common barrier to the development of spacefaring civilisations that most fail to pass. In the dream, we decided that the most-likely cause was energy hunger: that over time, an advancing civilisation would inevitably develop an increasingly energy-hungry series of egoistic technologies (cryptocurrencies, LLMs, whatever comes next…) and, fuelled by the selfish, individualistic forces that ironically allowed it to compete and evolve to that point, destroy its habitat and/or its sources of power and collapse. I woke from the dream thinking that there was a potential short story to be written there, from the perspective of some future human looking back on the chain of technologies that ultimately led to our energy-hungry downfall, but I never got around to writing it.
I think I’ll try to keep hold of the optimistic viewpoint, for now: that the snake-that-eats-its-own-tail that is contemporary AI will fizzle out of mainstream relevance, but not before big tech makes big investments in next-generation nuclear, renewable, and energy storage technologies. That’d be nice.
1 Hilari-saddening: when you laugh at something until you realise quite how sad it is.
2 I’m a big fan of nuclear power – as I believe that all informed environmentalists should be – as both a stop-gap to decarbonising energy production and potentially as a relatively-clean long-term solution for balancing grids.
3 Consider for example Hartlepool Nuclear Power Station, which supplies 2%-3% of the UK’s electricity. Construction began in the late 1960s and the station was supposed to run until 2007. Which was extended to 2014 (by which point it was clearly showing signs of ageing). Which was extended to 2019. Which was extended to 2024. It’s still running. The site’s approved for a new reactor but construction will probably be a decade-long project and hasn’t started, sooo…
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
…
VPNs have long been essential online tools that provide security, freedom, and most importantly, privacy. Each day, hundreds of millions of internet users connect to a VPN to prevent their online activities from being tracked and monitored so that they can privately access web resources. In other words, the very purpose of a VPN is to prevent the type of surveillance that Google engages in on a massive and unprecedented scale.
Google knows this, and in their whitepaper discussing VPN by Google One, Google acknowledges that VPN usage is becoming mainstream and that “up to 25% of all internet users accessed a VPN within the last month of 2019.” Increasing VPN usage unfortunately poses a significant problem for Google, by making it more difficult to track users across the internet, mine their data, and target them with advertisements. In short, VPNs undermine Google’s power.
…
So yeah, it turns out that Google are launching a VPN service. I just checked, and it’s not available to me anyway because it’s US-only (apparently nobody explained to Google the irony of having a VPN service that’s geofenced), but that’s pretty academic because I wasn’t going to touch it with a barge pole in the first place.
Google already collect data on your browsing habits if you use their products. And I’m not just talking about Chrome, which of course continues to track you using your Google Account even after you log out and clear your cookies, and Google’s ubiquitous Web tools, but also the tracking pixels hidden on every other website thanks to Google Analytics, AdWords, reCAPTCHA, Google Fonts, and the like. Sure, you can use e.g. uMatrix to stop all of these (although I’m in need of a replacement), but that’s not a solution for, y’know, normal people. Container tabs help and you should absolutely use them, but they don’t quite go far enough. It’s a challenge.
Switch to their VPN, though, and they’re suddenly able to track all of your browsing activity, in any browser on your device. And probably many of the desktop applications you run, too, as most of them “phone home” for updates or functionality. And because it’s a paid-for VPN service, this data can be instantly linked to your real-world identity. By a company that’s demonstrated its willingness to misuse that data for their own benefit (or for the benefit of overreaching law enforcement agencies). Yeah: no deal, Google.
Perhaps the only company I’d trust less to provide a VPN service would be Facebook, because you just know they’d be doing so exclusively to undermine individual privacy. Oh wait; that’s exactly what they did. Sigh.
Over the last six years I’ve been on a handful of geohashing expeditions, setting out to functionally-random GPS coordinates to see if I can get there, and documenting what I find when I do. The comic that inspired the sport was already six years old by the time I embarked on my first outing, and I’m far from the most-active member of the ‘hasher community, but I’ve a certain closeness to them as a result of my work to resurrect and host the “official” website. Either way: I love the sport.
But even when I’ve not been ‘hashing, it occurs to me that I’ve been tracking my location a lot. Three mechanisms in particular dominate: the μLogger app on my phone, the tracklogs from my handheld GPSr, and Google’s location history (exported via Google Takeout).
If I could mine all of that data, I might be able to answer the question… have I ever accidentally visited a geohashpoint?
Let’s find out.
To begin with, I needed to get all of my data into μLogger. The Android app syncs to it automatically and uploading from my GPSr was simple. The data from Google Takeout was a little harder.
I found a setting in Google Takeout to export past location data in KML, rather than JSON, format. KML is understood by GPSBabel which can convert it into GPX. I can “cut up” the resulting GPX file using a little grep-fu (relevant xkcd?) to get month-long files and import them into μLogger. Easy!
Well… μLogger’s web interface sometimes times out if you upload enormous files like a whole month of Google Takeout logs. So instead I wrote a Nokogiri script to convert the GPX into SQL to inject directly into μLogger’s database.
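If you’re curious what that kind of conversion looks like, here’s a rough sketch of the idea – in Python rather than Ruby/Nokogiri, so emphatically not the actual script, and the table and column names below are illustrative assumptions rather than μLogger’s real schema:

# Illustrative sketch only: parse a GPX file and emit SQL INSERTs for a
# simplified, hypothetical "positions" table. The real script was Ruby +
# Nokogiri, and μLogger's actual schema has more columns than this.
import xml.etree.ElementTree as ET

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def gpx_to_sql(gpx_path, user_id=1, device_id=1):
    tree = ET.parse(gpx_path)
    for trkpt in tree.getroot().iter("{http://www.topografix.com/GPX/1/1}trkpt"):
        lat = float(trkpt.get("lat"))
        lon = float(trkpt.get("lon"))
        time_el = trkpt.find("gpx:time", GPX_NS)
        if time_el is None:
            continue  # skip points without a timestamp
        yield (
            "INSERT INTO positions (user_id, device_id, time, latitude, longitude) "
            f"VALUES ({user_id}, {device_id}, '{time_el.text}', {lat}, {lon});"
        )

if __name__ == "__main__":
    for statement in gpx_to_sql("takeout-2015-08.gpx"):
        print(statement)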
Next, I got a set of hashpoint offsets. I only had personal positional data going back to around 2010, so I didn’t need to account for the pre-2008 absence of the 30W time zone rule. I’ve had only one trip to the Southern hemisphere in that period, and I checked that one manually. A little rounding and grouping in SQL gave me each graticule I’d been in on every date. Unsurprisingly, I spend most of my time in the 51 -1 graticule. Adding the offset (or subtracting it, for the Western hemisphere) gave me the hashpoint coordinates for each graticule on each date that I was in it. Nice.
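That last step is simpler than it sounds: take the integer graticule you were in on a given date, bolt on that date’s fractional offsets, and remember that in the Western (or Southern) hemisphere the offset moves you further from zero, not closer. A minimal Python sketch, with a made-up function name and example values:

# Sketch: combine a graticule with a date's hashpoint offsets. In the
# geohashing convention, offsets are applied away from zero, hence the
# subtraction for negative (Western/Southern) graticules.
def hashpoint(grat_lat, grat_lon, offset_lat, offset_lon):
    lat = grat_lat + offset_lat if grat_lat >= 0 else grat_lat - offset_lat
    lon = grat_lon + offset_lon if grat_lon >= 0 else grat_lon - offset_lon
    return lat, lon

# e.g. the 51 -1 graticule with made-up offsets for some date:
print(hashpoint(51, -1, 0.1234, 0.5678))  # (51.1234, -1.5678)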
The correct way to find the proximity of my positions to each geohashpoint is, of course, to use WGS84. That’s an easy thing to do if you’re using a database that supports it. My database… doesn’t. So I just used Pythagoras’ theorem to find positions I’d visited that were within 0.15° of that day’s hashpoint.
Using Pythagoras for geopositional geometry is, of course, wrong. Why? Because the physical length of a “degree” varies with latitude, and – more importantly – a degree of latitude is not the same distance as a degree of longitude. The ratio varies by latitude: only an idealised equatorial graticule would be square!
But for this case, I don’t care: the data’s going to be fuzzy and require some interpretation anyway. Not least because Google’s positioning has the tendency to, for example, spot a passing train’s WiFi and assume I’ve briefly teleported to Euston Station, which is apparently where Google thinks that hotspot “lives”.
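In case it helps to see the shape of the check, here’s a minimal sketch of the “within 0.15° by Pythagoras” filter, plus a haversine function that gives a proper distance in metres once a candidate turns up. The function names are mine, for illustration, not lifted from the actual query:

# Sketch of the proximity test described above: a crude flat-plane
# filter in degrees, then a spherical-Earth (haversine) distance for
# any candidates it finds. Names and defaults are illustrative.
from math import radians, sin, cos, asin, sqrt, hypot

def roughly_near(lat1, lon1, lat2, lon2, threshold_deg=0.15):
    # Pythagoras on raw degrees: wrong in general (degrees of longitude
    # shrink towards the poles), but good enough as a coarse first pass.
    return hypot(lat1 - lat2, lon1 - lon2) <= threshold_deg

def haversine_metres(lat1, lon1, lat2, lon2):
    # Great-circle distance on a sphere of radius ~6,371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))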
I assumed that my algorithm would detect all of my actual geohash finds, and yes: all of these appeared as-expected in my results. This was a good confirmation that my approach worked.
And, crucially: about a dozen additional candidate points showed up in my search. Most of these – listed at the end of this post – were 50m+ away from the hashpoint and involved me driving or cycling past on a nearby road… but one hashpoint stuck out.
In August 2015 we took a trip up to Edinburgh to see a play of Ruth’s brother Robin’s. I don’t remember much about the play because I was on keeping-the-toddler-entertained duty and so had to excuse myself pretty early on. After the play we drove South, dropping Tom off at Lanark station.
We exited Lanark via the Hyndford Bridge… which is – according to the map – tantalisingly-close to the 2015-08-22 55 -3 hashpoint: only about 23 metres away!
That doesn’t feel quite close enough to justify retroactively claiming the geohash, tempting though it would be to use it to claim an easy geohash ribbon. Google doesn’t provide error bars for their exported location data, so I can’t draw a circle of uncertainty, but it seems unlikely that I actually passed through this very-close hashpoint.
Pity. But a fun exercise. This was the nearest of my near misses, but plenty more turned up in my search, too:
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
A new email-based extortion scheme apparently is making the rounds, targeting Web site owners serving banner ads through Google’s AdSense program. In this scam, the fraudsters demand bitcoin in exchange for a promise not to flood the publisher’s ads with so much bot and junk traffic that Google’s automated anti-fraud systems suspend the user’s AdSense account for suspicious traffic.
…
The shape of our digital world grows increasingly strange. As anti-DoS techniques grow better and more and more uptime-critical websites hide behind edge caches, zombie network operators remain one step ahead and find new and imaginative ways to extort money from their victims. In this new attack, the criminal demands payment (in cryptocurrency) under threat that, if it’s not delivered, they’ll unleash an army of bots to act like the victim trying to scam their advertising network, thereby getting the victim’s site demonetised.
#TodayILearned that if you get a perfect 100-100-100-100 + #PWA score from @Google‘s site performance/accessibility/best practice/SEO tool “@____lighthouse” then you’re treated to an animated fireworks display alongside your scores. (Yes, this is for danq.me.)
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
Big news! This site is no longer using Google Analytics and I’ve switched to a self-hosted version of brand new analytics product Fathom.
Fathom is very simple. It only tracks 4 things: Unique Visitors, Page Views, Time on Site, and Bounce Rate. It shows me a chart of page views and visitors and then gives me a break down of referrers and top performing content. That’s it. And to be quite honest, that’s about all I need from my blog analytics.…
You know what, Dave: me too! I’ve been running Google Analytics since forever and Piwik/Matomo (in parallel with it) for about a year and honestly: I get more than enough of what I need from the latter. So you’ve inspired me to cut the line with Google: after all, all I was doing was selling them my friends’ data in exchange for some analytics I wasn’t really paying attention to… and I’d frankly rather not.
So: for the first time in a decade or so, there’s no Google Analytics on this site. Woop!
Update 2023-12-13: I eventually went further still and dropped all analytics, even self-hosted variants, and it feels great.
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
…
We here at unlike kinds decided that we had to implement Google AMP. We have to be in the Top Stories section because otherwise we’re punted down the page and away from potential readers. We didn’t really want to; our site is already fast because we made it fast, largely with a combination of clever caching and minimal code. But hey, maybe AMP would speed things up. Maybe Google’s new future is bright.
It isn’t. According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95. (Note: I updated these numbers recently with an average after running the test 6 times per version.)
…
I’ve complained about AMP before plenty – starting here, for example – but it’s even harder to try to see the alleged “good sides” of the technology when it doesn’t even deliver the one thing it was supposed to. The Internet should be boycotting this shit, not drinking the Kool-Aid.
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
Coming to Android this year: a third gender option for emojis such as Police Officer, Zombie, Person Facepalming, Construction Worker and People With Bunny Ears.
Revealed by Google in a submission to the Unicode Consortium last week, these changes signal a new direction from Google which has in recent years played ball with other vendors in overlooking Unicode guidelines, in favor of cross platform compatibility.
In giving public notice via Unicode, Google hopes that other vendors will join them in this effort to standardize many of the emoji which don’t specify a gender.
This builds on an initial few gender inclusive revisions made by Google in 2018.
…
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
Fusion Tables… Fabric… Inbox… Google+… goo.gl… Goggles… Site Search… Glass… Now… Code… Bump!… Gears… Desktop Search…
Just some of the projects and services that Google has offered and then killed; this site aims to catalogue them all. Some, like Wave, were given to the community (Wave lived on for a while as an Apache project but is now basically dead), but most, like Reader, were assassinated in a misguided attempt to drive traffic to other services (ultimately, Reader was killed perhaps to try to get people onto Google+, which was then also killed).
Google can’t be trusted to maintain the services of theirs that you depend upon (relevant XKCD?). That’s not a phenomenon that’s unique to Google, of course: it’s perhaps just that they produce so many new and often-experimental services that they inevitably cease supporting more of them than some of the many other providers who’ve killed the silos that people depended upon.
How could things be better? For a start, Google could make a better commitment to open-source and developing standards rather than platforms. But if you don’t think you can trust them to do that – and you can’t – then the only solution for individuals is to use fewer Google products to break the Google-monoculture. Encourage the competition to weaken their position, and break free from silos in general where it’s possible to do so.
148+ projects and services dead. But hey, we’re getting Stadia so everything’s okay, right? <sigh>
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
<script>
(function(a,b,c){var d=a.history,e=document,f=navigator||{},g=localStorage,
h=encodeURIComponent,i=d.pushState,k=function(){return Math.random().toString(36)},
l=function(){return g.cid||(g.cid=k()),g.cid},m=function(r){var s=[];for(var t in r)
r.hasOwnProperty(t)&&void 0!==r[t]&&s.push(h(t)+"="+h(r[t]));return s.join("&")},
n=function(r,s,t,u,v,w,x){var z="https://www.google-analytics.com/collect",
A=m({v:"1",ds:"web",aip:c.anonymizeIp?1:void 0,tid:b,cid:l(),t:r||"pageview",
sd:c.colorDepth&&screen.colorDepth?screen.colorDepth+"-bits":void 0,dr:e.referrer||
void 0,dt:e.title,dl:e.location.origin+e.location.pathname+e.location.search,ul:c.language?
(f.language||"").toLowerCase():void 0,de:c.characterSet?e.characterSet:void 0,
sr:c.screenSize?(a.screen||{}).width+"x"+(a.screen||{}).height:void 0,vp:c.screenSize&&
a.visualViewport?(a.visualViewport||{}).width+"x"+(a.visualViewport||{}).height:void 0,
ec:s||void 0,ea:t||void 0,el:u||void 0,ev:v||void 0,exd:w||void 0,exf:"undefined"!=typeof x&&
!1==!!x?0:void 0});if(f.sendBeacon)f.sendBeacon(z,A);else{var y=new XMLHttpRequest;
y.open("POST",z,!0),y.send(A)}};d.pushState=function(r){return"function"==typeof d.onpushstate&&
d.onpushstate({state:r}),setTimeout(n,c.delay||10),i.apply(d,arguments)},n(),
a.ma={trackEvent:function o(r,s,t,u){return n("event",r,s,t,u)},
trackException:function q(r,s){return n("exception",null,null,null,null,r,s)}}})
(window,"XX-XXXXXXXXX-X",{anonymizeIp:true,colorDepth:true,characterSet:true,screenSize:true,language:true});
</script>
This is cute: a Google Analytics code snippet that results in a payload about a fiftieth of the size of the one provided by Google but still provides most of the important features.
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.
This may sound melodramatic, but it’s not. The “browser engines” — Chromium from Google and Gecko Quantum from Mozilla — are “inside baseball” pieces of software that actually determine a great deal of what each of us can do online. They determine core capabilities such as which content we as consumers can see, how secure we are when we watch content, and how much control we have over what websites and services can do to us. Microsoft’s decision gives Google more ability to single-handedly decide what possibilities are available to each one of us.
From a business point of view Microsoft’s decision may well make sense. Google is so close to almost complete control of the infrastructure of our online lives that it may not be profitable to continue to fight this. The interests of Microsoft’s shareholders may well be served by giving up on the freedom and choice that the internet once offered us. Google is a fierce competitor with highly talented employees and a monopolistic hold on unique assets. Google’s dominance across search, advertising, smartphones, and data capture creates a vastly tilted playing field that works against the rest of us.
From a social, civic and individual empowerment perspective ceding control of fundamental online infrastructure to a single company is terrible. This is why Mozilla exists. We compete with Google not because it’s a good business opportunity. We compete with Google because the health of the internet and online life depend on competition and choice. They depend on consumers being able to decide we want something better and to take action.
Will Microsoft’s decision make it harder for Firefox to prosper? It could. Making Google more powerful is risky on many fronts. And a big part of the answer depends on what the web developers and businesses who create services and websites do. If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was released. And it could happen again.
If you care about what’s happening with online life today, take another look at Firefox. It’s radically better than it was 18 months ago — Firefox once again holds its own when it comes to speed and performance. Try Firefox as your default browser for a week and then decide. Making Firefox stronger won’t solve all the problems of online life — browsers are only one part of the equation. But if you find Firefox is a good product for you, then your use makes Firefox stronger. Your use helps web developers and businesses think beyond Chrome. And this helps Firefox and Mozilla make overall life on the internet better — more choice, more security options, more competition.
Scathing but well-deserved dig at Microsoft by Mozilla, following on from the Edge-switch-to-Chromium I’ve been going on about. Chris is right: more people should try Firefox (it’s been my general-purpose browser on desktop and mobile ever since Opera threw in the towel and joined the Chromium hivemind in 2013, and on-and-off plenty before then) – not just because it’s a great browser (and it is!) but also now because it’s important for the diversity and health of the Web.
(Reprinted in full under a creative commons license.)
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
…
We used to have much more diversity in terms of browser engines years ago than we do today. This is easy to understand as the Web in 2018 is far more complex than it was in the early noughties. It is very costly to develop and maintain a Web engine and few companies have the necessary talent and cash to do it. Microsoft is one of those companies but the fact that it might be throwing in the towel on its engine signals a bad development for all of us.
…
Further evaluation of the dangers of the disappearing diversity on the Web, following in the theme of my thoughts the other day about Microsoft’s adoption of Chromium instead of EdgeHTML in its browser.
Andre raises a real point: how will we fight for a private and decentralised Web when it becomes “the Google Web”?
This is a repost promoting content originally published elsewhere. See more things Dan's reposted.
…
I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.
We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.
This essay follows-up nicely on my concerns about Microsoft’s move from EdgeHTML to Chromium in Edge, but goes further to discuss some of the bigger problems of a homogeneous web, especially one under Google’s influence.
Microsoft engineers have been spotted committing code to Chromium, the open-source project behind Google Chrome and many other web browsers. This, among other things, has led to speculation that Microsoft’s browser, Edge, might be about to switch from its current rendering engine (EdgeHTML) to Blink (Chromium’s). This is bad news.
The younger generation of web developers are likely to hail this as good news: one fewer engine to develop for and test in, they’re all already using Chrome or something similar (and certainly not Edge) for development and debugging anyway, etc. The problem comes perhaps because they’re too young to remember the First Browser War and its aftermath. Let me summarise:
History looks set to repeat itself. Substitute Chrome in place of Internet Explorer and update the references to other web browsers and the steps above could be our future history, too. Right now, we’re somewhere in or around step #2 – Chrome is the dominant browser – and we’re starting to see the beginnings of step #3: more and more “Chrome only” sites. More-alarmingly this time around, Google’s position in providing many major Web services allows them to “push” even harder for this kind of change, even just subtly: if you make the switch from Chrome to e.g. Firefox (and you absolutely should) you might find that YouTube runs slower for you because YouTube’s (Google) engineers favour Google’s web browser.
Chrome is becoming the new Internet Explorer 6, and that’s a huge problem. Rachel Nabors wrote in her excellent article The Ecological Impact of Browser Diversity:
So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.
If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.
And it’s not likely to be replaced.
Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today. So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?
We have already lost one.
We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.
Three left. Spend them wisely.
As much as I don’t like having to work-around the quirks in all of the different browsers I test in, daily, it’s way preferable to a return to the dark days of the Web circa most of the first decade of this century. Please help keep browsers diverse: nobody wants to start seeing this shit –
Update: this is now confirmed. A sad day for the Web.