A letter I got recently and a question I was asked in another forum really got me to thinking. The question was: How did you come to realize that poly-amorous relationships were right
for you? Now that you live this lifestyle, do you think that it’s for everyone, or more “natural” than monogamy? I answered:…
I was pleased to see that one of my favourite poly bloggers came out and said what I’ve always argued: that polyamory might well not be for everyone! I’m a big fan of the idea
that everybody can learn some useful relationship-negotiation and communication skills from studying the practice of polyamory, but I’m certainly not suggesting that my lifestyle ought
to be everybody else’s!
Oh my Goat! We just finished reading this awesome pick-a-path story that helps children learn the power of kindness. Have a go… #OatTheGoat
Discovered this fun interactive storybook; it tells the tale of a goat called Oat who endeavours to climb a mountain (making friends along the way). At a few points, it presents as a
“choose your own adventure”-style book (although the forks are artificial and making the “wrong” choice immediately returns you to the previous page), but it still does a reasonable job of exploring issues of bullying and diversity.
If you happened to flip through a PC gaming magazine in the late 1980s or early 1990s, you would’ve probably seen an ad for a game called Leisure
Suit Larry, or one of its many sequels. It was a graphic adventure game first released in 1987 with the primary goal of helping its protagonist get laid. Since most games then
leaned heavily into cartoon violence, Larry’s sexual innuendo stood out. To young boys at the time, it had the mystique of a shrink-wrapped Playboy in a convenience store.
For the past 9 months I have been presenting versions of this talk to AI researchers, investors, politicians and policy makers. I felt it was time to share these ideas with a wider
audience. Thanks to the Ditchley conference on Machine Learning in 2017 for giving me a fantastic platform to get early…
Summary: The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of
geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society. The transformation of both the
economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most
important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions,
block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific example of this issue. This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result, and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good.
An excellent, inspiring and occasionally scary look at the impact that the quest for general-purpose artificial intelligence has on the international stage. Will we enter an age of “AI Nationalism”? If so, how will we find our way to the other side? A great longread.
Underappreciated video presentation on how only a few small changes to the timeline of the Internet and early Web could have resulted in a completely different set of technologies and companies becoming dominant today. Thought-provoking.
This white paper describes our current position on quantum key distribution (QKD). QKD is an approach to key distribution that relies on the properties of quantum mechanics to
provide security.
…
For all the practical, business and security reasons given above, at this point in time we:
do not endorse QKD for any government or military applications
advise against replacing any existing public key solutions with QKD for commercial applications
The UK should continue its research and development of QKD systems. But this should be balanced by a growing body of practical QKD vulnerability research, and accompanied by the
development of methods for quantifying and validating the security claims of real-world QKD systems. Responsible innovation should be accompanied by independent validation.
…
Wise words from the NCSC here: while QKD continues to depend upon conventional components that often lack battle-testing, they may have vulnerabilities. Furthermore, current
implementations of quantum cryptography fail to address the bigger and harder problems of authentication and identity – key distribution, while not perfectly solved, is still
something that we understand very well… and many real-world attacks target other parts of the process (which QKD does not seek to solve).
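As a point of comparison, here’s a minimal sketch (my own illustration, not anything from the NCSC paper) of the sort of conventional, well-understood key agreement that QKD competes with: an X25519 Diffie-Hellman exchange using Python’s cryptography library. The party names are purely illustrative.

```python
# A minimal sketch of conventional key agreement: an X25519
# Diffie-Hellman exchange via the Python "cryptography" library.
# Party names (alice/bob) are illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair...
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# ...and shares only the public halves over an untrusted channel.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Both sides independently derive the same shared secret.
alice_shared = alice_private.exchange(bob_public)
bob_shared = bob_private.exchange(alice_public)
assert alice_shared == bob_shared

# Stretch the raw secret into a usable symmetric key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"example handshake").derive(alice_shared)
```

Note that even this exchange is vulnerable to a man-in-the-middle unless the public keys are authenticated, which is exactly the “bigger and harder problem” of authentication and identity that QKD doesn’t solve either.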
Either I’m getting older or movies are getting longer… and longer … and longer. So which is it? I took a look at the numbers.
I studied the running time of the top 100 US-grossing films since 1994 (2,200 films in total) and all films shot in the UK 2005-14 (2,142 films). In summary…
The median length of a top 100 US-grossing film between 1994 and 2015 was 110 minutes
Running times have increased in six of the past seven years
The longest films are historical and western films and the shortest are animations and documentaries.
Peter Jackson makes the longest movies in Hollywood, with a median running time of 169 minutes.
The median running time of UK feature films (2008-14) was 94 minutes.
Films with lower budgets have shorter running times
The complete extended Lord of the Rings and Hobbit trilogies are a combined 21 hours long
…
It’s reassuring to read that I wasn’t the only one to observe this trend in filmmaking, and to find that somebody else had done the research to save me from feeling the need to do so
myself! The full article also makes a number of other interesting observations; worth a read.
My home computer in 1998 had a 56K modem connected to our telephone line; we were allowed a maximum of thirty minutes of computer usage a day, because my parents — quite reasonably —
did not want to have their telephone shut off for an evening at a time. I remember webpages loading slowly: ten to twenty seconds for a basic news article.
At the time, a few of my friends were getting cable internet. It was remarkable seeing the same pages load in just a few seconds, and I remember thinking about the kinds of
possibilities that would open up as the web kept getting faster.
And faster it got, of course. When I moved into my own apartment several years ago, I got to pick my plan and chose a massive fifty megabit per second broadband connection, which I
have since upgraded.
So, with an internet connection faster than I could have thought possible in the late 1990s, what’s the score now? A story at The Hill took over
nine seconds to load; at Politico, seventeen seconds; at CNN, over thirty seconds. This is the
bullshit web.
But first, a short parenthetical: I’ve been writing posts in both long- and short-form about this stuff for a while, but I wanted to bring many threads together into a single document
that may pretentiously be described as a theory of or, more practically, a guide to the bullshit web.
A second parenthetical: when I use the word “bullshit” in this article, it isn’t in a profane sense. It is much closer to Harry Frankfurt’s definition in “On Bullshit”:
It is just this lack of connection to a concern with truth — this indifference to how things really are — that I regard as of the essence of bullshit.
In the year 1930, John Maynard Keynes predicted that, by century’s end, technology would have advanced sufficiently that countries like Great Britain or the United States would have
achieved a 15-hour work week. There’s every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn’t happen. Instead, technology has
been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of
people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and
spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.
[…]
These are what I propose to call ‘bullshit jobs’.
What is the equivalent on the web, then?
…
This, this, a thousand times this. As somebody who’s watched the Web grow both in complexity and delivery speed over the last quarter century, it appals me that somewhere along the way complexity has started to win. I don’t want to have to download two dozen stylesheets and scripts before your page begins to render – doubly so if those additional files serve no purpose, or at least no purpose discernible to the reader. Personally, the combination of uMatrix and Ghostery is all the adblocker I need (and I’m more-than-willing to add a little userscript to “fix” your site if it tries to sabotage my use of these
technologies), but when for whatever reason I turn these plugins off I feel like the Web has taken a step backwards while I wasn’t looking.
After The Obscuritory – a blog providing reviews of old and less-well-known video games – published a review of 1994’s Wolf, they followed up with this additional review… written for a wolf.
Dark Science #85 is almost finished, but in the meantime enjoy this interlude comic with indispensable cyborg knowledge! Enjoy Dresden Codak? Become a Patreon subscriber today!
Dresden Codak is one of the most fabulous (but strange) webcomic series, and it’s great to see this quirky “aside” piece.
It's a psychological quirk that when something becomes rarer, people may spot it in more places than ever. What is the 'concept creep' that lets context change how we categorize the
world around us?
Why do many problems in life seem to stubbornly stick around, no matter how hard people work to fix them? It turns out that a quirk in the way human brains process information means
that when something becomes rare, we sometimes see it in more places than ever.
Think of a “neighborhood watch” made up of volunteers who call the police when they see anything suspicious. Imagine a new volunteer who joins the watch to help lower crime in the
area. When they first start volunteering, they raise the alarm when they see signs of serious crimes, like assault or burglary.
Let’s assume these efforts help and, over time, assaults and burglaries become rarer in the neighborhood. What would the volunteer do next? One possibility is that they would relax
and stop calling the police. After all, the serious crimes they used to worry about are a thing of the past.
But you may share the intuition my research group had – that many volunteers in this situation wouldn’t relax just because crime went down. Instead, they’d start calling things
“suspicious” that they would never have cared about back when crime was high, like jaywalking or loitering at night.
You can probably think of many similar situations in which problems never seem to go away, because people keep changing how they define them. This is sometimes called “concept creep,” or “moving the goalposts,” and it can be a frustrating experience. How can you know if you’re making progress
solving a problem, when you keep redefining what it means to solve it? My colleagues and I wanted to understand when this kind
of behavior happens, why, and if it can be prevented.
After violent crime starts going down, loiterers and jaywalkers may start to seem more threatening. (Image: Marc Bruxelle/Shutterstock.com)
Looking for trouble
To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a
simple task – to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully
designed by researchers to range from very intimidating to very harmless.
As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words,
when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people
considered “threats” depended on how many threats they had seen lately.
This kind of inconsistency isn’t limited to judgments about threat. In another experiment, we asked people to make an even simpler decision: whether colored dots on a screen were
blue or purple.
As the context changes, so do the boundaries of your categories. (Image: David Levari, CC BY-ND)
As blue dots became rare, people started calling slightly purple dots blue. They even did this when we told them blue dots were going to become rare, or offered them cash prizes to
stay consistent over time. These results suggest that this behavior isn’t entirely under conscious control – otherwise, people would have been able to be consistent to earn a cash
prize.
Expanding what counts as immoral
After looking at the results of our experiments on facial threat and color judgments, our research group wondered if maybe this
was just a funny property of the visual system. Would this kind of concept change also happen with non-visual judgments?
To test this, we ran a final experiment in which we asked volunteers to read about different scientific studies, and decide which were ethical and which were unethical. We were
skeptical that we would find the same inconsistencies in these kinds of judgments that we did with colors and threat.
Why? Because moral judgments, we suspected, would be more consistent across time than other kinds of judgments. After all, if you think violence is wrong today, you should still
think it is wrong tomorrow, regardless of how much or how little violence you see that day.
But surprisingly, we found the same pattern. As we showed people fewer and fewer unethical studies over time, they started calling a wider range of studies unethical. In other
words, simply because they were reading about fewer unethical studies, they became harsher judges of what counted as ethical.
The brain likes to make comparisons
Why can’t people help but expand what they call threatening when threats become rare? Research from cognitive psychology and neuroscience suggests that this kind of behavior is a
consequence of the basic way that our brains process information – we are constantly comparing what is in front of us to its recent
context.
Instead of carefully deciding how threatening a face is compared to all other faces, the brain can just store how threatening it is compared to other faces it has seen recently, or compare it to some average
of recently seen faces, or the most and least threatening faces it has seen. This kind of comparison could lead directly to the
pattern my research group saw in our experiments, because when threatening faces are rare, new faces would be judged relative to mostly harmless faces. In a sea of mild faces, even
slightly threatening faces might seem scary.
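To make that mechanism concrete, here’s a toy simulation (my own illustrative sketch in Python, not the researchers’ code) of the blue/purple dot task: a judge labels a dot “blue” whenever its hue is bluer than the average of the last fifty dots seen. When blue dots become rare, that rolling average drifts purple-wards, so purpler dots start passing the test.

```python
# Toy simulation of prevalence-induced concept change (illustrative
# only; not the study's methodology). Hue runs from 0.0 (pure purple)
# to 1.0 (pure blue). The judge uses a *relative* rule: a dot is
# called "blue" if it is bluer than the mean of the last 50 dots.
import random

def mean_hue_called_blue(p_blue, trials=2000, window=50):
    recent = [0.5] * window            # rolling context of recent hues
    called_blue = []
    for _ in range(trials):
        if random.random() < p_blue:
            hue = random.uniform(0.7, 1.0)   # a genuinely blue dot
        else:
            hue = random.uniform(0.0, 0.6)   # a purple-ish dot
        if hue > sum(recent) / window:       # relative judgment
            called_blue.append(hue)
        recent = recent[1:] + [hue]          # slide the context window
    return sum(called_blue) / len(called_blue)

random.seed(0)
print(mean_hue_called_blue(p_blue=0.50))  # blue common: only bluer dots pass
print(mean_hue_called_blue(p_blue=0.05))  # blue rare: purpler dots now pass
```

When blue dots are common, the average hue that earns the label stays high; when they are rare, it falls, meaning dots that would previously have been called purple are now called blue. The same relative rule, applied to faces or to descriptions of studies, would reproduce the drift in “threatening” and “unethical” judgments described above.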
It turns out that for your brain, relative comparisons often use less energy than absolute measurements. To get a sense
for why this is, just think about how it’s easier to remember which of your cousins is the tallest than exactly how tall each cousin is. Human brains have likely evolved to use relative comparisons in many situations, because these comparisons often provide enough information to safely navigate our
environments and make decisions, all while expending as little effort as possible.
Being consistent when it counts
Sometimes, relative judgments work just fine. If you are looking for a fancy restaurant, what you count as “fancy” in Paris, Texas, should be different than in Paris, France.
But a neighborhood watcher who makes relative judgments will keep expanding their concept of “crime” to include milder and milder transgressions, long after serious crimes have
become rare. As a result, they may never fully appreciate their success in helping to reduce the problem they are worried about. From medical diagnoses to financial investments,
modern humans have to make many complicated judgments where being consistent matters.
How can people make more consistent decisions when necessary? My research group is currently doing follow-up research in the lab to develop more effective interventions to help
counter the strange consequences of relative judgment.
One potential strategy: When you’re making decisions where consistency is important, define your categories as clearly as you can. So if you do join a neighborhood watch, think
about writing down a list of what kinds of transgressions to worry about when you start. Otherwise, before you know it, you may find yourself calling the cops on dogs being walked
without leashes.
I remember this advertisement from its original run, and I loved it. I was delighted to stumble across it on the Web recently; if you haven’t seen it before (and even if you have) you should give it a watch…