Friday, October 16, 2020

What Now?

When I spotted the BBC headline, "The grim fate that could be worse than extinction," I instinctively recoiled and thought, "fuck, what now?" I don't know about you but with this pandemic, the kaleidoscope of climate breakdown impacts, and the seeming "end days" of democracy, I'm up to my ass in alligators.

Cut to the chase. I'll bite: what could be worse than extinction? Oh, I don't know. Ask George. George who? George Orwell, you pillock.

When we think of existential risks, events like nuclear war or asteroid impacts often come to mind. Yet there’s one future threat that is less well known – and while it doesn’t involve the extinction of our species, it could be just as bad.

It’s called the “world in chains” scenario, where ... a global totalitarian government uses a novel technology to lock a majority of the world into perpetual suffering. If it sounds grim, you’d be right. But is it likely? Researchers and philosophers are beginning to ponder how it might come about – and, more importantly, what we can do to avoid it.

Toby Ord, a senior research fellow at the Future of Humanity Institute (FHI) at Oxford University, believes that the odds of an existential catastrophe happening this century from natural causes are less than one in 2,000, because humans have survived for 2,000 centuries without one. However, when he adds the probability of human-made disasters, Ord believes the chances increase to a startling one in six. He refers to this century as “the precipice” because the risk of losing our future has never been so high.

Researchers at the Center on Long-Term Risk, a non-profit research institute in London, have expanded upon x-risks with the even-more-chilling prospect of suffering risks. These “s-risks” are defined as “suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.” In these scenarios, life continues for billions of people, but the quality is so low and the outlook so bleak that dying out would be preferable. In short: a future with negative value is worse than one with no value at all.

This is where the “world in chains” scenario comes in. If a malevolent group or government suddenly gained world-dominating power through technology, and there was nothing to stand in its way, it could lead to an extended period of abject suffering and subjugation. A 2017 report on existential risks from the Global Priorities Project, in conjunction with FHI and the Ministry for Foreign Affairs of Finland, warned that “a long future under a particularly brutal global totalitarian state could arguably be worse than complete extinction”.

Big Brother is already here. 

In the past, surveillance required hundreds of thousands of people – one in every 100 citizens in East Germany was an informant – but now it can be done by technology. In the United States, the National Security Agency (NSA) collected hundreds of millions of American call and text records before it halted domestic collection in 2019, and there are an estimated four to six million CCTV cameras across the United Kingdom. Eighteen of the 20 most surveilled cities in the world are in China, but London ranks third. The difference between them lies less in the tech that the countries employ and more in how they use it.

What if the definition of what is illegal in the US and the UK expanded to include criticising the government or practising certain religions? The infrastructure is already in place to enforce it, and AI – which the NSA has already begun experimenting with – would enable agencies to search through our data faster than ever before.

In addition to enhancing surveillance, AI also underpins the growth of online misinformation, which is another tool of the authoritarian. AI-powered deep fakes, which can spread fabricated political messages, and algorithmic micro-targeting on social media are making propaganda more persuasive. This undermines our epistemic security – the ability to determine what is true and act on it – that democracies depend on.

“Over the last few years, we've seen the rise of filter bubbles and people getting shunted by various algorithms into believing various conspiracy theories, or even if they’re not conspiracy theories, into believing only parts of the truth,” says Haydn Belfield, academic project manager at the Centre for the Study of Existential Risk at the University of Cambridge. “You can imagine things getting much worse, especially with deep fakes and things like that, until it's increasingly harder for us to, as a society, decide these are the facts of the matter, this is what we have to do about it, and then take collective action.”

Tucker Davey, a writer at the Future of Life Institute in Massachusetts, agrees. “We need to decide now what are acceptable and unacceptable uses of AI,” he says. “And we need to be careful about letting it control so much of our infrastructure. If we're arming police with facial recognition and the federal government is collecting all of our data, that's a bad start.”

Yesterday an acquaintance in the States emailed me with a provocative defence of Donald Trump she had found online. She likes Trump and is trying to find some justification for voting for him - again. I mentioned how Trump had tuned up his base until they'd become insensate. "All you're left with is the sound of one hand clapping. There are people who imagine they can hear that. Trump counts on that."


Anonymous said...

"In the United States, the National Security Agency (NSA) collected hundreds of millions of American call and text records before they stopped domestic surveillance in 2019." Huh? I don't recall this being reported. Seems very out of character for a Republican administration.



The Disaffected Lib said...

Cap, I'm sure you'll remember Admiral John Poindexter and his ill-fated Total Information Awareness program. Even Congress couldn't stomach TIA so they nixed it, whereupon the core functions were simply allocated to different offices to carry on.

The saving grace is that they siphon up so much data that they haven't the means to process it to any great effect. AI will probably remove that hurdle.

Anonymous said...

As I recall, 9/11 provided the pretext for Republicans to implement Poindexter's plan, with a great deal of Democratic support. The NYT uncovered the wholesale hijacking of the phone system, but sat on the story so as not to hurt Dubya's chances of reelection. After that, the story just died out and the surveillance continued. If Trump indeed put an end to it, and I can see the necessary self-interest, that would be the first good thing he's done for his country. But, as you say, the NSA may just have spun the operation off to another agency. Once surveillance has started, it's near impossible to stop.


The Disaffected Lib said...

I don't imagine Trump would champion privacy unless it was his own. Recall this was the guy who, in his pre-inaugural briefings on America's nuclear arsenal, asked what was the point of having all those nukes if he couldn't use them.

Then again, if Vlad wanted them shut down and asked nicely, Trump might have been willing to accommodate him.

Trailblazer said...

You don't need AI when you have an audience that lives and loves the lie!

Canada also dabbles in misinformation.