If you're not into dystopia you would do well to steer clear of the likes of James Lovelock, Martin Rees and, now, Stephen Hawking. They're all looking to our future and you might not like to hear what they see.
James Lovelock, the 95-year-old English scientist who, in the 1970s, formulated the Gaia Theory, at first the subject of ridicule but now pretty widely accepted, thinks mankind will indeed reach nine billion in number but will be down to under one billion by the end of the century.
Then there's Britain's Astronomer Royal, Lord (Baron) Martin Rees, former head of the Royal Society and author of "Our Final Hour," in which, back in his more optimistic days, he gave mankind no better than a 50/50 chance of surviving this century due to either bio-terror or bio-error. After all, who knows what's going on in those corporate laboratories now that so much science has been privatized?
Now, keeping company with Lovelock and Rees, is the equally brilliant Stephen Hawking. He believes mankind's best and only chance rests with getting off Earth and finding new planets to colonize before something as inevitable as an asteroid snuffs out life on terra firma.
Colonizing space isn't going to happen soon. Hawking figures it could happen within a few centuries, probably closer to a millennium. The trick, as he sees it, lies in our ability as a species to run the gauntlet of existential threats that we've created for ourselves. Right at the top of his threat list is artificial intelligence wiping out human life on Earth.
“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.”
Hawking said that eventually robots might become cleverer than their creators. Our own intelligence is no limit on that of the things we create, he said: “we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents”.
If they become that clever, then we may face an “intelligence explosion”, as machines develop the ability to engineer themselves to be far more intelligent. That might eventually result in “machines whose intelligence exceeds ours by more than ours exceeds that of snails”, Hawking said.
On our side of the Atlantic, these AI concerns are widely shared in the computer science community and by Bill Gates and Elon Musk. The latter has compared the development of true artificial intelligence to "summoning the demon."