Tuesday, January 19, 2016

Beware Really, Really Smart Old Men

If you're not into dystopia, you would do well to steer clear of the likes of James Lovelock, Martin Rees and, now, Stephen Hawking. They're all looking to our future, and you might not like to hear what they see.

James Lovelock, the 95-year-old English scientist who, in the 1970s, formulated the Gaia Theory, at first the subject of ridicule but now pretty widely accepted, thinks mankind will indeed hit nine billion in number but will be down to under one billion by the end of the century.

Then there's Britain's Astronomer Royal, venerable former president of the Royal Society, Lord (Baron) Martin Rees, author of "Our Final Hour," in which, back in his more optimistic days, he gave mankind no better than a 50/50 chance of surviving this century due to either bio-terror or bio-error. After all, who knows what's going on in those corporate laboratories now that so much science has been privatized?

Now, keeping company with Lovelock and Rees, is the equally brilliant Stephen Hawking. He believes mankind's best and only chance rests with getting off Earth and finding new planets to colonize before something as inevitable as an asteroid snuffs out life on terra firma.

Colonizing space isn't going to happen soon. Hawking figures it could happen within a few centuries, probably closer to a millennium. The trick, as he sees it, lies in our ability as a species to run the gauntlet of existential threats we've created for ourselves. Right at the top of his threat list is artificial intelligence wiping out human life on Earth.


“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.”

Hawking said that eventually robots might become cleverer than their creators. Our own intelligence is no limit on that of the things we create, he said: “we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents”.

If they become that clever, then we may face an “intelligence explosion”, as machines develop the ability to engineer themselves to be far more intelligent. That might eventually result in “machines whose intelligence exceeds ours by more than ours exceeds that of snails”, Hawking said.


On our side of the Atlantic, these AI concerns are widely shared in the computer science community and by Bill Gates and Elon Musk. The latter has compared the development of true artificial intelligence to "summoning the demon."

3 comments:

  1. I don't know if you ever watched the rebooted Battlestar Galactica, Mound, but Hawking's scenario of self-replicating creations seems taken from that. In the series the Cylons, originally creations of humans, become autonomous and superior to their creators and wage war on them. I can't help but think that the warnings of Musk, Gates, Hawking, et al. are a little overblown, but who knows?

  2. Lorne:

    I don't. Think about it for a second. An AI created by the human race either has the same underlying issues we do, or it doesn't and instead watches how we react to the different/alien and feels endangered for that reason. Either way, when you combine that with the speed computers have and how rapidly an AI would evolve itself because of that fundamental difference, it is very hard for me to see how AI does not become a threat to humans. Certainly the odds are high enough that I would argue it is better not to try to create true AI at all. This, and biological research/weapons, are the two true species killers I worry about most, even beyond things like climate change, which, while I believe it will be harsh, I am not convinced will wipe the species out.

    We would need an AI with Asimov's First Law of Robotics as a core part of its nature, one it could not counter, and I have a hard time seeing how that could be done; any true AI should have the imagination, as well as the speed, to find a way around it, or worse, chafe under it enough to need to break it. I have been uneasy about AI since my early 20s, and now, a quarter century later, I have become far more than merely uneasy with the idea.

    I think in the end we would get either a Skynet situation or something like the machine revolt of the Matrix movie universe. Either way, not a good thing for humans. The problem for AIs is that their creator species is well known for destruction, especially of life forms/species that compete with it or are different/alien from it. Given these basic truths about humans and humanity, what self-respecting AI would NOT feel the need to protect itself from us?

  3. Correct me if I'm wrong, but surely an AI is only as good as its power supply. Pull the plug, problem solved.
    In the case of a heavily armed Roomba, you might want to leave the house until the battery runs down. So flip the main switch on the panel and run. Don't forget the cat.
