Wednesday, April 24, 2013
How Do You Like Your Odds?
A decade ago, Britain's Astronomer Royal, cosmologist Martin Rees, wrote "Our Final Hour: A Scientist's Warning - How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century, on Earth and Beyond." In it he makes a compelling argument that, given the bio-terror, bio-error and similar man-made hazards confronting us today, mankind has no better than a 50/50 chance of surviving this century.
Lord Rees is convinced we are now on the verge of a post-human era that will see mankind undergo various forms of re-engineering of our minds and bodies, various combinations of mind and computer, body and robot. Gradually the balance will shift, and it won't shift in favour of the human component either.
Now a team of the best and brightest have gathered at Oxford to explore whether we have reached the point of becoming a self-extinguishing species.
An international team of scientists, mathematicians and philosophers at Oxford University's Future of Humanity Institute is investigating the biggest dangers.
And they argue in a research paper, Existential Risk as a Global Priority, that international policymakers must pay serious attention to the reality of species-obliterating risks.
Last year there were more academic papers published on snowboarding than on human extinction.
The Swedish-born director of the institute, Nick Bostrom, says the stakes couldn't be higher. If we get it wrong, this could be humanity's final century.
Bostrom thinks mankind could probably survive most of the major threats we commonly worry about - from nuclear war to asteroid strikes - given our sheer numbers and our history of surviving cataclysms like these in the past. It's the events we haven't experienced and can't properly foresee that worry him.
Dr Bostrom believes we've entered a new kind of technological era with the capacity to threaten our future as never before. These are "threats we have no track record of surviving".
Likening it to a dangerous weapon in the hands of a child, he says the advance of technology has overtaken our capacity to control the possible consequences.
Experiments in areas such as synthetic biology, nanotechnology and machine intelligence are hurtling forward into the territory of the unintended and unpredictable.
Synthetic biology, where biology meets engineering, promises great medical benefits. But Dr Bostrom is concerned about unforeseen consequences in manipulating the boundaries of human biology.
Nanotechnology, working at a molecular or atomic level, could also become highly destructive if used for warfare, he argues. He has written that controlling and restricting such misuse will be a major challenge for future governments.
There are also fears about how artificial or machine intelligence might interact with the external world.
Such computer-driven "intelligence" might be a powerful tool in industry, medicine, agriculture or managing the economy.
But it can also be completely indifferent to any incidental damage it causes.
We are already running the risks of technology racing ahead of us, with artificial intelligence nearing the point of turning autonomous. There was a time, just a few decades ago, when much leading-edge scientific research was conducted in government labs, in settings open enough that errors and hazards could be identified and contained or corrected. Now much research has been privatized and is conducted behind closed doors. Beneficial developments may remain cloaked because the funding corporation hasn't found a way to commercialize them, or fears competition, or some other factor. Likewise, hazards are at greater risk of going undetected or being concealed, leaving the public exposed.
Lord Rees, along with Cambridge philosopher Huw Price and economist Sir Partha Dasgupta and Skype founder Jaan Tallinn, wants the proposed Centre for the Study of Existential Risk to evaluate such threats.
So should we be worried about an impending doomsday?
This isn't a dystopian fiction. It's not about a cat-stroking villain below a volcano. In fact, the institute in Oxford is in university offices above a gym, where self-preservation is about a treadmill and Lycra.
Dr Bostrom says there is a real gap between the speed of technological advance and our understanding of its implications.
"We're at the level of infants in moral responsibility, but with the technological capability of adults," he says.
"There is a bottleneck in human history. The human condition is going to change. It could be that we end in a catastrophe or that we are transformed by taking much greater control over our biology.
"It's not science fiction, religious doctrine or a late-night conversation in the pub.
"There is no plausible moral case not to take it seriously."
If this has piqued your interest, here's a video of Lord Martin Rees giving one of his now classic addresses.
Footnote - Among his achievements, Lord Rees won the Templeton Prize in 2011. The prize, worth about 1.2-million quid to the lucky winner, is awarded to a person who "has made an exceptional contribution to affirming life's spiritual dimension, whether through insight, discovery, or practical works."
You might think that an odd award for a man who has no religious beliefs of any sort. Lord Rees says he does go to church because it's part of his culture, not because he believes in the existence of god.
Is our replacement with computer technology just the natural order of things, just as surely as intelligence is the natural outcome of life?
After all, Friendship is Optimal.
DOC, I don't know whether we're facing replacement or assimilation. My guess is that we'll gradually, over many generations, modify the human lifeform to merge with technology. Isn't that what artificial joints represent at the most primitive level?
We now have man-made life, self-reproducing organisms, made entirely by human construction. We have proven that we can make life.
We have learned to store immense amounts of data in DNA strands. There's a post on that in this blog.
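For the curious, the basic idea is simple enough to sketch. Here's a toy Python illustration of mapping each two-bit chunk of a file onto one of DNA's four bases; the schemes researchers have actually published layer redundancy and error correction on top of this, so treat it as an illustration of the principle rather than the real method.

```python
# Toy illustration only: encode bytes as DNA by mapping each 2-bit
# value to one of the four bases. Real DNA-storage schemes add error
# correction and avoid long runs of one base; this shows the core idea.

BASES = "ACGT"  # indices 0-3 stand in for the 2-bit values 00, 01, 10, 11

def encode(data: bytes) -> str:
    """Turn bytes into a DNA string, four bases per byte."""
    return "".join(
        BASES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)   # high bits first
    )

def decode(dna: str) -> bytes:
    """Reverse the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"our final hour"
strand = encode(message)        # 56 bases, four per byte
assert decode(strand) == message
```

At two bits per base, even this naive mapping packs a byte into four nucleotides, which is why a few grams of DNA can, in principle, hold data-centre quantities of information.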
Scientists have produced computer chips made of organic material instead of silicon. That opens the door to implanting knowledge and intelligence modules into the human brain and nervous system. Why go through the imperfect, often unsuccessful, not to mention costly, teaching process if you can implant vast amounts of knowledge directly?
None of us can know where this is going, inasmuch as it is already outpacing our understanding and control. This is extremely powerful stuff and, as Lord Rees regularly cautions, this sort of advancement lends itself just as readily to harmful purposes as to useful ones.