First of all, my knowledge of science is very limited. Secondly, I do tend to trust scientists - to a point. Thirdly, the scientific community has lately been conducting some cutting-edge experiments that carry small risks of mass destruction, without the benefit of public awareness or debate.
For example, in 1997 NASA launched the Cassini probe on a mission to Saturn. Aboard it was a radioisotope power source containing a staggering 72 pounds of astonishingly toxic plutonium. The concern was that an explosion aboard Cassini could spread plutonium fallout over a huge area and kill an awful lot of people. Given how concerned authorities get when a spy satellite carrying 20 grams of plutonium comes crashing back to earth, the notion of 72 pounds of the stuff being transformed into high-altitude litter was actually pretty scary.
Flash forward 12 years to the sunny hills of California, home of the Lawrence Livermore National Laboratory. Sometime this spring, scientists there are hoping to create an artificial star. An artificial star, how quaint. Think about this, from The Telegraph:
"Its goal is to generate temperatures of more than 100 million degrees Celsius and pressures billions of times higher than those found anywhere else on earth, from a speck of fuel little bigger than a pinhead. If successful, the experiment will mark the first step towards building a practical nuclear fusion power station and a source of almost limitless energy.
At a time when fossil fuel supplies are dwindling and fears about global warming are forcing governments to seek clean energy sources, fusion could provide the answer. Hydrogen, the fuel needed for fusion reactions, is among the most abundant in the universe. Building work on the £1.2 billion nuclear fusion experiment is due to be completed in spring.
Scientists at the National Ignition Facility (NIF) in Livermore, nestled among the wine-producing vineyards of central California, will use a laser that concentrates 1,000 times the electric generating power of the United States into a billionth of a second.
The result should be an explosion in the 32ft-wide reaction chamber which will produce at least 10 times the amount of energy used to create it."
If this giant physics experiment works, and if it leads to a source of almost limitless, clean energy, then maybe it's worthwhile. Maybe, but far from definitely or even probably. The question is how we do a risk/benefit analysis of this and other projects, and whether it should be done behind closed doors without our knowledge or consent.
Sir Martin Rees is Britain's Astronomer Royal as well as a Royal Society Professor at Cambridge, a Fellow of King's College and the author of an array of academic papers and books. In his 2003 book, apocalyptically entitled "Our Final Hour", he discusses with brilliant logic the threats posed by man to mankind through terror, error and environmental disaster. His discussion of scientific error is particularly compelling. He focuses on experimental projects like the CERN accelerator in Geneva:
"However even if one accepted their reasoning completely, the level of confidence they offered hardly seemed enough. They estimated that if the experiment were run for ten years, the risk of a catastrophe was no more than one in fifty million.
These might seem impressive odds; a chance of disaster smaller than the chance of winning the UK's national lottery with a single ticket, which is about one in fourteen million. However, if the downside is destruction of the world's population, and the benefit is only to "pure" science, this isn't good enough.
The natural way to measure the gravity of a threat is to multiply its probability by the number of people at risk, to calculate the "expected number" of deaths. The entire world's population would be at risk, so the experts were telling us that the expected number of deaths (in that technical sense of "expected") could be as high as 120 (the number obtained by taking the world's population to be six billion and dividing by fifty million).
Obviously, nobody would argue in favour of doing a physics experiment knowing that its "fallout" would kill up to 120 people. This is not, of course, quite what we were told in this case: we were told instead that there could be up to one chance in fifty million of killing six billion people. Is this prospect any more acceptable? Most of us would, I think, still be uneasy....
My Cambridge colleague Adrian Kent has emphasised a second factor: the finality and completeness of the extinction that this scenario would entail. It would deprive us of the expectation - important to most of us - that some biological or cultural legacy will survive our deaths; it would dash the hope that our lives and works may be part of some continuing progress. It would, worse still, foreclose the existence of a (perhaps far larger) total number of people in all future generations....
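For what it's worth, Rees's "expected number" arithmetic checks out. Here's a minimal sketch, using only the figures quoted above (a one-in-fifty-million risk over the experiment's lifetime, and a world population of six billion):

```python
# A quick check of Rees's expected-value arithmetic, using the
# figures quoted above: a 1-in-50-million risk of catastrophe over
# the experiment's lifetime, and a world population of six billion.

world_population = 6_000_000_000
catastrophe_probability = 1 / 50_000_000

# "Expected" deaths in the statistical sense: the probability of
# the catastrophe multiplied by the number of people at risk.
expected_deaths = catastrophe_probability * world_population

print(round(expected_deaths))  # → 120
```

The point, of course, is Rees's: nobody would sign off on a pure-science experiment whose expected toll was 120 deaths, if it were stated that way up front.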
Rees goes on to argue that anytime the risk of extinction is even possible, someone other than those who intend to expose us to that risk needs to evaluate the threat. He insists that we cannot afford to blithely accept scientists' own assurances in these matters. He's not advocating an end to science or some reversion to the Dark Ages, merely the need for open analysis and informed consent.
Wouldn't it be nice if we had a bit of those evaluation thingees before the California kids start building stars on planet earth? Just asking.
And remember. As Rees points out, if we are truly the only intelligent life in the universe, a catastrophic extinction would wipe out the only intelligence in all of creation. Yikes.
Maybe that's why we're the only intelligent life in the universe. As soon as a planet develops to the point where its inhabitants can wipe themselves out, they usually do.
Tell you what: gather up all the liberals in Ellesmere Island and nuke them!
Anon 5:32, what a profoundly stupid thing to say. You're a credit to all rightwing nutjobs. Good to see you're not letting your side down.
That is very interesting information - & very unsettling. I agree that there needs to be more openness & input from other sources as well, not just those doing these experiments.
Is there an actual name for this "star" experiment? Or did I somehow miss it?
Curiously enough there doesn't seem to be any name for it. It's always referred to only by its team of scientists, the National Ignition Facility. I wonder if that's something like the National Electrocution Society? Let's just hope it doesn't turn out to be the Global Ignition Facility.
Seriously though, I do trawl the online versions of the major American papers and news magazines and I've yet to find among them any report on this.
Too much science going on behind closed doors. In his book, Rees stresses that the privatization of leading-edge research has had some disturbing repercussions. For example, a company's scientists may stumble across something wonderful, but if it's not what they're looking for it just gets shelved instead of being released to others for development, as it would be under the government research model. Worse yet is what he calls "bio-error": scientists with no one looking over their shoulder simply going too far one day.
Rees, by the way, has a thousand dollar bet that there'll be a bio-terror or bio-error event by 2020 that will kill a million people or more. Nice thought, eh?
"Rees, by the way, has a thousand dollar bet that there'll be a bio-terror or bio-error event by 2020 that will kill a million people or more. Nice thought, eh?"
Terrifying thought. Everything is so secretive & does not bode well at all.
Interesting thought about scientists finding something wonderful but shelving it as it doesn't fit with the "experiment" they're working on at the moment. Beneficial but unnecessary? *shaking head*