Monday, March 04, 2013

Because Machines Never Fail, Right?

The debate over "KillBots", autonomous hunter-killer robots, continues, but never quite as quickly as the technology itself advances. It won't be long before the more powerful nations field their own legions of armed, autonomous robots in the air, at sea, on land, even in space.

Why would governments unleash this sort of technology? For the most powerful of all reasons: the fear that, if they don't, the other guy will. That fear will eclipse the ethical questions and render them irrelevant. We will because we can, and because we fear that if we don't, they will. Barring some international treaty, the technology is almost irresistible.

The one true, Permanent Warfare State is at the forefront of KillBot development. It operates the world's only space drone, a small-scale air/spacecraft that resembles the Space Shuttle and can operate for a year or more doing what, precisely, only the Americans know. Full points if you figured that they're not telling either.

The U.S. Navy is developing an unmanned, stealthy surface ship designed to detect and follow submarines for up to 80 days at a stretch. Even the most advanced conventional subs can't stay submerged nearly that long. Unmanned subs are in the works.

Air- and land-based KillBot technology invites miniaturization, which in turn lends itself to "swarm" tactics. They are, after all, autonomous, so why launch one when you can launch hundreds? Let them go out using their own GPS, activating their onboard sensors to look for the right signature, the target, and then either track or attack.
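The track-or-attack logic described above can be sketched in a few lines. This is purely illustrative pseudologic under invented assumptions (the signature names, the engagement range, the drone sitting at the origin); no real system works this simply.

```python
import math
from dataclasses import dataclass

@dataclass
class Contact:
    signature: str             # sensor signature observed (hypothetical label)
    position: tuple            # (x, y) offset from the drone, in km

def drone_behaviour(contact, target_signature, engage_range_km=5.0):
    """One swarm member's decision about a single sensor contact.

    'attack' if the contact matches the programmed target signature and
    is inside engagement range, 'track' if it matches but is still far
    away, 'ignore' otherwise. Each drone in a swarm of hundreds would
    run this same logic independently.
    """
    if contact.signature != target_signature:
        return "ignore"
    distance = math.hypot(*contact.position)   # drone assumed at origin
    return "attack" if distance <= engage_range_km else "track"

contacts = [
    Contact("radar-X", (3.0, 2.0)),      # matching signature, close
    Contact("radar-X", (40.0, 10.0)),    # matching signature, distant
    Contact("fishing-vessel", (1.0, 1.0)),  # wrong signature entirely
]
decisions = [drone_behaviour(c, "radar-X") for c in contacts]
print(decisions)  # ['attack', 'track', 'ignore']
```

The unsettling part is exactly how short that decision function is: the whole debate comes down to who writes those few lines and what counts as the "right signature".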

And what if it all goes wrong?   Well, to err is human, to forgive is divine.   And, besides, we've gotten very, very good at "overlooking" our collateral damage, the corpses of innocents that litter our high-tech battlefields.

Is it ethical for machines to autonomously hunt and kill humans? Of course not, but there's a fix for that too. Ethical KillBots, just the thing.

[Georgia Tech prof Ron Arkin] has put forward the concept of a weapons system controlled by a so-called "ethical governor".

It would have no human being physically pulling the trigger but would be programmed to comply with the international laws of war and rules of engagement.
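The "ethical governor" idea amounts to a veto gate between target selection and weapon release. Here is a crude sketch of that concept, not Arkin's actual architecture, which is far more elaborate; every field name and constraint below is a hypothetical stand-in for programmed rules of engagement.

```python
def ethical_governor(target, roe):
    """Veto gate: pass the firing decision through only if every
    programmed constraint is satisfied; otherwise hold fire and
    report which checks failed. All names here are invented.
    """
    checks = [
        ("combatant", target.get("is_combatant", False)),
        ("no_protected_site", not target.get("near_protected_site", True)),
        ("collateral_ok",
         target.get("est_collateral", 99) <= roe["max_collateral"]),
    ]
    failed = [name for name, ok in checks if not ok]
    return ("ENGAGE", []) if not failed else ("HOLD", failed)

roe = {"max_collateral": 0}
print(ethical_governor(
    {"is_combatant": True, "near_protected_site": False, "est_collateral": 0},
    roe))  # ('ENGAGE', [])
print(ethical_governor(
    {"is_combatant": True, "near_protected_site": True, "est_collateral": 3},
    roe))  # ('HOLD', ['no_protected_site', 'collateral_ok'])
```

Note what the sketch quietly assumes: that the machine can reliably tell a combatant from a civilian and estimate collateral damage in the first place. That is the hard part, and it is nowhere in the governor itself.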

"Everyone raises their arms and says, 'Oh, evil robots, oh, killer robots'," but he notes, "we have killer soldiers out there. Atrocities continue and they have continued since the beginning of warfare."

I believe that's called the "shit happens" retort.

Peter Singer, a "future warfare" expert at the Brookings Institution, says KillBot technology is a battlefield game changer. 

"Every so often in history, you get a technology that comes along that's a game changer," he says. "They're things like gunpowder, they're things like the machine gun, the atomic bomb, the computer… and robotics is one of those."

"When we say it can be a game changer", he says, "it means that it affects everything from the tactics that people use on the ground, to the doctrine, how we organise our forces, to bigger questions of politics, law, ethics, when and where we go to war."

And, Singer points out, the genie is already out of the bottle.

"The reality is that besides the United States there are 76 countries with military robotics programmes right now," he says.

"This is a rapidly proliferating technology with relatively low barriers to entry.

"You can, for a couple of hundred dollars, purchase a small drone that a couple of years ago was limited to militaries. This can't be a situation that you interpret through an American lens. It's of global concern."

Well-intentioned efforts are already underway calling for an international convention to ban KillBot technology.   The effort is doomed to failure.

KillBots haven't actually killed anyone just yet. The accords banning land mines and cluster bombs, flawed as they have been, only arose from public outrage at the deaths of innocents, usually children, from ordnance abandoned by the forces that had deployed it.

KillBot technology is in its infancy, mostly developmental, which usually means secrecy. Countries will be reluctant to ban something they won't admit to having. And a lot of KillBot technology is "dual use" stuff that inevitably lends itself to non-violent, civilian applications. That genie is out of the bottle and it will not go back in.


doconnor said...

The question is, will machines fail more often than humans?

The Mound of Sound said...

I suppose that depends on how many there are, how reliable they are, just what they're programmed to do and who is actually doing the programming. Let's put it this way, DOC, how does the U.S. today define "torture"? In a world where reality is regularly stood on its head, failure can become more than an option. It can become a goal.

LeDaro said...

It is like drones, just more advanced technology. There is going to be no safe place on the earth or in its atmosphere. Looks like self-destruct technology. Someday only "KillBots" may survive.

The Mound of Sound said...

Your concerns, LD, are widely shared. We're at a point where the world truly needs a strong, international consensus about advanced technology, generally, including KillBots.

KillBots show that we have arrived at a point where our slavish quest for technology causes us to jettison badly needed exploration of the social, political and ethical dimensions of the latest and greatest. That may be the ultimate "slippery slope". Once we begin to take decisions based on "we must do it because we can" we risk giving up control over our lives, our societies and our future.