What can I say? We are now having this discussion, and it blows me away. And as I speak, drones are killing folks in Iraq, Afghanistan, and Pakistan, and have been doing so for a while. The only safety measure is that they are so far not really autonomous. But what happens when we cut the cord and let some robot or drone operate on its own? What happens if that robot has a glitch and accidentally kills the good guys? Do you charge a robot with manslaughter? Are robots covered by the Geneva Convention? Do we give them full burial honors at Arlington Cemetery when they pass?
On a side note, I did get a chance to pick up Peter Singer’s book, and I read through it a little. I will not give a full review, but there were some parts that were interesting, especially the section that discussed ground robots and the first-ever drawing of blood in this war by a ground robot. Basically, some soldiers put a Claymore mine on a MARCbot, drove it into a pack of insurgents, and blew them up. The total cost for that kill was about $8,000, plus whatever the military pays for a Claymore. I think I could make or buy a cheaper Claymore carriage at a hobby store, but still, that field-expedient weapon is a whole lot cheaper than launching a Javelin at the enemy. (And if it hurt the enemy and/or saved lives, bravo!)
According to the book, the insurgents came up with a similar kind of weapon using a skateboard. They rigged an explosive-laden skateboard with motors on the wheels, powered it up, and set it rolling slowly towards a patrol, figuring the patrol would not pay attention to a slow-rolling toy. Luckily the patrol locked on to the thing, because it was moving against the wind. The total cost of this weapon was way cheaper than the MARCbot, and it could have easily succeeded if used properly.
So with these humble beginnings of ground combat robots, will we one day see a robot that thinks on its own? I do know that the desire for these things is driving the market big time. With a highly competitive robotics market and a war that is not going away anytime soon, we will begin to see the kinds of autonomous war robots that science fiction writers, and now academics, are talking about. Good or bad, the future is now. –Matt
——————————————————————-
Military robots must be taught a warrior code
16 Feb 2009
Autonomous military robots must be taught a strict warrior code or they could turn on their human masters, a US report warns.
The warnings of a potential revolt, as envisaged by the science writer Isaac Asimov in his chilling I, Robot series of stories, appear in the first major report on robot ethics.
The report, by researchers from the Ethics and Emerging Technologies Group at California Polytechnic State University, was funded by the US Navy’s Office of Naval Research.
Mindful of the US deployment in two major theatres of war, the military is keen to pursue alternatives to manpower, including Terminator-style armed robots.