What can I say? We are now having this discussion, and it blows me away. And as I speak, drones are killing folks in Iraq, Afghanistan, and Pakistan, and have been doing so for a while. The only safety measure is that they are so far not truly autonomous. But what happens when we cut the cord and let some robot or drone operate on its own? What happens if that robot has a glitch and accidentally kills the good guys? Do you charge a robot with manslaughter? Is it covered by the Geneva Conventions? Do we give it full burial honors at Arlington Cemetery when it passes?
On a side note, I did get a chance to pick up Peter Singer’s book, and I read through it a little. I will not give a full review, but some parts were interesting, especially the section that discussed ground robots and the first blood ever drawn in this war by a ground robot. Basically, some soldiers put a Claymore mine on a MARCbot and drove it into a pack of insurgents and blew them up. The total cost for that kill: about $8,000 for the MARCbot, plus whatever the military pays for a Claymore. I think I could buy or build a cheaper Claymore carriage at a hobby store, but still, that field expedient weapon is a whole lot cheaper than launching a Javelin at the enemy. (And if it hurt the enemy and/or saved lives, bravo!)
According to the book, the insurgents came up with a similar weapon using a skateboard. They built an explosive-laden skateboard with motors on the wheels, powered it up, and set it rolling slowly towards a patrol, figuring the patrol would not pay attention to a slow-rolling toy. Luckily the patrol locked on to the thing, because it was moving against the wind. This weapon cost far less than the MARCbot, and it could have easily succeeded if used properly.
So with these humble beginnings of ground combat robots, will we one day see a robot that thinks on its own? I do know that the desire for these things is driving the market big time. With a highly competitive robotics market and a war that is not going away anytime soon, we will begin to see the kinds of autonomous war robots that science fiction writers, and now academics, are talking about. Good or bad, the future is now. –Matt
——————————————————————-
Military robots must be taught a warrior code
16 Feb 2009
Autonomous military robots must be taught a strict warrior code or they could turn on their human masters, a US report warns.
The warnings of a potential revolt, as envisaged by the science writer Isaac Asimov in his chilling I, Robot series of stories, appear in the first major report on robot ethics.
The report, by researchers from the Ethics and Emerging Technologies Group at California Polytechnic State University, was funded by the US Navy’s Office of Naval Research.
Mindful of the US deployment in two major theatres of war, the military is keen to pursue alternatives to manpower, including Terminator-style armed robots.
But unforeseen issues have already arisen. A semi-autonomous robotic cannon deployed by the South African army malfunctioned in 2007, killing nine “friendly” soldiers and wounding 14 others.
The report said: “To whom would we assign blame – and punishment – for improper conduct and unauthorised harms caused by an autonomous robot (whether by error or intentional): the designers, robot manufacturer, procurement officer, robot controller/supervisor, field commander, president of the United States… or the robot itself?”
Patrick Lin, one of the authors of the report, Autonomous Military Robotics: Risks, Ethics and Design, said: “There are significant driving forces towards this trend.
“Congress actually mandated that by 2010, supposedly one-third of aerial vehicles need to be unmanned. By 2015 all ground vehicles need to be unmanned.”
“These deadlines apply increasing pressure to develop and deploy robotics, including autonomous vehicles; yet a ‘rush to market’ increases the risk for inadequate design or programming.”
Story Here
Cal Poly Researchers Report on Risks, Ethics Related to New Trend in Warfare Here
——————————————————————
Wired for War
By Peter Singer
The Robotics Revolution and Conflict in the 21st Century
What happens when science fiction becomes battlefield reality?
An amazing revolution is taking place on the battlefield, starting to change not just how wars are fought, but also the politics, economics, laws, and ethics that surround war itself. This upheaval is already afoot — remote-controlled drones take out terrorists in Afghanistan, while the number of unmanned systems on the ground in Iraq has gone from zero to 12,000 over the last five years. But it is only the start. Military officers quietly acknowledge that new prototypes will soon make human fighter pilots obsolete, while the Pentagon researches tiny robots the size of flies to carry out reconnaissance work now handled by elite Special Forces troops.
Wired for War takes the reader on a journey to meet all the various players in this strange new world of war: odd-ball roboticists working in latter-day “skunk works” in the midst of suburbia; military pilots flying combat missions from their office cubicles outside Las Vegas; the Iraqi insurgents who are their targets; journalists trying to figure out just how to cover robots at war; and human rights activists wrestling with what is right and wrong in a world where our wars are increasingly being handed over to machines.
If issues like these sound like science fiction, that’s because many of the new technologies were actually inspired by some of the great sci-fi of our time, from Terminator and Star Trek to the works of Asimov and Heinlein. In fact, Singer reveals how the people who develop new technologies consciously draw on such science fiction when pitching them to the Pentagon, and he even introduces the sci-fi authors who quietly consult for the military.
But, whatever its origins, our new machines will profoundly alter warfare, from the frontlines to the home front. When planes can be flown into battle from an office 10,000 miles away (or even fly themselves, like the newest models), the experiences of war and the very profile of a warrior change dramatically. Singer draws from historical precedent and the latest Pentagon research to argue that wars will become easier to start, that the traditional moral and psychological barriers to killing will fall, and that the “warrior ethos,” the code of honor and loyalty which unites soldiers, will erode.
Paradoxically, these new unmanned technologies will also seemingly bring war closer to our doorsteps, with videos of battles even being downloaded for entertainment. But Singer also shows that our enemies will not settle for fighting our high-tech proxies on their own turf. He documents, for instance, how Hezbollah deployed unmanned aircraft in the Lebanese war of 2006, and how America may even fall behind in this revolution as its adversaries acquire knockoffs of our own technology, or develop better tech of their own invention.
While his predictions are unnerving, there’s an irresistible gee-whiz quality to what Singer uncovers and the people he meets along the way. The book is packed with cutting-edge research and hard-to-get interviews with everyone from four-star Army generals and Middle East leaders to reclusive science fiction authors. Yet it also seamlessly weaves in pop culture and illuminating anecdotes to create a book that is both highly readable and accessible. In laying out where our technologies are taking us next, Wired for War is as fascinating as it is frightening.
Peter Singer’s website for the book here.
G’day. There are many people who theorise that with the advent of artificial intelligence will come the capacity for choice: to choose between good and evil. We of course will teach robots what we see as good, and so they will decide what is good or evil along those lines. The problem is that many people are not always good. A robot’s decision to eradicate what it considers bad could really give us problems at some later date.
Religious people also consider the possibility that the advent of intelligence could lead to such things as possession of a robot’s programming by evil entities. Scenarios like those in the movie Poltergeist could soon involve artificial intelligence once these machines become more complex.
I write sci-fi and have a new novel called Doom Of The Shem. It is a military science fiction novel involving automated fighting platforms used in desert situations.
Comment by yeremenko — Sunday, February 22, 2009 @ 11:45 AM
Excellent points. I think robotic development and artificial intelligence are pushing forward at an alarmingly rapid pace. I sometimes wonder if we are putting the cart before the horse on this one.
But to expand on your points: machines are built by man, and man is fallible. Worse yet, we can do pretty awful things to one another, and our machines are an extension of that. As we build these war machines, and they become more lethal, intelligent, and autonomous, we must always remember that it is still a man building these things. For that reason, I don't discount any ideas about what could happen with them.
Ten years ago, I would never have guessed that we would be in a major war in which we would be using thousands of robots and drones to hunt and kill the enemy. Ten years from now, what could be next?
Comment by headjundi — Sunday, February 22, 2009 @ 1:24 PM