More Thoughts on Robots in the Military

I didn’t have time to prepare a proper blog post today, so I’m forwarding something that Antonio Santana posted to the hard science fiction Yahoo group back in August. I’ve had this hanging around for a while and until now haven’t had a use for it.

We were having a discussion of Asimov's Three Laws of Robotics at the time of Antonio's post. You know those laws: the ones that are supposed to be programmed into a futuristic robot that is self-aware, i.e., has a conscious mind. We're talking science fiction here. Out in real life we're still not sure exactly what the conscious mind is, and whether we could ever give one to a robot is a matter for deeper speculation than we are in the mood for today.

Anyway, in a nutshell, the three laws state that a robot 1) must not harm a human, or through inaction allow a human to come to harm, 2) must obey humans except where that would conflict with the first law, and 3) must protect its own existence except where that would conflict with the first two.

So the discussion we were having back in August had to do with, as do so many discussions inspired by these laws, whether or not they are logical, or even adequate, especially now that the military is building robots for use in wartime. Would programming these robots with Asimov's laws make them safe for anyone using them, and if so, how could they then do their job of destroying the enemy? That was the topic of discussion.
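To make that tension concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything posted to the Yahoo group and not any real robotics software; the Action fields and the permitted function are names invented for this example only. It treats the three laws as a strict priority-ordered check.

    # Toy illustration only: Asimov's Three Laws as a priority-ordered check.
    # All names here (Action, permitted) are invented for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool       # would the action injure a human?
        ordered_by_human: bool  # was the action ordered by a human operator?
        harms_self: bool        # would the action damage the robot itself?

    def permitted(action: Action) -> bool:
        """Evaluate the three laws in strict priority order."""
        if action.harms_human:          # First Law always wins
            return False
        if action.ordered_by_human:     # Second Law: obey, unless the First Law blocks it
            return True
        if action.harms_self:           # Third Law: self-preservation, lowest priority
            return False
        return True

    # A combat order: "destroy the enemy position" harms humans, so the
    # First Law vetoes it even though a human operator gave the order.
    strike = Action(harms_human=True, ordered_by_human=True, harms_self=False)
    print(permitted(strike))  # False

The point of the toy is simply that the First Law vetoes the Second: a robot programmed this way refuses the very order a military robot exists to carry out.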

At one point Antonio posted the piece below, written by a fellow by the name of Ian Parker. I do not know much about Mr. Parker. He seems to be involved in space research. A bright fellow. I've been trying to get hold of him to see if he minds me posting this, but so far have not had any luck. I suspect he doesn't have time for people he doesn't know bothering him about his opinions. So I'm going to go ahead and relay it here. I find what he's saying provocative, interesting, and possibly helpful in defining life, liberty, and the pursuit of intelligence.

Mr. Ian Parker said:

“A great many articles have concentrated on military AI and its dangers. Of course you should look at all the references and make up your own mind. My own personal feeling is that there is not much danger in the immediate future. The further future is something else.

There is one extremely important point that I think should be made about eventual computer autonomy and it is this. Computers do not have an evolutionary will unless we give it to them. There is no danger of a robot deciding to make war on humanity. The main danger, and I think a small one, is that a robot designed to beat an enemy will get a bug and turn on its creators. Remember the US supported the Taliban during the Soviet war.

In some ways robotics may make society safer. Robotics will reduce the chance of military coups. The alarmists who talk about robots taking over are forgetting that the Military has taken over government in many countries of the world. In some ways the fact that a war can be fought from any suburban semi with high speed broadband will considerably reduce the power of the military.

None the less the fact that the most spectacular advances in robotics are in the military field is somewhat depressing. In economic terms robotic war is cheap in comparison with soldiers and all the protection they need. If a Predator is shot down it simply gets replaced, there are no harrowing images of prisoners on TV. In a recession the logic all points to spending money on robotics and slashing defence expenditure in other areas.

All laudable and sensible. Is it moral? It has been said that a war without body bags fought at lower cost will mean more war. Again you have to judge this for yourself. To me the depressing fact is this:

http://en.wikipedia.org/wiki/Soviet_war_in_Afghanistan

A backward people wants development and advance – Not to be made the guinea pigs for robotic war. Read the late 20th century history of Afghanistan carefully. Tariki was a good man who brought progress to the Afghan people. The US destroyed all this and in the process created an enemy that then attacked them. What I find depressing is that the US and UK are drifting into supporting a medieval society which they proceed to control with robots.”

A lot of food for thought in the above comments. For me the questions it raises are clear: could the use of autonomous robots in a war possibly make things worse? And what conclusions could a robot mind come to that our own fumbling politicians and military leaders haven't come to time and time again?

Sue Lange
Sue Lange’s bookshelf at BookViewCafe.com
