# Ethical Killer Robots?



## Drachir (Dec 1, 2008)

Interesting article.  It claims that Terminator-type soldiers might be more ethical than human soldiers.  Hmmmm...

http://www.nytimes.com/2008/11/25/science/25robots.html?_r=1&ref=science


----------



## dustinzgirl (Dec 1, 2008)

Yeah, cuz it worked out so well in Terminator....and Blade Runner...and The Matrix....and Robocop....see where I'm going with this? What they forget is that if they can build it...so can someone without ethics.

"true robots operating autonomously, on their own"

Blame my robotophobia, but unless it's the maid robot from the Jetsons, I'm a little creeped out.


----------



## sloweye (Dec 1, 2008)

I stand in the 'I hope not' corner on this one too. The reasons are listed above.


----------



## Vladd67 (Dec 1, 2008)

In a similar vein: Persistent Munition Technology Demonstrator (Wikipedia). This I found worrying:





> Designed for either air or ground launch and high loiter times, the PMTD has the ability to operate completely autonomously. During its maiden flight, the UAV autonomously flew to 14 programmed locations, changed altitude four times, and met all programmed speeds. The initial set of flight tests focused solely on validation of the autonomous flight mode, while future tests will include sensor integration, weapon guidance systems, munitions dispensing systems and in-flight refueling.


In other words it will fly round on its own and attack things?


----------



## Interference (Dec 1, 2008)

The thing here is that we may be repeating disasters we have no knowledge of from the past.  Surely, it was the dinosaurs' fantastic technology that led to the development of tiny organic robots and which, in turn, led to their own demise.  Or perhaps it was the Atlanteans.  My genetic memory isn't what it used to be.


----------



## ktabic (Dec 1, 2008)

I personally believe that if they start deploying robots as soldiers, we will see a massive increase in wars. Because our glorious leaders won't have to worry about human casualties (on their side, anyway), they will become more prone to using force as a solution.


----------



## QSR Joshua (Dec 1, 2008)

Only someone who has never read any SF would think robots are a substitute for humans in combat. Even the totally atrocious I, Robot movie made that clear.


----------



## Ursa major (Dec 1, 2008)

Yes, humans have emotions; but software has bugs.




And wouldn't this bring a whole new meaning to the phrase, Blue Screen of Death?


----------



## Vladd67 (Dec 2, 2008)

The march of the robot has begun
First Armed Robots on Patrol in Iraq (Updated) | Danger Room from Wired.com


> The SWORDS -- modified versions of bomb-disposal robots used throughout Iraq -- were first declared ready for duty back in 2004. But concerns about safety kept the robots from being sent over to the battlefield.  The machines had a tendency to spin out of control from time to time.  That was an annoyance during ordnance-handling missions; no one wanted to contemplate the consequences during a firefight.


A little more than an annoyance I should think.
And coming to a block near you
Armed Robots Pushed to Police | Danger Room from Wired.com
Oh and if you fancy one yourself?
iRobot Corporation: Robots


----------



## dustinzgirl (Dec 2, 2008)

I'm just not ready to accept a world of armed robots. 

Armed robots, learning robots, self-sustaining robots, self-repairing robots, self-programming robots...

ALL VERY BAD IDEAS!!!!


----------



## Drachir (Dec 2, 2008)

Robots are already being used in war.  We just haven't gotten around to using autonomous robots as yet.  But there is an argument for using such machines.  First of all robots don't hold grudges and don't get angry.  As such, unless their masters tell them to, they are not likely to engage in the sorts of atrocities that are commonly found in most wars.  In the past human armies have committed acts of great brutality and it is hard to imagine robots being more brutal than humanity has already shown itself to be.  

In addition, what mother or father would wish a son or daughter pushed into a combat situation if a machine could be used instead?  That said, robots will not replace humans any time soon; given the amount of money the US military is devoting to robot soldiers, however, it is probably only a matter of time.  Whether these machines will actually be more ethical or less brutal than the human soldiers they replace will depend entirely on their creators.


----------



## dustinzgirl (Dec 2, 2008)

Drachir said:


> Robots are already being used in war.  We just haven't gotten around to using autonomous robots as yet.  But there is an argument for using such machines.  First of all robots don't hold grudges and don't get angry.  As such, unless their masters tell them to, they are not likely to engage in the sorts of atrocities that are commonly found in most wars.  In the past human armies have committed acts of great brutality and it is hard to imagine robots being more brutal than humanity has already shown itself to be.
> 
> In addition, what mother or father would wish a son or daughter pushed into a combat situation if a machine could be used instead?  Given that, robots will not replace humans any time soon.  However, it is probably only a matter of time given the amount of money the US military is devoting to robot soldiers.   Whether these machines will actually be more ethical or less brutal than the human soldiers they replace will depend entirely on their creators.



The robots currently being used are just really fancy remote control toys.

Autonomous robots made to wage war? Bad idea. 

If they are remote control robots that is one thing, but we are not talking about remote control, we are talking about self-adapting robots that are made for warfare.


----------



## Drachir (Dec 3, 2008)

dustinzgirl said:


> The robots currently being used are just really fancy remote control toys.
> 
> Autonomous robots made to wage war? Bad idea.
> 
> If they are remote control robots that is one thing, but we are not talking about remote control, we are talking about self-adapting robots that are made for warfare.


 
I don't think the article says anything about self-adapting robots, and even if such robots were built they would still be subject to the original commands programmed into them.  For example, automakers already have computers in cars that enable the cars to adapt almost instantly to different driving conditions.  That would be a sort of self-adapting robot, but no one is concerned that their car might suddenly decide to kill them or take them someplace they don't want to go, like the car wash.

Similarly, we have computers on board aircraft that help fly the plane.  In fact most modern fighter aircraft would not be flyable without computers controlling them.  Once again the pilots are not concerned about the computer taking over and flying the plane someplace they don't want to go, or dropping bombs on friendly forces (something that many soldiers have found human pilots are somewhat prone to do).

I am more concerned about this section of the article:
> In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
> 
> His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

Robots would certainly not be affected by the emotions described above.  Paradoxically this would actually make war more humane (if that sort of term makes sense), and prevent such situations as the My Lai Massacre or similar incidents.


----------



## dustinzgirl (Dec 3, 2008)

But Terminator-type robots are self-adapting, thinking, and creative, and that's what they would have to be for the robots to be effective and autonomous, methinks.


----------



## Drachir (Dec 3, 2008)

dustinzgirl said:


> But Terminator-type robots are self-adapting, thinking, and creative, and that's what they would have to be for the robots to be effective and autonomous, methinks.


 

They would be machines.  Just pull out the plug.  
In fact such machines would be limited in the way that all machines are.  If you have ever played a computer game you know that the AI can only act a certain way.  It would be easy to program military robots to act only in the way the user intended.  Acting any other way would be like expecting your toaster to wash clothes.  It simply could not happen.  
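The point about game AI can be sketched in a few lines of Python (a hypothetical toy, not anything from the article): a rule-driven agent can only ever pick from the actions its programmer enumerated, so behaviour outside the rule table simply isn't possible.

```python
# Toy rule table for a hypothetical programmed agent. The agent cannot
# invent an action that isn't in this table -- that is the whole point.
RULES = {
    # (threat_detected, target_confirmed) -> action
    (False, False): "patrol",
    (True, False): "hold_fire",   # unconfirmed target: never engage
    (True, True): "engage",
}

def decide(threat_detected: bool, target_confirmed: bool) -> str:
    """Return the single action the rule table allows for this state."""
    return RULES[(threat_detected, target_confirmed)]

print(decide(True, False))   # hold_fire
print(decide(False, False))  # patrol
```

Changing what the agent does means changing the table, which is exactly the "human element" problem: the ethics live in whoever writes the rules.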

The only problem is the human element.  If the programmer decided to have the robot act in a manner considered unethical then that is what it would do, but it still could not decide to change the program on its own.

Simply stated if it comes to sending robots in to fight groups like the Taliban instead of risking the lives of Canadian soldiers then I am in favour of the robots.


----------



## skeptic_heptic (Dec 22, 2008)

The greatest evidence of this is the T-800 Terminator that destroyed itself in order to make sure that killer robots would not remain.  It is true that he was wrong about the future; however, it was still the noble thing to do and truly an ethical decision that benefited humans.


----------



## ManTimeForgot (Dec 22, 2008)

Gentlemen and Ladies,

Why are we basing our assessments of the validity of Semi-Autonomous Fighting Robots on sci-fi depictions?  Does anyone here live in a Jetsons-style saucer house?  Or have a flying car?  All of those were supposed to have come true by now.  Does anyone seriously think ET is going to be a little green human?  Has anyone found themselves confronted with gigantic walking spider robots à la Wild Wild West?  Ever seen a laser gun that makes a "pew pew" sound when fired?


Semi-Autonomous Fighting Robots are going to be a part of the future of warfare, just as surely as Semi-Autonomous Robots (from macro to micro scale) are going to be a part of numerous corners of the commercial sector.  The important distinction here is the one between AI and Semi-Autonomous.  A computer program is semi-autonomous: when you run an executable, you do not have to manually verify each and every step in the program.  Semi-Autonomous robots would be functionally similar.  They would be programmed with attack patterns, maneuvers, contingencies, etc., and allowed to perform their programmed functions.
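The semi-autonomous distinction above can be sketched as code (an invented example, with hypothetical names): the operator launches the mission once, and the program then works through its steps and its pre-programmed contingencies on its own, but only the contingencies it was given.

```python
def fly_mission(waypoints, fuel):
    """Visit waypoints in order without per-step human approval.
    The only 'decision' is a programmed contingency: head home
    when fuel runs low. No judgment calls, no improvisation."""
    visited = []
    for wp in waypoints:
        if fuel < 1:            # programmed contingency, not a choice
            return visited + ["return_to_base"]
        visited.append(wp)
        fuel -= 1
    return visited + ["return_to_base"]

# One approval, then autonomous execution of the programmed plan:
print(fly_mission(["alpha", "bravo", "charlie"], fuel=2))
# ['alpha', 'bravo', 'return_to_base']
```

An AI, in the sense discussed below, would be the system that gets to rewrite `fly_mission` itself, which is where the worry starts.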

AI (whether artificial or actual; not sure if we will be able to tell the difference anyway due to thalience) is a different story, however.  Intelligence is independent of man's desires, even if a core set of instructions form the basis for determining what is or is not a correct action.  For humans the basis for determining correctness of action is pleasure and pain (stimulus response gauges the greatest pleasure and seeks to avoid the most pain).  But for an artificially constructed intelligence there could conceivably be any basis for determining stimulus response.  The problem comes when an AI is free to perform assessments of humans.  Could an AI determine that a human was too much of an obstacle to its goals to be allowed to continue to live?  The answer I think is yes.


It is for this reason that I believe that AI's will not be placed in positions of strategic importance for quite some time, not until the science of AI's has matured a great deal; at that point, much of strategic thinking will likely be left to AI's (as there will come a point where AI's will be able to construct other AI's of greater complexity).

MTF


----------



## kyektulu (Dec 22, 2008)

dustinzgirl said:


> Yeah, cuz it worked out so well in Terminator....and Blade Runner...and The Matrix....and Robocop....see where I'm going with this? What they forget is that if they can build it...so can someone without ethics.



*My thoughts exactly.*

*Not a good idea imo... some frontiers should not be crossed.*


----------



## Urlik (Dec 22, 2008)

dustinzgirl said:


> The robots currently being used are just really fancy remote control toys.
> 
> Autonomous robots made to wage war? Bad idea.
> 
> If they are remote control robots that is one thing, but we are not talking about remote control, we are talking about self-adapting robots that are made for warfare.


 
Even the remote-controlled weapon platforms in use are a step in the wrong direction.

Even though the operators will be out of danger, they won't want to risk damaging or, even worse, losing one of their robots, so they might be more inclined to shoot first, and friendly fire incidents will rise along with collateral damage*


*for collateral damage read civilian casualties


----------



## Drachir (Dec 23, 2008)

Come on.  What could possibly go wrong?  YouTube - The Happy Dalek Song!!!


----------



## Scifi fan (Dec 23, 2008)

My problem with AI is that it's still in its infancy, and no one can really predict where it will go, because technology is changing so fast. I mean, did anyone ten years ago predict that AOL would be obsolete? Or that Google would overtake Microsoft as the most important software company around? Is Google even ten years old?


----------



## ManTimeForgot (Dec 23, 2008)

Every ten years the body of scientific knowledge doubles (or so it has for some time now, and is predicted to continue doing); combined with Moore's Law (or whatever passes for Moore's Law in a future with atomtronic and quantum computing), AI will almost certainly take on new shapes.

So one should rightly be cautious of implementing AI's in those areas of human existence that are of paramount strategic importance, _while AI is still in its infancy_.  The problem with maintaining that caution past AI's infancy is that it would curtail "human" research into areas requiring computation beyond human faculty but also requiring "creativity."  It would also pose problems for the governance of a solar-system or semi-galactic civilization.  Exactly how will you (even with the advent of communication capable of spanning such distances in a reasonable time frame) manage such a large number of people and such a large volume of resources without computers capable of making heuristic decisions about virtually every facet of human/planetary existence?

MTF


----------

