Asimov's Three Laws of Robotics

cyborg_cinema

In 1942, Asimov's Three Laws of Robotics were published in the story "Runaround". Now, more than 60 years later, does there seem to be anything missing from the laws? Would you change anything? Perhaps a fourth law?

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 
I think Asimov just about covered every possibility when he put these laws together. I just can't think of any way to improve upon them.

I suppose the one thing the laws don't cover is the fact that a robot could still be tricked into harming a human, i.e. it was not aware that its action would cause harm.
 
The laws are rubbish
How about... A robot must follow its programming...

The truth of the matter is that we'll be using robots for war; we'll want a robot to self-destruct or to intentionally degrade.
I'm a realist...
Each robot should have its own specific programming and follow it. Sure, it's likely that in a lot of instances the same rules will be used over again, but I wouldn't want my $50 million stealth plane to have a logic conflict mid-flight causing it not to shoot down an enemy bomber or missile.
The real question is whether artificial intelligence can be used to kill humans, whether offensively or defensively, and how you would distinguish between different humans.
And coupled with the Third Law (which is necessary in a combat situation), it would cause the Terminator scenario, where they'd target all humans as the logical conclusion of self-preservation....
 
Asimov himself came up with several ways to subvert these 3 laws.
He later included another revised set of laws.
I'll have to see if I can dig them out...
 
dreamwalker said:
...whether it be offensive or defensive....
...I think robots used as medics—rather than soldiers—would be compatible with the laws. The unarmed "medic robot" would dive on a grenade for a soldier. The "soldier robot" would be incompatible with all three laws.
 
I doubt you'd want a specialist robot to sacrifice itself in most situations, when it has the potential to save so many more lives if it keeps working...
 
Just because laws have been written in fantasy novels doesn't mean they have to be true to life. Current war robots (missiles etc.) already subvert the laws in that they can kill, but on the other hand, they haven't been given the choice of whether to kill or not. If a missile had that choice, it might choose not to. So maybe we are better off having unintelligent robots, because we'd always have control over them, making them do what we want them to do and preventing them from doing things we don't want them to do (think Terminator). :cool:
 
The laws are rubbish
How about... A robot must follow its programming...

I don't think it's necessarily about programming - all you need to do is create a law saying that all robots must carry these laws within their software/firmware/hardware or whatever. Then it becomes mandatory to have it in the programming.

I work in an industry controlled by a regulating body (and regularly audited) - if we fail to meet expectations we will be closed down. The same could apply to robot manufacturers.
 
ravenus said:
Fourth law: A robot should never mess with one that gets jiggy with it :p
...I wonder how far that will go, in reality. I mean, how sexy will robots get? A bit pricy, compared to a blowup doll.
 
That the laws don't work is obvious, and it's a concept that has been used to advantage in more things than just I, Robot. I've even found references to it in Red Dwarf.
 
The first and most obvious flaw in the three laws is that humans do lots of things that will bring harm upon themselves (have wars, carry guns, eat unhealthy foods, etc.), and the First Law would forbid a robot from permitting a human to do these kinds of things. Furthermore, since the laws are hierarchical, this would override the Second and Third Laws. So a robot would be obligated to make you eat your spinach even if you told it not to.

The three laws were made so that Asimov could have robots that exactly mimicked human behavior without being bothered by what he called the 'Frankenstein Complex' in his fiction. He really needn't have bothered. In actual fact we simply don't have machines that look or act human at all. Rather, we now have robots imitating humans who themselves were imitating machines in the first place. The robots that make cars, for example, cannot possibly uproot themselves and attack us even if they do become possessed by alien zombies.
 
Well, the movie I, Robot had a Frankensteinian subversion of the three laws (thanks, VIKI), and the movie Bicentennial Man also had a fast switch at the end. A human order does not supersede the First Law, EVER.

However, there is a law that Asimov wrote which has not been quoted in this thread: it comes from Robots and Empire and is continued in later books.

Zeroth Law of Robotics: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.

It freed up action, but still had the robots working in a servile manner.
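
Since the laws are strictly ordered, one rough way to picture the Zeroth-through-Third hierarchy is as a priority check: when every available option breaks some law, the robot takes the option whose worst violation sits lowest in the hierarchy. Here is a minimal Python sketch of that idea; the action flags and function names are made up for illustration, not anything from Asimov or any real robotics system.

def violations(action):
    """Return the laws an action breaks, as priorities (0 = Zeroth ... 3 = Third)."""
    broken = []
    if action["harms_humanity"]:
        broken.append(0)   # Zeroth Law
    if action["harms_human"]:
        broken.append(1)   # First Law
    if action["disobeys_order"]:
        broken.append(2)   # Second Law
    if action["endangers_self"]:
        broken.append(3)   # Third Law
    return broken

def choose(actions):
    # Pick the action whose worst (lowest-numbered) violation is least severe;
    # 4 stands for "breaks nothing at all".
    return max(actions, key=lambda a: min(violations(a), default=4))

# A human orders the robot aside, but stepping aside lets the human come to harm.
# Refusing only breaks the Second Law; obeying breaks the First, so the robot refuses.
refuse = {"harms_humanity": False, "harms_human": False, "disobeys_order": True, "endangers_self": False}
obey = {"harms_humanity": False, "harms_human": True, "disobeys_order": False, "endangers_self": False}
print(choose([refuse, obey]) is refuse)   # True

It also makes the spinach problem from earlier in the thread explicit: "harm through inaction" sits above "obey orders", so a human telling the robot not to serve the spinach never outranks the First Law.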
 
The first and most obvious flaw in the three laws is that humans do lots of things that will bring harm upon themselves (have wars, carry guns, eat unhealthy foods, etc.), and the First Law would forbid a robot from permitting a human to do these kinds of things. Furthermore, since the laws are hierarchical, this would override the Second and Third Laws. So a robot would be obligated to make you eat your spinach even if you told it not to.

The obvious problem with the Laws involves contradictions in the interpretation of the First Law. For instance, is simply carrying a gun harming a human, or do you have to actually use it for it to be considered harmful? It's the same question the US Congress is weighing. So maybe the robot would do something harmful to the person to stop them from carrying a gun. Which, then, is more harmful: taking that negative action to stop them, or allowing them to continue carrying the gun? Or consider war, as you mention. Was it harmful to invade Germany to remove Hitler? Certainly a lot of people died, but the end result may have been beneficial. So the question for the robot is: can I take a harmful action that prevents future harm? Asimov had plenty of fun with these kinds of questions in the books.
 
The movie I, Robot was based about as loosely on the source material as it could get and still manage to use the title.

As for carrying a gun, as long as it isn't used to harm someone then, in Asimov's terms, no, it wouldn't be considered harmful.

And don't forget, these were and still are fiction... the positronic brain hasn't been developed... yet!
 
I am pretty sure he got these laws from philosophy anyway (regardless of the story with him and the other writer). I remember hearing about three similar laws in a political science class, and when the professor recited them, another student and I simultaneously exclaimed that they sounded like the Three Laws of Robotics (but that writing significantly predated Asimov's). I didn't bother to write down the source, as I thought this would be easy to verify and would be mentioned in any article about the Three Laws of Robotics. Unfortunately, I have not found that info since.
 
