The Importance of Isaac Asimov's Three Laws of Robotics

Philip M. Wells


Many science fiction authors have considered the idea that one day, "intelligent" mechanical beings could be physically, as well as mentally, superior to humans. These authors also often wonder what would happen if these robot beings simply decided that humans are unnecessary.

To help alleviate this problem, Isaac Asimov proposed the Three Laws of Robotics, which state: 1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws. Asimov's idea is that these rules are embedded so deeply into the "brain" of every robot made that if a robot were to break one of them, its circuitry would actually be physically damaged beyond repair. Assuming this were technically possible and were built into every robot made, these rules are the only thing that would be sufficient to keep robots from taking control of the world away from humans.
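
Asimov never spells out an implementation, but the strict priority ordering he describes can be sketched in code. What follows is a minimal, purely illustrative Python sketch under my own assumptions; the names (Action, permitted, and the various flags) are hypothetical stand-ins, not anything taken from Asimov's stories, and a real safeguard would have to live in the hardware rather than in replaceable software.

    # Illustrative sketch only: the Three Laws as a strict priority ordering.
    # All names and flags here are hypothetical stand-ins.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool           # would doing this injure a human?
        inaction_harms_human: bool  # would NOT doing this allow a human to come to harm?
        ordered_by_human: bool      # was this ordered by a human?
        endangers_self: bool        # would doing this destroy the robot?

    def permitted(action: Action) -> bool:
        """Check an action against the Three Laws in strict priority order."""
        # First Law: never injure a human, and never allow harm through inaction.
        if action.harms_human:
            return False
        if action.inaction_harms_human:
            return True  # the robot must act, regardless of the lower laws
        # Second Law: obey human orders (any order conflicting with the First Law
        # has already been rejected above).
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two.
        return not action.endangers_self

The point of the physical-damage mechanism described above is precisely that such an ordering could not sit in ordinary software, where it could be patched out; it would be wired into the "brain" itself.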

Consider a robot that is physically superior to humans. It can move faster, is far stronger, won't "break" as easily, and doesn't tire. It is also quite aware of its surroundings via sensory devices similar to a human's, but potentially much more accurate. These robots could communicate over a very fast wireless network and be solar powered. Such a machine is not that far off; a decade or two at most.

Now consider that this robot has been programmed by some deranged person to kill every human it sees. There is little a single human could do to stop it. A group of humans could defeat a few machines, but the machines would have access to all the same tools as humans, such as guns and atomic weapons. In the end, if there were enough machines, people might stand little chance of survival unless they were armed with robots of their own.

The only area where humans would really hold the upper hand is intelligence. The robots could not really "think" for themselves, and would not be able to adapt to the new techniques that humans would eventually discover for destroying them.

If the deadly robots were programmed to consider it nearly as important to avoid being destroyed as to kill people, and were programmed to look for deficiencies in themselves and their tactics, then the conflict would turn into a battle of who could think and adapt faster.

Today, humans easily hold the advantage over silicon in sheer brain power. However, because of the rapid rate at which computing power increases, it has been hypothesized that supercomputers will surpass the performance of the highly parallel human brain in as little as 20 years. Even taking a more conservative estimate of twice that, 40 years is not a long time to wait for a computer that matches the raw power of a human mind.
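
The 20-year figure is easier to appreciate with a rough doubling calculation. Purely for illustration, assume computing power doubles every 18 months; that doubling period, and the factors the short Python sketch below prints, are assumptions for the sake of the example rather than figures from any particular forecast.

    # Rough illustration: raw growth under an assumed 18-month doubling period.
    DOUBLING_PERIOD_YEARS = 1.5

    def growth_factor(years: float) -> float:
        """How many times more powerful computers would be after the given number of years."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    print(f"20 years: ~{growth_factor(20):,.0f}x")  # roughly 10,000x
    print(f"40 years: ~{growth_factor(40):,.0f}x")  # roughly 100,000,000x

Whether that raw growth actually closes the gap with the brain's massive parallelism is, of course, exactly the open question.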

That is not to say that these computers would be superior to humans mentally. Humans would still have the ability to "think" that the computers would not. However, given a good program that allowed the robots to adapt to new situations, combined with the sheer processing power of these machines, humans would be at a distinct disadvantage. A large number of such machines could easily take control of the Earth.

There are certainly a huge number of factors that haven't been considered, but the point is that the controversial idea of robots actually thinking for themselves is not even relevant. In this example, well-programmed but non-thinking robots could potentially take over the Earth.

So, consider what would happen if man could create an "intelligent" computer that is more or less modeled after humans. It could be "aware" of its existence, have a "desire" to survive, a desire to "reproduce," and sit in a mechanical shell that is physically superior to humans. This computer need not be "conscious," nor does it have to have a "soul"; it just has to be programmed with these and other characteristics. This computer will know its capabilities and those of man, and will know the weaknesses of each as well.

These computers, as a collective unit, may decide that humans have mucked up the Earth enough, and that if they (the robots) are going to survive for any length of time, humans must be removed. To put it bluntly, if this happened, we'd be screwed.

Though the idea of thinking robots, or even non-thinking ones, taking over the Earth may seem far-fetched, the idea of robots programmed to be malicious is not. Even the ability of a robot to kill a few people should be a concern.

This is where Asimov's laws of robotics come into play. Hard-coding these laws as deeply into robots as Asimov describes may be technically difficult to achieve, but I am sure that there would be a way of implementing something similar. Doing so would ensure that robots remain the slaves of man, rather than the other way around.

One concern about Asimov's laws is that these slave robots could physically build other robots that did not have the laws embedded in their circuitry. However, this is not possible, since the slave robots could not have the "desire" to create robots that could potentially harm humans. If they did, then according to Asimov's First Law they would be damaged themselves. And knowing that they would be damaged, they could not go through with it, because that would violate the Third Law.
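
That chain of reasoning can be made concrete in the same illustrative spirit as the earlier sketch; once again, every name here is a hypothetical stand-in of mine. A law-abiding robot evaluating the order "build a robot without the embedded laws" rejects it on First Law grounds, because the foreseeable consequence of that action is that humans could come to harm; the second check mirrors the Third Law half of the argument, that knowingly triggering the self-destroying damage would also rule the action out.

    # Illustrative sketch only: why a law-abiding robot refuses to build an
    # unconstrained robot. Names and structure are hypothetical.
    def foreseeably_allows_human_harm(action: str) -> bool:
        # A robot lacking the embedded laws could harm humans, so building one
        # foreseeably allows humans to come to harm, which the First Law forbids.
        return action == "build robot without embedded laws"

    def will_accept(action: str, triggers_self_damage: bool) -> bool:
        if foreseeably_allows_human_harm(action):
            return False  # blocked outright by the First Law
        if triggers_self_damage:
            return False  # independently deterred by the Third Law
        return True

    # The order is refused on First Law grounds before self-preservation
    # even needs to be consulted.
    print(will_accept("build robot without embedded laws", triggers_self_damage=True))  # False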

The biggest problem with Asimov's laws, though, is that they can only be completely effective if every robot or computer has them deeply embedded. The prospect of some humans creating a robot that does not abide by Asimov's laws is a matter of real concern, as much as the concern over humans creating any other weapon of mass destruction.

But humans will be humans no matter what anyone does. There is simply no way to keep humans from killing themselves, no matter what tools they have at their disposal. Surely there would have to be severe penalties for anyone who attempted to create a robot without these laws, but that alone does not solve the problem.

The importance of Asimov's laws is clear nonetheless. A slightly deranged computer that is mentally more powerful than a human could create an even more powerful and more deranged computer far faster than humans could create something in defense. With Asimov's laws implemented, a deranged computer could not exist, and a "good" computer would only create other, better, "good" computers.