A set of three laws written by Isaac Asimov, which most robots appearing in his fiction have to obey:

  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Asimov attributes the Three Laws to John W. Campbell, from a conversation held on December 23, 1940. Campbell, however, claimed that Asimov already had the Laws in his mind, and that they simply needed to be stated explicitly.

Although Asimov pins the Laws' creation to a single date, their appearance in his fiction happened over a period of time. Asimov wrote two stories without mentioning the Three Laws explicitly ("Robbie" and "Reason"); he assumed, however, that robots would have certain inherent safeguards. "Liar!", Asimov's third robot story, makes the first mention of the First Law but of none of the others. All three Laws finally appeared together, explicitly, in "Runaround". When these stories and several others were compiled in the anthology I, Robot, "Reason" was updated to acknowledge the Three Laws.

The Three Laws are often referenced in science fiction written by other authors, but tradition dictated that only Asimov himself would ever quote the Laws explicitly.

A trilogy set within Asimov's fictional universe was written in the 1990s by Roger MacBride Allen, with the prefix "Isaac Asimov's" on each title (Caliban, Inferno and Utopia). In it, a set of new laws is introduced. According to the introduction of the first book, these were devised by the author in discussion with Asimov himself.

Some amateur roboticists have evidently come to believe that the Three Laws have a status akin to the laws of physics; that is, that a situation which violates them is inherently impossible. This is incorrect: the Three Laws are quite deliberately hardwired into the positronic brains of Asimov's robots. Asimov in fact distinguishes the class of robots which follow the Three Laws, calling them Asenion robots. The robots in Asimov's stories, all being Asenion robots, are incapable of knowingly violating the Three Laws, but nothing stops a robot in other fiction, or in the real world, from being non-Asenion.

Within Asimov's fiction, however, the situation is nearly the opposite. At first the Laws were simply carefully engineered safeguards, but in later stories Asimov makes clear that it would take a significant investment in research to create robots without them, because the Laws had become the mathematical basis on which all positronic brains were designed.

In the real world, not only are the Laws optional, but significant advances in artificial intelligence would be needed for robots even to understand them. Also, since the military is a major source of funding for robotics research, it is unlikely that such laws would be built into many designs.

The Three Laws are sometimes seen as a future ideal by those working in artificial intelligence: once an intelligence has reached the stage where it can comprehend these laws, it is truly intelligent.

None of Asimov's robot stories portrays the Three Laws of Robotics as flawless. On the contrary, they expose flaws and misconceptions through very serious glitches. Asimov once wondered how he could find so many stories in the few words that made up these laws. For a few stories, the only solution was to change the Laws themselves. A few examples:

The Three Laws were extended by a fourth law, the "Zeroth Law", so named to continue the pattern of lower-numbered laws superseding higher-numbered ones. It was supposedly invented by R. Daneel Olivaw and R. Giskard Reventlov in Robots and Empire, although it was hinted at earlier by Susan Calvin in "The Evitable Conflict". In Robots and Empire, Giskard was the first robot to act according to the Zeroth Law, although doing so proved destructive to his positronic brain, as he violated the First Law. Daneel, over the course of many thousands of years, was able to adapt himself to fully obey the Zeroth Law.

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

A condition stating that the Zeroth Law must not be broken was added to the original Laws.

Several NS-2 robots (Nestor robots) were created with only part of the First Law. It read:

1. A robot may not harm a human being.
This solved the original problem: robots would not allow humans to be exposed to radiation even in necessary, properly limited doses (the robots were rendered inoperable by doses reasonably safe for humans, and were destroying themselves attempting to "rescue" the humans). However, it caused much other trouble, as detailed in "Little Lost Robot".

The Solarians eventually created robots with the Three Laws as normal but with a warped definition of "human". Echoing a short story in which robots were capable of harming aliens, the Solarians told their robots that only people speaking the Solarian language were human. This way, their robots had no difficulty harming non-Solarian human beings (and indeed had specific orders to do so).

The problem of robots considering themselves human has been alluded to many times. Humaniform robots make the problem more noticeable. Examples can be found in the novel The Robots of Dawn and the short stories "Evidence" and "The Bicentennial Man".

After a murder on Solaria in The Naked Sun, Elijah Baley claimed that the Laws had been deliberately misrepresented to the public, because robots could unknowingly break any of them.

A parody of the Three Laws was made for Susan Calvin by Gerald Black:

  1. Thou shalt protect the robot with all thy might and all thy heart and all thy soul.
  2. Thou shalt hold the interests of US Robots and Mechanical Men, Inc. holy provided it interfereth not with the First Law.
  3. Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second laws.

Gaia, the planet with a collective intelligence in the Foundation novels, adopted a law similar to the First as its philosophy:

Gaia may not harm life or, through inaction, allow life to come to harm.

The Laws are not treated as absolutes by advanced robots. In many stories, such as "Runaround", the potentials and severity of all available actions are weighed, and a robot will break the Laws as little as possible rather than do nothing at all. In another story, problems with the First Law were noted: a robot could not function as a surgeon, for example, since surgery causes damage to a human; nor could it write game plans for American football, since following them would lead to the injury of humans.
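The weighing described above can be illustrated with a toy model. Everything here is invented for the sketch (the numeric "potentials", the law weights, and the action names); it is only meant to show how minimizing a weighted violation total lets a robot act rather than freeze when every option violates some law:

```python
# Toy model of "Runaround"-style potential weighing: each Law has a weight,
# with the First Law dominating the Second and the Second dominating the
# Third, and the robot picks the action with the smallest weighted total.
# All numbers and action names are illustrative inventions.

LAW_WEIGHTS = {1: 100.0, 2: 10.0, 3: 1.0}

def violation_score(potentials):
    """Sum each law's violation potential scaled by that law's weight."""
    return sum(LAW_WEIGHTS[law] * p for law, p in potentials.items())

def choose_action(actions):
    """Pick the action that minimizes total weighted law violation."""
    return min(actions, key=lambda a: violation_score(a["potentials"]))

actions = [
    # Doing nothing lets a human come to harm: First Law harm by inaction.
    {"name": "do nothing", "potentials": {1: 0.5, 2: 0.0, 3: 0.0}},
    # Entering the danger zone risks the robot itself: a Third Law cost.
    {"name": "rescue human", "potentials": {1: 0.0, 2: 0.0, 3: 0.9}},
]

best = choose_action(actions)
print(best["name"])  # the Third Law risk is far cheaper than First Law inaction
```

Under these weights the rescue scores 0.9 against 50.0 for inaction, so the robot acts, which mirrors how Asimov's robots will sacrifice themselves rather than permit harm through inaction.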
