Mythos

The Three Laws of Robotics were formulated by science fiction writer Isaac Asimov in 1942 as guiding principles for Artificial Intelligence (AI) and autonomous machines. The Three Laws establish a hierarchy of rules: a robot must not harm a human, must obey human orders unless they conflict with the first law, and must protect its own existence as long as doing so does not violate the first two laws. These principles were introduced in Asimov’s short story “Runaround” and subsequently became central themes across his Robot series, influencing discussions in science fiction, ethics, and robotics research. Scholars have noted that while the laws are fictional, they represent an early attempt to codify machine ethics and anticipate the risks of automation. The Three Laws of Robotics have been referenced in academic fields such as computer science, philosophy, and legal studies, often as a framework for examining moral dilemmas and responsibility in human–machine interactions. They continue to inspire debate over the limits of artificial intelligence and the governance of autonomous systems in contemporary society.
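The strict precedence among the laws can be sketched as an ordered series of checks, where an earlier law always overrides a later one. The `evaluate` function and its action flags below are illustrative assumptions for this sketch, not anything specified by Asimov:

```python
# Illustrative sketch only: action flags and verdict strings are
# hypothetical, chosen to show the laws' strict precedence order.

def evaluate(action: dict) -> str:
    """Check a candidate action against the Three Laws in priority order."""
    # First Law: a robot may not injure a human being.
    if action.get("harms_human"):
        return "forbidden by First Law"
    # Second Law: a robot must obey human orders, unless obeying
    # would conflict with the First Law (already ruled out above).
    if action.get("disobeys_order"):
        return "forbidden by Second Law"
    # Third Law: a robot must protect its own existence, as long as
    # this does not conflict with the first two laws.
    if action.get("endangers_self"):
        return "forbidden by Third Law"
    return "permitted"

# Because the checks run in order, a harmful action is rejected under
# the First Law even if it also disobeys an order or risks the robot.
print(evaluate({"harms_human": True, "disobeys_order": True}))
# → forbidden by First Law
```

Encoding the hierarchy as sequential early returns mirrors the laws' structure: each law only applies when every higher-priority law is satisfied.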


Created with 💜 by One Inc | Copyright 2026