My favourite author of all time is Isaac Asimov: he crafted the wonderful Foundation series, he penned (or typewrote) countless essays on the importance of science literacy and education, and he’s the mastermind behind I, Robot (remember: the plot of the Will Smith movie had very little to do with the original and amazing book of robot short stories). Robots feature prominently in many of his stories, and although he did not invent the term “robot” (that honour goes to the Czech playwright Karel Čapek), he did coin “robotics”, as in the study of robots. Today I saw a talk about Artificial Life, and it got me thinking about robots, so I thought I might expound on Asimov’s work and his contribution to robotics.
One of Asimov’s best-known contributions is the Three Laws of Robotics:
- A robot may not harm a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I think these laws are incredibly interesting. Obviously they require robots to be much more developed cognitively than they are at the moment, but they might create a behavioural framework for thinking robots of the future.
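Just to make that precedence concrete, here is a toy sketch (my own illustration, not anything from Asimov) that treats the Laws as a strict priority ordering over candidate actions. The Action fields and the choose_action helper are hypothetical placeholders: the boolean predicates stand in for exactly the judgements, like “what counts as harm?”, that the stories show to be so slippery.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool      # would executing this action harm a human?
    obeys_order: bool      # does it carry out a human's order?
    preserves_self: bool   # does it keep the robot intact?

def choose_action(candidates: list[Action]) -> Action | None:
    """Pick an action by applying the Three Laws as a strict priority order."""
    # First Law: any action that harms a human is simply off the table.
    # (The "through inaction" clause is omitted here for brevity.)
    lawful = [a for a in candidates if not a.harms_human]
    if not lawful:
        return None  # every option violates the First Law, so refuse to act
    # Second Law outranks Third: among lawful actions, prefer obeying a
    # human's order; break remaining ties in favour of self-preservation.
    return max(lawful, key=lambda a: (a.obeys_order, a.preserves_self))

# Example: the robot is ordered to fetch something from a dangerous area.
options = [
    Action("fetch the item despite the danger",
           harms_human=False, obeys_order=True, preserves_self=False),
    Action("refuse the order and stay safe",
           harms_human=False, obeys_order=False, preserves_self=True),
]
print(choose_action(options).name)  # -> fetch the item despite the danger
```

In this example the robot walks straight into danger, because obeying an order (Second Law) outranks preserving itself (Third Law); that kind of rigid ranking is precisely what Asimov loved to poke holes in.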
There are, of course, profound ethical considerations that come with a thinking machine. If we define humanity by consciousness, how do we know these machines are not conscious and do not have feelings? If they are conscious, or can feel, the Second Law about obedience becomes quite contentious.
In many of Asimov’s stories, robots either malfunction and disregard the First Law, or they find ways to bend the definition of “harm”. This causes disruption, especially because humanity in those stories has come to rely on the Three Laws and does not expect a harmful robot. Asimov was by no means the first nor the last writer to contemplate a robot apocalypse. Mary Shelley, as far back as 1818, had Dr. Frankenstein weigh the consequences of creating a wife for his monster. The theme of robot/computer/artificial life domination is very common in science fiction: just look at Terminator, The Matrix, and 2001: A Space Odyssey for a hearty dose of robot-fear.
Does a non-human sentient being always turn on its creator or on competing sentient beings? If we look to fiction, the answer is yes, but I have a feeling that’s because fiction necessarily requires conflict, and non-human life makes for an interesting and touching source of it.
While robots of the present are nowhere near being able to understand Asimov’s Three Laws, he has provoked real thought about robot ethics. I doubt even Asimov would have endorsed his Laws as they stand; in fact, his stories seem to illustrate the various problems inherent in their construction. Defining “human being”, “robot”, and “harm” is difficult, as is deciding whether to favour individual humans over the well-being of humanity as a whole, or vice versa. There is obviously much work to be done in outlining the ethical use of thinking robots, but luckily the technology is still a long way off.