A new book reviewed in Nature (January 29) asks if machines can have a conscience: Moral Machines: Teaching Robots Right From Wrong, by ethicist Wendell Wallach and philosopher Colin Allen. The book will be worth reading when I get back to my college library nook, but for the moment the theme sparked a lively dinner table conversation with my son-in-law, a robotics engineer, and my daughter, an editor with a work of fiction on her forthcoming list that deals with exactly this question.
Apparently, Wallach and Allen focus on a functional definition of morality: "Moral agents monitor and regulate their behavior in light of the harms their actions may cause or the duties they may neglect." From my point of view, the most interesting reason to address the question of machine morality is for the light it will throw on human conscience. What part of human morality is hardwired? What part is programming? Is self-awareness a prerequisite for moral action? Free will? What role do emotions play? Do we have moral obligations to ourselves? To non-human organisms? To inanimate nature?
A housebroken dog presumably regulates its behavior in light of the harms its actions might cause, and similar elementary functional behaviors might easily be programmed into machines, but is it morality? Collies presumably have different behavior-regulating hardware than pit bulls; are collies more moral? An automatic pilot that wrests control from a human pilot in an emergency has made a functional decision of sorts; does it have a conscience?
After thousands of years of philosophical discussion and decades of relevant science, we still don't understand the depths and dimensions of human morality, which is why literature can still bring us face-to-face with the shattering complexity of our own moral selves. The bewildering anguish of a Sophie's choice should give us pause before we contemplate giving the same agency to machines.