Peter Singer, one of the most controversial intellectual figures in the world and yet also one of the most respected moral philosophers, lends his gravitas to the question that concerns us all the most: how soon will we be humping humanoids, and should the femi-nazis and bible bashers allow us to?
In his book Love and Sex with Robots, David Levy goes further, suggesting that we will fall in love with warm, cuddly robots, and even have sex with them. (If the robot has multiple sexual partners, just remove the relevant parts, drop them in disinfectant, and, voilà, no risk of sexually transmitted diseases!) But what will the presence of a “sexbot” do to the marital home? How will we feel if our spouse starts spending too much time with an inexhaustible robotic lover?
Singer is a ‘consequentialist’ philosopher, which means he measures the moral rightness of an action strictly according to the good or bad consequences it has. His ability to apply this utilitarian logic to everyday practical moral problems such as abortion, animal rights, and euthanasia has earned him both his respect as a philosopher and the hatred and fear that his name provokes amongst traditional religious moralists. For example, whilst arguing that abortion is generally morally acceptable (because the unborn baby is not yet a person and birth is not a morally significant dividing line), he isn’t afraid to draw out the full implications of his reasoning – that infanticide too is, in certain circumstances at least, justified. He has also claimed on similar grounds that killing animals is sometimes more wrong than killing a handicapped infant – an intelligent animal, such as a chimpanzee, might be closer to being a person (a ‘rational’ being) than the mentally handicapped infant ever will be, and might be more capable of a meaningful and happy life than the human infant. According to Singer, it would be ‘speciesist’ to deny this.
In his article ‘Rights for Robots’, which appeared on Project Syndicate, Singer discusses how our propensity to unfairly promote our own species’ interests above those of others may lead us to abuse and exploit even sentient robots:
For the moment, a more realistic concern is not that robots will harm us, but that we will harm them. At present, robots are mere items of property. But what if they become sufficiently complex to have feelings? After all, isn’t the human brain just a very complex machine?
If machines can and do become conscious, will we take their feelings into account? The history of our relations with the only nonhuman sentient beings we have encountered so far – animals – gives no ground for confidence that we would recognize sentient robots not just as items of property, but as beings with moral standing and interests that deserve consideration.