A car is driving when suddenly everything changes. The choice is stark – plough ahead into a pedestrian or swerve and slam into a wall. It’s a nightmare scenario. Who should die, passenger or pedestrian?
This is the kind of dilemma that is increasingly being considered as machines arrive that will make split-second decisions for us. Autonomous vehicles are of particular concern to many people: heavy and potentially fast, they must mingle with the human world and make choices that may result in injury or death.
“For the first time in history, we are building devices endowed with the ability to make autonomous decisions that have moral consequences,” says Iyad Rahwan of the Media Lab at the Massachusetts Institute of Technology, one of the authors of a study in Science today that addresses this scenario.
The greater good
The team carried out a series of surveys of hundreds of people, showing that most want autonomous vehicles programmed to make choices for the greater good – that is, to preserve the most lives – choosing to put the life of their occupant at risk rather than ploughing into a group of schoolchildren, for instance.
But that view changes when people are asked whether they would buy such a “greater good” car. Most say they wouldn’t. It’s a classic case of what is good for everyone else not being good for me.
It is tempting to suggest using laws to ensure that the greater good prevails, regulating all autonomous vehicles to make decisions that preserve the most human lives. …