Part of Designing Self-Driving Cars Will Be Teaching Them to Kill

Audi RS 7 Autonomous Red 2

Any Star Trek fan out there knows the line from Spock, “The needs of the many outweigh the needs of the few.” Though that line comes from fiction, the notion of the “greater good” comes up throughout history, and it is one lesson that self-driving cars are going to have to learn.

As we move closer and closer to fully autonomous cars, drivers will have less and less control. It is inevitable that a self-driving car will encounter a scenario in which the loss of human life is unavoidable. So how do you train a car to make a decision in that situation that minimizes such a loss?

RELATED: Tesla Autopilot System Tackles Daily Commute, Shows Minor Flaws

A report from the MIT Technology Review examines this problem, and cites work from the Toulouse School of Economics in France, which states that public opinion will largely determine the hierarchical decision making that will be built into the code of these self-driving cars. If a car cannot avoid a crowd of people, will it crash the car to avoid that crowd, thus potentially killing the occupants inside? What if the choice is between a single adult and a single child? What if the adult was holding a puppy?


According to the report, if you design a car that is capable of intentionally killing the driver (to spare the lives of others), fewer people are likely to buy that car. And if fewer people adopt autonomous cars, crash statistics suggest the roads will remain just as dangerous, because of the sheer number of crashes that human-driven cars get into, with their owners to blame.

RELATED: Teaching Self-Driving Cars to Break the Law

So a series of ethical dilemmas was posed to hundreds of workers on Amazon Mechanical Turk, a crowdsourcing platform, to gauge public response. The scenarios varied the number of pedestrians who could be saved and the number of occupants in the car who would be sacrificed.

The results basically boiled down to this: respondents wanted the outcome that minimized the death toll. On the other hand, they did not want to ride in a car that was prepared to kill those inside if it meant minimizing the carnage. It creates a paradox in which people want the safest result possible, but do not want to be in a car that is designed to ensure that result by potentially killing them.

Where do you stand on this? Tell us in the comments below.

RELATED: Mercedes, Volvo, and Google Accept Autonomous Car Liability