Self-driving cars could be equipped with a moral compass to choose who they kill in a crash
It’s like “Maximum Overdrive” but somehow worse.

Do you remember last month, when our biggest AI fear was being enslaved and mined for resources by an army of parkouring robots? What innocent times those were, because today brings the news that smart cars are now being given morality.
Yes, morality. Our AI partners will soon have the power to decide who lives and who dies. Sort of.
According to Peter Dizikes of the MIT News Office, “A massive new survey developed by MIT researchers reveals some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences.
“The survey has global reach and a unique scale, with over 2 million online participants from over 200 countries weighing in on versions of a classic ethical conundrum, the ‘Trolley Problem’ … in which an accident involving a vehicle is imminent, and the vehicle must opt for one of two potentially fatal options. In the case of driverless cars, that might mean swerving toward a couple of people, rather than a large group of bystanders.”
OK, so they haven’t been given morality yet. But that’s always how it starts.
More about the study
Dubbed “The Moral Machine Experiment,” the study was actually the result of a game designed by researchers that asked participants to pick an outcome in dozens of these life-or-death scenarios, with the responses then grouped into several “preference groups.” In total, the study compiled almost 40 million individual responses from 233 countries.
The results were “to some degree universally agreed upon,” according to Edmond Awad, a postdoc at the MIT Media Lab and lead author of the study, and this is the point in the article where I would ask all “Executive Males” and “Large Women” to stop reading.
A self-driving car has a choice about who dies in a fatal crash. Here are the ethical considerations https://t.co/ZcEgDQfxhh #automation pic.twitter.com/XzLWQWDzcr
— World Economic Forum (@wef) November 3, 2018
Despite the inherently dark and just plain…I dunno…inhumane (?) premise behind the study, Awad insists that the outcomes are actually for the good of society.
What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions. The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule.
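For the morbidly curious, here’s roughly what a vehicle “employing a specific rule” could look like. To be clear, this is a purely hypothetical sketch: the study publishes survey preferences, not any car-side algorithm, and every name below (Outcome, harm_score, the weighting) is invented for illustration. It just shows the general shape of the idea, a hard-coded preference applied to two unavoidable outcomes.

```python
# Hypothetical sketch only -- the Moral Machine study describes survey
# preferences, not a production algorithm. All names here are invented.
from dataclasses import dataclass


@dataclass
class Outcome:
    label: str          # e.g. "swerve left", "stay course"
    pedestrians: int    # bystanders harmed if this option is taken
    passengers: int     # occupants harmed if this option is taken


def harm_score(o: Outcome, pedestrian_weight: float = 1.5) -> float:
    """Lower is 'preferred'. The weight encodes one possible rule:
    the survey's broad lean toward sparing more lives, with
    pedestrians weighted more heavily than occupants (an assumption,
    not a published parameter)."""
    return pedestrian_weight * o.pedestrians + o.passengers


def choose(options: list[Outcome]) -> Outcome:
    # Pick whichever unavoidable outcome scores as least harmful.
    return min(options, key=harm_score)


if __name__ == "__main__":
    imminent = [
        Outcome("swerve toward two people", pedestrians=2, passengers=0),
        Outcome("stay course toward five", pedestrians=5, passengers=0),
    ]
    print(choose(imminent).label)  # -> "swerve toward two people"
```

The unsettling part, of course, is that whoever picks that weighting is quietly setting policy on whose lives count for more.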
Strange times, friends
So yes, in one of the most divisive times in history — where our political leanings can be revealed by an app and billionaires are literally buying up all of our water — it’s probably a good call to start a public discussion about which lives matter more than others. Nope, can’t see this being used to set a nefarious precedent at all.
How something like this could even be outfitted into a smart car’s AI remains to be seen, but if we’re allowing vehicles to make split-second decisions about who among us looks most like a “criminal,” then we’re too late. We’ve already lost. Good game, humanity. Don’t let the sociopathic Ford Taurus hit you on the way out.
What do you think? Is this something we should be looking at for AI cars or is this a discussion for humans? Let us know in the comments.
