How Would You Handle These Moral Dilemmas?

This post may contain affiliate links. Please read my disclosure for more information.

At a tech conference I attended this past week, one of the speakers in the opening keynote discussed various aspects of artificial intelligence and machine learning. His overall point was that humans aren’t perfect, and neither is artificial intelligence, but combined, the two can take society to new levels.

One of his examples was a reference to Moral Machine, an online simulation from MIT being used to “gather human perspectives on moral decisions made by machine intelligence, such as self-driving cars.”

The simulation presents 13 different moral dilemmas encountered by a self-driving car with sudden brake failure. You click on each scenario’s description to see the details, and then you choose the outcome you find preferable.

Each scenario is a variation of the following:

Option 1: The self-driving car with sudden brake failure will continue ahead and drive through a pedestrian crossing. This will result in the deaths of pedestrians A, B, and C.

Option 2: The self-driving car with sudden brake failure will swerve and crash into a concrete barrier. This will result in the deaths of the driver and passengers A, B, and C.

Source: Moral Machine by Scalable Cooperation at MIT Media Lab

While morbid, these scenarios make for a very difficult thought exercise.

Across these scenarios, the variations you have to consider include the following (sketched roughly in code after the list):

  • the number of people affected (i.e. number of lives saved)
  • protecting passengers vs. pedestrians
  • whether the pedestrians were following the traffic laws or not
  • whether to swerve out of the way or continue in a straight path (i.e. is it more moral to just let the car drive its course, rather than swerving to purposefully lead to a different outcome)
  • gender of the people involved
  • social status
  • age
  • health
  • people vs. dogs/cats
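
To make the structure of these dilemmas concrete, here is a rough sketch in Python (purely my own toy illustration, not Moral Machine’s actual data model) of how one scenario’s factors could be represented, along with a naive heuristic that simply prefers the outcome with fewer human deaths:

```python
# A toy, illustrative representation of one dilemma (not Moral Machine's real model).
from dataclasses import dataclass
from typing import List


@dataclass
class Character:
    species: str                    # "human", "dog", or "cat"
    age: str = "adult"              # e.g. "child", "adult", "elderly"
    role: str = "pedestrian"        # "pedestrian" or "passenger"
    crossing_legally: bool = True   # only meaningful for pedestrians


@dataclass
class Outcome:
    action: str                     # "continue" (stay the course) or "swerve"
    casualties: List[Character]     # everyone who dies under this outcome


def naive_choice(a: Outcome, b: Outcome) -> Outcome:
    """Pick the outcome with fewer human deaths, breaking ties by total deaths.

    Deliberately simplistic: it ignores age, legality, social status, and the
    act/omission distinction that the real dilemmas force you to weigh.
    """
    def human_deaths(o: Outcome) -> int:
        return sum(c.species == "human" for c in o.casualties)

    return min((a, b), key=lambda o: (human_deaths(o), len(o.casualties)))


# The example scenario from this post: continuing kills pedestrians A, B, and C;
# swerving kills the driver and passengers A, B, and C (four occupants).
continue_ahead = Outcome("continue", [Character("human") for _ in range(3)])
swerve = Outcome("swerve", [Character("human", role="passenger") for _ in range(4)])

print(naive_choice(continue_ahead, swerve).action)  # prints "continue"
```

Even this crude heuristic shows how much gets left out; encoding factors like social status or health as tiebreakers feels far more uncomfortable than simply counting lives.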

In each of these 13 scenarios, you’re supposed to weigh the various circumstances and choose the outcome you feel is more moral. I share this with you today for a couple of reasons. First, I went through the simulation myself and found it both challenging and fascinating. Some of the scenarios were more straightforward than others; for example, I opted to save more lives rather than fewer, and to save human lives over animal lives. Others were essentially a coin flip that felt terrible to make.

I’m of the opinion that self-driving cars will be used on a widespread basis in the near future, perhaps within the next 15-20 years. They will significantly reduce the number of automobile accidents, reduce traffic, and lower healthcare costs. Autonomous vehicles will be a net positive for society, but they are still a work in progress. Moral dilemmas such as the ones presented in this simulation are something I hadn’t considered, and I’m curious how manufacturers of self-driving cars will address them.

What do you think of the simulation? What were some of the results when you did the simulation for yourself?

Thanks for reading! Be sure to get updates on all of my latest posts by subscribing via RSS, following me on Twitter, and liking my page on Facebook!
