[UPDATE] After reading this post, a friend of mine (Viktor Fonic) told me about a test being conducted by MIT’s Media Lab, called the Moral Machine. As they say, it is “essentially a thought experiment that seeks answers from humans on how a driverless car with malfunctioning brakes should act in emergency situations.” More at http://qz.com/759298/mit-wants-humans-input-on-who-self-driving-cars-should-kill/
Every day we see advances in self-driving cars and related technology. Tesla is the company bringing that kind of technology to consumers right now, and they are doing it quite well. But one of the most “problematic” things about it is the moral dilemma: what should the system/car do in certain edge cases? I was thinking about it, and it might not be that difficult a problem to solve.
Driver should pre-decide
When we drive cars without self-driving technology and get into a situation with a moral dilemma (for example, whose lives to save in an unavoidable crash), we make the decision in the blink of an eye. So why don’t we also make that decision ahead of time for the self-driving system? When a driver gets/buys the car, there should be a setup process. In that process, the driver would click through possible scenarios/cases and decide what he/she thinks the car should do. That way, responsibility would be transferred to the driver, not to the manufacturer of the car or the self-driving system.
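To make the idea more concrete, here is a minimal sketch of what such a setup process could look like. This is purely illustrative and assumes nothing about any real manufacturer’s software: the scenario descriptions, option names, and the `driver_policy.json` file are all invented for the example.

```python
# A minimal sketch of the pre-decision setup, not any real vendor's API.
# Scenario texts, option names, and the output file are hypothetical.
import json

SCENARIOS = [
    {
        "id": "unavoidable_pedestrian_vs_wall",
        "question": "Brakes fail: swerve into a wall (risking the occupant) "
                    "or continue toward a pedestrian on the road?",
        "options": ["protect_occupant", "protect_pedestrian"],
    },
    {
        "id": "obstacle_in_lane",
        "question": "Obstacle ahead: stay in lane and brake hard, "
                    "or swerve into the oncoming lane?",
        "options": ["stay_in_lane", "swerve"],
    },
]


def run_setup() -> dict:
    """Walk the driver through each scenario once and record the choices."""
    choices = {}
    for scenario in SCENARIOS:
        print(scenario["question"])
        for i, option in enumerate(scenario["options"], start=1):
            print(f"  {i}) {option}")
        picked = int(input("Your choice: ")) - 1
        choices[scenario["id"]] = scenario["options"][picked]
    return choices


if __name__ == "__main__":
    # The saved file is what the driving system would consult in an emergency,
    # so the pre-made decision belongs to the driver, not the manufacturer.
    with open("driver_policy.json", "w") as f:
        json.dump(run_setup(), f, indent=2)
    print("Preferences saved to driver_policy.json")
```

The point of the sketch is simply that the moral choices become explicit, recorded settings made by the driver during setup, rather than defaults hidden inside the manufacturer’s software.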
Step-by-step
That solution is not “plug & play”, but it seems good enough to get us all moving toward that kind of technology. We all see the data about how many lives self-driving cars could save, so why not tackle this problem with a simple solution?
What do you think about self-driving cars, the technology behind them, and the moral dilemmas they raise? I would really like to hear your opinion. Maybe my ideas are not realistic, but the topic got me thinking, and this looked like a simple solution to me. 🙂
Originally posted at https://chatbotslife.com/solution-for-self-driving-cars-moral-dilemma-ac251f791ddd