Will Self-Driving Cars Sacrifice You?

Earlier today, I read an article entitled Self-Driving Cars and the Trolley Problem.

The author mentions Asimov’s Laws of Robotics, which are designed to minimize the harm robots do to humans or humanity, whether through action or inaction.

The Trolley Problem

The main concern the author raises is a philosophical issue with self-driving cars: the Trolley Problem. How would an autonomous vehicle react in a lose-lose situation? If your car were at risk of colliding with either a car carrying 5 people or a car carrying one person, which collision should your car “allow” to happen?

At first, this seems an academic exercise, until you realize it’s quite feasible. Many human drivers have already grappled with this kind of split-second decision, though for people I’m sure it’s more of an instinctual, knee-jerk reaction. Not always, though; people trained to steer out of a skid would fare better in some cases.

Who Should They Save?

There’s the utilitarian model of saving the greatest number of lives or, alternatively, killing the fewest people. But what if the 5 people were criminals escaping from a bank robbery, and the one person in the other car was a cancer scientist who just made a major breakthrough? Perhaps it’d be best, then, to wreck into the car of criminals, since the scientist has a higher value?
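
To make the utilitarian model concrete, here’s a minimal sketch of that “fewest deaths” rule, in Python, with entirely made-up data structures:

```python
from dataclasses import dataclass

@dataclass
class CollisionOption:
    """One possible maneuver and its predicted human cost."""
    maneuver: str
    occupants_at_risk: int  # people in the path of this choice

def choose_utilitarian(options: list[CollisionOption]) -> CollisionOption:
    """Naive utilitarian rule: pick the maneuver that endangers
    the fewest people. Ties are broken arbitrarily."""
    return min(options, key=lambda o: o.occupants_at_risk)

# Example: swerve left into a car with 5 people, or right into a car with 1.
options = [
    CollisionOption("swerve_left", occupants_at_risk=5),
    CollisionOption("swerve_right", occupants_at_risk=1),
]
print(choose_utilitarian(options).maneuver)  # swerve_right
```

The rule itself is trivial; the trouble starts as soon as you try to weight who those people are.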

This line of thought is disturbing, in that it assumes we can assign an objective value to a human life; that we can compare two completely different individuals and know which one has more value. There’s no such thing as objective human value, and this reasoning feels like a step toward eugenics.

What About Liars?

This hasn’t even considered more nefarious scenarios, like sending incorrect data to other vehicles and causing a chain reaction of deadly activity. What about what I’ll call the “Lying Trolley Problem”? A group of colluding cars broadcasts a fake signal for “collision with 5 people right ahead,” so trailing cars divert and kill pedestrians on sidewalks or families playing in their yards, even though there was no real danger on the road.
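
To sketch why this attack would be so cheap, imagine cars that trust vehicle-to-vehicle broadcasts at face value. Everything here, the message format, the fields, and the handler, is invented for illustration:

```python
import json

def divert_around(location, people_at_risk):
    # Stub: a real car would replan its route here.
    print(f"Diverting! Unverified report of {people_at_risk} people at {location}")

def handle_v2v_broadcast(raw_message: bytes):
    """Naive handler that trusts any well-formed broadcast.
    Nothing verifies WHO sent the message or whether the
    claimed hazard actually exists."""
    msg = json.loads(raw_message)
    if msg.get("type") == "collision_ahead":
        divert_around(msg["location"], msg["people_at_risk"])

# A colluding car needs nothing more than this to trigger a diversion:
fake = json.dumps({
    "type": "collision_ahead",
    "location": [40.0, -105.3],  # invented coordinates
    "people_at_risk": 5,
}).encode()
handle_v2v_broadcast(fake)
```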

Gaming the Value System

I shudder at the thought of an algorithm defining human worth, because there are so many ways to abuse it.

If cars had some algorithm to assign a “human value weight” to a car, a sole passenger could send fake data claiming 15 high-value passengers aboard at all times. Other cars would never impact this car, because its “human worth quotient” is so high. I could see an industry of hackers springing up to bump up the value of your vehicle. This could then lead to “grade inflation,” trending toward infinity.
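
As a sketch of how easy the spoof would be, suppose, purely hypothetically, that the “human worth quotient” is computed from self-reported passenger data. Nothing stops a sole occupant from reporting whatever maximizes it:

```python
def human_worth_quotient(reported_passengers: list[dict]) -> float:
    """Hypothetical scoring: sum of self-reported 'value' per passenger.
    The car has no way to audit these claims."""
    return sum(p["value"] for p in reported_passengers)

# Honest car: one ordinary passenger.
honest = [{"name": "driver", "value": 1.0}]

# Spoofing car: a sole occupant claims 15 high-value passengers.
spoofed = [{"name": f"vip_{i}", "value": 10.0} for i in range(15)]

print(human_worth_quotient(honest))   # 1.0
print(human_worth_quotient(spoofed))  # 150.0 -- other cars now avoid it
```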

Perhaps there would even be a legal industry for “Human Life Value Optimization,” akin to Search Engine Optimization today. People would pay for advice on how to enhance their life’s value, at least where the algorithm is concerned. They could also embellish or outright lie about data to improve their score. People who can’t afford these services would be more likely to die than those with the cash to spend.

Would a car assign your vehicle a higher value if you were on a road with many billboards? Google or Apple would get more ad impressions from your car, so they’d have more incentive for you to live than someone on a road with no revenue potential.

There Is No Objective Value

Back to the human worth issue: social and moral values shift over time, as we can see with the various human rights movements throughout history. Any value we assign to a human life now will not be the same value we assign it later.

Additionally, any value given to a person can only be based on their past and current life. It cannot take into account things they may do in the future. Perhaps one of those criminals will reform and prove faster-than-light travel feasible? We’ll never know right now, because it hasn’t happened yet. I’m not sure how valid this argument is, but it adds weight to the notion that any “objective” human worth is actually relative and short-sighted.

Accidents Are Accidents

Will the first autonomous vehicles have some sort of Collision Choice Processor? They’d have to ingest enormous amounts of data in real time and have a good algorithm or learning capability to process that information. Learning isn’t flawless. Decisions are only as good as the information available and the way it’s processed. Decisions, information, and learning will improve in the future, but there’s still room for error.
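
Here’s a toy illustration of “only as good as the information available”: the same decision logic, fed degraded sensor estimates, flips to the worse choice. All numbers and thresholds are invented:

```python
def expected_harm(option):
    """Expected people harmed = detection confidence x people detected."""
    return option["p_obstacle"] * option["people"]

def best_maneuver(options):
    """Pick the maneuver with the lowest expected harm,
    given whatever the sensors currently believe."""
    return min(options, key=expected_harm)

# With good data, swerving is clearly worse:
good_data = [
    {"name": "brake_straight", "p_obstacle": 0.1, "people": 1},
    {"name": "swerve_right",   "p_obstacle": 0.9, "people": 2},
]
print(best_maneuver(good_data)["name"])  # brake_straight

# Fog degrades the sensors: same logic, misleading inputs, worse choice.
foggy_data = [
    {"name": "brake_straight", "p_obstacle": 0.6, "people": 1},
    {"name": "swerve_right",   "p_obstacle": 0.2, "people": 2},  # missed detection
]
print(best_maneuver(foggy_data)["name"])  # swerve_right
```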

Accidents are, by their very nature, not done on purpose. Computer-driven cars will still have accidents. There will be times when there isn’t enough information available to make a better decision. There will be “acts of god,” where a landslide carries vehicles away and there’s nothing the computer can do.

And Bugs Too

What about software bugs? It’s impossible for these self-driving cars to be flawless. We humans made them, and we are fallible; as a result, so are our creations. There’s a lot we don’t know, and we make mistakes. Who is responsible for injury or death when the crash was caused by a programming error, a false positive in learned knowledge, a faulty vision sensor, or a cosmic ray?

Would a better bet be to make the cars very defensive? With their array of sensors, current knowledge, past knowledge, and possible communication with other vehicles on the road, could we drastically reduce the number of crashes, particularly fatal ones? If the car couldn’t avoid a collision, perhaps it could make the collision less severe? Instead of t-boning another car, it could strike the trunk.
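
Here’s a minimal sketch of that defensive posture, with invented severity scores. Rather than weighing lives, the car ranks whatever maneuvers remain physically available by predicted impact severity:

```python
# Hypothetical severity scores: higher means a worse predicted impact.
SEVERITY = {
    "t_bone_side": 0.9,      # side impacts tend to be worst for occupants
    "hit_trunk": 0.4,        # rear-corner impact, lower closing energy
    "brake_and_scrape": 0.2,
}

def least_severe(available_maneuvers: list[str]) -> str:
    """When no maneuver avoids a collision entirely, choose the one
    predicted to be least severe -- no judgment about WHO is in the
    other car, only about HOW HARD the impact is."""
    return min(available_maneuvers, key=SEVERITY.get)

# Full braking can't prevent contact; these are what's left:
print(least_severe(["t_bone_side", "hit_trunk"]))  # hit_trunk
```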

Contradictory Choices

But even here, the autonomous cars may compute contradictory outcomes. Both try to act in a “beneficial” way that ends up being more disastrous for both cars and their occupants. Again, that’s an accident. Your “best decisions” won’t always be globally optimal.

This is even trickier when you consider a collision between a self-driving car and a human-driven car. The software won’t always reliably forecast what the human will do. Perhaps the driver’s suffered a heart attack, and their hands spasm and send the car careening in various directions. Or they’re intoxicated and weaving across lanes. What does the car do here?

Conclusion

The issues raised in the original article are quite interesting, and they beget many more. I wonder how these issues are being handled at places like Google. How will we decide to handle them in our system of laws?

Part of me fears the uncertainty around self-driving cars, but humans don’t have a great track record either. Perhaps autonomous cars will be the lesser of two evils?