Sunday, June 1, 2014

The lesser evil

What should be the preset morality of autonomous cars?

For background, companies (sadly including Google) are developing systems to make cars that drive on their own. You just get in, input the destination, and the car gets you there by finding a route, adhering to traffic rules and so on. How it would react in situations where a normal GPS would lead you into a field or a lake is the subject of much musing, but for now let's leave it at that.

A car like that would have to be programmed. It would have to follow strict rules regardless of the situation: avoid a crash if possible; if a crash cannot be avoided, direct the damage to the front; if no crash is imminent, follow traffic laws, and so on. What has caused somewhat of a stir are questions about situations where a crash is inevitable. One example: you are driving an SUV, a tire blows, and you have two options. Either let it swerve left into the oncoming lane and traffic... or turn sharply right to compensate, turning the SUV into a sliding box going off the road to the right. If there is a pasture on the right or no oncoming traffic, this would not be a problem at all. But what if there is a small car coming in the opposite lane and a cliff on the right? It is a situation you would probably never experience yourself, but for the purposes of explaining the problem, it works.
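To make those "strict rules" concrete, here is a minimal sketch of what a fixed priority order could look like. Everything in it (the `World` fields, the maneuver strings) is invented purely for illustration; it is not how any real car is actually programmed.

```python
# A minimal sketch of a fixed rule hierarchy, checked in strict priority
# order. All names and fields are invented for illustration only.

from dataclasses import dataclass


@dataclass
class World:
    collision_threat: bool   # is a collision looming at all?
    crash_avoidable: bool    # can braking/steering still prevent it?


def decide(world: World) -> str:
    # Rule 1: avoid a crash if possible.
    if world.collision_threat and world.crash_avoidable:
        return "evade: brake and steer around the obstacle"
    # Rule 2: if a crash cannot be avoided, take the impact on the front.
    if world.collision_threat:
        return "mitigate: orient so the front absorbs the impact"
    # Rule 3: no crash imminent, just follow the route and traffic laws.
    return "cruise: follow the planned route and traffic rules"


# The blown-tire scenario: a collision is coming and can no longer be avoided.
print(decide(World(collision_threat=True, crash_avoidable=False)))
```

The same input always produces the same output; that rigidity is exactly what makes the scenarios below uncomfortable.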

An SUV hitting a small car head-on at great speed (in a head-on collision, the closing speed is roughly the sum of both vehicles' speeds) would probably mean a death sentence for anyone in the small car, but the SUV passengers would probably live, perhaps even walk away from the crash. On the other hand, a cliff is a cliff, and anyone going over would probably die. So, should a car aim to kill people who happen to be in the wrong place at the wrong time to protect its owner... or should it kill its owner to avoid collateral damage? Either way it is kind of messed up; after all, when we buy a car we assume it is built to protect us. But how far should it go to fulfill this aim?

The problem here is the fact that the car requires rules to follow. These rules must be broad enough to be applicable in most situations, but strict enough to be enforceable. One of these rules would have to say either 'protect the passengers at any cost' or 'protect other people at any cost'. Drawing a line anywhere between these extremes would be riddled with a plethora of ethical dilemmas. But the car would always have to act the same way, as the sketch below tries to show.
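As a thought experiment (not anything a manufacturer has published), wherever that line gets drawn, it ends up as a constant baked into the code, and the car applies it identically every single time. The weight, the harm estimates and the maneuver names below are all made up.

```python
# A sketch of the dilemma reduced to a single hard-coded constant.
# Every name and number here is invented; the point is only that the
# choice is fixed once and then applied the same way forever.

# 1.0 = 'protect the passengers at any cost',
# 0.0 = 'protect other people at any cost'.
PASSENGER_WEIGHT = 0.5


def choose_maneuver(options):
    """Pick the maneuver with the lowest weighted expected harm.

    `options` maps a maneuver name to rough 0..1 estimates of
    (harm to the car's own passengers, harm to everyone else).
    """
    def cost(harms):
        passengers, others = harms
        return PASSENGER_WEIGHT * passengers + (1 - PASSENGER_WEIGHT) * others

    return min(options, key=lambda name: cost(options[name]))


# The SUV example: swerve into the oncoming small car, or go over the cliff.
options = {
    "swerve_left_into_oncoming_car": (0.2, 0.9),
    "swerve_right_off_the_cliff": (0.9, 0.0),
}
print(choose_maneuver(options))
```

Change `PASSENGER_WEIGHT` and the same situation gets a different victim; but whatever value ships, the outcome is predetermined.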

Another situation people have found troubling is yet another inevitable crash, one where the car has a choice: hit a small car or a large car. If the speeds are low enough that the crash would not mean instant and certain death for everyone involved, the basic logic would be 'a larger vehicle can absorb more of the crash energy, thereby decreasing actual damages'. Should the car then be programmed to aim for larger vehicles? (We are assuming here that the passengers of the soon-to-crash car would sustain similar injuries either way - the speed is not high enough to kill them, but not low enough for the choice not to matter; the problem is the jolt of the impact, which can cause neck and other injuries for the passengers of the struck car.) Should car makers effectively punish people for driving SUVs? Or let fate decide and remove any person from responsibility?
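The 'larger vehicle absorbs more energy' argument can be sketched with basic momentum conservation, using the struck vehicle's change of velocity as a crude injury proxy. The masses, names and the damage model below are assumptions chosen for illustration; real crash-severity estimation is far more involved.

```python
# A rough sketch of the 'aim for the larger vehicle' logic. The struck
# vehicle's velocity change (delta-v) stands in for occupant injury risk.
# All masses and names are invented for illustration.

def delta_v_of_struck_car(own_mass_kg, other_mass_kg, closing_speed_ms):
    # For a perfectly inelastic collision, the struck vehicle's velocity
    # change is proportional to the striking vehicle's share of the mass:
    # the heavier the struck vehicle, the smaller the jolt its occupants feel.
    return closing_speed_ms * own_mass_kg / (own_mass_kg + other_mass_kg)


def pick_target(own_mass_kg, candidates, closing_speed_ms):
    """Return the candidate vehicle whose occupants take the smallest jolt."""
    return min(candidates,
               key=lambda name: delta_v_of_struck_car(own_mass_kg,
                                                      candidates[name],
                                                      closing_speed_ms))


candidates = {"small_car": 1100.0, "large_suv": 2500.0}
print(pick_target(own_mass_kg=1500.0, candidates=candidates,
                  closing_speed_ms=10.0))
# Prints 'large_suv': the heavier vehicle changes velocity less in the impact.
```

Which is precisely the unsettling part: the "rational" damage-minimizing rule systematically steers toward one class of driver.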

These are the kinds of problems we didn't have before autonomous machines. But they are also the problems science fiction authors have been dealing with for decades. Even Isaac Asimov's proposed rules, which seem perfect at first glance, leave a lot of gray area between them. Then again, the problems arise from perceived morality, which is always subjective. It is not about the cars or the computers, it is about the people.

