Josh Weinstein ’18

Not everyone working on cars that can drive themselves is an engineer or software developer. In fact, many teams developing self-driving cars include philosophers. Why would philosophers be needed to develop autonomous cars? Well, many ethical debates have formed around the subject of self-driving cars, and our lives may depend on the choices those philosophers make.

Many papers on the ethics of self-driving cars cite the infamous “trolley problem.” Imagine you are the driver of a trolley and see five people on the track ahead. You attempt to use the brakes, but they do not work. There is, however, a branch in the track ahead, but one person is standing on that side track. Do you make the active decision to switch tracks and kill one person to save five others? While this began as a thought experiment in morals and ethics, it may become a very real decision that self-driving cars have to make.

It becomes even more complicated once you consider that a self-driving car would also have to take into account the lives of its own passengers. In a situation where there is one person in the car and five people crossing the road, and a crash is unavoidable, should the car avoid the pedestrians and crash, killing the person inside? What if there are five people in the car and one person crossing the street? Should the car hit the pedestrian to spare its passengers? These questions are very hard to answer, and while it may be a morbid subject to discuss, philosophers must debate this issue, because leaving it unaddressed could have terrible consequences.
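To make the dilemma concrete, here is a minimal sketch in Python of what a purely utilitarian decision rule might look like. Everything in it is hypothetical and invented for illustration, from the Outcome class down to the idea that a car could predict exact casualty counts at all; a real autonomous-driving system works with noisy probabilities, not clean numbers like these.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted cost (hypothetical model)."""
    name: str
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths

def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver that minimizes total predicted loss of life."""
    return min(outcomes, key=lambda o: o.total_deaths)

# The trolley-style scenario from above: stay on course and hit five
# pedestrians, or swerve and kill the one passenger.
options = [
    Outcome("stay_on_course", passenger_deaths=0, pedestrian_deaths=5),
    Outcome("swerve_into_barrier", passenger_deaths=1, pedestrian_deaths=0),
]
print(utilitarian_choice(options).name)  # -> "swerve_into_barrier"
```

Notice how the rule has no concept of whose lives it is weighing; that indifference is exactly what the rest of this article calls into question.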

With fully autonomous vehicles only a decade or two away, technology that would allow cars to prevent all accidents will not be available in time. The fact is, when these cars do arrive, they will not be perfect (nothing is), and some accidents will be unavoidable. As a result, this ethical dilemma has real potential to affect our lives in the not-too-distant future. Since fully autonomous cars are on the horizon, polls have been conducted to gauge how Americans feel about the issue, and the findings only make it more complicated. Science Magazine, for example, reported on a series of survey questions posed by computer scientists. When asked whether autonomous cars should choose the option in which the fewest lives are lost, most Americans said yes, even if it meant that the car's own passengers were killed.

However, when asked whether they would buy a car that, in the case of an unavoidable accident, would sacrifice its passengers for the greater good, they said no. This reveals something about human nature: we believe the best decision is the one that minimizes loss of life, but when that decision results in our own death, we refuse to make it. Most people agree that sacrificing the passengers might be the morally correct choice in certain situations, yet they would not want to ride in such a car themselves. That complicates the issue further, since it would be very difficult to sell cars to people who knew those cars were programmed to kill them in the event of an unavoidable accident, even if doing so minimized the overall loss of life.

Engineers and computer scientists are placed in a difficult position: do they build cars that are overall more “ethical,” cars that would sacrifice their passengers to save the lives of others? Or do they build cars that people would be more comfortable buying, cars that would always prioritize the passengers inside? They must choose one ideology or the other, and it is an incredibly difficult decision. There are arguments for both sides. Some say that cars should always make the morally correct choice, even at the expense of lower sales, because saving human lives is far more important. Others point out that self-driving cars would significantly reduce the number of accidents on the roads, so fewer self-driving cars due to low sales would mean more accidents overall, even if every one of those cars were programmed to protect its passengers first. On this view, the benefit of having more self-driving cars on the road outweighs the harm of those cars valuing only their passengers.

Some propose that automakers offer consumers a choice of different algorithms, but then, in the case of an accident, is the consumer responsible, since they chose how the car was programmed? These ethical debates may take years to settle, if they ever are. But think about it: what choice would you want your autonomous car to make?
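As a thought experiment, a consumer-selectable crash policy might look something like the sketch below, which reuses the hypothetical Outcome objects from the earlier sketch. The setting names, and the very idea of a single ethics switch, are inventions for illustration; no manufacturer exposes anything like this.

```python
from enum import Enum

class EthicsSetting(Enum):
    """Hypothetical consumer-selectable crash policies."""
    PROTECT_PASSENGERS = "protect_passengers"
    MINIMIZE_CASUALTIES = "minimize_casualties"

def choose_maneuver(outcomes, setting: EthicsSetting):
    """Pick a maneuver according to the owner's chosen policy.

    Assumes the hypothetical Outcome objects sketched earlier, with
    passenger_deaths, pedestrian_deaths, and total_deaths fields.
    """
    if setting is EthicsSetting.PROTECT_PASSENGERS:
        # Protect occupants first; break ties by minimizing harm to others.
        return min(outcomes, key=lambda o: (o.passenger_deaths, o.pedestrian_deaths))
    # Utilitarian setting: minimize total predicted loss of life.
    return min(outcomes, key=lambda o: o.total_deaths)
```

Even in this toy form, the liability question is visible: the car's behavior in a crash traces directly back to a value the owner selected.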

During the transition from human-operated cars to autonomous cars, another moral issue may cause accidents of its own. Driverless cars being tested on the road today are programmed never to break the law, which means they won't exceed the speed limit. At first this seems like a good thing, but it becomes a problem if the car needs to exceed the speed limit to merge safely onto a highway. So is it wrong for autonomous cars to always follow the law, even if doing so can cause more accidents between human-controlled cars and driverless cars? Or should autonomous cars be programmed to break the law when necessary? Would the government allow that? This is yet another issue that has sparked legal and ethical debate.
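Framed as code, this dilemma reduces to a single policy flag. The sketch below is purely illustrative: the function and its allow_exceeding_limit parameter are invented, and real merging logic considers far more than a target speed.

```python
def target_merge_speed(traffic_speed: float, speed_limit: float,
                       allow_exceeding_limit: bool) -> float:
    """Pick a speed for merging onto a highway (illustrative only).

    Matching the surrounding traffic is the safest way to merge, but it
    may exceed the posted limit; a strictly law-abiding car must cap
    its speed even when that makes the merge riskier.
    """
    if allow_exceeding_limit:
        return traffic_speed
    return min(traffic_speed, speed_limit)

# Traffic is moving at 70 mph in a 65 mph zone:
print(target_merge_speed(70, 65, allow_exceeding_limit=False))  # 65: legal, riskier merge
print(target_merge_speed(70, 65, allow_exceeding_limit=True))   # 70: illegal, safer merge
```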

While self-driving cars would lead to fewer accidents in the long run, when an unavoidable accident does occur, how should the car behave? The debate is far from over, but it is only a matter of time before manufacturers have to make a decision.

