Ethical dilemma on four wheels: How to decide when your self-driving car should kill you

Jun 25, 2016

By Karen Kaplan

Self-driving cars have a lot of learning to do before they can replace the roughly 250 million vehicles on U.S. roads today. They need to know how to navigate when their pre-programmed maps are out of date. They need to know how to visualize the lane dividers on a street that’s covered with snow.

And, if the situation arises, they’ll need to know whether it’s better to mow down a group of pedestrians or spare their lives by steering off the road, killing all passengers onboard.

This isn’t a purely hypothetical question. Once self-driving cars are logging serious miles, they’re sure to find themselves in situations where an accident is unavoidable. At that point, they’ll have to know how to pick the lesser of two evils.

The answer could determine whether self-driving cars become a novelty item for the adventurous few or gain widespread acceptance among the general public.

In other words, the stakes are huge.


22 comments on “Ethical dilemma on four wheels: How to decide when your self-driving car should kill you”

  • I think this is more a transitional problem that, in the fullness of time, may be solved substantially by technology. Advice to pedestrians to always carry a working phone or 2G beacon may become the norm. Hi-viz clothing can carry its own beacon. In approved pedestrian areas of cities, the people detectors in lamp posts (already there to set the night-time brightness levels of the lamps) can feed pedestrian warnings to vehicles. Sudden unreasonable actions by pedestrians must be the liability of the pedestrian; otherwise pedestrians would have a means to deliberately harm drivers, and a child’s game of “chicken” could become a new thing.

    It’s just possible that differential insurance policies favouring the ultimate self-protection algorithms will carry the highest premiums, to cover being sued by injured pedestrians.



  • The ethical requirement is simply that a pedestrian in a pedestrian space is the most innocent party in the most untransgressable space. The driver is the owner and user of the dangerous weapon and must bear the liability of it.



  • The answer’s simple: provide a switch marked “mow down / swerve off road” so that the owner can preselect the preferred option. It may seem like a heartless, cold option, but is it worse than the present situation, in which the driver makes just such a decision in the heat of the moment?
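The switch idea could be sketched as a preselected policy mapping. This is a purely illustrative sketch; the `OwnerPolicy` and `choose_action` names are invented for the example and are not any real vehicle API:

```python
from enum import Enum

class OwnerPolicy(Enum):
    """The owner's preselected preference for an unavoidable collision."""
    PROTECT_OCCUPANTS = "protect occupants"
    PROTECT_PEDESTRIANS = "protect pedestrians"

def choose_action(policy: OwnerPolicy) -> str:
    """Map the owner's switch position to a manoeuvre."""
    if policy is OwnerPolicy.PROTECT_OCCUPANTS:
        return "stay on road"
    return "swerve off road"
```

Whether regulators would ever permit such a switch is, of course, the commenter’s open question.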



  • 5
    Cairsley says:

    Much of this dilemma depends on how fast these self-driving vehicles will go, how good their forward-scanning is and to what extent they are programmed to slow down wherever any kind of possible danger is detected ahead in order to be able to stop before doing any harm. Mowing people down or swerving off the road are not the only two options in most cases; the best option, as anyone who has learnt defensive driving knows, is to be able to stop safely before harm is done. Obviously, it will be very important to regulate the speeds at which these vehicles are to travel.
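Cairsley’s point about stopping before harm is done can be made concrete with the standard stopping-distance estimate: reaction distance plus braking distance v²/(2μg). A rough sketch, assuming round-number values for the friction coefficient and reaction time (these are illustrative assumptions, not measured figures):

```python
def stopping_distance_m(speed_kmh: float, mu: float = 0.7, reaction_s: float = 0.5) -> float:
    """Estimate total stopping distance in metres:
    reaction distance (v * t) plus braking distance (v^2 / (2 * mu * g))."""
    g = 9.81                 # gravitational acceleration, m/s^2
    v = speed_kmh / 3.6      # convert km/h to m/s
    return v * reaction_s + v * v / (2 * mu * g)

# Doubling the speed roughly quadruples the braking component,
# which is why regulating speed matters so much here.
print(round(stopping_distance_m(50), 1))
print(round(stopping_distance_m(100), 1))
```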



  • A starter list:

    Bad firmware (see smart phones as example)
    Hacking
    Defective/damaged sensors (have them at work all the time… and they’re very simple)
    E.M.P. / static electrical damage (latent) at manufacturing origin
    Entertainment systems, with apps, connected to core driving components… look at how apps affect cell phones and computers
    Third-Party / non-OEM modifications
    Wildlife (deer suck)
    The aforementioned problems involving accidents with other people… what logic will be programmed into the car for accident avoidance/minimization
    Property versus human damage minimization
    Black ice, white outs, hard rain, ice covered pot holes, hydroplaning…
    Loss of power (back-up)
    Loss of primary components (back-up)
    Intersections with damaged street lights, other drivers with non-functioning indicators



  • If the car is working properly, it will be the pedestrians at fault. In most circumstances, this will only trigger hard braking or swerving to avoid.
    In the unlikely event of an unavoidable collision, the robot pilot should default to protecting the passengers above all else.
    The reason is this. The “pedestrians” could be something else. It could be a deer, a shadow, a person attempting to crash the car, a digital phantom, who knows what. It is unknown, and thus of indeterminate value.
    But we know that the passengers are real, unless someone sent an empty car, but listed it as containing a person.
    By the way, my car will always be full of babies if the computer happens to ask.



  • With self-drive cars, roads would have to become pedestrian ‘no go’ zones, as railway tracks are. It is currently accepted by the public that people who wander onto railway tracks are likely to get killed. Railway tracks are fenced off and have a different status to roads.

    Roads populated with self drive cars would need to have a similar status, otherwise one lost child or a mischievous act would cause vast traffic jams – an unsolvable dilemma.

    Is this desirable? The disruption to the free flow of pedestrians and commerce caused by ‘no go’ roads would be unacceptable. Self drive cars will remain an amusing invention that has no practical application.

    rz



  • The basic question is really: can a machine be assigned the legal rights and responsibilities that people (adults) have? With current technology this does not seem possible, or advisable. The self-drive vehicle would have to be responsible for its own actions for such a system to work, both legally and commercially.

    rz



  • Dan, where are you? 😉

    A train leaves Scotland at 10am. Meanwhile, a philosopher goes for a walk and at the same time as the train leaving the station in Scotland, he comes across a train line, near London, where the track splits into two. There is a lever 50 yards from the splitting of the track. There are men working on both tracks up ahead with five on one track and one on the other. The philosopher has only five hours to decide if the next train is going to be a runaway train and what would be the morally right thing to do if he had to pull the lever in order to divert the runaway train onto one track or the other. He sits and thinks, and thinks, and thinks. Before he knows it, the time has slipped by and the runaway train trundles past him towards the five and he has no time to pull the lever and save the many. When asked later why he did not pull the lever, he replies, “I just didn’t have enough time to think it through you know, my mind was a total blur”.

    A scientific analysis later found that, if he had pulled the lever half way, he could have derailed the train which would have made such a noise and given all those at risk enough warning to get out of the way. Any really slow person would perish under natural selection. Alternatively, the train driver could have blown the whistle. The fat man who was passing when the train was nearing the points and lever was relieved that the philosopher did not have the ability to calculate if he was indeed fat enough to stop a speeding train as that would have involved many calculations and could not be done in an instant.

    Seriously, do we really need to involve philosophical delusions for future cars? We solve these real issues with speed limits, with constant checks to make sure vehicles are roadworthy, and by not allowing pedestrians on high-speed roads. Pile-ups on motorways will be a thing of the past as cars talk to each other (without road rage), and ‘twenty is plenty’ in built-up areas. The parents or guardian are responsible for any wandering child, and irrational pedestrians are responsible for themselves. I have a train track at the back of my house, and when there are people working on the track, there is a person further up the track with a very loud horn who warns the workers that a train is coming.

    We have made these things as safe as we can, even with the relatively slow reactions of humans compared to computers. With no erratic driving, and with every self-driving car obeying the speed limit, they should in fact be much safer than anything we produce today, and should avoid the (new) age-old complaint about traffic jams, as someone has already pointed out above (?).



  • @Olgun #10

    I agree completely. This is just luddite nonsense. Human-operated vehicles have a dreadful safety record, so anything that beats it is an improvement. There will doubtless be lawsuits. Jury to decide: could the average human driver have done better? Payouts will ensue, sometimes, and the vehicles will improve, to cut the legal bills. Eventually we’ll look back in horror at the bad old days of humans at the controls.

    As for protecting the occupants vs. protecting those outside: that’s what a good random number generator is for.



  • Those imagining a technological solution to the ethical problem, such as limiting speed, are doomed to disappointment. There will always be a tiny number of freak accidents, even though, as has been pointed out, very few accidents will come down to a choice of who should be the casualty. But given the almost certain order-of-magnitude improvement in overall transport, along with all the other benefits of a driverless-car infrastructure (vast reductions in cost, pollution, journey times, congestion and inconvenience, plus gains in productivity and versatility, etc.), it’s a no-brainer that it will supersede human-driven vehicles sometime within the next 50 years.

    The ethical problems do have to be resolved beforehand, as they will have to be explicitly part of the prescribed built-in logic of self driving. It’s a hard problem, but not one we should shy away from.

    My own view is that the proposal that the car should always select for the lowest estimate of casualties should be tempered for the same reason that we don’t seize a healthy person’s organs and use them to save the lives of 5 other people, even though the overall suffering/wellbeing analysis appears to suggest it’s a good idea. The reason it’s not a good idea is that on the “suffering” side, one must factor in the feeling of unease that millions of people would feel all the time, living in a society where they can be seized and killed for their organs at any time. Likewise, it won’t be nice to travel in a vehicle that you know is programmed to kill you in the event of a “choice”, even though the likelihood is vanishingly small.

    I think I really like OHooligan’s idea of randomness, at least unless the situation involves many more casualties one way than the other (or unless perhaps there has been malice, recklessness or negligence on someone’s part, given Phil’s very valid point about vandals otherwise playing chicken!)
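That tempered-randomness rule could be sketched as follows. This is a hypothetical illustration only; the `choose_impact` name and the 2:1 ratio threshold are invented for the example:

```python
import random

def choose_impact(casualties_a: int, casualties_b: int,
                  ratio: float = 2.0, rng=None) -> str:
    """Return which group ('A' or 'B') bears the impact.

    Direct the impact at the smaller group only when the imbalance is
    clear (at least `ratio` to one); otherwise flip a coin, so that no
    party is deterministically singled out by the programming.
    """
    rng = rng or random.Random()
    if casualties_a >= ratio * casualties_b:
        return "B"   # far fewer casualties in group B
    if casualties_b >= ratio * casualties_a:
        return "A"   # far fewer casualties in group A
    return rng.choice(["A", "B"])
```

The coin flip in the balanced case is what keeps the “programmed to kill you” unease at bay, while the ratio test still avoids the clearly worse outcome when the numbers are lopsided.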

    Incidentally, the question of machines being assigned legal rights and responsibilities is a red herring. As with all mechanical devices, intelligent or not, humans (programmers, operators, policy-makers or whoever) would be held liable if they are deemed to have failed in their duties; otherwise we use the experience to improve the system, so that similar accidents become less likely in future.



  • Olgun

    Is limiting speed an ethical problem?

    Yes. Only zero metres per second is absolutely safe. The balance struck between harm and utility is mediated by technology and ethics.



  • We will be disappointed that

    There will always be a tiny number of freak accidents.

    a wayward toddler will be slowly rolled over at 10 mph. (I apologise for the image.)

    Restricting road speeds increases the cost/capacity ratio of the infrastructure.

    Interestingly, these concerns with autonomous vehicles will drive…er…the need to look at the total costs of an investment in infrastructure. If, with a little extra investment, pedestrians and road-vehicle users can be kept apart and autonomous vehicles can be directly instructed about limits, we may well see minimum speed limits and minimum acceleration/deceleration rates applied to enhance capacity.



  • Thanks Phil. Infrastructure can be physical two-foot-high kerbs or virtual. I think you said once that you are involved in this kind of technology, so I bow to your knowledge. Would it be possible to have, say, three independent safety systems that cannot all be corrupted at once, intentionally or otherwise? Could algorithms be used to ‘book’ a departure time, preventing traffic jams, so you could leave half an hour later and still get there on time? Another model I need to put together in my head, so please excuse the ignorance.



  • Automatic vehicles can, for example, increase the carrying capacity of existing roads. Tailgating (when all the vehicles are automated) becomes safe. Traffic lights would not be needed, as all interconnected vehicles can negotiate for fuel-optimum behavior, though I suspect those with a Premium account will get priority when it comes down to it. And any non-compliant (e.g. human-operated) vehicle could be quickly and safely neutralised by being boxed in and halted by a swarm of automated ones.

    Expect the “no manual driving allowed” signs to go up on a motorway/freeway entrance near you, far sooner than you might imagine. Also at other choke-points, such as the inner city congestion zone of London.

    Sadly, it’ll mean farewell to another two endangered species: the cabbie, with his distinctive black taxi and the Knowledge, and the city bus driver.



  • 19
    bonnie2 says:

    @ OP – before they can replace the roughly 250 million cars on U.S. roads

    Will it become mandatory? Good luck with that, as vehicles have become an extension of our persona.

    endangered species – limiting speed

    “Low-ri-der, drives real slow, yeah”; no more cool car shows? Nooooooooo…

    the lesser of two evils

    Watch for Amish horse and buggy on the road.



  • Bonnie

    I do hope we can grow out of this ‘extension’ of our persona and act like adults (this comes from a Ford Capri-owning boy racer from the late ’70s… me)

    I quite like the world OHooligan paints. The picture Phil described of the toddler (which he apologised for painting) looks tame next to the reality of today’s speeding parents. Where I live, there are many rich 4×4 drivers who don’t seem to care about other people’s children once they have dropped their own little darlings off at school. The traffic in London barely reaches double figures anyway. I hope you were joking, but you can always visit a museum to see the death machines of today. It would be better to go slow and enjoy the world we might save from global warming, and since you won’t have to drive, taking your eyes off the road won’t be a problem.



  • While these are interesting questions – shouldn’t we also be asking how would a robot car get into such a situation in the first place? A human driver can doze off, or be distracted by their phone, but how does that apply to a driverless car?

    In the case of mechanical failure, the car’s not going to have much choice where it goes, just like a human driver. If something emerges from a concealed location within minimum braking distance, the car again won’t have a chance to prevent the accident.


