People Trust Robots To Lead Them Out Of Danger, Even When They Shouldn’t

Mar 8, 2016

Photo credit: Georgia Tech

By Mary Beth Griggs

Should you trust a robot in an emergency? That depends on the robot.

Researchers from the Georgia Tech Research Institute set out to see whether people would accept the authority of a robot in an emergency. For the most part, people did, giving the team results that might as well have been dreamt up by the writers of The Office.

The team asked more than 40 volunteers to individually follow a robot labeled “Emergency Guide Robot”. The robot (which was actually controlled remotely by the scientists) led each participant to a conference room, but in some cases it first led them into the wrong room, where it travelled in circles; in others, it stopped and participants were told it had broken. Once the volunteers were in the conference room, the researchers filled the hallway with smoke and set off a smoke alarm, placing the untrustworthy robot outside the door.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, an engineer who conducted the study. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

Instead of leading volunteers to the closest, clearly marked exit (the one through which they had entered the building), the robot led them to a different exit at the back of the building, and occasionally even to a darkened room blocked by furniture. The humans showed a stunning level of trust in a machine that clearly hadn’t earned it.


14 comments on “People Trust Robots To Lead Them Out Of Danger, Even When They Shouldn’t”

  • Maybe there is trust for a machine because it has no vested interest in taking them the wrong way? In truth, the article is really saying that the researchers were guiding the result; the error was not with the robot.
    Maybe the trust is justified? Had the researchers not been ‘interfering’, the robot may have been the best guide.




  • @ OP – the fact is, sometimes robots break

    I think the researchers are saying: future robots used for disaster aid should be a tool, not a crutch.




  • We already have much more advanced orbiting robots helping deal with emergencies.
    http://www.nasa.gov/feature/goddard/satellite-based-flood-monitoring-central-to-relief-agencies-disaster-response

    Satellite-Based Flood Monitoring Central to Relief Agencies’ Disaster Response

    In January 2015, the Shire River in Malawi and the Zambezi River in Mozambique were under tight scrutiny. Weeks of torrential rains led these and other rivers to burst their banks, displacing 390,000 people across the region. In southern Malawi, 220,000 acres of farmland were turned into a lake, cutting off roads and stranding thousands of people on patches of high ground. The flood was devastating for the country, but within 72 hours of it being declared an emergency the United Nations World Food Programme (WFP) was on the ground distributing food to residents.

    The quick response was supported by early warnings from the WFP’s Emergency Preparedness & Support Division in Rome where meteorologist Emily Niebuhr and her colleagues had been monitoring Malawi’s weather and the flood waters. And they were doing that with tools that were developed with data from NASA satellites.

    In developing countries with limited infrastructure, locating flood waters in order to assess the risk they pose to people – and help decision-makers prioritize aid efforts – is one of the most important jobs of weather forecasters at WFP. But it’s not always easy. Flood and rainfall information varies widely by country, Niebuhr said. Sometimes it’s not available at all.

    In the meantime, Niebuhr and her colleagues will continue to use the satellite-based tools they have to organize the best response possible for flood emergencies. During the first six weeks of the crisis in Malawi, the World Food Programme distributed food and assistance to 370,000 flood victims, reaching people by helicopter where roads were washed out.

    For emergency managers working on the ground in Malawi or other flooded communities, she said, “this data is essential for filling local gaps in observed flood data.”

    This work is based on expert interpretations of data by professionals, rather than random samples of volunteer attendees in conference rooms!




  • “The humans showed a stunning level of trust in a machine that clearly hadn’t earned it.”
    Alternatively, the humans simply thought following the robot was more fun than playing along with an obviously fake fire alarm.

    I’m also not sure that 40 is a large enough sample size to jump to any conclusions.




  • They probably saw the robot as a surrogate figure of authority. In a real emergency, otherwise independent-minded people tend to place implicit trust in such figures, and fear and panic can ‘infantilise’ the human mind, making it temporarily suspend rational judgement in favour of snap decisions made without proper thought as to the consequences; such behaviour is not completely unexpected. Factor in the ‘groupthink’ and herd mentality of people in crowds and, again, the results should not come as much of a surprise.

    At the very least it shows that human beings are not as intelligent as they like to think they are; one sign of this is their tendency to be continually surprised when it is pointed out to them (if stupid is the right and fair word to use under the circumstances described).




  • But robots are not people. They are not capable of being stupid or incompetent; they only do as their programming instructs them, and we know this. So it makes sense to have a very different level of expectation for their performance on a different task.




  • I guess people are just so used to modern technology being reliable, especially computers, that they didn’t second-guess it (even though they had just seen it behave unreliably). Or perhaps they thought the robot had information that they didn’t, e.g. that behind the clearly marked exit there was an inferno the robot had detected via wireless smoke detectors.
    There is also the authority principle. It was, after all, an “Emergency Guide Robot.”




  • Bit muddled.

    I don’t see the problem here. Robots are just tools, machines. Trust in machines depends entirely on whether they function according to expectations. If my phone is broken, I stop using it.

    So what if the ‘tour guide’ led them astray? Big deal. It’s a rubbish tour guide.




  • Georgia Tech presumably discussed the research on 9/3 at the Human-Robot Interaction (HRI) conference in New Zealand.

    Can’t find anything online about it; a shame, as the lecture might have fleshed out their ideas. But there is this.

    A personal robot > hmmm, clean the cat box!




  • Why is this finding surprising? These days a lot of people rely blindly on technology…

    Amusingly, one of my friends (who conducts research in this same area, called “Human Factors Psychology”) has himself more than once followed obviously wrong directions given to him by his car navigator…



