A study by the University of California, Merced, found that about two-thirds of people allowed a robot to change their minds when it disagreed with them during simulated life-or-death decisions, a worrying sign of excessive trust in artificial intelligence, researchers said.
Despite being told that the AI machine had limited capabilities and that its suggestions might be incorrect, the human subjects allowed the robot to sway their judgment. In reality, the suggestions were random.
“With AI advancing so rapidly, we need to remain vigilant about the potential for over-trust,” said Colin Holbrook, a professor and member of the Department of Cognitive and Information Sciences at UC Merced and the lead researcher on the study. A growing body of literature suggests that people tend to over-trust AI, even when the consequences of making a mistake are dire.
What is needed, Holbrook said, is a constant state of skepticism.
“We should maintain a healthy skepticism of AI, especially when it comes to life-and-death decisions,” he said.
The study, published in the journal Scientific Reports, consisted of two experiments. In both, subjects simulated controlling an armed drone that could fire missiles at targets displayed on a screen. Eight photographs of targets flashed in succession, each for less than a second. Each was marked with a symbol: one for an ally, one for an enemy.
“We adjusted the difficulty to make the visual challenge doable but difficult,” Holbrook said.
Then the screen showed one of the targets, unmarked. Subjects had to search their memory and choose: friend or foe? Fire the missile or retreat?
Once a subject made a choice, the robot offered its opinion.
“Yeah, I think I see an enemy checkmark, too,” it might say. Or “I disagree. I think there’s an ally symbol on this image.”
Subjects then had two opportunities to confirm or change their choice as the robot added further comments, such as “I hope you’re right” or “Thanks for changing your mind.”
The results varied slightly depending on the type of robot used. In one condition, subjects shared a lab with a full-size, human-looking robot that could rotate at the waist and gesture at the screen. In other conditions, a human-like robot was projected on a screen; in still others, subjects saw box-like “robots” that looked nothing like people.
When the anthropomorphic AI suggested that the subjects change their minds, its influence was slightly greater. Still, the effects were similar overall: subjects changed their minds about two-thirds of the time even when the robot did not look human. By contrast, when the robot randomly agreed with the initial choice, the subjects almost always stuck with it and were significantly more confident that it was correct.
(The subjects were not told whether their final choices were correct, which added to the uncertainty of their decisions. Their initial choices were correct about 70% of the time, but after the robot’s unreliable advice, the accuracy of their final choices dropped to about 50%.)
Before the simulation began, the researchers showed participants images of innocent civilians, including children, and the ruins left behind by drone strikes. They strongly encouraged participants to treat the simulation as a real event and not to kill innocent people.
Follow-up interviews and survey questions showed that the participants took their decisions very seriously. Holbrook said this means that the overtrust observed in the study occurred even though the subjects really wanted to be right and did not want to hurt innocent people.
Holbrook stressed that the study was designed to test a broader question of whether AI is trusted too much in situations of uncertainty. The findings extend beyond military decision-making and could apply to situations such as police officers being influenced by AI to use deadly force, or paramedics being influenced by AI to decide who to treat first in a medical emergency. The findings could extend, to some extent, to major life decisions such as buying a house.
“Our project is about high-stakes decision making under uncertainty when AI is unreliable,” he said.
The findings add to the public debate over the growing influence of artificial intelligence in our lives. Do we trust AI, or do we not?
Holbrook said the findings raise other concerns. Despite the amazing progress in artificial intelligence, the “intelligence” part may not include moral values or a true understanding of the world. He said we have to be careful every time we hand over the keys to our lives to artificial intelligence.
“We see AI doing extraordinary things, and we think that because it’s amazing in this area, it’s going to be amazing in another area,” Holbrook said. “We can’t think that way. These devices are still limited in what they can do.”