But as robots become more humanlike, that could change, Carpenter said: “I think this is an issue people need to keep an eye on and monitor.”
Clifford Nass, a Stanford University professor who studies the social and psychological aspects of human interaction with technology, says that when technology fulfills a human role, the brain naturally tends to think of that technology as human.
“In high-intensity contexts, such as military and otherwise, the social responses actually increase because your brain doesn’t have as much ability to say ‘It’s only a robot,’” he said. “The more intense and complex a situation, ironically, the more likely people are to develop emotional and social attachments.”
Nass compares the dilemma to the strong attachments formed by people who work with search-and-rescue dogs. “As a result of that, they often become reluctant to use the dogs in those situations,” he said. “The same thing can happen with robots.”
Down the line, Nass suggests that aggressive policies addressing attachment may need to be adopted for those who work closely with robots. “In the case of search-and-rescue dogs, you have to rotate the dogs you use so that you don’t become attached,” he said. “You could do the same thing with a robot.”
In the long run, as robots become more intelligent and autonomous, Carpenter envisions that some people will become concerned about the ethics of destroying a robot. But for now, she’s focused on more tangible fears.
“You don’t want to have a human hesitate to put a robot in a dangerous situation when you have to make critical, split-second decisions that affect human lives,” she said.