Squirrels, Birds Teach Robots To Deceive


Deception is something that people do all the time, and it plays an important role in military strategy. Now some researchers are trying to figure out how to get robots to do it, by looking at the behavior of squirrels and birds.

At Georgia Tech, a team led by Ronald Arkin, a professor at the School of Interactive Computing, studied the literature on squirrels hiding their caches of acorns. Squirrels will hide their food in a certain place, but when they see other squirrels trying to steal from them, they attempt to fool the thieves by running to a false location.


Arkin and his Ph.D. student Jaeeun Shim used that as a model for robot behavior. They programmed a robot to trick a "predator" machine by doing what a squirrel does: showing the adversary a false location for an important resource.


The team also looked at how other animals, in this case a species of bird called an Arabian babbler, drive off predators. Babblers will make an alarm call when they see a predator, and other babblers will join the bird and make more calls. They then mob the predator, all the while flapping their wings and making noise. The babblers never actually fight the animal they want to drive off; they just make enough noise and flap around enough that attacking a babbler seems like it isn't worth it.

Arkin and Ph.D. student Justin Davis found that the deception works when the group reaches a certain size, essentially when enough backup arrives to convince the adversary that it's best to back off. Davis modeled that behavior in software using a military scenario and found that it worked even when the group didn't have the firepower to confront the enemy directly.


The military is interested in this because a robot that can fool an opponent is a valuable tool. It could lead an enemy down a false trail or make itself look more dangerous than it actually is.

The work is an extension of similar research Arkin started in 2009, developing a kind of 'ethical governor' for robots. In 2010 he worked with Alan Wagner to develop deception algorithms using a kind of hide-and-seek game.

If robots can fool other robots, or people, that raises interesting ethical problems. When does fooling people become dangerous? How do you tell a robot when the right time to deceive is? We won't be seeing anything like the Terminator anytime soon, but we already have drones, and the military has explored the use of autonomous supply vehicles. Human Rights Watch has expressed concern over robots that can make targeting decisions; the ability to deceive would complicate that.

via Georgia Tech
