The two suspects in the Boston Marathon bombings were found by a relatively old-fashioned method: a witness told cops he made eye contact with a man at the scene who was wearing a baseball cap and who left behind a backpack. That identification allowed investigators to narrow their search when combing through the myriad videos and photographs of the crime scene.
But eventually, computers could help sort through thousands of images to find a suspect. Researchers are learning how to better program computers not only to recognize a face in a crowd but also to match a figure wearing dark glasses and a baseball cap in one image to a similarly dressed figure in a Facebook posting, for example. The hope is that one day, law enforcement officials could sort through images from surveillance cameras and from social media sites and find useful details in real time.
"The state of the art is progressing," said Takeo Kanade, a professor at the Robotics Institute of Carnegie Mellon University and one of the early pioneers in computer vision.
In the past, getting a computer to distinguish an object as simple as a chair from other objects was a tough problem. But then programmers realized that rather than trying to define "chair-ness," it was better to have computers compare thousands of pictures of chairs and recognize things that looked similar. The same principle applies to recognizing people.
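That compare-to-examples idea can be illustrated with a minimal nearest-neighbor classifier. This sketch is not from the article and is heavily simplified: the feature vectors and labels are made up, standing in for measurements (edge counts, proportions, and so on) that a real vision system would extract from images.

```python
import math

# Toy labeled examples: (feature_vector, label).
# The numbers are invented for illustration; in a real system each vector
# would be features extracted from an image of a chair, a mug, etc.
EXAMPLES = [
    ((4.0, 1.1, 0.9), "chair"),
    ((4.0, 1.0, 1.0), "chair"),
    ((3.9, 1.2, 0.8), "chair"),
    ((0.0, 3.0, 2.5), "mug"),
    ((0.1, 2.8, 2.6), "mug"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, k=3):
    """Label a new vector by majority vote among its k nearest examples."""
    nearest = sorted(EXAMPLES, key=lambda ex: distance(features, ex[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((3.8, 1.1, 1.0)))  # close to the chair examples -> "chair"
```

Rather than encoding rules for what makes something a chair, the program simply asks which labeled examples a new image most resembles; modern systems apply the same comparison idea at far larger scale.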
"When it comes to more standard recognition, like looking at a mug shot, computers are probably better than humans," Kanade said. He noted that computers can already pick out a face 85 percent of the time when searching among a million photos -- far better than any person could manage.
But take a computer out of its "comfort zone," and the situation changes. "When it comes to more difficult situations -- not a full frontal face shot, or a smaller picture -- it's harder," Kanade said.