A world filled with mobile devices capable of instantly recognizing anyone's face can seem both empowering and scary. It's empowering because ordinary consumers can expect to eventually wield such power in their handheld and wearable devices; it's scary because the government, corporations and strangers on the street could use the same devices.
The merest hint of such a future prompted eight members of the U.S. Congress to help pressure Google into blocking facial-recognition technology on its "Google Glass" smart glasses. But for years, the technology has helped law enforcement and casinos identify wanted -- or unwanted -- individuals captured on surveillance cameras. Facial-recognition capability has also begun appearing on the smartphones of police officers and even ordinary consumers.
"On the low end, laptops can provide a face unlock feature, similar to what is done with smartphones," said Joshua Klontz, a research scientist at Michigan State University. "On the opposite end, full server racks can be used to conduct searches against millions of face images, as I suspect will be the case when the FBI's NGI (Next Generation Identification) system becomes operational."
Even the best facial-recognition technology still has problems identifying a person based on crummy photos and videos -- and it's completely out of luck if a person's photo is not in a computer database or at least publicly visible on Facebook. But a new generation of algorithms, a rising number of online and offline databases, and swarms of cameras in consumer devices all aim to make facial recognition a growing part of daily life.
"It's not like CSI or NCIS or Minority Report, but it's getting there," said Paul Schuepp, president and CEO of Animetrics.
Now You See Me
Facial recognition can already identify people with 99 percent accuracy under the best circumstances, Schuepp said. "Best" circumstances for facial-recognition technology mean having an ideal "probe" image plus a database of similarly ideal images to compare it against: headshot photos like those seen in passports or mug shots.
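The probe-versus-gallery idea can be sketched in a few lines of code. Everything below is illustrative: real systems derive high-dimensional "embedding" vectors from trained face-recognition models, while this toy uses made-up four-number vectors, invented names, and an arbitrary match threshold.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors; 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(probe, gallery, threshold=0.8):
    """Return the gallery name whose vector best matches the probe,
    or None if no score clears the threshold (an "unknown" face)."""
    scores = {name: cosine_similarity(probe, vec) for name, vec in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

# Toy 4-dimensional "embeddings"; real systems use hundreds of dimensions
gallery = {
    "alice": [0.9, 0.1, 0.3, 0.7],
    "bob":   [0.2, 0.8, 0.5, 0.1],
}
probe = [0.88, 0.12, 0.28, 0.72]  # a near-ideal "headshot" close to alice's
print(best_match(probe, gallery))
```

A degraded probe image produces an embedding farther from every gallery entry, so no score clears the threshold and the search returns nothing -- which is exactly the failure mode the next paragraphs describe.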
But images taken by a surveillance camera or iPhone camera rarely come out so perfectly in the real world.
"Human faces are susceptible to all sorts of lighting problems; they're always changing angles," Schuepp told TechNewsDaily. "They change over the years — beards or no beards, glasses or no glasses. They're not like a fingerprint that's always the same."
Facial-recognition algorithms work the same way on photos and video, except that video provides many still frames to choose from when building a picture of a person's face. Yet video-camera footage of real-life events, such as the Boston Marathon bombing, can still present a big challenge for facial-recognition technology, a Michigan State University study shows.
In the study, Klontz and Anil Jain, a professor of computer science and engineering at Michigan State, tested three facial-recognition systems on law-enforcement video footage of the Boston Marathon bombing. Just one of the three systems came up with a solid identification for the suspect Dzhokhar Tsarnaev. The second suspect, Tamerlan Tsarnaev, could not be identified, in part, because he wore sunglasses.
Facing Problems a Different Way
Sunglasses or a profile shot with just one eye visible can frustrate even the best facial-recognition software, but a badly angled photo or video-camera image does not mean all is lost. Schuepp's company, Animetrics, has developed one possible solution while selling facial-recognition technology to U.S. law enforcement and the military for almost a decade. The company created proprietary software that turns 2-D images into simulated 3-D models of a person's face and allows users to change the person's pose.
The 3-D models won't precisely match the real person's face, but they can turn an unusable photo into a standard headshot image that any facial-recognition algorithm can analyze. That's because the algorithms work most efficiently on images that closely resemble the standard professional headshot pose.
"If you take the angulated face and submit it as is to standard facial-recognition systems, you get no results," Schuepp explained. "Our algorithms can boost identification success rates from 35 percent up to 85 percent."
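Animetrics' software is proprietary, but the underlying pose-correction idea can be sketched: estimate how far the head is turned, apply the inverse rotation to a 3-D model of facial landmarks, and re-project to get a frontal, headshot-style view. The toy below hard-codes a known 40-degree yaw and three made-up landmark points; a real system must first estimate both the pose and the 3-D face shape from the 2-D photo.

```python
import math

def yaw_matrix(degrees):
    # 3-D rotation about the vertical (y) axis, i.e. turning the head left/right
    t = math.radians(degrees)
    return [[math.cos(t), 0.0, math.sin(t)],
            [0.0, 1.0, 0.0],
            [-math.sin(t), 0.0, math.cos(t)]]

def rotate(points, m):
    # Multiply each 3-D point by the rotation matrix
    return [tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))
            for p in points]

def project(points):
    # Orthographic projection: drop depth to get 2-D image coordinates
    return [(x, y) for x, y, z in points]

# Toy 3-D landmark model (x, y, z): two eyes and a nose tip, facing forward
frontal = [(-1.0, 0.5, 0.0), (1.0, 0.5, 0.0), (0.0, 0.0, 0.8)]

# A photo taken at 40 degrees of yaw "sees" the rotated landmarks
observed = rotate(frontal, yaw_matrix(40))

# Pose correction: apply the inverse rotation, then project to a frontal 2-D view
corrected = rotate(observed, yaw_matrix(-40))
frontal_2d = project(corrected)
```

The corrected landmarks land back on the frontal model, so the projected 2-D image looks like the standard headshot pose that matching algorithms expect.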
The Animetrics algorithms already power law-enforcement tools -- such as the Mobile Offender Recognition and Information System (MORIS), developed by BI2 Technologies -- that can run on police officers' iPhones. Animetrics also sells ForensicaGPS software, the FaceR MobileID system for iPhones and Android phones, and even an ID-Ready online service that allows anyone to upload a 2-D image and run the 3-D conversion.
Making Facial Recognition Mobile
Facial-recognition software can do its work almost instantly in some cases -- the Animetrics algorithm takes just one second for the 2-D-to-3-D image conversion and less than a second to search for a particular face among 1 million faces on a higher-end laptop. But the job becomes trickier on mobile devices, which have limited computing power and must upload images to a central computing hub that runs the facial-recognition software.
Most mobile facial-recognition systems offload the computing burden by using the smartphone or smart glasses only to take the picture and upload it. That means the central computers running the software need only enough computing power to handle the hundreds or thousands of images uploaded by police officers or ordinary smartphone users.
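That division of labor can be sketched as a thin client and a heavy server. Everything here is a stand-in: the "server" is a local object rather than a network service, and an exact-hash lookup fakes the face matching that a real hub would do with embeddings, just to show that the device itself performs no recognition work.

```python
import hashlib

# --- Device side: capture and upload only, no heavy computation ---
def device_capture_and_upload(image_bytes, server):
    # The phone or smart glasses simply ship the raw photo to the hub
    return server.identify(image_bytes)

# --- Server side: all recognition work happens on the central hub ---
class RecognitionServer:
    def __init__(self, database):
        # Maps a face "signature" to a name; a SHA-256 hash stands in
        # for the real face embedding a production system would compute
        self.database = database

    def identify(self, image_bytes):
        signature = hashlib.sha256(image_bytes).hexdigest()
        return self.database.get(signature, "unknown")

photo = b"pretend-jpeg-of-a-suspect"
server = RecognitionServer({hashlib.sha256(photo).hexdigest(): "suspect-042"})
print(device_capture_and_upload(photo, server))
```

Because the client only captures and uploads, the same cheap handset works whether the hub holds a thousand faces or, as with systems like the FBI's NGI, many millions.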
The facial-recognition algorithms can be pared down and streamlined to run on the limited computing power of an iPhone or Android phone. But that means the software's capabilities will be more limited, Schuepp said.