Facial Recognition 101: Your Face Is Your New Fingerprint

Facial recognition is a blossoming field of technology that is at once exciting and problematic. If you’ve ever unlocked your iPhone by looking at it, or asked Facebook or Google to go through an unsorted album and show you pictures of your kids, you’ve seen facial recognition in action.

Whether you want it to or not, facial recognition (sometimes called simply “face recognition”) is poised to play an ever-growing role in your life. Your face could be scanned at airports or concerts with or without your knowledge. You could be targeted by personalized ads thanks to cameras at shopping malls. Facial recognition has plenty of upside. The tech could help smart home gadgets get smarter, sending you notifications based on who it sees and offering more convenient access to friends and family. 

This is part of a CNET special report exploring the benefits and pitfalls of facial recognition.

James Martin/CNET

But at the very least, facial recognition raises questions of privacy. Experts have concerns ranging from the overreach of law enforcement, to systems with hidden racial biases, to hackers gaining access to your secure information.

Over the next few weeks, CNET will be diving into facial recognition with in-depth pieces on a wide variety of topics, including the science that allows it to work and the implications, both positive and negative, for many of its applications. To get you up to speed, here’s a brief overview including what facial recognition is, how it works, where you’ll find it in use today, as well as a few of the implications of this rapidly expanding corner of technology.

What is facial recognition?

Facial recognition is a form of biometric authentication, which uses body measurements to verify your identity. It identifies people by measuring the unique shape and structure of their faces. Different systems use different techniques, but at its core, facial recognition relies on the same principles as other biometric authentication methods, such as fingerprint scanners and voice recognition.

How does facial recognition work?

All facial recognition systems capture either a two- or three-dimensional image of a subject’s face, and then compare key information from that image to a database of known images. For law enforcement, that database could be collected from mugshots. For smart home cameras, the data likely comes from pictures of people you’ve identified as relatives or friends via the accompanying app. 
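
That matching step is essentially a nearest-neighbor search: find the closest known face and accept it only if it’s close enough. Here’s a toy Python sketch of the idea, assuming each known person is stored as a small numerical code; the names, vectors and threshold below are invented for illustration, not taken from any real system.

```python
import numpy as np

# Hypothetical enrolled face codes: one small numerical vector per known person.
ENROLLED = {
    "alice": np.array([0.31, 0.42, 0.58, 0.27]),
    "bob":   np.array([0.55, 0.38, 0.61, 0.44]),
}
MATCH_THRESHOLD = 0.6  # illustrative cutoff; real systems tune this carefully

def identify(face_code):
    """Return the closest enrolled name, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, enrolled_code in ENROLLED.items():
        dist = np.linalg.norm(face_code - enrolled_code)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < MATCH_THRESHOLD else None

print(identify(np.array([0.30, 0.41, 0.59, 0.28])))  # close to "alice"
print(identify(np.array([0.95, 0.05, 0.10, 0.90])))  # no match -> None
```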

Woodrow “Woody” Bledsoe first developed facial recognition software at a firm called Panoramic Research back in the 1960s using two-dimensional images, with funding for the research coming from an unnamed intelligence agency.

Even now, most facial recognition systems rely on 2D images, either because the camera doesn’t have the ability to capture depth information — such as the length of your nose or the depth of your eye socket — or because the reference database consists of 2D images such as mugshots or passport photos.

2D facial recognition primarily uses landmarks such as the nose, mouth and eyes to identify a face, gauging both the width and shape of the features and the distance between the various features of the face. Facial recognition software converts those measurements into a numerical code, called a faceprint, which is used to find matches.
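
As a rough illustration of how landmark measurements become a faceprint, the Python sketch below takes a handful of made-up 2D landmark coordinates, measures every pairwise distance and normalizes the result so the code doesn’t depend on how large the face appears in the photo. Real systems use many more landmarks and far more sophisticated math.

```python
import numpy as np
from itertools import combinations

# Invented 2D landmark positions (in pixels) for a single face.
landmarks = {
    "left_eye":  np.array([120.0, 95.0]),
    "right_eye": np.array([180.0, 96.0]),
    "nose_tip":  np.array([150.0, 140.0]),
    "mouth":     np.array([151.0, 175.0]),
}

def faceprint_2d(points):
    """Turn landmark positions into a scale-normalized vector of distances."""
    names = sorted(points)
    dists = np.array([np.linalg.norm(points[a] - points[b])
                      for a, b in combinations(names, 2)])
    return dists / dists.max()  # the "numerical code" described above

print(faceprint_2d(landmarks))
```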

This geometric system can struggle with different angles and lighting. A straight-on shot of a face will show a different distance from nose to eyes, for instance, than a shot of a face turned to the side. The problem can be somewhat mitigated by mapping the 2D image onto a 3D model and undoing the rotation.
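
One way to picture that correction is with a generic 3D landmark model: if the system can estimate how far the head is turned, it can apply the opposite rotation and recover frontal measurements. The sketch below simulates this with a known yaw angle and made-up 3D coordinates; estimating the rotation from a real photo is the hard part and is glossed over here.

```python
import numpy as np

# Generic 3D landmarks (x, y, z) in arbitrary units: eyes, nose tip, mouth.
model_3d = np.array([
    [-30.0,  0.0,  0.0],  # left eye
    [ 30.0,  0.0,  0.0],  # right eye
    [  0.0, 35.0, 20.0],  # nose tip sticks out toward the camera
    [  0.0, 60.0,  5.0],  # mouth
])

def yaw_matrix(degrees):
    """Rotation about the vertical axis (head turning left or right)."""
    t = np.radians(degrees)
    return np.array([[ np.cos(t), 0.0, np.sin(t)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

R = yaw_matrix(30)            # head turned 30 degrees to the side
observed = model_3d @ R.T     # landmark positions the camera effectively sees
frontalized = observed @ R    # undo the rotation (R is orthonormal, so R^-1 = R^T)

# Once frontalized, the landmark distances match the straight-on view again.
print(np.allclose(frontalized, model_3d))  # True
```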

Apple uses a 3D facial recognition system called Face ID. It’s good, but not perfect. 

Morgan Little/CNET

Adding a third dimension

3D facial recognition software isn’t as easily fooled by angles and lighting, and it doesn’t rely on average head size to guess at a faceprint. With cameras that sense depth, the faceprint can include the contours and curvature of the face, as well as the depth of the eye sockets and the distances to points like the tip of your nose.

Most cameras gauge this depth by projecting invisible wavelengths of light onto a face and using sensors to measure how far various points of that light pattern are from the camera. Even though these 3D sensors can capture much more detail than a 2D version, the basis of the technology remains the same: turning the various shapes, distances and depths of a face into a numerical code and matching that code to a database.
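
Conceptually, the only change from the 2D case is that each landmark now carries a depth value, so distances are measured in three dimensions. The toy sketch below mirrors the earlier 2D faceprint idea with invented coordinates in camera space (millimeters); it is not how any particular product encodes faces.

```python
import numpy as np
from itertools import combinations

# Invented 3D landmark positions (x, y, z) in millimeters, camera space.
landmarks_3d = {
    "left_eye":  np.array([-30.0,  0.0, 412.0]),
    "right_eye": np.array([ 30.0,  0.0, 413.0]),
    "nose_tip":  np.array([  0.0, 35.0, 395.0]),  # closer to the camera
    "mouth":     np.array([  0.0, 60.0, 408.0]),
}

def faceprint_3d(points):
    """Pairwise 3D distances, normalized, as a simple depth-aware face code."""
    names = sorted(points)
    dists = np.array([np.linalg.norm(points[a] - points[b])
                      for a, b in combinations(names, 2)])
    return dists / dists.max()

print(faceprint_3d(landmarks_3d))
```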

If that database consists of 2D images, software needs to convert the 3D faceprint back to a 2D faceprint to get a match.
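
A crude way to do that conversion is to project the 3D landmarks back onto a flat image plane and then compute the same kind of 2D faceprint the database stores. The simple pinhole-projection sketch below is an assumption for illustration; the focal length and coordinates are invented.

```python
import numpy as np
from itertools import combinations

FOCAL_LENGTH = 500.0  # assumed pinhole focal length, in pixel units

def project_to_2d(point_3d):
    """Pinhole projection: divide x and y by depth, scale by focal length."""
    x, y, z = point_3d
    return np.array([FOCAL_LENGTH * x / z, FOCAL_LENGTH * y / z])

def faceprint_2d(points_2d):
    dists = np.array([np.linalg.norm(a - b)
                      for a, b in combinations(points_2d, 2)])
    return dists / dists.max()

# Invented 3D landmarks in camera space (millimeters): eyes, nose tip, mouth.
landmarks_3d = [np.array([-30.0,  0.0, 400.0]),
                np.array([ 30.0,  0.0, 400.0]),
                np.array([  0.0, 35.0, 385.0]),
                np.array([  0.0, 60.0, 398.0])]

projected = [project_to_2d(p) for p in landmarks_3d]
print(faceprint_2d(projected))  # now comparable with prints from flat photos
```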

Apple’s Face ID uses 30,000 infrared dots that map the contours of your face. The iPhone stores the relative locations of those dots and checks them against your face the next time you try to unlock your phone.

Even these more advanced systems can be defeated by something as simple as a change in facial expression, or by glasses or scarves that obscure parts of your face. Apple’s Face ID can struggle to match your tired, squinting, just-woke-up face to your made-up, caffeinated, ready-for-the-day face.

Reading your pores

A more recent development, called skin texture analysis, could help future applications overcome all of these challenges. Developed by Identix, a tech company focused on secure means of identification, skin texture analysis differentiates itself by functioning at a much smaller scale. Instead of measuring the distance between your nose and your eyes, it measures the distance between your pores. It then converts those numbers into a mathematical code. This code is called a skinprint.
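
The recipe is the same as for a faceprint, just at pore scale. Here’s a compact sketch with invented pore coordinates inside a small skin patch; it is only meant to show the shape of the idea, not Identix’s actual method.

```python
import numpy as np
from itertools import combinations

# Invented pore positions (in millimeters) within a small patch of skin.
pores = np.array([[3.1, 4.0], [5.6, 2.2], [7.8, 6.4], [2.4, 8.1], [6.9, 9.0]])

pairwise = np.array([np.linalg.norm(a - b) for a, b in combinations(pores, 2)])
skinprint = pairwise / pairwise.max()  # the "mathematical code" described above
print(skinprint)
```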

This method could theoretically be precise enough to tell identical twins apart. Identix is currently working to integrate skin texture analysis into facial recognition systems alongside a more conventional 3D face map. The company claims the tech increases accuracy by 25 percent.

Your face can be turned into a code. 

James Martin/CNET

Where is facial recognition being used?

While Bledsoe laid the groundwork for the tech, modern facial recognition began in earnest in the 1980s and ’90s thanks to mathematicians at MIT. Since then, facial recognition has been integrated into all manner of commercial and institutional applications with varying degrees of success.

The Chinese government uses facial recognition for large-scale surveillance via public CCTV cameras, both to catch criminals and to monitor the behavior of individuals, with the intent of turning the data into a score. Seemingly harmless offenses like buying too many video games or jaywalking can lower your score. China uses that score for a sort of “social credit” system that determines whether an individual should be allowed to get a loan, buy a house or do much simpler things like board a plane or access the internet.

The London Metropolitan Police also use it as a tool when narrowing their search for criminals, though their system supposedly isn’t very accurate — with incorrect matches reported in a whopping 98 percent of cases. In the US, police departments in Oregon and Florida are teaming up with Amazon to install facial recognition into government-owned cameras.

Facial recognition is undergoing trials at airports to help move people through security more quickly. The Secret Service is testing facial recognition systems around the White House. Taylor Swift even used it to help identify stalkers at one of her concerts. Facial recognition famously led to the arrest of the Capital Gazette shooter in 2018 by matching a picture of the suspect to an image repository of mugshots and pictures from driver’s licenses. The upcoming 2020 Olympics in Tokyo will be the first to use facial recognition to help improve security.

Facial recognition could have large implications for retail outlets and marketers as well, beyond simply watching for thieves. At CES 2019, consumer goods giant Procter & Gamble showed a concept store where cameras could recognize your face and make personalized shopping recommendations.

Bringing facial recognition home

Aside from large-scale installations, facial recognition has several uses in consumer products. Beyond iPhones, some phones with Google’s Android operating system like the Google Pixel 2 and the Samsung Galaxy S9 are capable of facial recognition, but the technology on Android isn’t yet secure enough to verify mobile payments. The next version of Android is expected to get a more secure facial recognition system closer to Apple’s Face ID, although Samsung did not incorporate any facial recognition into its newest phone, the Galaxy S10, as many industry watchers had expected.

Facebook has used facial recognition for years to suggest tags for pictures. Other photo applications, such as Google Photos, are getting better at doing the same.

In the smart home, after starting as a niche feature in connected cams such as the Netatmo Welcome, facial recognition is now built into several popular models, including the Nest Hello video doorbell. We saw a bunch of new gadgets with the tech on display at CES 2019.

Connected cams compare faces with others they’ve seen before so you can customize notifications based on who the camera sees. All the models we’ve tested take a while to learn faces, as they need to be able to recognize the members of your household at various angles and in various outfits. Once the cameras learn, you can use facial recognition to make your connected security system that much smarter by making your notifications more relevant to what you actually want to know.
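
As a rough sketch of how that notification logic might look, the Python below treats a recognition result as a name plus a confidence score and decides whether to alert you. The names, threshold and messages are all invented for illustration and don’t reflect any specific camera’s software.

```python
HOUSEHOLD = {"mom", "dad", "kid"}  # faces the camera has learned as family
CONFIDENCE_THRESHOLD = 0.8         # below this, treat the match as unreliable

def notification_for(name, confidence):
    """Return a notification string, or None if no alert is needed."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "Alert: unrecognized person at the front door."
    if name in HOUSEHOLD:
        return None  # familiar face: stay quiet
    return f"Heads up: {name} is at the front door."

print(notification_for("kid", 0.95))         # None: household member
print(notification_for("dog_walker", 0.90))  # known but not household
print(notification_for("unknown", 0.30))     # unreliable match -> alert
```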

Beyond the security uses in the home, even robots like Lovot and Sony’s Aibo robot dog can recognize faces. Aibo and others learn faces not to track who comes and goes, but to adapt to the specific preferences of different people over time.

What are the implications?

Unlike other forms of biometric authentication, facial recognition lets cameras gather information about your face with or without your knowledge or consent. If you’re a privacy-minded person, you could be exposing your data in a public place without knowing it. 

Because the technology is so new, there are no federal laws in the US limiting what companies can do with images of your face after they capture them. A bipartisan bill was recently introduced in the Senate to rectify the lack of regulation. 

The American Civil Liberties Union delivered a petition to Amazon last year asking it to stop giving its facial recognition technology to law enforcement agencies and the government, calling the prospect “a user manual for authoritarian surveillance.”

According to a report by Buzzfeed, the US Customs and Border Protection agency plans to implement facial recognition to verify the identity of passengers on international flights at airports across the country. The Electronic Privacy Information Center shared documents with Buzzfeed suggesting that CBP skipped gathering public feedback before starting to implement these systems, that the systems have a questionable accuracy rate, and that few privacy regulations govern what airlines can do with this facial data after they collect it.

NBC News reported that the databases of pictures used to improve facial recognition often come from social media sites without the consent of the subject or photographer. Companies like IBM have the stated goal of using these images to try to improve the accuracy of facial recognition, particularly among people of color. Theoretically, by ingesting the data from a large catalog of faces, the system can fine-tune its algorithms to account for a larger variety of facial structures.

The Electronic Frontier Foundation notes that current facial recognition systems tend to produce a disproportionately high number of false positives when identifying minorities. NBC’s story also details how it can be tedious to impossible for private citizens to opt out of using their pictures in these databases.

The Ring Doorbell would have watched for suspicious individuals. 

Chris Monroe/CNET

Facebook faces a class action lawsuit over its own facial recognition technology, called DeepFace, which identified people in photos without their consent. Smart home company Ring, an Amazon subsidiary, also came under fire last year for filing patents based on facial technology that could have violated civil rights.

Ring’s video doorbells would have monitored neighborhoods for known sex offenders and those on “most wanted” lists and could then have automatically notified law enforcement. The idea was criticized as likely to target those unfairly deemed a threat and potentially even political activists.

The science behind facial recognition is certainly exciting, and the tech could lead to a safer and more personal smart home, but facial recognition could easily result in a loss of privacy, unjust profiling and violations of personal rights. While the impact of facial recognition is still being determined and debated, it’s important to recognize that facial recognition is no longer some distant concept reserved for science fiction. For better or worse, facial recognition is here now and spreading quickly. 

Check back throughout the month as CNET dives deeper into the implications of this developing technology.

Published March 18 at 5:00 a.m. PT