Researchers Show Vulnerabilities in Facial Recognition

The algorithms that check for a user’s ‘liveness’ have blind spots that can lead to vulnerabilities.

BLACK HAT USA 2019 – Las Vegas – The multifactor authentication that some have touted as the future of secure logins is itself vulnerable to hacks as complex as injected video streams and as simple as tape on a pair of eyeglasses. That was the message delivered by a researcher at Black Hat USA earlier today.

Researchers Yu Chen, Bin Ma, and Zhuo (HC) Ma of Tencent Security’s Xuanwu Lab were scheduled to speak here at Black Hat USA, but visa denials left HC Ma alone on the stage. He said his colleagues had begun the research to find out how biometric authentication was being implemented and, specifically, how the routines designed to separate a living human from a photo or other fake were put into practice.

“Previous studies focused on how to generate fake audio or video, but bypassing ‘liveness detection’ is necessary for a real attack,” Ma said, citing some of the techniques researchers and fiction authors have used to do so.

Most liveness detection is based on a variety of factors, from body temperature (for fingerprint scans) and playback reverberation (for voice recognition) to focus blur and frequency-response distortion (for facial recognition).

During his presentation, Ma focused on facial recognition as the most complex of the techniques. In the first demonstration, he showed a method the team developed for injecting a video stream into an authentication device between the optical sensor (camera) and the processor. This technique, he said, had to contend with issues like latency (too much will trigger the system’s defenses), information loss, and keeping the injection sufficiently “transparent” to go undetected.
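
The mechanics Ma described suggest why latency dominates: every injected frame still has to arrive on roughly the camera’s schedule. Below is a rough, hypothetical Python sketch of that kind of frame-relay loop; the device index, “victim.mp4”, the 30 ms budget, and the inject_frame helper are invented for illustration and are not the team’s actual interposer.

```python
# Hypothetical sketch only, not the researchers' tooling. It relays camera
# frames, swaps in prerecorded ones, and watches the extra delay each swap adds.
import time

import cv2  # assumes OpenCV is available

LATENCY_BUDGET_MS = 30                        # invented per-frame budget
camera = cv2.VideoCapture(0)                  # the genuine optical sensor
prerecorded = cv2.VideoCapture("victim.mp4")  # hypothetical captured video of the user

def inject_frame(live_frame):
    """Swap the live frame for a prerecorded one, falling back to the live feed
    so the substitution stays 'transparent' when the fake stream runs out."""
    ok, fake = prerecorded.read()
    return fake if ok else live_frame

while camera.isOpened():
    start = time.perf_counter()
    ok, frame = camera.read()
    if not ok:
        break
    frame = inject_frame(frame)
    # ... forward `frame` on toward the authentication processor here ...
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:
        # Added delay like this is exactly the kind of signal defenses can flag.
        print(f"frame took {elapsed_ms:.1f} ms, over budget")
```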

While this injection is certainly possible, Ma said it is not a practical attack method because it involves so many pieces, from capturing video of the user to taking physical possession of the authentication device.

This realization led to further research; Ma said a breakthrough occurred when the team looked at the specifics of live facial recognition algorithms.

Part of the test for facial liveness involves checking for a 3D image — essentially, making sure the face is on a rounded skull. The researchers found that when glasses are worn, the area within the lens of the glasses is evaluated as a 2D image. And on that flat plane lay the vulnerability.

Eyes, it turns out, are treated as little more than a white dot on a dark patch. The dark patch stands in for the iris and pupil, and the white dot represents the highlight indicating the eyes are looking at the camera. Put a piece of black tape on the center of each eyeglass lens, add a small piece of white tape on top of the black, and the facial recognition system sees attentive human eyes.
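
To picture the weakness, here is a minimal Python sketch of the kind of naive “white dot on a dark patch” test the tape trick exploits. It illustrates the idea only; it is not Tencent’s or any vendor’s actual liveness algorithm, and the function name and thresholds are made up.

```python
# Hypothetical illustration: does a cropped eye/lens region contain a dark
# patch with a bright highlight sitting on it? That is all the tape reproduces.
import numpy as np

def looks_like_attentive_eye(eye_region: np.ndarray,
                             dark_thresh: int = 60,
                             bright_thresh: int = 200) -> bool:
    """eye_region: 2D grayscale array (0-255) cropped around one eye or lens."""
    dark_mask = eye_region < dark_thresh      # candidate iris/pupil area
    bright_mask = eye_region > bright_thresh  # candidate corneal highlight
    if dark_mask.mean() < 0.05 or not bright_mask.any():
        return False
    # Require the bright highlight to sit on or next to the dark patch,
    # which is exactly what a white dot of tape on black tape reproduces.
    ys, xs = np.nonzero(bright_mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    neighborhood = dark_mask[max(cy - 2, 0):cy + 3, max(cx - 2, 0):cx + 3]
    return bool(neighborhood.any())
```

Run on a grayscale crop of the lens area, the black tape supplies the dark patch and the white square supplies the highlight, so such a check returns True just as it would for a real eye.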

In a humorous demo, Ma showed someone gently sliding eyeglasses onto a supposedly sleeping victim, then picking up a phone and holding it up to the victim to unlock the device. Realistic practice would take more effort, but the point was made — the liveness test is vulnerable.

This vulnerability exists, Ma said, because system designers must walk a tightrope between tight security and user friendliness. He suggested that leaning further toward the security side of that balance may be necessary to prevent criminals from finding more easily implemented hacks to unlock devices secured by multifactor authentication.
