
8/11/2020 07:00 PM

Researchers Trick Facial-Recognition Systems

The goal was to see whether computer-generated images that look like one person would be classified as another person.

Neural networks powered by recent advances in artificial intelligence and machine learning have become increasingly adept at generating photo-realistic images of human faces entirely from scratch.

The systems typically use a dataset of millions of images of real people to "learn" over time how to autonomously generate original images of their own.

At the Black Hat USA 2020 virtual event last week, researchers from McAfee showed how they were able to use such technologies to trick a facial-recognition system into misclassifying one individual as an entirely different person. As an example, the researchers showed how an individual on a no-fly list could trick an airport facial-recognition system used for passport verification into identifying him as another person.

"The basic goal here was to determine if we could create a fake image, using machine learning models, which looked like one person to the human eye, but simultaneously classified as another person to a facial recognition system," says Steve Povolny, head of advanced threat research at McAfee.

To do that, the researchers built a machine-learning model and fed it training data: a set of 1,500 photos of two separate individuals. The images were captured from live video and sought to accurately represent valid passport photos of the two people.

The model then continuously created and tested fake images of the two individuals by blending the facial features of both subjects. Over hundreds of training loops, the model eventually reached a point where it was generating images that looked like a valid passport photo of one individual even as the facial-recognition system identified the photo as the other person.
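
McAfee has not published its code, but the stopping condition at the heart of that loop can be sketched in a few lines of Python. Everything below is a toy stand-in: generate_blend and embed are hypothetical placeholders for the CycleGAN generator and the FaceNet-style recognizer, which the article does not detail.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for the two subjects' photos (8 x 64 "images").
    FACE_A = rng.normal(size=(8, 64))
    FACE_B = rng.normal(size=(8, 64))

    def generate_blend(step: int) -> np.ndarray:
        """Hypothetical stand-in for the CycleGAN generator: returns a
        candidate face that gradually shifts from subject A toward B."""
        alpha = min(step / 500, 1.0)  # blend factor grows over training
        return (1 - alpha) * FACE_A + alpha * FACE_B

    def embed(image: np.ndarray) -> np.ndarray:
        """Hypothetical stand-in for a FaceNet-style embedding network."""
        return image.mean(axis=0)  # toy "embedding" for illustration only

    for step in range(1000):
        candidate = generate_blend(step)
        # The attack succeeds when the image still sits close to A
        # visually, but the recognizer's embedding is nearer to B than A.
        visual_gap = np.linalg.norm(candidate - FACE_A)
        d_a = np.linalg.norm(embed(candidate) - embed(FACE_A))
        d_b = np.linalg.norm(embed(candidate) - embed(FACE_B))
        if d_b < d_a and visual_gap < np.linalg.norm(FACE_B - FACE_A):
            print(f"misclassification at step {step}")
            break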

Povolny says the passport-verification system attack scenario — though not the primary focus of the research — is theoretically possible to carry out. Because digital passport photos are now accepted, an attacker can produce a fake image of an accomplice, submit a passport application, and have the image saved in the passport database. So if a live photo of the attacker later gets taken at an airport — at an automated passport-verification kiosk, for instance — the image would be identified as that of the accomplice.

"This does not require the attacker to have any access at all to the passport system; simply that the passport-system database contains the photo of the accomplice submitted when they apply for the passport," he says.  

The passport system simply determines whether two faces match or do not match: it verifies a live photo of a person against a saved photo in the back end. So such an attack is entirely feasible, though it requires some effort to pull off, Povolny says.
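
In code, that 1:1 check reduces to a distance comparison between two face embeddings. The sketch below shows how FaceNet-style verification generally works; the threshold value is illustrative, not taken from any real passport system.

    import numpy as np

    def verify(live_embedding: np.ndarray,
               stored_embedding: np.ndarray,
               threshold: float = 1.1) -> bool:
        """Return True if the live face matches the enrolled photo.

        FaceNet-style systems compare the L2 distance between two
        embeddings; the 1.1 threshold here is an illustrative value.
        """
        return float(np.linalg.norm(live_embedding - stored_embedding)) < threshold

    # The attack works because the adversarial photo was enrolled, so
    # verify(embed(attacker_live), embed(accomplice_passport)) -> True.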

"It is less likely that a physical passport photo that was mailed in, scanned, and uploaded to this database, would work for the attack," he notes.

Generative Adversarial Networks

McAfee's research involved the use of a Generative Adversarial Network (GAN) known as CycleGAN. GANs are neural networks capable of independently generating data that closely resembles the data fed into them. For example, a GAN can use a set of real images of human faces or horses to autonomously generate completely synthetic — but very real-looking — images of faces or horses. GANs pair a generative network, which produces the synthetic data, with a discriminative network that continuously assesses the quality of the generated content until it reaches an acceptable level.
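
A rough sketch of that generator/discriminator interplay, written here in PyTorch (the article does not say which framework McAfee used), shows how the two networks are trained against each other:

    import torch
    import torch.nn as nn

    # Toy generator and discriminator; real face GANs are far deeper.
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
    D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    real_images = torch.randn(128, 32)  # stand-in for real face data

    for _ in range(100):
        batch = real_images[torch.randint(0, 128, (32,))]
        fake = G(torch.randn(32, 16))

        # Discriminator step: label real data 1, generated data 0.
        opt_d.zero_grad()
        d_loss = (loss_fn(D(batch), torch.ones(32, 1))
                  + loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make the discriminator call fakes real.
        opt_g.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        g_loss.backward()
        opt_g.step()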

CycleGAN itself, according to McAfee, is a GAN for image-to-image translation: translating an image of zebras into an image of horses, for example. Notably, the GAN keys on significant features of an image during translation, such as eye placement, head shape, body size, and other attributes.
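
The defining constraint behind the "Cycle" in CycleGAN is cycle consistency: translating an image to the other domain and back should reconstruct the original. A sketch of that loss term, with G and F standing for the two translation networks (the weighting value is the commonly used default, not a McAfee detail):

    import torch.nn as nn

    l1 = nn.L1Loss()

    def cycle_loss(G, F, x, y, lam: float = 10.0):
        """Cycle-consistency term: F(G(x)) should reconstruct x, and
        G(F(y)) should reconstruct y, where G maps domain X -> Y and
        F maps Y -> X. lam weights this term against the adversarial
        losses during training."""
        return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))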

In addition to CycleGAN, the McAfee researchers used FaceNet, a facial-recognition architecture originally developed at Google, to classify the generated images. Building and training the machine-learning model took several months.
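
FaceNet learns its face embedding with a triplet loss, pulling an anchor image toward another photo of the same person while pushing it away from a photo of a different person. A minimal sketch of that objective (the 0.2 margin follows the original FaceNet paper; the rest is illustrative):

    import torch

    def triplet_loss(anchor, positive, negative, margin: float = 0.2):
        """Same-identity pairs (anchor, positive) should be closer in
        embedding space than different-identity pairs (anchor,
        negative) by at least `margin`, using squared L2 distance."""
        d_pos = (anchor - positive).pow(2).sum(dim=1)
        d_neg = (anchor - negative).pow(2).sum(dim=1)
        return torch.clamp(d_pos - d_neg + margin, min=0).mean()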

"While we would have loved to have access to a real-world target system to replicate this, we are thrilled with the results of achieving positive misclassifications in white box and gray-box scenarios," Povolny says.

Given the increasingly important role facial-recognition systems now play in law enforcement and other areas, more proactive research is needed to understand all the ways such systems can be attacked, he says.

"Anomaly testing, adversarial input, and more diverse training data are among the ways that vendors can improve facial recognition systems," Povolny notes. "Additionally, defense-in-depth, leveraging a second system, whether human or machine, can provide a much higher bar to exploitation than a single point of failure."

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.
Recommended Reading:

Comment  | 
Print  | 
More Insights
Comments
Newest First  |  Oldest First  |  Threaded View
COVID-19: Latest Security News & Commentary
Dark Reading Staff 9/25/2020
WannaCry Has IoT in Its Crosshairs
Ed Koehler, Distinguished Principal Security Engineer, Office of CTO, at Extreme Network,  9/25/2020
Safeguarding Schools Against RDP-Based Ransomware
James Lui, Ericom Group CTO, Americas,  9/28/2020
Register for Dark Reading Newsletters
White Papers
Video
Cartoon
Current Issue
Special Report: Computing's New Normal
This special report examines how IT security organizations have adapted to the "new normal" of computing and what the long-term effects will be. Read it and get a unique set of perspectives on issues ranging from new threats & vulnerabilities as a result of remote working to how enterprise security strategy will be affected long term.
Flash Poll
How IT Security Organizations are Attacking the Cybersecurity Problem
How IT Security Organizations are Attacking the Cybersecurity Problem
The COVID-19 pandemic turned the world -- and enterprise computing -- on end. Here's a look at how cybersecurity teams are retrenching their defense strategies, rebuilding their teams, and selecting new technologies to stop the oncoming rise of online attacks.
Twitter Feed
Dark Reading - Bug Report
Bug Report
Enterprise Vulnerabilities
From DHS/US-CERT's National Vulnerability Database
CVE-2020-26120
PUBLISHED: 2020-09-27
XSS exists in the MobileFrontend extension for MediaWiki before 1.34.4 because section.line is mishandled during regex section line replacement from PageGateway. Using crafted HTML, an attacker can elicit an XSS attack via jQuery's parseHTML method, which can cause image callbacks to fire even witho...
CVE-2020-26121
PUBLISHED: 2020-09-27
An issue was discovered in the FileImporter extension for MediaWiki before 1.34.4. An attacker can import a file even when the target page is protected against "page creation" and the attacker should not be able to create it. This occurs because of a mishandled distinction between an uploa...
CVE-2020-25812
PUBLISHED: 2020-09-27
An issue was discovered in MediaWiki 1.34.x before 1.34.4. On Special:Contributions, the NS filter uses unescaped messages as keys in the option key for an HTMLForm specifier. This is vulnerable to a mild XSS if one of those messages is changed to include raw HTML.
CVE-2020-25813
PUBLISHED: 2020-09-27
In MediaWiki before 1.31.10 and 1.32.x through 1.34.x before 1.34.4, Special:UserRights exposes the existence of hidden users.
CVE-2020-25814
PUBLISHED: 2020-09-27
In MediaWiki before 1.31.10 and 1.32.x through 1.34.x before 1.34.4, XSS related to jQuery can occur. The attacker creates a message with [javascript:payload xss] and turns it into a jQuery object with mw.message().parse(). The expected result is that the jQuery object does not contain an <a> ...