
Application Security

6/10/2021
01:00 PM
John Donegan
Commentary

Deepfakes Are on the Rise, but Don't Panic Just Yet

Deepfakes will likely give way to deep suspicion, as users try to sort legitimate media from malicious.

Emerging technologies have been known to cause unwarranted mass hysteria. That said, and at the risk of sounding hyperbolic, concerns over deepfakes' potential effects are warranted. As the FBI's cyber division noted in a recent private industry notification, malicious actors have already begun to incorporate deepfake audio and video into their existing spear-phishing and social engineering campaigns. With deepfake technologies becoming more accessible and convincing every day, synthetic media will spread, potentially with serious geopolitical consequences.

Current State of Deepfakes
Much like consumer photo and video editing software, deepfake technologies are neither inherently good nor bad, and they will eventually become mainstream. In fact, a host of popular, ready-to-use applications already exists, including FaceApp, FaceSwap, Avatarify, and Zao. Although many of these apps ship with disclaimers, synthetic content is protected under the First Amendment until it is used to further illegal efforts, and of course we are already seeing that happen. On Dark Web forums, deepfake communities share intelligence, offer deepfakes as a service (DaaS), and, to a lesser extent, buy and sell content.

At the moment, deepfake audio is arguably more dangerous than deepfake video. Without visual cues to rely on, users have a difficult time recognizing synthetic audio, making this form of deepfake particularly effective from a social engineering standpoint. In March 2019, cybercriminals successfully conducted a deepfake audio attack, duping the CEO of a UK-based energy firm into transferring $243,000 to a Hungarian supplier. And last year in Philadelphia, a man was targeted by an audio-spoofing attack. These examples show that bad actors are actively using deepfake audio in the wild for monetary gain.

Nonetheless, fear of deepfake video attacks is outpacing the attacks themselves. Although it was initially reported that European politicians had been targeted by deepfake video calls, the calls turned out to be the work of two Russian pranksters, one of whom bears a remarkable resemblance to Leonid Volkov, chief of staff to the anti-Putin politician Alexei Navalny. Still, this geopolitical incident, and the reaction to it, shows just how fearful we've become of deepfake technologies. Headlines such as "Deepfake Attacks Are About to Surge" and "Deepfake Satellite Images Pose Serious Military and Political Challenges" are increasingly common. The fear may be running ahead of the attacks, but that doesn't mean the concern is unwarranted.

Some of the most celebrated deepfakes still take a great deal of effort and a high level of sophistication. The viral Tom Cruise deepfake was a collaboration between Belgian video effects specialist Chris Ume and actor Miles Fisher. Although Ume used DeepFaceLab, the open source deepfake platform responsible for 95% of deepfakes currently created, he cautions that the video was not easy to make. Ume trained his AI-based model for months, then combined its output with Fisher's mannerisms and polished the result with CGI tools.

Credit: freshidea via Adobe Stock

Because deepfakes will be used as an extension of existing spear-phishing and social engineering campaigns, it's vital to keep employees vigilant and cognizant of such attacks, and to maintain a healthy skepticism of media content, especially when its source is questionable.

Look for known tells, including overly consistent eye spacing; syncing issues between a subject's lips and the rest of the face; and, according to the FBI, visual distortions around the subject's pupils and earlobes. Blurry backgrounds, or blurry portions of a background, are another red flag. As a caveat, these tells keep changing. When deepfakes first circulated, unnatural breathing and blinking patterns were the most common giveaways, but the technology subsequently improved, making those tells obsolete.
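One of these tells, blur in regions that should be sharp, can be screened for automatically. The sketch below is purely illustrative, not a production detector: it scores a grayscale frame region by the variance of its Laplacian, where low variance suggests blur. The function names and the threshold value are my own assumptions for illustration; any real deployment would need tuning against actual footage.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the discrete Laplacian; low values suggest blur."""
    # 4-neighbour Laplacian stencil computed with shifted slices
    lap = (
        gray[:-2, 1:-1] + gray[2:, 1:-1] +
        gray[1:-1, :-2] + gray[1:-1, 2:] -
        4.0 * gray[1:-1, 1:-1]
    )
    return float(lap.var())

def blur_flag(gray: np.ndarray, threshold: float = 100.0) -> bool:
    """Flag a frame region whose sharpness score falls below the threshold.

    The threshold is an illustrative guess, not a calibrated value.
    """
    return laplacian_variance(gray) < threshold
```

A sharp, detailed region produces large local second derivatives and a high score; a smoothed or uniform region scores near zero. In practice this heuristic would be run per face region and compared against the rest of the frame, since the suspicious pattern is blur that is localized rather than global.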

What's In Store
We have seen some deepfake detection initiatives from big tech, namely Microsoft's video authentication tool and Facebook's deepfake detection challenge; however, a lot of promising work is being done in academia. In 2019, scholars noted that discrepancies between head movements and facial expressions could be used to identify deepfakes.

More recently, scholars have focused on mouth shapes that fail to match the corresponding sounds, and, perhaps most groundbreaking, one recent project has zeroed in on generator signals. That approach not only separates authentic videos from deepfakes but also attempts to identify the specific generative model behind a fake.
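The generator-attribution idea can be illustrated with a toy sketch. Real work in this area uses learned models over subtle generator artifacts; the simplified version below (with made-up function names, and no claim to match any published method) averages the log-magnitude frequency spectra of samples from each known generator into a crude "fingerprint," then attributes a new image to the nearest fingerprint.

```python
import numpy as np

def spectral_fingerprint(images):
    """Average log-magnitude spectrum of mean-removed images.

    Generative models often leave periodic artifacts that show up as
    peaks in the frequency domain; averaging over several samples gives
    a crude per-generator 'fingerprint'.
    """
    specs = [np.log1p(np.abs(np.fft.fft2(img - img.mean()))) for img in images]
    return np.mean(specs, axis=0)

def attribute(image, fingerprints):
    """Attribute an image to the generator with the nearest fingerprint (L2)."""
    spec = np.log1p(np.abs(np.fft.fft2(image - image.mean())))
    return min(fingerprints, key=lambda name: np.linalg.norm(spec - fingerprints[name]))
```

The appeal of this direction is that attribution goes a step beyond detection: instead of only answering "is this fake?", it tries to answer "which tool made it?", which matters for tracing campaigns back to their operators.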

We're watching a real-time back-and-forth between those using generative adversarial networks for good and those using them to do harm. In February, researchers showed that systems designed to identify deepfakes can themselves be tricked. Not to belabor the point, but concerns over deepfakes are well-founded.

Protect Yourself and Your Company
As with any emerging technology, regulatory and legal systems cannot keep pace. Like Photoshop before them, deepfake tools will eventually become mainstream. In the short term, the onus is on all of us to remain vigilant and cognizant of deepfake-powered social engineering attacks.

In the longer term, regulatory agencies will have to intervene. A few states — California, Texas, and Virginia — have already passed criminal legislation against certain types of deepfakes, and social media companies have engaged in self-regulation as well.

In January 2020, Facebook issued a manipulated media policy, and the following month, Twitter and YouTube followed suit with policies of their own. That said, these companies don't have the best track records when it comes to self-regulation. Until deepfake detection tools become mainstream and federal cybersecurity laws are enacted, it's wise to maintain a healthy skepticism of certain media, especially if the media source is suspicious, or if that phone call request doesn't sound quite right.

John Donegan is an enterprise analyst at ManageEngine. He covers infosec and cybersecurity, addressing technology-related issues and their impact on business. John holds several degrees, including a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. ...
 
