Dark Reading is part of the Informa Tech Division of Informa PLC


Attacks/Breaches

6/20/2019 12:15 PM

Machine Learning Boosts Defenses, but Security Pros Worry Over Attack Potential

As defenders increasingly use machine learning to remove spam, catch fraud, and block malware, concerns persist that attackers will find ways to use AI technology to their advantage.

Machine learning continues to be widely pursued by cybersecurity companies as a way to bolster defenses and speed response. 

Machine learning, for example, has helped companies such as security firm Malwarebytes improve their ability to detect attacks on consumer systems. In the first five months of 2019, about 5% of the 94 million malicious programs detected by Malwarebytes' endpoint protection software came from its machine-learning-powered anomaly-detection system, according to the company.

Such systems, and artificial intelligence (AI) technologies in general, will be a significant component of every company's cyberdefense, says Adam Kujawa, director of Malwarebytes' research labs.

"The future of AI for defenses goes beyond just detecting malware, but also will be used for things like finding network intrusions or just noticing that something weird is going on in your network," he says. "The reality is that good AI will not only identify that it's weird, but [it] also will let you know how it fits into the bigger scheme."

Yet, while Malwarebytes joins other cybersecurity firms as a proponent of machine learning and the promise of AI as a defensive measure, the company also warns that automated and intelligent systems can tip the balance in favor of the attacker. Initially, attackers will likely incorporate machine learning into backend systems to create more custom and widespread attacks, but they will eventually focus on ways to attack other AI systems as well.

Malwarebytes is not alone in that assessment, and it's not the first to issue a warning, as it did in a report released on June 19. From adversarial attacks on machine-learning systems to deepfakes, a range of techniques that generally fall under the AI moniker are worrying security experts.

In 2018, IBM created a proof-of-concept attack, DeepLocker, that conceals itself and its intentions until it reaches a specific target, raising the possibility of malware that infects millions of systems without taking any action until it triggers on a set of conditions.

"The shift to machine learning and AI is the next major progression in IT," Marc Ph. Stoecklin, principal researcher and manager for cognitive cybersecurity intelligence at IBM, wrote in a post last year. "However, cybercriminals are also studying AI to use it to their advantage — and weaponize it."

The first problem for both attackers and defenders is creating stable AI technology. Machine-learning algorithms require good data to train reliable models, and both researchers and bad actors have found ways to pollute those data sets as a way to corrupt the resulting systems.

In 2016, for example, Microsoft launched a chatbot, Tay, on Twitter that could learn from messages and tweets, saying, "the more you talk the smarter Tay gets." Within 24 hours of going online, a coordinated effort by some users resulted in Tay responding to tweets with racist responses.

The incident "shows how you can train — or mistrain — AI to work in effective ways," Kujawa says.

Polluting the data sets collected by cybersecurity firms could similarly cause their detection models to behave unexpectedly and perform poorly.
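A toy sketch (not from the Malwarebytes report; the classifier, features, and data are invented for illustration) shows the mechanics of such data poisoning: a minimal nearest-centroid "malware detector" flips its verdict on a suspicious file once an attacker injects malware-like samples mislabeled as benign into its training set.

```python
# Toy illustration of training-data poisoning against a minimal
# nearest-centroid classifier. All features and labels are synthetic.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label), label 0 = benign, 1 = malicious."""
    by_label = {0: [], 1: []}
    for x, y in samples:
        by_label[y].append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Assign the label of the nearest class centroid.
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training data: benign files cluster near (1, 1), malware near (9, 9).
clean = [((1, 1), 0), ((2, 1), 0), ((1, 2), 0),
         ((9, 9), 1), ((8, 9), 1), ((9, 8), 1)]

# An attacker who can feed samples into the pipeline injects malware-like
# points mislabeled as benign, dragging the "benign" centroid toward malware.
poisoned = clean + [((8, 8), 0), ((9, 9), 0), ((8, 9), 0)]

suspicious_file = (6, 6)
print(predict(train(clean), suspicious_file))     # 1: flagged as malicious
print(predict(train(poisoned), suspicious_file))  # 0: now slips past as benign
```

Real detectors are far more complex, but the failure mode is the same: a model is only as trustworthy as the labeled data it learns from.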

A number of AI researchers have already used such attacks to undermine machine-learning algorithms. A group including researchers from Pennsylvania State University, Google, the University of Wisconsin, and the US Army Research Lab used its own AI attacker to craft images that, when fed to other machine-learning systems, caused the targeted systems to misidentify them.

"Adversarial examples thus enable adversaries to manipulate system behaviors," the researchers stated in the paper. "Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software."
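The malware-evasion case the researchers describe can be sketched in miniature (the weights and feature values below are made up, not from any real detector): against a fixed linear scorer, an attacker nudges each feature against the sign of its weight, the same intuition behind gradient-based adversarial examples for neural networks.

```python
# Toy evasion ("adversarial example") against a hypothetical linear malware
# scorer: score(x) = w . x + b, with a file flagged malicious when score > 0.
# For a linear model, the most efficient small perturbation moves each
# feature opposite the sign of its weight.

w = [0.9, 0.7, -0.2]   # invented feature weights of the detector
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(x, eps):
    """Shift each feature by eps in the direction that lowers the score."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

malware = [2.0, 1.5, 0.5]
print(score(malware) > 0)              # True: the sample is detected
print(score(evade(malware, 1.0)) > 0)  # False: a small tweak evades the model
```

Attacks on real models must also keep the perturbed file functional, which is harder, but the quote above captures the core risk: small, targeted input changes can steer a model's output.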

While Malwarebytes' Kujawa cannot point to a current instance of in-the-wild malware that uses machine-learning or AI techniques, he expects to see examples soon. Rather than malware that incorporates neural networks or other AI technology directly, initial attempts at fusing malware with AI will likely focus on the backend: the command-and-control server, he says.

"I think we are going to see a bot that is deployed on an endpoint somewhere, communicating with the command-and-control server, [which] has the AI, has the technology that is being used to identify targets, what's going on, gives commands, and basically acts as an operator," Kujawa says.

Companies should expect attacks to become more targeted in the future as attackers increasingly use AI techniques. Much as advertisers track potentially interested users, attackers will profile the population to better target their intrusions and malware, he says.

"These things could create their own victim profiles internally," he says. "A dossier on each target can be created by an AI very quickly."


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...
