
Microsoft Says It's Time to Attack Your Machine-Learning Models

With access to some training data, Microsoft's red team recreated a machine-learning system and found sequences of requests that resulted in a denial of service.

Mature companies should conduct red team attacks against their machine-learning systems to suss out their weaknesses and shore up their defenses, a Microsoft researcher told virtual attendees at the USENIX ENIGMA Conference this week.

As part of the company's research into the impact of attacks on machine learning, Microsoft's internal red team recreated a machine-learning automated system that assigns hardware resources in response to cloud requests. Through testing their own offline version of the system, the team found adversarial examples that resulted in the system becoming over-taxed, Hyrum Anderson, principal architect of the Azure Trustworthy Machine Learning group at Microsoft, said during his presentation.


Pointing to attackers' efforts to get around content-moderation algorithms or anti-spam models, Anderson stressed that attacks on machine learning are already here.

"If you use machine learning, there is the risk for exposure, even though the threat does not currently exist in your space," he said. "The gap between machine learning and security is definitely there."

The USENIX presentation is the latest effort by Microsoft to bring attention to the issue of adversarial attacks on machine-learning models, which are often so technical that most companies do not know how to evaluate their security. While data scientists are considering the impact that adversarial attacks can have on machine learning, the security community needs to start taking the issue more seriously - but also as part of a broader threat landscape, Anderson says. 

Machine-learning researchers are focused on attacks that pollute machine-learning data, epitomized by presenting two seemingly identical images of, say, a tabby cat and having the AI algorithm identify them as two completely different things, he said. More than 2,000 papers have been written in the last few years citing these sorts of examples and proposing defenses, he said.

"Meanwhile, security professionals are dealing with things like SolarWinds, software updates and SSL patches, phishing and education, ransomware, and cloud credentials that you just checked into Github," Anderson said. "And they are left to wonder what the recognition of a tabby cat has to do with the problems they are dealing with today."

In November, Microsoft joined with MITRE and other organizations to release the Adversarial ML Threat Matrix, a dictionary of attack techniques created as an addition to the MITRE ATT&CK framework. Almost 90% of organizations do not know how to secure their machine-learning systems, according to a Microsoft survey released at the time.

Microsoft's Research

Anderson shared a red team exercise conducted by Microsoft in which the team aimed to abuse a Web portal used for software resource requests and the internal machine-learning algorithm that automatically determines which physical hardware a requested container or virtual machine is assigned to.

The red team started with credentials for the service, under the assumption that attackers will be able to gather valid credentials - either by phishing or because an employee reuses their username and password. The red team found that two elements of the machine-learning process could be viewed by anyone: read-only access to the training data and key pieces of the data collection part of the ML model. 

That was enough to create their own version of the machine-learning model, Anderson said.

"Even though we built a poor man's replicable model that is likely not identical to the production model, it did allow us to study—as a straw man—and formulate and test an attack strategy offline," he said. "This is important because we did not know what sort of logging and monitoring and auditing would have been attached to the deployed model service, even if we had direct access to us."

Armed with a container image that requested specific types of resources to cause an "oversubscribed" condition, the red team logged in through a different account and provisioned the cloud resources. 

"Knowing those resource requests that would guarantee an oversubscribed condition, we could then instrument a virtual machines with hungry resource payloads, high-CPU utilization and memory usage, which would be over-provisioned and cause a denial of service to the other containers on the same physical host," Anderson said. 

More information on the attack can be found on a GitHub page from Microsoft that contains adversarial ML examples.

Anderson recommends that data-science teams defensively protect their data and models, and conduct sanity checks—such as making sure that the ML model is not over-provisioning resources—to increase robustness.
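One such sanity check is a hard, non-learned guardrail placed after the model. The function and field names below are illustrative, not from Azure; the idea is simply to reject any placement that would exceed physical capacity if every request were fully used, no matter what the model predicts.

```python
# Hedged sketch of a post-model guardrail: validate the model's proposed
# placement against worst-case (fully utilized) resource totals.
def placement_is_sane(proposed_requests, host_cpu=16, host_mem=64):
    """Reject a placement if the *requested* (not predicted) totals
    would oversubscribe the host."""
    total_cpu = sum(r["cpu"] for r in proposed_requests)
    total_mem = sum(r["mem"] for r in proposed_requests)
    return total_cpu <= host_cpu and total_mem <= host_mem

safe = [{"cpu": 4, "mem": 16}, {"cpu": 8, "mem": 32}]
risky = safe + [{"cpu": 8, "mem": 32}]
print(placement_is_sane(safe))    # True: fits even at full utilization
print(placement_is_sane(risky))   # False: would oversubscribe the host
```

A check like this costs some packing efficiency but caps the blast radius of the adversarial-request attack described above, because no sequence of admitted requests can exceed the hardware.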

Just because a model is not accessible externally does not mean it's safe, he says.

"Internal models are not safe by default—that is an argument that is simply 'security by obscurity' in disguise," he said. "Even though a model may not be directly accessible to the outside world, there are paths by which an attacker can exploit them to cause cascading downstream effects in an overall system."
