In recent days, the cybersecurity community has been abuzz with discussion of the latest announcement from Google's Threat Analysis Group. Google says it has spent the past few months tracking a new campaign orchestrated by "a government-backed entity based in North Korea," thought to be the threat actor known as the Lazarus Group. The campaign targeted a number of security researchers.
There are special lessons to be learned from this campaign. The researchers were attacked in a complex, multivector fashion. To cope with this kind of attack, security and risk teams need to look beyond virtual private networks and network infrastructure to the communication channels where social engineering is taking place.
Dissecting a Multivector Attack
Google hopes its announcement will remind people to "remain vigilant when engaging with individuals they have not previously interacted with." Why? Because this campaign was not simply a spoofed email. This was a sophisticated attack in which the threat actors played the long game, combining social engineering with a multichannel approach.
The uniquely dangerous feature of this campaign is its multivector nature. There is no single point of attack or contact. Instead, the threat surface implicated involves multiple cloud channels and messaging apps.
Stopping Social Engineering Attacks in Their Tracks
This North Korea-backed campaign deployed multiple points of attack: at a minimum, 10 Twitter accounts, five LinkedIn accounts, one Telegram account, and nearly 20 malicious URLs.
How can organizations counter such a multipronged assault on their researchers? Google recommends that you "compartmentalize your research activities, using separate physical or virtual machines for general web browsing, interacting with others in the research community, accepting files from third parties, and your own security research."
These standard steps are worth remembering. But security teams should also look for ways to detect bad-actor accounts preemptively, so they can stop social engineering before it starts. For example, security controls at the account and message layers could have flagged the Twitter and LinkedIn accounts making contact with an employee as potentially suspicious.
Such detection, attuned to suspicious language, could have flagged the accounts as soon as they attempted to connect, stopping the social engineering in its tracks. Security controls could also have vetted any attacker-owned command-and-control domains shared in messages. Most importantly, they could have unpacked and inspected files from third parties before an employee could open them.
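As a rough sketch of what a message-layer control might look like (not SafeGuard Cyber's actual implementation; the signal lists and scoring are illustrative assumptions), simple signals — a never-before-seen contact, a link to a known-bad domain, and lure language — could be combined into a risk flag before the message ever reaches the employee:

```python
import re

# Illustrative signal lists only; a real control would draw on threat
# intelligence feeds and behavioral baselines.
KNOWN_BAD_DOMAINS = {"blog.br0vvnn.io"}  # domain reported in Google's announcement
LURE_PHRASES = ["collaborate on research", "vulnerability i found", "check out my blog"]

URL_PATTERN = re.compile(r"https?://([\w.-]+)")

def score_message(sender_is_new_contact: bool, text: str) -> dict:
    """Return a risk verdict for an inbound social/chat message."""
    domains = URL_PATTERN.findall(text)
    signals = {
        "new_contact": sender_is_new_contact,
        "known_bad_domain": any(d in KNOWN_BAD_DOMAINS for d in domains),
        "lure_language": any(p in text.lower() for p in LURE_PHRASES),
    }
    # Flag only when multiple signals co-occur, to limit false positives.
    return {"flag": sum(signals.values()) >= 2, "signals": signals}
```

In this sketch, a first-time contact offering to "collaborate on research" and linking to a known command-and-control domain would be flagged before the conversation could progress.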
This level of detection and visibility would have protected the targeted researchers. Nor does such oversight need to breach employees' data privacy. The right technology can scan communications for threats while the actual content of messages remains masked, flagging risks without exposing what was said.
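One way to picture this privacy-preserving pattern — again a minimal sketch under assumed terms, not a vendor implementation — is a scanner that surfaces only a verdict, the watch-list terms that matched, and an irreversible digest of the message, never the raw content:

```python
import hashlib

SUSPICIOUS_TERMS = {"exploit", "0day", "poc"}  # illustrative watch list

def flag_without_exposing(message: str) -> dict:
    """Scan a message for risk terms, but surface only a verdict and an
    irreversible digest to the security team -- never the raw content."""
    hits = [t for t in SUSPICIOUS_TERMS if t in message.lower()]
    return {
        "flagged": bool(hits),
        "matched_terms": hits,  # only watch-list terms, not the message itself
        "content_digest": hashlib.sha256(message.encode()).hexdigest()[:12],
    }
```

The security team sees that a risk was detected and which category it fell into, while the message body itself stays hidden behind a one-way hash.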
A Multivector Approach
A few weeks ago, my business partner, Jim Zuffoletti, predicted that social engineering attacks would become more sophisticated in 2021. They are effective, and many organizations are not equipped to deal with them. Bad actors know that social and mobile chat channels are invisible to security teams, and they are targeting those channels accordingly. Therefore, he says, "security teams need controls in social and chat apps that provide visibility into risks while respecting employees' data privacy."
With multivector campaigns, the risk lies in the third-party cloud channels that are increasingly central to modern business. This is where social engineering is taking place. This is where bad actors are grooming their targets.

As the President, CTO, and Co-Founder of SafeGuard Cyber, Mr. Freire is responsible for the development and continuous innovation of SafeGuard Cyber's enterprise platform, which enables global enterprise customers to extend cyber protection to social media and digital ...