The American View: Why the Scam Calls Need to Come from Inside the House

I enjoy phishing. There’s something richly satisfying about crafting a fake email that can trick an unsuspecting victim into infecting their company with ransomware, all thanks to a clever arrangement of colour, text weighting, graphics, and misdirection. It’s like a magic trick carried out entirely through correspondence. It’s artistic expression with an immediate payoff … I don’t have to wait for a gallery exhibition to gauge the public’s appreciation of my work because I get immediate feedback in the form of compromised PCs and howls of outrage. It’s brilliant fun. 

To be clear, I’m not advocating committing real cybercrime here. I’m a security awareness professional, so all my phishing is benign. We use the same tactics and same design strategy as real scammers, but our attacks are always equipped with “dummy” warheads (so to speak). Our phishing can’t hurt anyone because our “weapons” are designed from the outset with the intention of conditioning users’ instincts and training them where to look in a new message for signs of potential skulduggery. 

That said, just because we’re not allowed to actually steal millions of pounds from our victims doesn’t mean we’re losing out on the joy of the craft. If anything, our remit to send only harmless phish means we get to attack our “victims” again and again and get thanked for it instead of getting arrested. I quite like that part.

Anyway. Phishing simulations require either exceptional technical skill – if you want to craft artisanal bespoke weapons like a Red Teamer or a digital hipster – or a commercial phishing sim platform. I don’t have the time or interest to take the first approach. Why hand-build your own custom airplane when you can buy a ticket on a time-tested, meticulously maintained Airbus? 

Admittedly, I know a few Red Teamers who could and would build a world-class luxury experience phish just to show off, but they’re very weird people. Do NOT give your Red Team an unlimited budget because they will most assuredly use it. 

The trouble is, I believe that contemporary phishing simulation platforms all fall short of the mark. They’re good at what they do, but they’re annoyingly incomplete. 

Let’s backtrack a second. If you haven’t seen one of these, they’re websites that allow the customer to design, customize, launch, track, and report on phishing sims. Most of them allow you to tap dozens or hundreds of professionally built attack message templates. These are often slightly modified copies of real attack phish that have already been used by actual criminals to attack actual victims. Great stuff! When you want to train your users how to spot a specific type of attack, what better way than to deploy that exact attack against them with a benign payload?

Some platforms allow you to build your own attack phish from scratch (which I greatly appreciate). Other platforms restrict you to their proprietary template library but allow for some customization. You might be able to change external attack domains, for example, or swap out impersonated company logos. Most platforms give you some degree of editorial control in changing the text in the message body. 

Most all phishing sim platforms use the same core techniques for measuring victim behaviour. When a user opens the phish, the loading of graphics sends the server a unique code that identifies the user by name, along with what kind of computer or phone they’re reading the message on, where they are, what time it is, etc. Similarly, if the user falls for the lure and opens an attachment or clicks a hyperlink in the message body, the server notes that too. Very useful. 
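The tracking technique described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual implementation: the URL shape (`/open.gif?t=…`), the token format, and the field names are all my own assumptions.

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

# A 1x1 transparent GIF: the classic "tracking pixel" payload that the
# sim server returns so the email renders normally.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

def record_open(request_path, user_agent, client_ip):
    """Log that a recipient opened the phish.

    The simulated phish embeds something like
        <img src="https://sim.example.test/open.gif?t=u123">
    where "t" is a unique per-recipient token. Merely rendering the
    message fetches the image, which fires this logging step.
    """
    query = parse_qs(urlparse(request_path).query)
    return {
        "token": query.get("t", ["unknown"])[0],   # identifies the user by name
        "user_agent": user_agent,                  # reveals device / mail client
        "ip": client_ip,                           # source for rough geolocation
        "opened_at": datetime.now(timezone.utc).isoformat(),  # when it happened
    }
```

The same per-recipient token is appended to any hyperlinks in the message body, which is how the platform also knows who clicked what.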

It’s simultaneously a blessing and a curse, though: flashy features like these convince users that “Mission: Impossible” style surveillance technology is real.

I said that contemporary phishing simulation platforms all fall short of the mark. Here’s how: with most platforms, I’m limited to measuring message opens, clicked links, and opened attachments and that’s all. You’d think that would be good enough. It’s not. 

See, the bad guys are fully aware of the countermeasures that companies have deployed to thwart their efforts. The baddies know that we train our users on phish detection and they know how we train, so they modify their attack designs to exploit vulnerabilities that our commercial training platforms can’t train for.

One of the easiest ways to undermine a victim’s corporate defences is to lure them out of their company’s locked-down enclave. It’s the same principle as a guerrilla luring a soldier out of his bunker by pretending to be a comrade in distress. The victim is protected so long as they stay inside the bunker; once they venture out into the woods, the guerrilla has the upper hand, and all the base’s defences are no longer in play. The same principle applies in cybercrime: send one perfectly innocuous (and un-weaponized) phish to get the victim’s attention, then convince the victim to switch over to an encrypted messaging smartphone app before starting the actual scam – an information exchange environment that the company can’t monitor or block. 

Another way of doing this is to convince the victim to phone the attacker. Get out of email and network communications entirely and social engineer them by voice instead. It’s a bit riskier, but it pays off. Voice-based social engineering is exceptionally effective at overcoming victims’ suspicions, since the attacker can perceive when, where, and how a victim reacts poorly to a given lie and can change the lie to get a better result. 

“No worries, Mr. Smith. I appreciate you being cautious. You’re very well trained and you sound so handsome! Now, if you’ll just read me your bank password one more time we’ll test your two-factor authentication and you’ll be finished with your security audit.”  

Whether by chat or by voice, this variant attack method allows the criminal to compel their victim into doing all the dirty work on their own with no direct (or detectable!) link back to the attacker. The victim might be told to install a remote management app (as per the classic tech support scam). They might be redirected to a neutral third party document sharing site where the criminal has camouflaged a malware installer as a shared document (as per the many Dropbox scams). Or they might be directed to a fake cloud service login page where the attacker will record the victim’s user ID, password, or even banking credentials (as per the many, many, many credential harvesting scams). Or they could just ask the victim for some sensitive company information directly, say “thank you,” and waltz away with their prize undetected.  

In all of these cases, the attacker is playing it safe by not asking the victim to click a link or open an attachment in the initial deceptive email; the email itself is completely “unarmed.” If we try to emulate this attack type, we can still use the same customized graphics tracking technique to know when a user opened the message, but we have no way of knowing if they followed the attacker’s instructions and shifted to using a different communications service. 

That’s what I want out of a “next generation” phishing simulation platform: give me the ability to know when a user I phished shifts to a different platform, app, or service, or calls a specific phone number. Or when they visit a site whose URL was only given to them via a separate channel. That’s what the bad guys are doing to evade my network’s monitoring and defences, so let me train my users on those specific tactics. 

Yes, I realize this might be more complicated (and more expensive!). Setting up a clone of WhatsApp or Signal and publishing the app on the iOS and Android app stores would be expensive and time-consuming, but it would effectively simulate the attack methodology. Likewise, buying a block of phone numbers and hiring some call centre workers to pretend to be social engineers would be expensive, but boy howdy would that experience be effective. In both cases, having the “live scammer” would be tremendously educational … especially when the “attacker” could explain immediately after the reveal how the attack was designed and how to recognize a similar attack in the future. 

Would it be worth it? From a pragmatic perspective, yes. An extra £50k a year to prevent a £1 million loss seems like a shrewd investment. That’s why we do phishing sims in the first place: the more effectively you train, the better you perform when it counts. The real question here is which vendor is going to build the capability first. When y’all figure it out … call me. 


Pop Culture Allusion: Steve Feke and Fred Walton, When a Stranger Calls (1979 film)

Keil Hubert

POC is Keil Hubert, keil.hubert@gmail.com. Follow him on Twitter at @keilhubert. You can buy his books on IT leadership, IT interviewing, horrible bosses and understanding workplace culture at the Amazon Kindle Store. Keil Hubert is the head of Security Training and Awareness for OCC, the world’s largest equity derivatives clearing organization, headquartered in Chicago, Illinois. Prior to joining OCC, Keil was a U.S. Army medical IT officer, a U.S.A.F. Cyberspace Operations officer, a small businessman, an author, and several different variations of commercial sector IT consultant. Keil deconstructed a cybersecurity breach in his presentation at TEISS 2014, and has served as Business Reporter’s resident U.S. ‘blogger since 2012. His books on applied leadership, business culture, and talent management are available on Amazon.com. Keil is based out of Dallas, Texas.

© Business Reporter 2021