
Mitigating Deepfake Phishing in Corporate Structures: Essential Steps for AI Governance

Published on September 23, 2024

Savvy email (or SMS) phishing attempts impersonate corporate superiors for an obvious reason: pushing suspicious financial requests down the hierarchy makes sense. A junior-level accountant is not going to get a CFO to sign off on a ridiculously anomalous invoice.

Adversaries recognize the structures of corporate hierarchies. An extra factor of verification over an alternate channel can often mitigate that threat.

If a request comes over SMS, confirm it by email. If the request comes over email, confirm by voice.
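
As a minimal sketch of that rule (the channel names and the send_confirmation helper are hypothetical placeholders, not any particular product's API), the routing logic is almost trivially simple:

# Out-of-band confirmation: never verify a request on the same
# channel it arrived on. Channel names and the messaging helper
# below are illustrative placeholders.

ALTERNATE_CHANNEL = {
    "sms": "email",
    "email": "voice",
    "voice": "email",
}

def send_confirmation(channel: str, recipient: str, message: str) -> None:
    # Stand-in for a real SMS/email/telephony integration.
    print(f"[{channel}] -> {recipient}: {message}")

def confirm_out_of_band(request_channel: str, requester: str, summary: str) -> None:
    """Route the confirmation over a different channel than the request."""
    confirm_channel = ALTERNATE_CHANNEL.get(request_channel)
    if confirm_channel is None:
        raise ValueError(f"No alternate channel configured for {request_channel!r}")
    send_confirmation(confirm_channel, requester,
                      f"Did you really make this request? {summary}")

# A request arriving by email gets confirmed by voice, never by replying:
confirm_out_of_band("email", "cfo@example.com",
                    "Transfer $13 million USD to that Bitcoin account")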

"Are you sure you want me to transfer $13 million USD to that Bitcoin account?"

But now that deepfakes have stepped onto the dance floor, the game has gotten a lot harder.

Deepfakes, for the uninitiated, are AI-generated videos, often of real people doing out-of-character things.

It's one thing to verify a corporate superior’s extraordinary email or SMS request, but it's another when the request comes over a video call from multiple people, including a remote CFO.

It happened: the Hong Kong office of a British multinational was apparently the victim.

Imagine it: you're the junior person on a video call with senior staff, including the CFO. It's not some faceless request. It's multiple live participants who look and sound like your colleagues, saying that suspicious email wasn’t a phishing attempt but a real request you need to address ASAP.

There will probably be technical mitigations for these scenarios in the future. The South China Morning Post article mentions two verification measures. One is simply to ask the person making the extraordinary request to perform some head motions. Apparently, swaying is not yet incorporated into deepfake movements.

Sounds like a great way to start every video call: “Hey C-level, sway your head from side to side so I can be sure you’re not a deepfake.” It’s questionable whether that request will accelerate your career aspirations.

Hopefully when that particular mitigation ceases to be effective, everyone will be notified in a hard copy memo.

Are you really fired remotely if the HR person refuses to move their head to and fro, or can you just consider it a deepfake ploy?

The other mitigation is for law enforcement to get alerts when funds are transferred to suspicious accounts through Hong Kong's Faster Payment System.

I will use the wide reach of this blog to kindly request that all owners of suspicious accounts reach out to the Hong Kong authorities with their account details.

Well, you can at least catch the dumb and suspicious account holders.

Of course, exception or anomaly reporting is essential, but it should be based on more than just recipient accounts that are deemed suspicious.
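
To sketch what "more than just recipient accounts" might look like, here is a hypothetical rule-based score that weighs several weak signals together; the field names, weights, and threshold are illustrative assumptions, not a production model.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_usd: float
    recipient_is_new: bool          # never paid before
    recipient_flagged: bool         # e.g., on a suspicious-accounts list
    requested_out_of_hours: bool
    marked_urgent: bool             # urgency is a classic pressure tactic

def anomaly_score(req: PaymentRequest) -> int:
    """Combine weak signals; no single factor decides on its own."""
    score = 0
    if req.amount_usd > 100_000:    # illustrative threshold
        score += 2
    if req.recipient_is_new:
        score += 2
    if req.recipient_flagged:
        score += 3
    if req.requested_out_of_hours:
        score += 1
    if req.marked_urgent:
        score += 1
    return score

def requires_manual_review(req: PaymentRequest, threshold: int = 4) -> bool:
    return anomaly_score(req) >= threshold

# A large, urgent payment to a brand-new recipient gets held for review
# even if the account isn't on anyone's suspicious list yet.
print(requires_manual_review(PaymentRequest(13_000_000, True, False, False, True)))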

What makes this particular instance disturbing is that it's multi-faceted. It's a brand-new recipe made from already-existing ingredients. Add a phishing email to deepfakes of actual senior staff built from public data, and your cake is baked.

If it’s multi-factor, it’s got to be true.

It's possible that the deepfake could have been mitigated by the video conferencing software itself. Was everyone using their usual video conferencing accounts? Is there adequate authentication of users on that platform?

Organizations also need to adopt more regimented processes for all outgoing payments. Assets should not be transferred on the strength of mere emails or verbal conversations.

It can't just be that "Alice and Bob said it was ok to pay." It needs to be "a verified Alice and a verified Bob okayed the payment." How they are verified depends on the context, but that verification can't be based on weak passwords or other trivial factors.
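
A sketch of that "verified Alice and verified Bob" rule follows. The Approval record and its strongly_verified flag are assumptions about how an organization might implement this; in practice, that flag should be set only by a phishing-resistant authentication event (a hardware security key, for example), never by a password or a familiar face on a video call.

from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    approver: str
    strongly_verified: bool  # set only by a phishing-resistant factor

def payment_authorized(approvals: list[Approval], required: int = 2) -> bool:
    """Require N distinct, strongly verified approvers before funds move."""
    verified = {a.approver for a in approvals if a.strongly_verified}
    return len(verified) >= required

# "Alice and Bob said it was ok" is not enough...
assert not payment_authorized([Approval("alice", False), Approval("bob", False)])
# ...but a verified Alice and a verified Bob is.
assert payment_authorized([Approval("alice", True), Approval("bob", True)])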

Non-technical mitigations can be extremely difficult for very real reasons.

With dispersed corporate offices the norm even before the COVID remote work phase, one can't walk down the hall to ask "Did you really mean that request?"

Even more difficult is the dynamic of corporate hierarchies, as I've mentioned in several trainings about phishing attacks.

If you have multiple senior people telling you to do something, no matter how nonsensical, who would feel comfortable questioning their authority?

Pushing back on C-level requests is a short walk to being reprimanded or even fired in most organizations. Hierarchies rely on those "down the ladder" to implement orders from above with haste, not necessarily with verification.

Will a wave of deepfake compromises change that dynamic?

It's hard to believe that's possible, although one Hong Kong office learned a $25 million USD lesson the hard way. An expensive mistake should make more comprehensive processes more digestible, even if some degree of friction is introduced.

Whether a junior staff person can challenge a C-level on a video call over a suspicious request, well, that isn’t about to change.

