Tuesday, February 27, 2024

I’ve been thinking about how to keep your company safe from a new trend in fraud tactics: the use of artificial intelligence (AI) impersonations. Did you see the recent reports of the $25 million heist in Hong Kong? The bad guys can now impersonate you, on video and with your voice, convincing your staff to wire money away because “you” told them to. Let’s break down how this one happened.

It all began as most scams do, with a phishing email. The request within it left the employee skeptical, but it didn’t ask for any immediate action, and eventually the employee believed the threat had passed.

It should be noted that in the world of AI, many of the previous tell-tale signs of a scam, such as poor grammar, stiff language or odd punctuation, can no longer be counted on. In AI-generated emails, the AI corrects all of that.

In this case, it appears the employee received an initial suspicious email simply alerting them to watch for upcoming requests for urgent, confidential transactions needed for company business. When no more suspicious emails followed, the employee let their guard down. Then, a few weeks later, the scammers struck the death blow.

The employee, a member of the corporation’s finance team, next received an invitation to a Zoom meeting with the CFO and two other senior finance department staff, which they attended. The employee was instructed to send out multiple wires totaling over $25 million. The employee, who knew the CFO and was familiar with at least one of the other senior staff who participated on video during the brief meeting, executed the instructions perfectly. What the employee didn’t know was that everyone else on the Zoom video meeting was a deepfake, generated by AI.

Simply by taking publicly available video and/or audio of the three senior staff members, running it through an AI image and voice simulator, and writing up a viable script for the fake talking heads to run through, the thieves got away with all of that money.

Yes, the Zoom session was brief. And yes, it was constructed in such a way as to convey urgency, not to mention that the script was intricate, with each of the two “senior attendees” seeming to validate the requests made by the “CFO.” And yes, the Zoom session reportedly ended quickly once the instructions were conveyed and the employee indicated compliance.

It’s especially startling because the employee reportedly felt like they were speaking to the actual CFO whom they had known and worked with for years.

So what do we do? I suspect we’re going to need to institute safe words or phrases: code words we can ask for whenever we need to confirm that someone is who they appear to be. Something to employ even when “face to face” via web, or when receiving audio instructions like a voice memo, phone call or voicemail. It won’t save your bacon in every situation, yet I can’t help but imagine that if the targeted employee had said, “Great, I’m on it, just confirm your code word for me please….” this particular AI stage production would have ended immediately, and no funds would have left the account.

Of course, you wouldn’t want to store or circulate a set of code words anywhere on your network. We’d have to establish these the old-fashioned way. Like 007.

And what do we do about your customers? These deepfake scams are going to leave them even more vulnerable. Can you find a way to exchange code words with them during your order confirmation or clear-to-close Safe Wire Instruction windows? You’d have to do it old school, not via computer. But I’d love to hear your creative ideas.

Also, from me to you: if you have senior citizens in your life, please talk to them about these deepfakes, which can come in a phone call, virtual meeting, email, text or anything more sophisticated than a carrier pigeon. And please establish a code word or phrase that everyone they care about will use with them. To protect them.

Maybe carrier pigeon will be my code phrase.

Until next time,

Mary Schuster
Chief Knowledge Officer
October Research, LLC