Artificial Intelligence and Social Engineering. A match made in cybercrime heaven. Could it happen to you? Let’s see…
Your Chief Executive Officer calls you from the road one evening. She's in a time crunch and asks you to quickly transfer funds for a company investment she's brokered while traveling.
This is unusual; there are official processes for this. But you're not going to say no to your CEO or hold her up. You enter the information she gives you, transfer the money, and hang up, satisfied that you've been helpful.
It’s one thing to receive an email from an unfamiliar domain making an unprecedented request. It’s another thing to hear a trustworthy voice over the phone. These days, Artificial Intelligence (AI) is capable of both, and cybercriminals are taking advantage to social engineer their way into organizations worldwide.
While the AI threat is expanding rapidly, BAI Security is here to help you rise to the challenge. Let’s take a look at the top three risks AI poses to your organization, and how you can triumph in this new game of man versus machine.
1. Devious Deepfakes
In 2020, Facebook (now Meta) released a statement about the proliferation of manipulated media on its platform. AI-generated video and audio clips could create completely fake depictions of public figures, including prominent politicians.
Michigan State University has since worked with Meta to develop a reverse-engineering method for detecting deepfakes. But not everyone has access to this cutting-edge technology. Cybercriminals can use AI to reproduce the voice of a trusted supervisor, taking advantage of a live conversation to direct an employee to click a malicious link, give up critical credentials, or transfer money. Other, more aggressive scams confront a target with an AI-generated clip of them doing something embarrassing or incriminating, leveraging the threat of exposure to blackmail them into submission.
2. Revised and Refined
It's a hallmark of spam emails: bad spelling and grammar, punctuation in strange places, awkward sentences that tip you off to ill intentions. But AI can take the output of email generators and translators, or emails composed by writers whose first language isn't English, and polish it into professional-sounding correspondence.
An unexpected request or an unrecognized domain is still a red flag. But a well-composed body lends an email legitimacy, which can act as blinders when it comes to identifying potential scams. Check Point Research has even identified conversations on dark web forums about how to use ChatGPT to refine malicious emails and chats.
3. This Time, It’s Personal
To pull off a successful con, a cybercriminal must know their target. AI can do it for them. Certain software can sweep the Internet for a victim’s digital presence, picking up personal details via social media and other sites, and allow malicious actors to personalize their communiques.
When the person contacting you knows who your friends are and the names of your loved ones, it becomes a little harder to suspect them of being a complete stranger out to exploit you. This particular approach can even take the angle of a romance scam, preying on the victim’s emotions to make them feel as if they’re speaking to someone they can trust.
Fight the Machine
Humans are hard-wired for pattern recognition. We look for interactions that behave the way we would, and unfortunately for us, AI is getting increasingly adept at doing just that.
A 2022 Verizon report notes that 82% of security breaches involve a human element. With the help of AI, phony emails and other communications will only grow more convincing, putting your team at risk of making an error that leads to compromise.
So, how do you mitigate a threat designed to sidestep all the classic tells?
Step one is to integrate the AI threat into your cybersecurity training. Your team needs to understand what AI is capable of, and accept the unfortunate reality that they won't always be able to trust the usual signs of legitimacy. Make sure you're continuously reinforcing this training with drills, reminders, and a culture of awareness that encourages employees to stay up-to-date on the latest news from the cyber world.
In a world increasingly integrated with the Internet, your social media presence can affect your personal and professional lives alike, so it's a good idea to implement policies that limit what AI data mining can harvest about your workers. You can introduce similar procedures for monetary transactions, so that if the CEO calls and asks for money, your employees will be ready to request confirmation through proper channels.
Filters can also provide a simple but robust layer of defense against AI probing. AI-powered email filters can deftly recognize suspicious content or metadata and flag emails that may not register as suspicious to the human eye.
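Even before AI content analysis enters the picture, a filter can catch many impersonation attempts from metadata alone. The sketch below is purely illustrative, not a description of any specific product: the trusted domain, the look-alike sender, and the threshold checks are all hypothetical, but they show the kind of red flags a filter layer looks for.

```python
# Hypothetical sketch of rule-based metadata checks an email filter might layer
# in before AI content analysis. Domain names here are illustrative only.
from email.message import EmailMessage
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-corp.com"}  # assumption: your organization's domain(s)

def metadata_red_flags(msg: EmailMessage) -> list:
    """Return a list of simple metadata warnings for an inbound message."""
    flags = []
    _, sender = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", sender))
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()

    if sender_domain not in TRUSTED_DOMAINS:
        flags.append("unrecognized sender domain: " + sender_domain)
    if reply_domain != sender_domain:
        flags.append("Reply-To domain differs from From domain")
    if "spf=pass" not in msg.get("Authentication-Results", "").lower():
        flags.append("no SPF pass recorded by the receiving server")
    return flags

# Example: a message impersonating the CEO from a look-alike domain.
msg = EmailMessage()
msg["From"] = "CEO <ceo@example-c0rp.com>"   # note the zero in "c0rp"
msg["Reply-To"] = "payments@attacker.example"
print(metadata_red_flags(msg))
```

A human reader skimming "ceo@example-c0rp.com" can easily miss the swapped character; a filter comparing domains mechanically cannot, which is exactly why this layer complements, rather than replaces, employee training.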
To encourage employee scrutiny, you may choose to run social engineering simulations or other programs that stage a realistic attack against your organization. Hands-on experience can give your team a solid idea of what to look out for and put their training to the test, with opportunities to grow and improve before a real attack occurs.
AI can adapt, but so can you and your team. To get an overview of where your vulnerabilities lie in today’s rapidly shifting landscape, start with a comprehensive Red Team Assessment, which uses real-world social engineering and other methodologies to evaluate your organization’s defenses against a human (or human-like) attacker.
Then, take steps to quash human error with a Social Engineering Evaluation. Turn your team into a human firewall with knowledge of the most cutting-edge attacks and how to spot a hacker, or the robot working for them.
And just in case disaster arrives at your doorstep, you can greatly minimize negative consequences by preparing with Tabletop Exercises, such as Incident Response and Disaster Recovery.
Don’t let artificial intelligence surpass the real, devoted workforce at your fingertips. Amp up your digital security and contact BAI Security today.