Apr 9, 2025
How Will AI Affect Your Cybersecurity?
Is AI affecting the cybersecurity of your business?
According to Darktrace, 74% of IT security professionals report suffering significant impacts from AI-powered threats. Criminals are using machine learning and generative AI to separate businesses from their money and data.
So, how can you keep your money and data safe from new cyber threats?
At BigSpeak, we reached out to our top cybersecurity and artificial intelligence keynote speaker Eric O’Neill to discover what your business should be doing. Eric has spent decades as a national security attorney, corporate investigator, and national cybersecurity strategist, teaching businesses how to protect themselves and giving them actionable tools to do so.
Eric said businesses need to understand that AI is generating sophisticated phishing emails and realistic deepfakes to gain access to business data and demand ransoms. To find out more about what threats your business faces and how to fight back, read the interview below.
How is AI being used to enhance cyberattacks or cyberespionage today?
Cybercrime syndicates operating on the Dark Web are harnessing AI much like everyday users do—with one major difference: their goal is to weaponize efficiency. While you and I might use AI to streamline research, polish our writing, or crunch complex data, these digital criminals are using it to deceive, impersonate, and manipulate trust at scale.
AI has become a force multiplier for cybercrime. Just as businesses automate tedious tasks to boost productivity, cybercriminals are using AI to industrialize malicious operations. They’re generating sophisticated phishing emails that mimic human nuance, writing polymorphic malware that evades detection, and mining vast amounts of social media data to tailor attacks with unsettling precision.
In short, the same tools that help us work smarter are helping them steal faster, lie better, and hack deeper.
What are some of the most common AI-driven cyber threats organizations face today?
Business Email Compromise (BEC) is still one of the most financially devastating cybercrimes in the world—but now it’s leveling up, thanks to AI and deepfakes.
Traditionally, BEC attacks relied on creating urgency: a spoofed email from a CEO demanding a wire transfer right now or a fake vendor invoice slipped past a distracted AP team. But today’s attackers aren’t just impersonating someone in writing—they’re stepping into full digital disguises. With deepfake technology, cybercriminals can now mimic the face, voice, and mannerisms of executives with terrifying accuracy, making it exponentially harder to detect fraud.
Take one chilling example: A UK-based company fell victim to a BEC 2.0-style attack. The finance manager was invited to a Zoom call with what appeared to be the CFO, a few familiar faces from the finance department, and two individuals introduced as new business partners. On screen, everything looked normal—faces, voices, small talk, the usual corporate cadence. Then came the ask: wire funds to help close a critical deal.
Over the course of two weeks, the finance manager wired $25 million across 15 transactions before finally placing a call to the real CFO—only to discover that the entire interaction had been a deepfake. Every single person on that Zoom call was AI-generated, modeled off data scraped from social media and prior video content.
We’re no longer just up against clever emails. We’re up against synthetic humans.
In what ways can AI be exploited for social engineering attacks?
AI isn’t just revolutionizing deepfakes—it’s quietly perfecting one of cybercrime’s oldest and most effective tricks: spear phishing.
Gone are the days of clunky grammar, odd phrasing, or cartoonishly bad spelling giving a malicious email away. Today, AI can craft messages that read exactly like they came from your boss, your vendor, or your colleague. It mimics logos, formats signature blocks with uncanny precision, and even studies prior email threads to match tone, cadence, and context—down to the inside jokes.
The goal? To earn just enough trust to get the target to click a link, open a file, or unknowingly install malware that opens the door to a full-blown breach.
We used to train employees to spot the red flags: poor grammar, sketchy formatting, off-brand logos. But AI has erased those tells. The phish now looks like a real fish—and it’s swimming straight through the filters and firewalls.
It’s time we reimagine what awareness training looks like, because the old rules just got torched by a machine with perfect grammar and no conscience.
What role does AI play in enhancing insider threats within organizations?
In my book Gray Day, I introduced the term “Virtual Trusted Insider”—a concept that’s become even more relevant in the age of AI-fueled cyberattacks.
A Virtual Trusted Insider isn’t a rogue employee. It’s a legitimate user account that’s been quietly hijacked by an external attacker—then used as a Trojan horse from within. Because the account already lives inside the organization’s circle of trust, the breach is difficult to detect without sharp, adaptive cybersecurity.
AI supercharges this tactic. It helps cybercriminals and state-sponsored spies deceive employees into handing over login credentials through hyper-personalized phishing attacks. Once inside, AI can be turned loose to navigate internal systems, mimic normal behavior, avoid triggering alarms, and deploy sophisticated countermeasures to stay invisible.
Think of it like this: the attacker doesn’t break down the door—they borrow your key, slip past the guards, and walk out with terabytes of your data… all before anyone notices they were ever there.
The threat is no longer just from the outside. It’s already inside—and it’s fluent in your language, your systems, and your trust.
How can organizations use AI defensively to counter AI-powered cyber threats?
Just as criminals and spies are using AI to deceive, infiltrate, and compromise, the most effective cybersecurity today fights fire with fire—deploying AI to detect, respond to, and isolate threats at machine speed.
Think of it as a digital spy hunter. AI-driven security systems constantly sift through massive volumes of behavioral data, flagging the subtle anomalies a human might miss. For example, let’s say an employee suddenly logs in from a region they’ve never worked in, accesses systems outside their role, and does it all from a device the company’s never seen. To an AI trained on that user’s normal behavior, this screams red alert.
That’s when smart cybersecurity kicks in: the account gets locked down, protocols are triggered, and the human IT team is called to verify before any damage is done.
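To make that behavioral check concrete, here is a minimal Python sketch of the kind of rule an AI-driven monitoring system might apply. The baseline values, event fields, and threshold below are illustrative assumptions, not any vendor's actual product logic; real systems learn these profiles from historical telemetry rather than hard-coding them.

```python
# Minimal sketch: flag a login that deviates from a user's learned baseline.
# All baselines, fields, and thresholds here are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class UserBaseline:
    """Typical behavior learned for a single account."""
    usual_regions: set = field(default_factory=set)
    usual_systems: set = field(default_factory=set)
    known_devices: set = field(default_factory=set)


def anomaly_score(event: dict, baseline: UserBaseline) -> int:
    """Count how many attributes of a login event deviate from the baseline."""
    score = 0
    if event["region"] not in baseline.usual_regions:
        score += 1  # logging in from a region the user has never worked in
    if event["system"] not in baseline.usual_systems:
        score += 1  # touching systems outside the user's normal role
    if event["device_id"] not in baseline.known_devices:
        score += 1  # request comes from a device the company has never seen
    return score


if __name__ == "__main__":
    baseline = UserBaseline(
        usual_regions={"US-East"},
        usual_systems={"crm", "email"},
        known_devices={"laptop-4821"},
    )
    event = {"region": "EU-North", "system": "payroll", "device_id": "unknown-device"}

    if anomaly_score(event, baseline) >= 2:
        # In a real deployment this would lock the account and page the IT team
        # for human verification before any damage is done.
        print("Red alert: lock the account and escalate to IT for verification.")
```

In practice the scoring would be statistical rather than a simple count, but the principle is the same: the system compares each action against what is normal for that specific account and escalates to a human before money or data moves.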
This isn’t paranoia—it’s the new baseline. I call it: “Trust nothing. Verify everything.”
Because in the age of AI, trust without verification is an open door—and the bad guys are already knocking.
How do you see AI shaping the future of cybersecurity in the next five years?
I’ve started saying this often, and I mean it literally: Trust is now the most valuable commodity on Earth.
By 2026, experts predict that 90% of the content we encounter online will be synthetic—crafted not by humans but by AI. Videos, photos, news articles, social media posts, even the emails you receive—composed by machines trained to mimic us with uncanny precision.
Soon, you won’t know if you’re talking to a real person unless you’re face-to-face. And even that might not be enough. I suspect it won’t be long before AI-generated avatars start showing up in our Zoom Brady Bunch grids, flawlessly imitating facial gestures, syncing to real-time audio, and looking so human you’ll never have to fix your hair or iron a shirt again.
We now live in a tele-everything, Internet-first world—a shift that accelerated post-pandemic. And into that world, AI has marched confidently, hijacking our communications, our visuals, and even our identities.
Which is why cybersecurity must evolve just as fast. We’ll need AI not just to protect data, but to detect deception. Deepfake detectors will become standard in business tools, messaging platforms, and dating apps—running quietly in the background to flag synthetic voices, faces, and behavior. Think of them as digital lie detectors—shielding our wallets, our reputations, and yes, even our hearts.
Because when everything can be faked, trust is the only thing that matters—and the only thing worth defending.
What emerging AI-driven threats should businesses start preparing for now?
Start with the one that’s about to blur the line between reality and fiction: lifelike deepfakes—powered by AI, indistinguishable from a real human, and already wreaking havoc.
The technology is no longer on the horizon. It’s here. AI-generated voices are already convincing enough to fool employees into wiring millions of dollars to fraudulent accounts. And fully rendered video is catching up fast—realistic facial movements, perfect lip sync, and natural gestures. Soon, deepfakes will walk, talk, and look exactly like your CEO, your client, your coworker… even you.
As these synthetic personas become tools in the criminal arsenal, spotting deception will become exponentially harder. A video call won’t prove identity. A voice on the phone won’t confirm legitimacy. Business communications will become minefields of manufactured trust.
The threat isn’t theoretical—it’s operational. Right now.
To stay ahead, businesses need to deploy AI defenses that can detect deepfakes in real time, authenticate identities with multi-factor intelligence, and train teams to question what they see and hear—because soon, seeing won’t be believing.
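One way to turn "question what you see and hear" into policy is to require out-of-band verification for any high-risk request that arrives over a channel a deepfake can imitate. The sketch below is a hedged illustration only; the threshold, channel list, and function names are assumptions, not part of any specific product or of Eric's own toolkit.

```python
# Minimal sketch: "trust nothing, verify everything" applied to payment requests.
# Channels, thresholds, and the confirmation step are hypothetical placeholders.
# In practice the out-of-band check is a callback to a number on file or an
# approval in a separately authenticated app, never a reply on the same channel.

HIGH_RISK_CHANNELS = {"video_call", "voice_call", "email"}  # easy to spoof or deepfake
WIRE_THRESHOLD_USD = 10_000  # hypothetical policy threshold


def requires_out_of_band_check(request: dict) -> bool:
    """Flag any large transfer that arrived over a spoofable channel."""
    return (
        request["amount_usd"] >= WIRE_THRESHOLD_USD
        and request["channel"] in HIGH_RISK_CHANNELS
    )


def approve_transfer(request: dict, confirmed_out_of_band: bool) -> bool:
    """Approve only if the requester was re-verified on an independent channel."""
    if requires_out_of_band_check(request):
        return confirmed_out_of_band  # e.g., a callback to the CFO's known number
    return True


if __name__ == "__main__":
    request = {"amount_usd": 25_000_000, "channel": "video_call", "requester": "CFO"}
    # The video call alone proves nothing: treat the request as unverified until
    # the callback or hardware-token approval actually happens.
    print(approve_transfer(request, confirmed_out_of_band=False))  # -> False
```

A rule like this would have stopped the $25 million deepfake wire described above, because the video call itself never counts as proof of identity.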
Prepare now or risk being fooled by someone who doesn’t even exist.
If you would like to learn more about cybersecurity, contact BigSpeak Speakers Bureau for a top cybersecurity speaker like Eric O’Neill.
For more on AI and cybersecurity, read Insights and Learnings from Eric O’Neill on the CrowdStrike Incident.