At this year’s Control24 summit, we had the pleasure of hosting Martin Borrett, an IBM Distinguished Engineer and IBM Security’s Technical Director for UK&I.
Martin delivered a fascinating keynote titled ‘AI for Security and Security for AI: Opportunity or Threat?’ It was one of the highlights of the event, touching on how artificial intelligence is transforming security practices and the tough questions we need to ask as we dive deeper into AI’s capabilities.
Martin’s presentation sparked a new way of thinking about AI in the context of security, and showed that IBM is willing to look beyond the usual ‘AI will save us’ narrative.
So back to the big question… Is AI an opportunity or a threat? Of course, the reality is that it is both. And that’s exactly the point we’re unpacking in this article.
The Benefits: AI as Our Best Defence?
Martin shared data from IBM’s latest Cost of a Data Breach report, underscoring the financial toll of data breaches, which now sits at an average of nearly $5 million per incident. However, organisations that have invested in AI-driven security saved an impressive $2.2 million on average per breach, thanks to faster detection, triage, and resolution times.
These are big numbers, and they explain why companies are increasingly turning to AI to support their cyber security operations.
“Organisations using extensive amounts of security AI and automation saw the time to resolve a breach drop by 98 days,” Martin highlighted. That’s three months of headaches gone.
But just as AI helps us manage increasingly sophisticated threats, there’s a flipside we can’t ignore.
The Other Side: Are Cybercriminals Catching Up?
Martin touched on something many are reluctant to discuss: cyber adversaries are experimenting with AI too. While they haven’t adopted it at scale yet, the rise of AI-driven phishing campaigns and early signs of retooling suggest that attackers are laying the groundwork for an AI arms race.
“There’s a game of cat and mouse going on,” Martin said, acknowledging the ongoing battle between defenders and adversaries. “For now, the good guys are slightly ahead. But we can’t be complacent.”
In cyber security, assuming that we’ll stay one step ahead can be dangerous. Cybercriminals have always been quick to adopt technology, and as the tools they use become more accessible, we’re likely to see AI-driven attacks gain traction. So, the big question becomes: are we truly ahead, or just a step away from an AI-powered wave of cyber threats?
Securing AI: The Hidden Risk
Martin didn’t just talk about using AI to boost security; he pointed out that AI itself is a new risk. As more organisations adopt generative AI models, the integrity of these systems becomes a critical concern. Martin’s advice? Treat AI like any other sensitive asset and protect it against data poisoning, model theft, and unauthorised manipulation.
“As we think about securing AI, it’s important that we consider how to protect the data, the model, and the usage,” he said.
“Without trust and confidence, AI can’t succeed in the organisation.”
The problem is, these are vulnerabilities many organisations haven’t even begun to address. As companies roll out AI-powered systems, it’s easy to focus on the benefits without fully understanding the risks.
The Takeaway: A Proactive Stance
Martin’s session at Control24 was a wake-up call. Yes, AI has massive potential to boost security and streamline incident response, but it’s a tool—not a silver bullet. As he so rightly pointed out, “AI is both an opportunity and a threat.” And if we aren’t securing it with the same rigour we apply to other systems, we may be inviting new risks into our defences.
So, as we embrace AI, let’s ask ourselves: are we prepared for the new threats it could bring? Because in this game of cat and mouse, we can’t afford to be reactive. We need to think ahead, secure our models, and always stay one step ahead, not just of the attackers, but of our own assumptions. If you want to find out how Predatar is using AI to boost Recovery Assurance, contact us here.