In the News

Separating Hype from Reality in AI’s Role Across Cybersecurity

by Dan Pagel, CEO & Board Director | 7 min read

Originally Published by US CyberSecurity Magazine

AI has become the talismanic answer to almost every business problem. Everywhere you look, someone is promising that machine intelligence will close the talent gap, eliminate manual processes, and run an entire security program unaided. The irony is that even while organizations adopt AI, their trust in it remains limited. Surveys show that while 72% of organizations are using AI to improve operations, only a small number are actually willing to let AI take unilateral action in security-critical environments. That tension between the zeal to adopt and the reluctance to relinquish control is the defining story of AI in cybersecurity right now.

This is not an irrational tension, either.

Cybersecurity teams are living with the operational reality that mistakes have very real financial and reputational consequences, not just theoretical ones. They know AI is accelerating the pace and precision of attacker activity. In fact, nearly half (45%) of organizations report facing AI-enhanced attacks, and this escalating arms race has created an uncomfortable paradox. That is, defenders are feeling pressured to automate more, even as their confidence in automation lags the marketing narratives circulating around them.

As a result, what’s emerging is not a question of whether AI is good or bad, but whether we are asking it to do the right things. Many organizations still cling to the belief that the endgame for AI is full autonomy – an AI-driven Security Operations Center (SOC) where tickets resolve themselves and vulnerabilities patch without human inspection. The real friction, however, is that no one wants to live in that world. Not CISOs. Not frontline analysts. Not the boards pushing for ‘more AI.’ Responsible security doesn’t begin with blind trust but rather clarity, governance, and oversight.

The tension between the zeal to adopt AI and the reluctance to relinquish control is the defining story of AI in cybersecurity right now.

Dan Pagel, CEO, Brinqa

Human Oversight: Key to Secure AI in Cybersecurity

The truth is that AI’s most valuable, most defensible role in cybersecurity today does not involve replacing human judgment, but magnifying it. For example, look at how teams use AI when the hype is stripped away. They’re applying it to triage, correlation, investigative context-building – all areas where the volume of data far exceeds human bandwidth. Analysts who are drowning in an average of 960 alerts per day don’t need a robot to ‘take over;’ they need an assistant that elevates the 20 alerts that matter the most in that very moment and discards the noise without demanding their attention. They need AI that accelerates their expertise, not something that overrides it.

Keeping Humans in Charge of Machines

However, if you talk to practitioners, what you hear isn’t a fear of technology but rather the fatigue of unrealistic expectations. Security teams have spent the last decade being told that the next platform, the next automation layer, the next algorithm would finally be the one thing that delivers the efficiency and clarity they’ve been searching for. Instead, what they often receive are systems that are powerful but opaque, impressive but poorly integrated, fast but fragile when placed in the messy, imperfect reality of enterprise environments. AI introduces that same duality at an even larger scale and at a far faster speed.

Security teams don’t need a robot to ‘take over’; they need an assistant that elevates the alerts that matter and discards the noise.

Today’s teams want tools that fit into the workflows they’ve refined and battle-tested, not ones that force them to rethink how their entire SOC operates. They want insights that are traceable, explainable, and defensible – not recommendations that arrive with no audit trail or no way to verify how the model reached its conclusion. Just as importantly, they want technology that can seamlessly consolidate and integrate their data, thus reducing the overload of fragmented tools and disconnected signals. When practitioners push back on autonomy, it’s not because they lack imagination; it’s because they know that accountability, traceability, and context are what make security programs work.

In other words, AI must remain accountable to human operators, not the other way around.

When you dig deeper, it’s apparent that organizations already know this. Despite rapid adoption, only 37% have formal processes in place to evaluate the security of AI systems before they deploy them, and fewer still have strong guardrails around autonomy. That isn’t a rejection of AI per se, but rather recognition that AI in its current form (and in most current implementations) still lacks the transparency, predictability, and explainability required for high-stakes decision-making. In other words, AI must remain accountable to human operators, not the other way around.

There’s also another element often lost in the narrative: cybersecurity is not simply a technical discipline; it is a business discipline rooted in context. No model, no matter how advanced, can fully understand the nuances of business risk unless it is fed rich, accurate, complete data. And much of that data is still missing, mislabeled, or inconsistent in most companies. This is where AI can be most transformative: not by acting independently, but by enriching and repairing the data that informs human decision-making. If AI can make context whole, humans can make decisions that are risk-aligned.

“This is where AI can be most transformative: not by acting independently, but by enriching and repairing the data that informs human decision-making.”

Conclusion

The real opportunity in front of us isn’t simply to push for more autonomy but to build security programs where humans and AI operate in sync – where AI handles the scale and speed, and humans handle the interpretation and judgment. This hybrid approach is not a halfway measure or a compromise, but a recognition of the strengths and limitations of both sides, and a pragmatic blueprint for where the industry can responsibly go next.

The narrative around AI in cybersecurity doesn’t need more promises of a fully automated future. What it needs is a clear-eyed understanding of the present. One that acknowledges AI is powerful but also understands it is not magic. Yes, it can augment judgment, but it cannot substitute for it. And its greatest enterprise value today lies not in autonomy, but in amplification. If we can get that balance right, AI won’t replace defenders but will make them exponentially more effective and businesses more secure.

Dan Pagel
Chief Executive Officer & Board Director
Dan Pagel leads Brinqa with a focused mission: empowering enterprises to align technology with business risk and protect what matters most. With over 15 years of leadership experience in cybersecurity and enterprise software, Dan has a proven track record of building customer-centric organizations, fostering innovation, and driving growth.
