How Will IBM’s New AI Agents Stop Agentic Cyber Attacks?

Matilda Bailey is a distinguished networking specialist who has spent her career at the intersection of next-generation wireless solutions and autonomous systems. As enterprises grapple with an increasingly volatile digital landscape, her expertise in how specialized AI agents navigate complex IT estates offers a crucial perspective for modern defense. In this conversation, we explore the transition from manual security workflows to machine-speed architectures, focusing on the emergence of frontier models and the shift toward automated remediation in an era of high-velocity cyberattacks.

As security programs transition toward autonomous architectures, how do specialized agents coordinate to detect and remediate threats at machine speed? Please describe the workflow for analyzing runtime environments and provide examples of how this coordination reduces the window of exposure for high-velocity attacks.

In an autonomous architecture, coordination happens through a multi-agent system where specialized entities act in concert to close the gap between detection and resolution. These agents are designed to analyze software exposures and runtime environments simultaneously, mapping out potential exploit paths before a human analyst could even open a ticket. When an anomaly is detected, the system doesn’t just alert; it investigates the origin, recommends a specific response, and executes the fix directly across the security stack. By enforcing security policies automatically across fragmented tools, we see a dramatic reduction in the exposure window, effectively containing high-velocity attacks that would otherwise overwhelm traditional, manual workflows.
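The detect-investigate-remediate handoff described above can be sketched as a minimal multi-agent pipeline. This is an illustrative toy, not IBM's implementation: the agent classes, the event tuples, and the "outbound_beacon" signal are all assumptions made up for the example.

```python
from dataclasses import dataclass

# Hypothetical event and verdict types, for illustration only.
@dataclass
class Anomaly:
    host: str
    signal: str

@dataclass
class Verdict:
    origin: str
    action: str

class DetectionAgent:
    """Flags runtime signals that deviate from an expected baseline."""
    BASELINE = {"ssh_login", "cron_run"}  # assumed "normal" signals

    def detect(self, events):
        return [Anomaly(h, s) for h, s in events if s not in self.BASELINE]

class InvestigationAgent:
    """Traces a flagged anomaly to a probable origin and picks a response."""
    def investigate(self, anomaly):
        if anomaly.signal == "outbound_beacon":
            return Verdict(origin=anomaly.host, action="isolate_host")
        return Verdict(origin=anomaly.host, action="open_review")

class RemediationAgent:
    """Executes the recommended fix across the (simulated) security stack."""
    def __init__(self):
        self.applied = []

    def execute(self, verdict):
        self.applied.append((verdict.origin, verdict.action))
        return verdict.action

def run_pipeline(events):
    detector, investigator, responder = (
        DetectionAgent(), InvestigationAgent(), RemediationAgent())
    # No human ticket in the loop: each anomaly flows straight to a fix.
    return [responder.execute(investigator.investigate(a))
            for a in detector.detect(events)]

actions = run_pipeline([("web01", "ssh_login"), ("db02", "outbound_beacon")])
```

Here the beaconing host is isolated in the same pass that detected it, which is the point of the architecture: the exposure window shrinks to the pipeline's execution time rather than an analyst's queue.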

Frontier models significantly lower the cost and expertise required for sophisticated cyberattacks. What specific indicators identify AI-specific exposures during a threat assessment, and what practical steps should a team take to prioritize mitigation when traditional software patches are unavailable?

When we conduct deep-visibility assessments, we look for AI-specific exposures such as policy weaknesses and exploitable attack paths that frontier models are particularly adept at finding. These models represent a step-change in offensive capability, often targeting the “gray areas” of an IT estate where security gaps haven’t been codified or documented. If a traditional software patch isn’t available, the priority shifts to implementing interim safeguards, such as tightening authentication controls or isolating affected runtime environments. It is vital to feed these insights directly into governance systems, ensuring that even without a permanent fix, the business can maintain a compliant and resilient posture against machine-led disruption.
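One way to make that prioritization concrete is to rank exposures by the factors that dominate real-world risk and map each exposure class to an interim safeguard. The scoring rule, exposure records, and control mapping below are hypothetical placeholders, not a published methodology; a real program would weight these against its own risk framework.

```python
# Hypothetical exposure inventory; fields are assumptions for the sketch.
EXPOSURES = [
    {"id": "EXP-1", "kind": "weak_auth",  "internet_facing": True,  "exploit_path": True},
    {"id": "EXP-2", "kind": "policy_gap", "internet_facing": False, "exploit_path": True},
    {"id": "EXP-3", "kind": "weak_auth",  "internet_facing": False, "exploit_path": False},
]

# Interim safeguards to apply when no vendor patch exists.
INTERIM_CONTROLS = {
    "weak_auth":  "tighten authentication (MFA, credential rotation)",
    "policy_gap": "isolate the affected runtime environment",
}

def priority(exposure):
    # Internet exposure and a confirmed exploit path dominate the ranking.
    return (exposure["internet_facing"], exposure["exploit_path"])

def mitigation_plan(exposures):
    """Return (exposure id, interim safeguard) pairs, highest risk first."""
    ranked = sorted(exposures, key=priority, reverse=True)
    return [(e["id"], INTERIM_CONTROLS[e["kind"]]) for e in ranked]

plan = mitigation_plan(EXPOSURES)
```

The internet-facing exposure with a live exploit path lands at the top of the plan, which matches the intuition in the answer: when you cannot patch, you contain the most reachable path first.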

Exploitations of public-facing applications are increasing due to AI-driven vulnerability discovery and weak authentication. How can organizations effectively codify complex IT estates to remove these attack paths, and what metrics best demonstrate the success of shifting from manual to automated security processes?

The reality is that we’ve seen a 44% increase in attacks targeting public-facing applications, largely because attackers use AI to scan for missing authentication controls faster than we can patch them. To counter this, organizations must move away from fragmented tools and toward a unified intelligence layer that treats the entire network as a single, codifiable entity. Success in this transition is best measured by the drastic reduction in “time-to-contain” and the volume of threats mitigated without human intervention. When your metrics show that the majority of anomalies are resolved at the edge before they can escalate into business disruptions, you know the shift to automation is working.
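The two success metrics named above, time-to-contain and the share of threats resolved without human intervention, are straightforward to compute from incident records. The record schema below is invented for illustration; only the metric definitions come from the answer.

```python
from datetime import datetime
from statistics import median

# Illustrative incident log; the fields are assumptions, not a real schema.
incidents = [
    {"detected": datetime(2025, 1, 1, 9, 0),
     "contained": datetime(2025, 1, 1, 9, 4),  "auto": True},
    {"detected": datetime(2025, 1, 1, 10, 0),
     "contained": datetime(2025, 1, 1, 10, 2), "auto": True},
    {"detected": datetime(2025, 1, 2, 14, 0),
     "contained": datetime(2025, 1, 2, 15, 30), "auto": False},
]

def median_time_to_contain(records):
    """Median minutes from detection to containment."""
    return median((r["contained"] - r["detected"]).total_seconds() / 60
                  for r in records)

def autonomous_rate(records):
    """Fraction of incidents contained with no human intervention."""
    return sum(r["auto"] for r in records) / len(records)

ttc = median_time_to_contain(incidents)   # minutes
auto = autonomous_rate(incidents)         # 0.0 .. 1.0
```

A median is used rather than a mean so that one slow, human-handled incident does not mask how fast the automated majority is being contained; tracking both numbers over time shows whether the shift to automation is actually taking hold.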

Network intelligence now relies on pre-trained models to interpret telemetry, alarms, and time-series data. How do these tools develop a root-cause hypothesis for hidden issues, and what are the procedural challenges when integrating these automated remediation suggestions into existing governance and risk systems?

Modern network intelligence tools consume a massive diet of telemetry, flow data, and time-series alarms to identify the subtle signs of early degradation that a human might overlook. By learning the baseline of a specific network design, these pre-trained models can reason through complex data sets to generate a probable root-cause hypothesis for hidden issues. The main procedural challenge lies in trust and integration—specifically, ensuring that automated remediation suggestions align with established risk management frameworks. Bridging this gap requires a seamless flow of data where AI-generated insights provide decision support that is both actionable for the security team and transparent for the compliance officers.
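At its simplest, "learning the baseline" of a network and ranking candidate root causes can be sketched as a deviation score over per-interface telemetry. Real products use far richer pre-trained models; the z-score approach, interface names, and latency figures here are assumptions chosen to keep the sketch self-contained.

```python
from statistics import mean, stdev

# Hypothetical telemetry: per-interface latency samples (ms) from a healthy
# baseline window, plus one current observation per interface.
baseline = {"eth0": [10, 11, 9, 10, 10], "eth1": [20, 21, 19, 20, 20]}
current = {"eth0": 10.5, "eth1": 45.0}

def z_score(series, value):
    """How many baseline standard deviations `value` sits from the mean."""
    mu, sigma = mean(series), stdev(series)
    return (value - mu) / sigma

def root_cause_hypothesis(baseline, current, threshold=3.0):
    """Rank interfaces whose current telemetry deviates sharply from baseline."""
    scores = {k: z_score(baseline[k], current[k]) for k in baseline}
    suspects = [k for k, z in scores.items() if abs(z) > threshold]
    # Strongest deviation first: the most probable root-cause candidate.
    return sorted(suspects, key=lambda k: -abs(scores[k]))

suspects = root_cause_hypothesis(baseline, current)
```

The output is a ranked hypothesis, not a verdict, which mirrors the governance challenge in the answer: a suggestion like "eth1 is degrading" is decision support that still has to be routed through the organization's risk and change-management process before remediation runs.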

What is your forecast for agentic attacks?

I anticipate that agentic attacks will become the baseline for cybercrime, moving from experimental scripts to fully autonomous, self-optimizing entities that can pivot through a network in seconds. We are entering an era where the “human in the loop” becomes a bottleneck, and organizations that fail to adopt their own coordinated AI agents will find themselves defending with a shield of paper against a digital wildfire. My forecast is that by 2026, the primary battleground of cybersecurity will not be between hackers and defenders, but between competing autonomous models, making machine-speed response the only viable path to survival.
