Is Cisco the New Foundation for Secure AI Infrastructure?

Matilda Bailey is a distinguished networking specialist whose work sits at the intersection of cellular technology, wireless innovation, and next-generation infrastructure. With a career dedicated to deciphering how data moves across global systems, she has become a leading voice on how traditional network architectures must evolve to support the “agentic era” of artificial intelligence. In this conversation, she explores how the integration of observability and identity management is transforming the network from a passive pipe into a sophisticated control plane for autonomous AI.

The discussion focuses on the critical shift from deterministic monitoring to continuous AI assurance, the security challenges of non-human identities, and the strategic importance of building a neutral, multi-model infrastructure. By examining the integration of tools like Galileo and Astrix into the enterprise stack, we uncover how large organizations can bridge the gap between experimental AI projects and rigorous, production-grade IT operations.

Traditional observability was built for deterministic artifacts such as packets and virtual machines, but AI behavior is probabilistic and context-dependent. How does transitioning to continuous assurance loops help enterprises detect hallucinations in real time, and what specific steps allow AI agents to be treated as first-class production services?

Transitioning to continuous assurance loops fundamentally changes the relationship between the operator and the system because it moves beyond simple “up or down” status checks. In a probabilistic environment, we use specialized evaluation metrics—such as hallucination detection, context adherence, and attribution—to verify that an agent’s output aligns with its intended purpose in real time. To treat these agents as first-class production services, enterprises must integrate them into existing incident workflows and attach specific service-level objectives to their performance. This involves moving AI monitoring out of isolated “black box” side projects and into a unified plane where model quality is measured as rigorously as network latency. By doing so, a business can intervene the moment a model begins to drift or fabricate information, ensuring the agent remains a reliable component of the operational fabric.
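
To make the idea concrete, here is a minimal sketch of what attaching a service-level objective to an agent's evaluation scores might look like. It uses only the Python standard library; the score fields are hypothetical stand-ins for whatever evaluation backend (Galileo, for instance) actually produces them, and the incident hook is a placeholder for a real paging workflow.

```python
# A minimal sketch of a continuous assurance loop. Scores are assumed to
# come from an upstream evaluator; the thresholds and window size are
# illustrative, not recommended values.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AssuranceRecord:
    prompt: str
    response: str
    hallucination_score: float   # 0.0 = fully grounded, 1.0 = fabricated
    context_adherence: float     # 1.0 = fully on-context

@dataclass
class AgentSLO:
    """A service-level objective attached to an AI agent, mirroring how
    latency SLOs are attached to conventional network services."""
    max_hallucination_rate: float = 0.02
    min_context_adherence: float = 0.90
    window: list = field(default_factory=list)

    def record(self, rec: AssuranceRecord) -> None:
        self.window.append(rec)
        self.window = self.window[-500:]   # rolling window catches drift early

    def breached(self) -> bool:
        if not self.window:
            return False
        rate = mean(r.hallucination_score for r in self.window)
        adherence = mean(r.context_adherence for r in self.window)
        return rate > self.max_hallucination_rate or adherence < self.min_context_adherence

def open_incident(agent_id: str) -> None:
    # In production this would feed the same incident queue as a network
    # outage; printing keeps the sketch self-contained.
    print(f"SLO breach for agent {agent_id}: routing to incident workflow")

# Usage: score each agent turn, then evaluate the SLO continuously.
slo = AgentSLO()
slo.record(AssuranceRecord("What is our refund policy?",
                           "Refunds are issued within 30 days.",
                           hallucination_score=0.01,
                           context_adherence=0.97))
if slo.breached():
    open_incident("contact-center-agent-7")
```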

Autonomous agents rely heavily on a complex web of API keys and service accounts that often carry opaque permissions. What are the best practices for inventorying these non-human identities, and how can a zero-trust policy effectively remediate overprivileged access before it becomes a security exploit?

The first step in securing the agentic era is a comprehensive discovery phase where every API key, service account, and SaaS integration is mapped to its specific function and owner. We are seeing a massive surge in these non-human identities, and the best practice is to move toward a zero-trust model where the network enforces policy based on the identity of the entity rather than just its location on a subnet. This allows security teams to detect “toxic combinations” of permissions—where an agent might have unnecessary access to sensitive data stores—and remediate that overprivileged access automatically. By aligning identity governance with the network layer, you create a safety rail that constrains what an autonomous agent can actually do, effectively preventing a compromised credential from turning into a full-scale data leak.
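
A short sketch of that discovery-and-remediation flow follows. The identity records, permission names, and “toxic pair” are all illustrative; a real deployment would pull this inventory from a platform such as Astrix or a cloud IAM API rather than hand-built records.

```python
# A hedged sketch of non-human identity auditing: map each credential to
# an owner, flag permissions beyond its function, and detect toxic
# combinations. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class NonHumanIdentity:
    name: str
    owner: str            # every key must map to an accountable owner
    granted: set[str]     # permissions the credential actually holds
    required: set[str]    # permissions its function actually needs

# Permission pairs that together enable exfiltration (illustrative).
TOXIC_PAIRS = [
    ({"read:customer_pii"}, {"write:external_webhook"}),
]

def audit(identity: NonHumanIdentity) -> dict:
    overprivileged = identity.granted - identity.required
    toxic = any(a <= identity.granted and b <= identity.granted
                for a, b in TOXIC_PAIRS)
    return {"identity": identity.name,
            "revoke": sorted(overprivileged),   # least-privilege remediation
            "toxic_combination": toxic}

svc = NonHumanIdentity(
    name="invoice-agent-key",
    owner="finance-platform-team",
    granted={"read:invoices", "read:customer_pii", "write:external_webhook"},
    required={"read:invoices"},
)
print(audit(svc))
```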

Large organizations often find their AI projects operating as “unmanaged risk islands” disconnected from the core network. How does linking AI-driven incident signals directly to network and application layers change how teams enforce service-level objectives, and what evidence is most convincing to auditors during a compliance review?

Linking these signals allows IT leaders to see the entire lifecycle of a transaction, from the moment a user provides a prompt to the final execution on the network. When an AI incident occurs, such as a failure in logic or a breach of a guardrail, having it surfaced alongside network and application signals means the root cause can be identified across the entire stack. This integration is vital for compliance because it provides auditors with a clear, traceable path of governance, showing exactly how an agent was identified, what it accessed, and how its behavior was monitored. Providing evidence of consistent risk controls and automated incident responses transforms AI from an experimental outlier into a governed corporate asset that meets the same rigorous standards as any other enterprise application.
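
The auditable chain described here can be pictured as a single correlation ID linking model-layer and network-layer events. The sketch below shows one way to emit such a trace; the field names and event schema are assumptions for illustration, not any specific product's format.

```python
# A minimal sketch of a cross-stack audit trace: one correlation ID ties
# the guardrail verdict at the model layer to the network events it
# triggered, giving auditors a traceable path of governance.
import json
import uuid
from datetime import datetime, timezone

def trace_event(correlation_id: str, layer: str, detail: dict) -> dict:
    return {"correlation_id": correlation_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "layer": layer,      # "model", "application", or "network"
            "detail": detail}

cid = str(uuid.uuid4())
audit_log = [
    trace_event(cid, "model",
                {"agent": "claims-agent", "guardrail": "pii-filter",
                 "verdict": "blocked"}),
    trace_event(cid, "network",
                {"src": "claims-agent", "dst": "records-db:5432",
                 "policy": "deny-unclassified-egress", "action": "dropped"}),
]
# Structured JSON output is the kind of evidence a compliance review
# can replay end to end.
print(json.dumps(audit_log, indent=2))
```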

Many vendors are currently adding minor AI features to their products rather than addressing foundational issues like trust and governance. What are the strategic advantages of building a neutral observability layer that supports multiple model ecosystems, and how does this control-plane approach help businesses scale without constant re-architecting?

The primary advantage of a neutral observability layer is that it future-proofs the enterprise against “model lock-in” by providing a consistent interface for OpenAI, Anthropic, Azure, and AWS Bedrock simultaneously. Most large organizations are unlikely to standardize on a single model provider, so they need a control plane that can follow their AI workloads regardless of where they run. This approach allows a company to scale its AI initiatives without having to re-architect its security or monitoring stack every time a new, more efficient model hits the market. By focusing on the “hard” problems of trust and governance at the infrastructure level, businesses can treat different models as interchangeable components while maintaining a single, unified assurance and control plane.
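
One way to read “neutral layer” in code is a single contract that the assurance and control plane programs against, with a thin adapter per provider. The adapter bodies below are stubs standing in for vendor SDK calls; the class and function names are my own illustration, not a Cisco or vendor API.

```python
# A sketch of a provider-neutral control plane: policy, logging, and
# evaluation happen once in governed_call(), so swapping model providers
# never forces a re-architecture of the monitoring or security stack.
from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """Provider-neutral contract the control plane depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend(ModelBackend):
    def complete(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"   # stub for the vendor SDK call

class BedrockBackend(ModelBackend):
    def complete(self, prompt: str) -> str:
        return f"[bedrock] response to: {prompt}"  # stub for the vendor SDK call

def governed_call(backend: ModelBackend, prompt: str) -> str:
    response = backend.complete(prompt)
    # ... hallucination scoring, audit logging, SLO accounting would sit here,
    # written once against the neutral interface rather than per provider ...
    return response

# Usage: models become interchangeable components behind one control plane.
for backend in (OpenAIBackend(), BedrockBackend()):
    print(governed_call(backend, "Summarize the change ticket."))
```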

When AI agents begin acting on behalf of users and devices across the entire network fabric, the definition of “secure networking” must evolve. What specific workflows are required to bridge the gap between IT operations and line-of-business AI projects, and which metrics best track the success of this integration?

To bridge this gap, we must implement workflows that pull line-of-business AI projects into the existing operational fabric, specifically by aligning AI identity management with the broader zero-trust stack. This means that when a business unit launches a new contact center agent, the IT team is automatically involved in mapping its permissions and setting up its observability parameters within the standard dashboard. Success is best tracked through metrics that measure the “time to remediation” for AI-specific incidents and the percentage of agents that are fully integrated into the corporate identity-governance framework. When you can trace an AI-driven event across the network and model layers with the same visibility as a standard web request, you know the integration is working.
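
The two success metrics named here are easy to compute once incidents and agent inventory live in one place. Below is a small sketch with made-up records; the timestamps, agent names, and enrollment flags are purely illustrative.

```python
# Computing the two integration metrics from the answer above:
# mean time to remediation for AI-specific incidents, and the share of
# agents enrolled in the identity-governance framework.
from datetime import datetime, timedelta

incidents = [  # (detected, remediated) pairs for AI-specific incidents
    (datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 6, 9, 42)),
    (datetime(2025, 1, 8, 14, 5), datetime(2025, 1, 8, 15, 30)),
]
agents = {"contact-center": True, "invoice-bot": True, "shadow-poc": False}
# True = enrolled in the corporate identity-governance framework

mttr = sum(((done - found) for found, done in incidents), timedelta()) / len(incidents)
coverage = 100 * sum(agents.values()) / len(agents)

print(f"Mean time to remediation: {mttr}")
print(f"Governance coverage: {coverage:.0f}% of agents")
```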

What is your forecast for AI infrastructure?

I believe we are entering a phase where the intelligence of the model will be secondary to the integrity of the fabric it runs on. In the near future, the most successful enterprises won’t be those with the largest models, but those with the most “AI-ready” networks—systems that are inherently identity-aware and capable of providing continuous assurance. We will see the network evolve into a high-performance, policy-enforcing substrate that not only carries AI traffic but actively governs the behavior of every agent acting upon it. Ultimately, the winners in this space will be the providers who solve the systemic problems of trust and safety, enabling businesses to deploy autonomous intelligence at scale without sacrificing security.
