Matilda Bailey is a preeminent figure in the networking sphere, specializing in the complex intersection of hardware procurement and next-generation architectural shifts. With the industry currently navigating a volatile mix of unprecedented demand and chronic supply chain fragility, her perspective provides a vital roadmap for understanding how enterprise giants are reshaping their long-term strategies. In this discussion, we explore the move toward high-speed optical innovations, the escalating financial stakes of the AI revolution, and the tactical adjustments required to survive a global component shortage that shows no signs of immediate cooling.
Global shortages in wafers, silicon chips, and memory are now projected to persist for up to two years. How are multiyear purchase commitments altering your procurement strategies, and what specific steps are being taken to mitigate the impact of rising component costs on long-term growth targets?
The current climate is a paradox: we are witnessing the strongest demand in a generation, yet it is met with an “opposite tail” of supply constraints that are truly industry-wide. We have moved far beyond a simple shortage of memory; we are now seeing bottlenecks in every wafer fabrication facility, affecting silicon chips, CPUs, and even specialized optics. To navigate this, the operations team has shifted from tactical ordering to deep, multiyear purchase commitments with our vendors, ensuring we aren’t left behind as lead times stretch into 2026. Locking in supply for the long term is essential because these shortages are not a quarterly blip; they are a one-to-two-year phenomenon that requires us to be much more aggressive in our procurement. Despite the elevated cost of securing components in such a tight market, we are still pushing our growth forecast to 27.7%, aiming for a full-year 2026 revenue target of $11.5 billion.
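For readers who want to check the arithmetic behind these figures, here is a minimal sketch; the prior-year revenue base is inferred from the stated growth rate and target rather than quoted in the interview.

```python
# Back-of-the-envelope check of the growth figures cited above.
# The prior-year revenue base is inferred from the stated numbers,
# not given in the interview.

growth_rate = 0.277    # stated full-year growth forecast
target_2026 = 11.5e9   # stated full-year 2026 revenue target (USD)

implied_base = target_2026 / (1 + growth_rate)
print(f"Implied prior-year revenue: ${implied_base / 1e9:.2f}B")
# -> Implied prior-year revenue: $9.01B
```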
AI revenue is expected to double, reaching a target of $3.5 billion by 2026. What internal infrastructure shifts are necessary to support this scale, and can you share an anecdote or metric regarding how you are managing demand that currently outstrips available supply?
Scaling to a $3.5 billion AI revenue target requires a fundamental reimagining of how we handle product volume, especially when our Q1 revenue has already surged by 35.1% to reach $2.71 billion. The sheer pressure of demand outstripping supply means that every single chip and wafer is being fought over, creating a high-stakes environment for our engineering and logistics teams. I often look at our current situation as a race where the track is constantly being extended; we are seeing the best demand in Arista’s history, yet we have to be incredibly diligent in how we allocate limited resources to keep our long-term targets on track. We are effectively doubling our AI sales expectations in a single year, which necessitates a more robust engagement with our entire vendor ecosystem to ensure that the infrastructure can keep up with the appetite of our enterprise customers. It is a grueling process of constant negotiation and forward-planning, but it is the only way to bridge the gap between our current capacity and that $11.5 billion vision for the future.
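A similar back-of-the-envelope sketch shows what these figures imply; the current AI revenue and the prior-year Q1 base below are derived from the stated numbers, not quoted directly in the interview.

```python
# Illustrative arithmetic behind the figures in this answer; the
# prior-period values are implied by the stated numbers, not quoted.

ai_target_2026 = 3.5e9               # stated 2026 AI revenue target (USD)
implied_ai_now = ai_target_2026 / 2  # "doubling in a single year"

q1_revenue = 2.71e9   # stated Q1 revenue (USD)
q1_growth = 0.351     # stated year-over-year Q1 growth
implied_q1_prior = q1_revenue / (1 + q1_growth)

print(f"Implied current AI revenue: ${implied_ai_now / 1e9:.2f}B")
print(f"Implied prior-year Q1 base: ${implied_q1_prior / 1e9:.2f}B")
# -> Implied current AI revenue: $1.75B
# -> Implied prior-year Q1 base: $2.01B
```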
Extended pluggable optics (XPO) are being positioned as a decade-long solution, unlike more experimental co-packaged optics. How does XPO’s 12.8 terabits of per-module throughput change rack density requirements, and what are the technical steps involved in cooling modules that draw up to 400 watts?
The introduction of XPO is a pivotal moment because it moves us away from “science experiments” like co-packaged optics, which remain largely proprietary and fragmented, toward a standardized solution with a ten-year runway. By delivering 12.8 terabits per pluggable module, XPO lets us achieve an unprecedented rack density of 204.8 terabits per OCP rack unit, a massive leap forward for high-speed data environments. Managing the heat is the primary hurdle: these modules can draw up to 400 watts, which requires integrating specialized cold plates to maintain operational stability. This transition to liquid cooling at the 1.6T and 3.2T levels is the only way we can support the massive scale-up racks that modern AI workloads demand. While we expect to embrace open co-packaged optics eventually, the universality of pluggable optics, which currently represent 99% of the market, means that XPO will be the dominant influence on networking architecture for the next decade.
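The module count and aggregate optics power implied by these figures can be worked out directly; a minimal sketch follows, where both derived values (16 modules and 6.4 kW per rack unit) are inferences from the stated numbers rather than figures quoted in the interview.

```python
# Rough rack-level math implied by the XPO figures above. The module
# count and aggregate optics power are derived from the stated numbers,
# not quoted directly in the interview.

tbps_per_module = 12.8      # stated throughput per pluggable module
tbps_per_rack_unit = 204.8  # stated density per OCP rack unit
watts_per_module = 400      # stated worst-case module power draw

modules_per_ru = tbps_per_rack_unit / tbps_per_module    # -> 16 modules
optics_power_per_ru = modules_per_ru * watts_per_module  # -> 6400 W

print(f"Modules per OCP rack unit:  {modules_per_ru:.0f}")
print(f"Optics power per rack unit: {optics_power_per_ru / 1000:.1f} kW")
# At several kilowatts of optics power alone per rack unit, air cooling
# becomes impractical, which is why the answer above points to cold
# plates and liquid cooling.
```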
Enterprise AI is beginning to shift from massive training clusters toward inference-based models and agentic AI. Why is the transition toward high-performance CPUs becoming more viable for these specific use cases, and what does the “calm before the storm” look like for edge computing deployments?
We are in the very early stages of a transition where AI becomes more distributed, moving away from centralized training that is almost entirely dependent on GPUs. In this “calm before the storm,” we are seeing inference-based clusters and agentic AI applications that manage smaller sets of parameters and tokens, which reduces the need for high-end GPUs in every instance. High-performance CPUs become viable here because they provide the flexibility these agentic use cases need, enabling more efficient edge deployments where localized processing is key. While we are not yet seeing “super big” deployments in this category, the trials we are observing suggest that in a couple of years the landscape will look far more varied. This period of relative quiet is actually an intense preparation phase, as enterprises figure out how to distribute their AI workloads across a broader range of compute resources beyond the massive training centers.
What is your forecast for the future of AI networking?
I believe we are entering a decade where the innovation in pluggable optics and high-performance compute will allow AI to move from experimental silos into the very fabric of the enterprise. We will see a shift where the network is no longer just a pipe but a critical component of the “scale-up” rack, utilizing technologies like XPO to handle throughput that was previously thought impossible. The supply chain constraints will eventually ease over the next two years, but the architectural changes we are making today to accommodate 400-watt modules and inference-heavy workloads will set the standard for the next ten years. Ultimately, AI networking will become more decentralized and efficient, moving beyond the “GPU-only” mindset to a more balanced ecosystem of high-performance compute and universal, open optical interfaces.
