Matilda Bailey is a distinguished networking specialist whose work sits at the intersection of infrastructure modernization and next-generation cellular solutions. With a career dedicated to unraveling the complexities of large-scale enterprise environments, she has become a leading voice on how legacy carriers can pivot toward agile, AI-driven architectures. In this conversation, we explore the monumental task of dismantling decades of structural debt to build a unified, automated future for global connectivity.
The discussion centers on the strategic overhaul of fragmented systems, moving from manual, error-prone workflows to high-velocity planning powered by digital twins. We delve into the critical role of data integrity in reducing resolution times, the cultural shifts required to embrace automation, and the bold strategies used to migrate enterprise clients off aging mainframe infrastructure.
Integrating nearly 500 disparate data sources from decades of acquisitions is a massive undertaking. How did you design a unified data layer to link physical equipment with customer revenue, and what specific data objects proved most critical for building a functional digital twin of the ecosystem?
The architecture of this unified data layer was less about a traditional network refresh and more about solving a massive data-silo problem. We had to ingest nearly 500 different data sources into a common platform so that every system finally spoke the same language. The most critical data objects we built were those that bridged the gap between the physical hardware and the financial balance sheet, specifically linking network elements to customer services and revenue data. By creating these relationships, we developed a digital twin that shows us exactly which customers are riding on a specific piece of 40-year-old hardware. For the engineers, the effect is almost tactile: they can now visualize the entire ecosystem, from a single fiber route to the $12.4 billion in annual revenue it supports, all within a single pane of glass.
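To make the linkage concrete, here is a minimal sketch of what such bridging data objects might look like. The class names, fields, and figures are illustrative assumptions; the interview does not describe the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerService:
    """A revenue-bearing service riding on one or more network elements."""
    service_id: str
    customer: str
    annual_revenue_usd: float

@dataclass
class NetworkElement:
    """A physical asset in the unified inventory, e.g. a decades-old switch."""
    element_id: str
    site: str
    install_year: int
    services: list[CustomerService] = field(default_factory=list)

def revenue_at_risk(element: NetworkElement) -> float:
    """The digital-twin question: how much annual revenue rides on this
    single piece of hardware?"""
    return sum(s.annual_revenue_usd for s in element.services)

# Example: a 1985-era switch carrying two enterprise services.
switch = NetworkElement("NE-1042", "Dallas-CO-3", 1985)
switch.services.append(CustomerService("SVC-1", "Acme Corp", 250_000.0))
switch.services.append(CustomerService("SVC-2", "Globex", 410_000.0))
print(revenue_at_risk(switch))  # 660000.0
```

The design point is the relationship itself: once hardware and revenue live in one model, "which customers sit on this box" becomes a one-line query instead of a cross-system investigation.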
Transitioning from manual queries across 50 separate systems to a single workflow interface is a significant shift. Can you walk through the step-by-step process engineers now use to decommission equipment and explain how this change achieved an eightfold increase in planning throughput?
In the past, decommissioning a single piece of equipment was a grueling sequential process in which an engineer had to manually update records across 50 disconnected systems, and one small error could derail the downstream migration. Now, using our proprietary tool NetPal, an engineer simply identifies the target equipment in the interface, and the system instantly surfaces the traffic patterns, affected customers, and even the projected energy savings. The tool then maps out the consolidation path and automatically flags the necessary inventory updates for service assurance and the NOC. This streamlined automation is exactly why the planning team's output has surged by more than 8x: it transforms a process that used to take weeks or months into a task that can be finalized in a matter of minutes.
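NetPal's internals are not public, but the shape of the workflow she describes can be sketched as a single impact-analysis pass over the unified inventory. Every function name, field, and conversion below is an assumption made for illustration.

```python
from dataclasses import dataclass

@dataclass
class DecommissionPlan:
    element_id: str
    affected_customers: list[str]
    peak_traffic_gbps: float
    projected_annual_savings_kwh: float
    inventory_updates: list[str]

def plan_decommission(inventory: dict[str, dict], element_id: str) -> DecommissionPlan:
    """One pass over the unified inventory replaces manual queries across
    50 disconnected systems: surface the impact, then flag every record
    that must change downstream."""
    record = inventory[element_id]
    return DecommissionPlan(
        element_id=element_id,
        affected_customers=record["customers"],
        peak_traffic_gbps=record["peak_traffic_gbps"],
        # Convert the element's steady-state power draw into annual savings.
        projected_annual_savings_kwh=record["power_draw_kw"] * 24 * 365,
        # Service assurance and the NOC are flagged automatically rather
        # than updated by hand in separate systems.
        inventory_updates=[
            f"retire:{element_id}",
            f"notify:service_assurance:{element_id}",
            f"notify:noc:{element_id}",
        ],
    )
```

The throughput gain falls out of the structure: the sequential, error-prone fan-out across systems collapses into one deterministic function over a single source of truth.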
Mean time to resolution for outages can drop from hours to 15 minutes when technicians have accurate inventory data. Beyond speed, how has real-time visibility improved first-time-right rates for field dispatches, and what specific metrics best capture the resulting impact on service delivery?
Real-time visibility has fundamentally changed the “truck roll” dynamic by ensuring that when a technician arrives on-site, they aren’t wasting the first hour reconciling conflicting records. Because our inventory data is now accurate and unified, our first-time-right rates have improved significantly, which directly reduces the need for repeat dispatches. The most telling metric of this success is the drop in mean time to resolution from several hours down to just 15 minutes for certain outages. We are also seeing the impact in our Rapid Routes product, where AI-assisted pre-provisioning allows us to deliver 400G wavelength services at speeds that were previously impossible. This reliability creates a tangible sense of trust with our enterprise clients, as they see service delivery move at the pace of modern software rather than legacy hardware.
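As a rough illustration, both headline metrics fall directly out of dispatch records once the data is trustworthy. The field names here are assumptions for the sketch, not the carrier's actual schema.

```python
from datetime import datetime

def mean_time_to_resolution_minutes(tickets: list[dict]) -> float:
    """Average minutes from outage detection to resolution."""
    minutes = [
        (t["resolved_at"] - t["detected_at"]).total_seconds() / 60
        for t in tickets
    ]
    return sum(minutes) / len(minutes)

def first_time_right_rate(dispatches: list[dict]) -> float:
    """Fraction of truck rolls resolved without a repeat visit."""
    return sum(1 for d in dispatches if not d["repeat_visit"]) / len(dispatches)

# Example: one outage resolved in 15 minutes; one of two dispatches clean.
tickets = [{"detected_at": datetime(2024, 1, 1, 9, 0),
            "resolved_at": datetime(2024, 1, 1, 9, 15)}]
dispatches = [{"repeat_visit": False}, {"repeat_visit": True}]
print(mean_time_to_resolution_minutes(tickets))  # 15.0
print(first_time_right_rate(dispatches))         # 0.5
```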
Many operators avoid migrating legacy customers because they fear losing revenue. Why is moving away from mainframe-based inventory systems essential for a modern tier-one carrier, and what strategy did you use to convince major enterprise clients to transition off aging infrastructure?
Exiting mainframe-based inventory is essential because these legacy systems are the primary anchors holding back the transition to Network-as-a-Service (NaaS). We are on a path to be the first tier-one operator to fully migrate off these 15-plus legacy systems onto a modern platform like Blue Planet. To handle the “poking the bear” risk of moving legacy revenue, we shifted the narrative from a sales pitch to a risk-management consultation. We approached major enterprise customers with hard data, showing them the inherent fragility of staying on 40-year-old infrastructure. By laying out a clear migration path supported by our data platform, we proved that the risk of a catastrophic failure on an aging network was far greater than the temporary friction of an upgrade.
Technical transformations often stall due to internal silos and inherited organizational friction. How did leadership and cultural frameworks change the mindset of your engineering teams, and what steps were taken to ensure staff embraced AI-driven automation rather than resisting it?
You cannot solve a 40-year-old structural debt problem with technology alone; you have to address the “human debt” as well. We integrated cultural frameworks like Brené Brown’s “Dare to Lead” to help our engineers move past the fear of being replaced by AI and instead see it as a tool that grants them superpowers. This program encouraged our teams to “think bigger” and speak openly about the friction they encountered across different organizational boundaries. By involving the engineering staff in the creation of the AI agents and showing them how these tools eliminate the mundane, repetitive tasks of querying 50 systems, we fostered an environment where they felt like architects of the future rather than victims of automation. It turned a potentially resistant workforce into a group of innovators who are now finding “pockets of opportunity” that we never knew existed.
What is your forecast for the future of AI-driven network-as-a-service?
I forecast that the industry will move toward a “self-healing” autonomous network where the distinction between the physical fiber and the digital service layer completely disappears. As we refine these AI agents, we will reach a point where the network can reconfigure itself in real time to optimize for energy consumption and latency without human intervention. For readers looking to navigate this shift, my advice is to stop treating your data as a secondary concern; your network is only as fast as your inventory records are accurate. Start by cleaning your data today, because AI cannot automate chaos; it can only accelerate a foundation that is already solid.
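“AI cannot automate chaos” translates into a very practical first step: audit the inventory for contradictions before automating anything on top of it. A minimal sketch of that kind of consistency check, with hypothetical field names:

```python
def find_inventory_conflicts(records: list[dict]) -> set[str]:
    """Flag element IDs that appear with conflicting attributes across
    source systems: exactly the chaos automation would otherwise
    accelerate."""
    seen: dict[str, dict] = {}
    conflicts: set[str] = set()
    for rec in records:
        key = rec["element_id"]
        if key in seen and seen[key] != rec:
            conflicts.add(key)
        seen.setdefault(key, rec)
    return conflicts

# Example: two source systems disagree on where NE-7 lives.
records = [
    {"element_id": "NE-7", "site": "Denver"},
    {"element_id": "NE-7", "site": "Boulder"},
    {"element_id": "NE-9", "site": "Austin"},
]
print(find_inventory_conflicts(records))  # {'NE-7'}
```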
