
AI Networking: How to Future-Proof Networking Systems to Manage AI Payloads

May 20, 2024


With the proliferation of artificial intelligence, network systems will need to be upgraded to manage the increased bandwidth demands of AI-driven tools and technology. AI networking is the infusion of machine learning technology into networking infrastructure with the aim of improving performance, enhancing security, and increasing the operational efficiency of network management. 

Given the massive differences between AI processing payloads and traditional network workloads, experts are asking what changes need to be considered to prepare networks to support AI applications. 

Research shows that companies are increasingly adopting AI-driven tools, with roughly 35% of companies using some form of artificial intelligence. As AI gains momentum, networking businesses will need to reflect on the investments and upgrades to network architecture that are required in this new age.

Do network professionals need to be AI-savvy? 

Until now, network professionals haven’t needed extensive knowledge of applications beyond knowing how much data is being transmitted and at what rate per second. When big data entered the mainstream, networks had to adapt to this new form of unstructured data for video and analytics. On the whole, big data wasn’t the transformative catalyst for network change that many predicted it would be. 

AI, on the other hand, could be the industry shake-up that was predicted. Network professionals will have to develop a working knowledge of AI applications and back-end development. When it comes to AI processing, each application requires a unique networking model: different algorithms are needed to support different AI applications, and each has drastically different bandwidth needs. 

Supervised learning algorithms, for example, require that all the data entered into the app is tagged. This makes data easy to retrieve and process. Supervised algorithms use a finite database that can be quantified. With generative AI applications, which typically rely on unsupervised learning, data is untagged and requires additional processing, and with an unlimited flow of data into the application, quantifying data becomes impossible. 

Providing network bandwidth for AI applications that use supervised learning algorithms is much easier because the finite, tagged, and quantifiable data entry provides the required information for making educated decisions. Having this information is crucial to adequate resourcing, and without it, applications run the risk of poor performance. 
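As a rough illustration of why finite, quantifiable workloads are easier to provision for, the back-of-the-envelope calculation below (a hypothetical sketch, not a method from the article) shows how known record sizes and arrival rates translate directly into a bandwidth figure. The record size, rate, and overhead factor are illustrative assumptions.

```python
# Hypothetical back-of-the-envelope bandwidth estimate for a supervised
# learning workload, where the dataset is finite and quantifiable.

def required_mbps(record_bytes: float, records_per_sec: float,
                  overhead: float = 1.2) -> float:
    """Return the sustained bandwidth (Mbit/s) needed to stream tagged
    records at a known rate, padded by a protocol-overhead factor."""
    bits_per_sec = record_bytes * 8 * records_per_sec * overhead
    return bits_per_sec / 1_000_000

# Example: 4 KB labeled records arriving at 2,000 records/sec.
print(round(required_mbps(4096, 2000), 1))  # → 78.6
```

With an unsupervised or generative workload, neither `record_bytes` nor `records_per_sec` can be pinned down in advance, which is exactly what makes provisioning so much harder.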

The opposite is generally true for unsupervised learning algorithms. Because it can be difficult to gauge how much network bandwidth is required, the recommendation from experts in the field is to simply gain experience over time by using the app. Ordinarily, network administrators have an idea of how much data is incoming, the size of payload burst rates, and how difficult it might be to process data, but with generative AI applications, this will be a trial-and-error process. 
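The trial-and-error approach could be sketched as a simple observer that tracks throughput over a sliding window and suggests provisioning headroom above the peak burst seen so far. This is a minimal illustrative sketch; the class, window size, and 50% headroom factor are assumptions, not figures from the article.

```python
from collections import deque

class BandwidthObserver:
    """Hypothetical trial-and-error helper: track observed throughput over
    a sliding window and suggest capacity above the peak burst seen."""

    def __init__(self, window: int = 60, headroom: float = 1.5):
        self.samples = deque(maxlen=window)   # recent Mbit/s readings
        self.headroom = headroom              # provision 50% above peak

    def record(self, mbps: float) -> None:
        self.samples.append(mbps)

    def suggested_capacity(self) -> float:
        """Provision for the peak burst observed so far, plus headroom."""
        return max(self.samples, default=0.0) * self.headroom

obs = BandwidthObserver()
for reading in [120.0, 310.0, 95.0, 480.0]:   # simulated burst pattern
    obs.record(reading)
print(obs.suggested_capacity())   # → 720.0
```

The point of the sketch is that, absent a quantifiable workload, the provisioning figure is learned from observation rather than computed up front.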

Cross-collaboration for a seamless experience

Collaboration is key. Network staff will need to communicate with application and data science teams so that they have a better understanding of the algorithms being used. This is crucial to planning bandwidth requirements and better preparing other elements of network performance to ensure the workload is handled efficiently and apps work seamlessly. 

Another aspect that poses significant challenges is the computer processing that AI requires. Parallel computing splits processing into smaller problems that are processed independently and concurrently. This speeds up the data processing but requires hundreds of processors across different machines. 
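The split-and-process-concurrently pattern described above can be sketched with a standard worker pool. This is a toy example of the general technique (the per-chunk computation is a stand-in, not an actual AI workload):

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk: list[int]) -> int:
    """Stand-in for an expensive per-chunk AI computation."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data: list[int], workers: int = 4) -> int:
    """Split the problem into smaller pieces and process them
    independently and concurrently, then combine the results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

In a real AI cluster the "chunks" are distributed across hundreds of processors on different machines, so every hand-off between workers crosses the network, which is where the strain on networking infrastructure comes from.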

Computing clusters are formed by grouping similar process flows, and they exert tremendous strain on networks. Bottlenecks in any of these process flows can create congestion in the entire cluster. These are the kinds of problems network professionals are now required to anticipate and solve. 

Currently, best practice methods in this area of networking are few and far between. Ultimately, network administrators will need to develop new expertise to solve these challenges and create the best practices as they gain knowledge.

What new network investments need to be made for AI?

For network providers, solving new difficulties can equate to spending more money, and the question is what these new investments look like. The parallel computing that generative AI applications require needs to be matched by supercomputing capability on the network end, giving providers the capacity to handle these large, unbounded workloads. 

For those inquiring about the impact this has on edge network devices, experts recommend Google’s Tensor Processing Unit. This ASIC (application-specific integrated circuit) supports Google’s TensorFlow programming infrastructure, which is used for AI and deep learning workloads. Apple users are guided toward the A11 and A12 Bionic chips. 

Overall, Ethernet structures need to be empowered to support AI technologies. There are organizations like the Ultra Ethernet Consortium (UEC) that work to define and develop an “open, scalable, and cost-effective communications stack” that will enable high-performance AI processing and unlimited data workloads while maintaining a stable Ethernet base.  

The difficult part in the immediate future is that this AI stack-enabled technology hasn’t arrived yet, though hopeful predictions suggest it could be available as soon as 2025. The resounding call, however, is for network providers to plan for the future and determine how they will incorporate advances in technology that might alter network structure entirely. 

Concluding Thoughts 

Artificial intelligence has seeped into every facet of technology, with complex and often unbounded streams of data entering an application. This is certainly the case for generative AI applications, like ChatGPT, which have exploded into the mainstream and are widely used. 

The challenge for network professionals is that this new age of applications requires them to gain an understanding of server-side development in order to adequately provision bandwidth and power these complex algorithms. 

Unlimited workloads, untagged data, and complex computer processing are key signatures of unsupervised algorithms, which feature prominently across generative AI applications and tools. In this new scenario, network professionals will need to increasingly collaborate with developers and data scientists to ensure they’re well-prepared to meet the hefty bandwidth demands of these applications and enable a seamless customer experience. 

For network providers, the next few years are crucial to future-proofing businesses and ensuring they are ready for advancements in network technology. From investing in supercomputing capabilities to updating edge network devices and preparing for an overhaul of Ethernet, network providers will need to ready themselves for the era of AI networking.