Interest in artificial intelligence (AI) and machine learning (ML) is at a fever pitch.

While the idea of AI itself isn’t new, consumer-friendly tools like chatbots have created massive hype over the past year.

ChatGPT set a record for reaching 100 million monthly active users faster than any other consumer application in history.

The public-facing aspect of many AI applications seems simple enough. But under the hood, so to speak, lies an incredibly complex infrastructure and a ravenous demand for computing power.

Data centers are scrambling to design and deploy facilities to keep pace. Building the AI data center nodes that the marketplace demands is a massive undertaking, given the sheer volume of processing units, cabling infrastructure, power, and physical space required – and operators need to plan accordingly.

Fiber and dual networks fuel AI growth

There are two kinds of processors in data centers driving the AI revolution: central processing units (CPUs) and graphics processing units (GPUs).

CPUs are the traditional workhorses of the data center, with a focus on serial processing, meaning one task at a time. GPUs, by contrast, have a large number of smaller, more specialized cores, so they are better suited to the massively parallel processing needed to train AI models such as large language models (LLMs).
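
To make the distinction concrete, here is a minimal sketch, assuming PyTorch purely as an illustration (the article names no framework): the same matrix multiplication run once on the CPU and, if a GPU is present, once on the GPU, where the work is spread across thousands of parallel cores.

  # A minimal sketch of why GPUs suit AI training: the same matrix multiply,
  # run on the CPU's serial-leaning cores and then, if available, on a GPU's
  # many smaller parallel cores. PyTorch is assumed here purely as an
  # illustration; the article does not name a framework.
  import time

  import torch

  a = torch.randn(4096, 4096)
  b = torch.randn(4096, 4096)

  start = time.perf_counter()
  _ = a @ b                                  # matrix multiply on the CPU
  cpu_seconds = time.perf_counter() - start

  if torch.cuda.is_available():
      a_gpu, b_gpu = a.cuda(), b.cuda()
      torch.cuda.synchronize()               # wait for the copies to finish
      start = time.perf_counter()
      _ = a_gpu @ b_gpu                      # same multiply, run in parallel on the GPU
      torch.cuda.synchronize()               # wait for the GPU kernel to finish
      print(f"CPU: {cpu_seconds:.3f}s  GPU: {time.perf_counter() - start:.3f}s")
  else:
      print(f"CPU: {cpu_seconds:.3f}s (no GPU available on this machine)")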

An AI network within a data center is, essentially, a network within a network. Within the AI network, GPUs and CPUs function like the two halves of the human brain. Large server farms with this setup can effectively act as a supercomputer, speeding up the time to train AI models.

Fiber is the key to enabling a system to grow smarter at exponential rates. For example, when a person poses a question to a digital assistant, AI functions interlinked by fiber connections analyze untold amounts of data and possible answers in real time. And as those answers become faster, more accurate, and more “human” sounding, these features will become more useful and more integrated into everyday life.

AI applications require a dense, adaptable, fiber-rich environment to function effectively. Many fiber links and optical interconnects are needed to deliver the higher data processing and memory bandwidth that AI will demand.

As a network’s computing power grows, network operators will likely experience a significant “memory gap,” caused by memory capacity and bandwidth growing far more slowly than processing capability. This is where high-fiber-count components are key.

A solution to bridge this memory gap is to closely network GPUs with fiber-dense optical interconnects, resulting in higher fiber counts per server.
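
As a rough, purely illustrative back-of-envelope (every figure below is an assumption chosen for the arithmetic, not a number from this article), fiber counts climb quickly once each accelerator gets its own high-speed optical port:

  # Purely illustrative back-of-envelope of how fiber counts per server climb
  # in a GPU-dense design. Every figure below is an assumption chosen for the
  # arithmetic, not a number taken from this article.
  gpus_per_server = 8        # assumed accelerators per server
  ports_per_gpu = 1          # assumed one optical network port per GPU
  fibers_per_port = 8        # e.g., a parallel-optics link built on 8 fibers
  servers_per_rack = 4       # assumed rack density

  fibers_per_server = gpus_per_server * ports_per_gpu * fibers_per_port
  fibers_per_rack = fibers_per_server * servers_per_rack

  print(fibers_per_server)   # 64 fibers per server under these assumptions
  print(fibers_per_rack)     # 256 fibers per rack, before any CPU or storage fabric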

Data center operators must plan for a new processing model

For data center operators, this means that supporting AI growth essentially requires building two networks, one on top of the other. There are a number of key considerations:

  • Space: Physical rack space must be maximized for an AI network. Traditional data centers rely mostly on same-row, point-to-point cabling with patch cords. In AI/ML clusters, however, each node is cabled to other nodes, so as the cluster scales up, the cabling must cross the data hall. At that point, running hundreds of patch cords to different racks throughout a data hall becomes time consuming; beyond making things messy and congested, it also increases the risk of damage and misidentification. This increased density needs to be carefully managed (see the illustrative sketch after this list).
  • Power: The intense computing required for AI means power needs have to be managed differently. The power density envelope for AI and ML can be several times higher than traditional envelopes. That also means more advanced cooling systems will be required to handle the increased heat output.
  • Bandwidth: As a network’s computing power grows, demand for memory bandwidth grows even faster. That demand has created a “memory gap” between memory capacity and high-bandwidth interconnect technology. One solution is to closely network GPUs with higher-capacity optical interconnects, resulting in higher fiber counts per server.
  • Latency: For AI, servers carry both GPUs and CPUs onboard to minimize bottlenecks between the compute and the accelerators. While many consumer applications such as virtual/augmented reality, remote operation, and immersive gaming prioritize reducing latency between the compute and the end user, latency in AI is dictated by the speed of the connections between those units inside the server, which requires high-fidelity cabling.
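
To quantify the “Space” point above: when every node must reach every other node, cabling grows with the square of the node count rather than linearly. A minimal sketch, using arbitrary example node counts (not figures from this article):

  # How cabling scales when every node is cabled to every other node (a full
  # mesh) versus a simple chain of same-row, point-to-point links. Node counts
  # are arbitrary examples, not figures from this article.
  def point_to_point_links(nodes: int) -> int:
      # a daisy chain of point-to-point links down the row
      return nodes - 1

  def full_mesh_links(nodes: int) -> int:
      # every node connected to every other node: n * (n - 1) / 2
      return nodes * (nodes - 1) // 2

  for nodes in (8, 32, 128):
      print(nodes, point_to_point_links(nodes), full_mesh_links(nodes))
  # 8 nodes:      7 vs    28 links
  # 32 nodes:    31 vs   496 links
  # 128 nodes:  127 vs 8,128 links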

Fiber-dense products are key to meeting exploding demand

Fiber-rich componentry is essential for maximizing data center space and helping operators meet rapidly increasing demand for AI processing.

Dense machine-to-machine, or “east-west,” interconnections were previously seen only at massive research institutions like national laboratories; now they are becoming commonplace at hyperscale data centers dealing with AI. The key to managing this is reducing complexity.

To speed up cabling of these nodes and simplify the installation process, pre-terminated cable assemblies should be used.

With fiber-rich pre-connectorized cables, structured cabling can be put in place before the arrival of racks and hardware, allowing for shorter patch cords and assemblies to be used to connect the ports for flexibility and future growth.

What does this all mean? Data center operators – particularly at the hyperscale level – need to ensure they have the cabling and infrastructure to meet demand.

The processing needs of AI are different from other use cases driving hyperscale expansion like streaming, SaaS, etc. With proper planning, data center operators will be well prepared to handle growth in AI as applications scale up from chatbots to more commercial and widespread use cases.