Although optics has been used in data centers for decades, it is now reaching further and further into the beating heart of the data center – the processors and accelerators that run the data center workloads. So, how did we get here and what does the future look like?

Optical communication offers high bandwidth, low latency, and low signal loss. Initially used to carry traffic to and from data centers, it has since moved inside the data center itself – typically to create high-speed links between top-of-rack (ToR) switches across racks. The large data flows of AI, where low-latency, high-bandwidth interconnects are crucial, have only accelerated its adoption.

As an example of the importance of optics, Google has even developed its own optical switch, which makes use of free space optics components to reduce the latency and power consumption of its network. By using free space optics, Google has avoided the need to convert signals between the optical and electrical domains at each switch.

Google claims its custom network improves throughput by 30 percent, uses 40 percent less power, incurs 30 percent less CapEx, reduces flow completion time by 10 percent, and delivers 50x less downtime across its network. It also means that nodes can quickly be switched in and out when they go down or need repair.

The rise of optical interconnects

Recently there has been a growing trend towards using optical interconnect within the rack itself. Driven by the high-bandwidth, low-latency requirements of AI – models are now distributed across dozens of processing nodes – optical interconnect is helping these multi-node systems run as fast as possible. As always, speed is critical.

Optical interconnect is an area packed with innovation. There are startups developing fully optical packet switching – avoiding the need to convert signals between the electrical and optical domains, and thereby saving significantly on power consumption and latency.

Others are applying optics to the next layer down, where work is underway to develop fully optical chip-to-chip or even silicon-die-to-silicon-die interconnects. Here, the communications bandwidth is even higher. For this advancement to take place, co-packaged optics is critical, and foundries and chip packaging companies are investing heavily in this capability.

From switching to processing

Optics isn’t just about switching – it can be used for processing too, with similar benefits in speed, power consumption, and efficiency.

Why use optics for processing? AI processing demand is increasing at a phenomenal rate, and the silicon roadmap is not able to keep up. Silicon chip developers are chasing diminishing returns, pouring more and more time, effort, and money into marginal performance gains.

The current answer in the industry is to increase the area of silicon, place multiple silicon dies within a chip, and use sophisticated packaging to solve the challenges this creates. But this comes at a huge cost, both in dollars and in power consumed.

The large monetary cost is driven by the huge capital expenditure of buying the latest and greatest AI accelerators, plus the increased infrastructure needed to supply and cool such power-hungry devices; this is before considering the increased cost of the energy consumed.

Optics is ideally suited to AI, particularly because the heart of AI is vector-matrix multiplication, an operation that light can perform very efficiently. Over the last few years there has been hope of applying integrated photonics to AI processing; however, the technology is now primarily focused on interconnect and switching, due to the limitations of integrated photonics at the enormous scale of matrix processing needed for modern AI.
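To see why vector-matrix multiplication matters so much, note that a neural network's dense layer is exactly that operation plus a nonlinearity. Below is a minimal NumPy sketch (the function name and shapes are illustrative, not from any particular accelerator); an optical processor would compute all the multiply-accumulates of the `x @ W` step in a single pass of light through a modulator array, rather than element by element.

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully connected layer: a vector-matrix multiply plus ReLU.

    x: input vector, shape (n,)
    W: weight matrix, shape (n, m)
    b: bias vector, shape (m,)
    """
    # x @ W is the vector-matrix multiply -- the operation an optical
    # processor performs natively, in parallel across all m outputs.
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
n, m = 4, 3
x = rng.standard_normal(n)
W = rng.standard_normal((n, m))
b = np.zeros(m)

y = dense_layer(x, W, b)

# Each of the m outputs is a dot product of x with one column of W,
# i.e. m independent multiply-accumulate chains of length n.
reference = [max(float(x @ W[:, j]) + b[j], 0.0) for j in range(m)]
assert np.allclose(y, reference)
```

The column-by-column reference loop makes the parallelism explicit: every output is an independent dot product, which is why an architecture that evaluates all of them simultaneously – electronic or optical – scales so well with matrix size.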

Using 3D (free space) optics for processing overcomes these challenges: it uses a fraction of the power and offers a leap in performance. 3D optics allows very wide vectors to be used, maximizing performance and energy efficiency. AI accelerators that use 3D optics promise low-power, high-performance solutions at a fraction of the capital and operating costs of today’s GPU solutions.

With the ever-increasing use of optics in the data center, 3D optics for AI processing will inevitably be the next step.