Google has announced a new supercomputer virtual machine that can scale up to 26,000 Nvidia H100 Hopper GPUs.

At full scale, the A3 supercomputer is capable of up to 26 exaflops of AI performance. The system is not housed in a single data center; instead, it pools resources from multiple facilities.

A single A3 virtual machine features eight H100 GPUs, 3.6TB/s of bisection bandwidth via Nvidia NVSwitch and NVLink 4.0, 4th Gen Intel Xeon Scalable processors, and 2TB of host memory on 4800MHz DDR5 DIMMs.
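For readers who want to see what requesting such an instance might look like in practice, below is a minimal sketch using the google-cloud-compute Python client library. The machine type name (a3-highgpu-8g), project ID, zone, and boot image are assumptions for illustration only; consult Google Cloud's documentation for the names and regions actually available.

```python
# Minimal sketch: provisioning an A3-class VM with the google-cloud-compute
# client library. Machine type, project, zone, and image are assumed values.
from google.cloud import compute_v1

PROJECT = "my-project"          # assumed project ID
ZONE = "us-central1-a"          # assumed zone with A3 capacity
MACHINE_TYPE = "a3-highgpu-8g"  # assumed A3 machine type (8x H100 per VM)


def create_a3_instance(name: str) -> None:
    client = compute_v1.InstancesClient()

    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{ZONE}/machineTypes/{MACHINE_TYPE}"

    # Boot disk from a public Debian image family; any GPU-ready image works.
    disk = compute_v1.AttachedDisk()
    disk.boot = True
    disk.auto_delete = True
    init = compute_v1.AttachedDiskInitializeParams()
    init.source_image = "projects/debian-cloud/global/images/family/debian-12"
    init.disk_size_gb = 200
    disk.initialize_params = init
    instance.disks = [disk]

    # Attach to the default VPC network.
    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"
    instance.network_interfaces = [nic]

    # GPU instances must terminate (not live-migrate) on host maintenance.
    scheduling = compute_v1.Scheduling()
    scheduling.on_host_maintenance = "TERMINATE"
    instance.scheduling = scheduling

    op = client.insert(project=PROJECT, zone=ZONE, instance_resource=instance)
    op.result()  # block until the create operation completes
    print(f"Created {name}")


if __name__ == "__main__":
    create_a3_instance("a3-demo-vm")
```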

It is also the first instance to use Google's custom Intel Infrastructure Processing Unit (IPU), which the company claims delivers 10x the networking bandwidth of its A2 VMs.

“Google Cloud's A3 VMs, powered by next-generation Nvidia H100 GPUs, will accelerate training and serving of generative AI applications,” said Ian Buck, vice president of hyperscale and high performance computing at Nvidia.

“On the heels of Google Cloud’s recently launched G2 instances, we're proud to continue our work with Google Cloud to help transform enterprises around the world with purpose-built AI infrastructure.”

Google said that the instances were designed with AI training in mind. "Given the demands of these workloads, a one-size-fits-all approach is not enough — you need infrastructure that’s purpose-built for AI," Google Cloud's Roy Kim and Chris Kleban said in a blog post.

The company also offers its own TPU chips through its cloud, but Nvidia's GPUs have cornered much of the generative AI market.
