Google has launched a new version of its TPU AI chip line, the Cloud TPU v5p.

The company also announced the 'AI Hypercomputer,' a cloud-based supercomputer architecture that combines performance-optimized hardware, open software, machine learning frameworks, and flexible consumption models.

The Hypercomputer uses liquid cooling and Google's Jupiter data center network technology.


Google's TPU v5p follows the v5e and delivers twice the FLOPS of the v4. It can be scaled up to 8,960 chips in a single pod.

"At Google, we’ve long believed in the power of AI to help solve challenging problems. Until very recently, training large foundation models and serving them at scale was too complicated and expensive for many organizations," Google's Amin Vahdat and Mark Lohmeyer said.

"Today, with Cloud TPU v5p and AI Hypercomputer, we’re excited to extend the result of decades of research in AI and systems design with our customers, so they can innovate with AI faster, more efficiently, and more cost-effectively.