Running 1,000 H100 GPUs and 1,000 A100 GPUs could cost $2 million in power bills annually, data from Liftr Insights suggests.

Nvidia H100 Tensor Core GPU – Nvidia

According to Liftr's data on semiconductors and power usage in Texas, running 2,000 Nvidia cards - around $33 million worth of AI accelerators - could rack up annual power bills of $2 million in Dallas.

Such a cluster would offer aggregate FP64 compute performance in excess of 44.7 petaflops.
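For a rough sense of where a figure like that comes from, the sketch below simply sums nominal datasheet FP64 throughput across the two card types. The per-GPU numbers (H100 SXM at ~34 teraflops, A100 at ~9.7 teraflops, without FP64 tensor cores) are our own assumptions, not inputs published by Liftr:

```python
# Back-of-the-envelope aggregate FP64 throughput for the cluster
# described above. Per-GPU figures are nominal datasheet values
# (assumed here for illustration, not taken from Liftr's data).

FP64_TFLOPS = {
    "H100 (SXM)": 34.0,  # ~67 TFLOPS with FP64 tensor cores
    "A100": 9.7,         # ~19.5 TFLOPS with FP64 tensor cores
}

cluster = {"H100 (SXM)": 1_000, "A100": 1_000}

total_pflops = sum(
    count * FP64_TFLOPS[gpu] for gpu, count in cluster.items()
) / 1_000  # TFLOPS -> PFLOPS

print(f"Aggregate FP64 compute: ~{total_pflops:.1f} petaflops")
# -> ~43.7 petaflops, in the same ballpark as the 44.7 figure cited
```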

Elsewhere in Texas, the bill varies: as much as $2.1 million in Houston, $1.9 million in San Antonio, and $1.6 million in Austin.
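The power bills can be sanity-checked the same way. In the sketch below, the per-GPU wattages, server overhead, PUE, and per-city electricity rates are all illustrative assumptions (Liftr does not publish its inputs, and the rates here were chosen to land near its per-city figures); the point is only that ~2,000 GPUs running around the clock plausibly produces bills in the $1.6 million to $2.1 million range:

```python
# Rough annual power-cost estimate for 1,000 H100s + 1,000 A100s.
# Every input below is an assumption for illustration, not Liftr's data.

HOURS_PER_YEAR = 8_760

gpu_watts = {"H100": 700, "A100": 400}  # nominal TDPs, SXM variants (assumed)
counts = {"H100": 1_000, "A100": 1_000}
overhead = 1.25  # CPUs, memory, NICs, etc. per server (assumed)
pue = 1.4        # facility power usage effectiveness (assumed)

# Illustrative commercial electricity rates in $/kWh (assumed)
rates = {"Dallas": 0.119, "Houston": 0.125,
         "San Antonio": 0.113, "Austin": 0.095}

it_kw = sum(counts[g] * gpu_watts[g] for g in counts) / 1_000 * overhead
facility_kw = it_kw * pue

for city, rate in rates.items():
    annual_cost = facility_kw * HOURS_PER_YEAR * rate
    print(f"{city}: ~${annual_cost / 1e6:.1f}m/year")
```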

Liftr also notes that demand for H100s and A100s remains high.

"Despite the news of the delay in the Blackwell processes," said Tab Schadt, CEO of Liftr Insights, "major cloud providers like AWS (Amazon Web Services), Azure, and GCP have been increasing their adoption of the latest Nvidia semiconductors."

A 2,000-GPU deployment sits at the lower end of the scale, given how large the clusters being rolled out by hyperscalers and major tech companies have become.

In January 2024, Meta revealed that it expected to have the compute equivalent of 600,000 H100s by the end of the year.

Elon Musk is targeting 100,000 H100 GPUs for his xAI startup, while Tesla has deployed some 35,000 Nvidia H100s.

According to a January 2024 report from TechInsights, cloud providers used around 878,000 accelerators in 2023, though the analyst firm argued that, having turned out only seven million GPU-hours of work, they were likely underutilized. The report noted that AWS' clusters each have 20,000 H100s.
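Taking those figures at face value, the underutilization claim follows from simple arithmetic: 878,000 accelerators running for a full year represent billions of available GPU-hours, of which seven million is a tiny fraction. A minimal sketch, assuming both figures cover the whole of 2023 (in practice not every accelerator is deployed for the full year, so the true utilization would be higher):

```python
# Implied utilization from the TechInsights figures, assuming both
# numbers (878,000 accelerators; seven million GPU-hours of work)
# cover all of 2023.

accelerators = 878_000
used_gpu_hours = 7_000_000
available_gpu_hours = accelerators * 8_760  # hours in a year

utilization = used_gpu_hours / available_gpu_hours
print(f"Available: ~{available_gpu_hours / 1e9:.1f}bn GPU-hours")
print(f"Implied utilization: ~{utilization:.2%}")  # well under 1 percent
```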