Roughly a trillion dollars is being spent overhauling existing infrastructure, building new data centers, manufacturing chips, and modernizing the grid as artificial intelligence takes over the data center.

To celebrate AI Week, DCD's week-long look into the AI age, we've created a 16-page supplement on AI and data centers.

Here's what's in it:

The true believers

Lambda Labs has been waiting for the AI explosion for more than a decade.

The company has raised hundreds of millions, and is set to raise hundreds more, to build out an AI-focused cloud.

But can the company keep up with the heavy-spending hyperscalers as it helps train ever-larger models? As Microsoft plans to put as much as $100bn into Stargate, what's left for the second-tier cloud companies?

We chat to the head of Lambda Cloud about what's next for the Silicon Valley upstart.

Mr. Nvidia

Ian Buck, head of Nvidia's data center and accelerated computing efforts and the original creator of CUDA, talks us through the future of Nvidia and why it is thinking at a system level.

Plus, Buck's thoughts on TDP, and why Blackwell will have the fastest GPU rollout ever.

The cloud inception

There's a cloud within a cloud.

Nvidia is rolling out a DGX Cloud service, only available through the cloud services of AWS, Google, Microsoft, and Oracle.

We explore this strange relationship, where Nvidia is in a delicate dance with its biggest customers and biggest competitors to deploy a cloud service that is offered through cloud services.

We talk to the head of DGX Cloud, former Meta VP Alexis Bjorlin, about the careful balancing act of maintaining existing business ties while trying to make money as an as-a-Service GPU business.

Factory finish

These GPUs need to go into servers, so we head to Lenovo's factory in Budapest to learn what it takes to get ready for the next wave of compute.

Plus, we hear about Intel and AMD's competing accelerators, which hope to make this conversation a little less one-sided.

Read it for free today: