Meta continued to spend heavily on servers, data centers, and networking as it builds out its AI infrastructure.

Capital expenditure hit $8.5 billion for the quarter, with the company warning that it expected costs to rise further.


Facebook's parent firm reported revenues of $39.07 billion, up 22 percent over last year and above analyst expectations. It also provided revenue guidance of $38.5bn to $41bn for the next quarter.

Net income was $13.46bn, up 73 percent.

Capex was below analyst expectations of $9.51bn, but still grew 32.8 percent year-on-year. CFO Susan Li said that the company was "clearly in the process of building out a lot of capacity."

She added: "We're employing a strategy of staging our data center sites at various phases of development, which allows us to flex up to meet more demand in less lead time if needed, while limiting how much spend we're committing to in the outer years.

"So while we do expect that we are going to grow capex significantly in 2025, we feel like we have a good framework in place in terms of thinking about where the opportunities are and making sure that we have the flexibility to deploy it as makes the most sense."

The company will use the same infrastructure it built for generative AI training for generative AI inference. "We can also use it for ranking and recommendations by making certain modifications like adding general compute and storage," Li said.

"There's a whole host of use cases for the life of any individual data center."

While costs will rise next year, Li said the exact level of spend is still being determined. "There is a big range of capex outcomes that we're working through right now based on different data center options in different places," she said. "It's really a very dynamic planning area for us right now and we'll have more clarity on that at the Q4 call."

Despite the increased spending, generative AI is not expected "to be a meaningful driver of revenue in 2024," Meta said.

However, CEO Mark Zuckerberg said that he would "rather risk building capacity before it is needed, rather than too late, given the long lead times for spinning up new infra projects."

Giving a glimpse into the scale needed to keep up with generative AI, he said: "The amount of compute needed to train [AI model] Llama 4 will likely be almost 10x more than what we used to train Llama 3, and future models will continue to grow beyond that."

The company expects to operate 600,000 H100 GPU equivalents of compute by year-end.