The AI industry has been making waves lately, pushing the boundaries of what machines can do in terms of creativity and human-like output. For example, ChatGPT reached 100 million monthly active users earlier this year and fields some 10 million queries per day.

Generative AI is truly remarkable, changing the way we work, the way we do business, and even the way we create. Yet this is just the tip of the proverbial iceberg. Today, ChatGPT is a standalone application. Integrated into search engines such as Microsoft’s Bing, it becomes part of every search from every user, which will not only return more accurate and meaningful search results as the neural network learns, but will also drive an exponential increase in overall usage.

But, there’s a catch. As generative AI models become more complex and demanding, they consume even more significant amounts of power. That poses some challenges for the data center industry, especially when it comes to meeting intense power requirements. Let's break it down.

The power-intensive nature of generative AI

Generative AI has come a long way in recent years, thanks to a perfect storm of factors. We've seen advancements in hardware like GPUs, which can handle heavy computational demands. Plus, the availability of large-scale datasets and the development of sophisticated architectures and algorithms have made training AI more effective than ever before. With the advent of cloud computing platforms, access to substantial computational resources has enabled faster training times and experimentation with larger models.

It's been an exciting journey so far, but with great power comes great emissions. The more intricate and capable these generative AI models become, the more computational resources they require, which means more power consumption.

Today’s generative AI models can understand and analyze vast amounts of data, detect patterns, and generate outputs that are both novel and remarkably accurate. To achieve this level of sophistication, they rely on deep neural networks with many layers and parameters. Training involves feeding the model extensive datasets and iteratively adjusting the network’s parameters to optimize its performance and output.
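
As a deliberately simplified illustration of that iterative loop, consider the Python sketch below. The model, data, and hyperparameters are toy stand-ins (real generative models have billions of parameters and train on curated datasets), but the shape of the loop is the same: measure the error, compute gradients, adjust the parameters, repeat.

```python
# A toy training loop in PyTorch, for illustration only; real generative
# models repeat this cycle billions of times across thousands of GPUs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(1_000):
    batch = torch.randn(32, 512)          # stand-in for a real dataset batch
    target = torch.randn(32, 512)         # stand-in for the desired output
    optimizer.zero_grad()
    loss = loss_fn(model(batch), target)  # how wrong is the current output?
    loss.backward()                       # compute gradients for every parameter
    optimizer.step()                      # nudge parameters toward lower loss
```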

This iterative process is computationally intensive and demands substantial resources, often in the form of powerful GPUs or specialized hardware accelerators. In fact, training just a single large machine learning model can emit more than 626,000 pounds of carbon dioxide equivalent, roughly five times the lifetime emissions of an average American car. That's no small number.
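
A back-of-envelope calculation shows where figures of that magnitude come from. Every input below is an illustrative assumption rather than a measured value, but multiplying accelerator count, power draw, facility overhead, run time, and grid carbon intensity lands in the same order of magnitude:

```python
# Back-of-envelope training emissions; all inputs are illustrative assumptions.
gpus = 512               # assumed accelerator count for a large training run
watts_per_gpu = 700      # assumed per-GPU draw under load
overhead = 1.5           # assumed PUE-style multiplier for cooling/distribution
hours = 24 * 30          # assumed one-month training run
kwh = gpus * watts_per_gpu * overhead * hours / 1000
lbs_co2_per_kwh = 0.85   # assumed grid carbon intensity; varies widely by region
print(f"{kwh:,.0f} kWh -> {kwh * lbs_co2_per_kwh:,.0f} lbs CO2e")
# ~387,000 kWh -> ~329,000 lbs CO2e: hundreds of thousands of pounds per run
```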

As generative AI models become more capable, they require larger datasets for training, further amplifying their computational requirements. These datasets can consist of millions or even billions of examples, necessitating extensive processing power to analyze and extract meaningful insights. Additionally, the increased complexity of the models results in longer training times, consuming additional computational resources over extended periods.

The relationship between the intricacy and capability of generative AI models and their power consumption is a significant concern from both environmental and practical perspectives. The energy consumption associated with powering these models not only contributes to carbon emissions and environmental degradation but also poses challenges for data center operators and power infrastructure providers.

The challenges of powering high-density data centers

The vast majority of existing data centers are not equipped to handle the rack densities these devices require. A typical rack may provide only 8-20 kW, which is adequate for traditional enterprise servers and storage, but not for GPU-dense systems, which demand three to four times that amount.

With the power currently available to each rack, the full potential of GPUs remains untapped, resulting in underutilized rack space and suboptimal resource allocation, all of which means reduced efficiency, higher operational costs, and scalability challenges. For the data center developer, this could mean losing tenants and/or failing to attract the large subset of new tenants who wish to deploy these high-density devices.
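
Some rough arithmetic, using assumed rather than vendor-specific figures, makes the density gap concrete: a single eight-GPU server can consume half of a typical rack's entire power budget on its own.

```python
# Rough rack-power arithmetic; server and GPU figures are assumptions,
# not vendor specifications.
gpu_watts = 700               # assumed per-GPU draw under load
gpus_per_server = 8
server_overhead_watts = 2000  # assumed CPUs, memory, NICs, fans per server
server_kw = (gpu_watts * gpus_per_server + server_overhead_watts) / 1000
rack_budget_kw = 15           # midpoint of the 8-20 kW range above
servers_per_rack = int(rack_budget_kw // server_kw)
print(f"{server_kw:.1f} kW per GPU server; "
      f"only {servers_per_rack} fits in a {rack_budget_kw} kW rack")
```

Under these assumptions, only one 7.6 kW server fits within a 15 kW rack budget; the rest of the rack sits idle, which is exactly the underutilization described above.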

To accommodate the energy-intensive nature of AI workloads, integrating GPUs into data center infrastructure requires significant modifications: upgrading power distribution systems, transmission lines, and substations; deploying new cooling technologies; and rethinking physical space arrangements. It's a balancing act, finding the sweet spot between power availability and rack utilization to ensure efficient resource use without overloading the power infrastructure and risking disruptions.

Retooling the world’s data centers

"We're seeing incredible orders to retool the world's data centers. I think you're seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world's data centers and build it out as accelerated computing," Nvidia founder and CEO Jensen Huang said.

"You'll have a pretty dramatic shift in the spend of a data center from traditional computing and to accelerate computing with SmartNICs, smart switches, of course, GPUs and the workload is going to be predominantly generative AI."

Industry leaders like Huang envision a monumental shift in the data center landscape. Surging demand for accelerated computing, driven by generative AI workloads, is already prompting significant orders to retool existing facilities, marking the start of a transition toward repurposing and revitalizing them for accelerated computing.

As that transition unfolds, traditional computing approaches will give way to accelerated computing built on SmartNICs, smart switches, and GPUs, with generative AI as the dominant workload. For industry leaders, the task is to recognize the magnitude of this shift and plan for it now.

Distributed energy solutions: A shift towards decentralization

Here's where industry leaders can truly make a difference. The centralized electrical grid is struggling to keep up with the surging energy demands of AI workloads, especially with the increasing adoption of GPU-intensive tasks. This strain on the power infrastructure not only challenges its capacity but also contributes to a concerning rise in carbon emissions, hampering global efforts to combat climate change. To address these pressing issues, a shift towards distributed energy solutions is imperative.

Distributed energy solutions offer a decentralized approach to power generation, empowering data centers to cut their carbon footprint and reduce their reliance on traditional power grids. The benefits extend beyond environmental concerns, however: embracing distributed energy bolsters the resilience and reliability of the power supply, safeguarding the industry against disruptions caused by the limitations of centralized power infrastructure.

In this landscape, Bloom is poised to seize a unique opportunity. By providing solutions that supplement existing power infrastructure in data centers, Bloom can swiftly meet increased energy demands without lengthy substation or transmission upgrades. This agility enables developers and colocation providers to upgrade their capacity and accommodate the higher power requirements of GPU workloads in a timely manner. Bloom Energy Servers work seamlessly with existing infrastructure, especially where grid power is also available: they simply supplement the existing power capacity delivered directly to the building, operating in conjunction with the centralized supply.

Bloom's Primary Energy Server (PES) stands as a testament to its commitment to rapid deployment and reliable power solutions. With multiple CAPEX or financing options, Bloom can align with the financial needs of customers who are willing to pay a premium lease rate to access the higher power densities demanded by GPU workloads. For the developer, Bloom can increase rack densities quickly, enabling the retooling of existing space in order to accommodate the 30+ kW per rack demand.

Imagine a groundbreaking power ecosystem for generative AI workloads, where decentralized energy solutions not only drastically diminish the facility's carbon footprint but also improve reliability and pricing predictability for the customer or tenant. Now you don’t have to imagine, because that’s what Bloom offers. A Bloom deployment is also future-proofed from a green hydrogen standpoint: once green hydrogen distribution and the corresponding economics are established, Bloom stands ready with a zero-carbon power solution.

Don’t miss the next article in our series, ‘Power as the great AI equalizer: Addressing power constraints to enable future growth,’ where we will delve into the significance of scalable power solutions and innovative cooling techniques. Together, we will explore how the industry can overcome power limitations, empowering the future growth of AI workloads while maintaining optimum efficiency and sustainability.

Want to learn more about Bloom’s distributed energy solutions for data centers? Visit bloomenergy.com/industries/data-centers.