Data centers running artificial intelligence — and especially those training large language models — don’t behave the same as traditional data centers.
For years, typical workloads like streaming video or cloud-based productivity applications required consistent levels of data processing and power consumption. AI has replaced that steady profile with dramatic, “spiky” compute and power demand that can be both intense and volatile, particularly during the training phase of AI rollouts.
At the same time, the scale of data centers continues to grow, which exacerbates these challenges. Data center campuses hundreds of megawatts in scale are increasingly routine, and gigawatt-scale campuses are on the near horizon.
But it is the unpredictability that presents the greatest challenge. AI training drives rapid increases in power demand that can drop off equally quickly, creating sometimes-extreme “peaks and valleys” in consumption. Grids, generally speaking, are not designed for this level of volatility; they operate best when conditions are stable and consistent.
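One way to make this volatility concrete is to look at ramp rates: how many megawatts a load swings between consecutive readings. The sketch below is purely illustrative; the campus size, readings, and the synchronization-stall scenario are invented assumptions, not data from any real facility.

```python
# Hypothetical illustration: quantifying the "peaks and valleys" of an AI
# training load by computing ramp rates (MW change per interval) from a
# sample power-consumption time series. All numbers are invented.

def ramp_rates(load_mw):
    """Return the interval-to-interval change in load (MW)."""
    return [b - a for a, b in zip(load_mw, load_mw[1:])]

# Five-minute readings for a hypothetical ~300 MW AI training campus:
# a training stall drops demand sharply, then it snaps back.
load = [290, 295, 120, 115, 288, 292]

ramps = ramp_rates(load)
print(ramps)  # [5, -175, -5, 173, 4]

worst = max(abs(r) for r in ramps)
print(f"Worst swing: {worst} MW in five minutes")  # 175 MW
```

A conventional data center's ramp list would hover near zero; swings on the order of half the campus's capacity within minutes are what make these loads hard for grid operators to plan around.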
Traditionally, grid operators have not had the tools to manage this challenge effectively. With the rise of AI, they need management and control systems that can adapt quickly to changing conditions, and infrastructure flexible enough to accommodate those changes while still maintaining the levels of resiliency and responsiveness needed to keep the power flowing reliably.
Yes, utilities need to be able to meet the increased demand for power, but they also need to be able to accommodate the dynamic nature of these fast-growing, city-sized loads. To do this, they need flexibility above all.
Joining with the grid
When it comes to siting a data center, the most critical factor is the availability of power. It is also increasingly the biggest bottleneck. In the U.S. in particular, grid interconnection queues are notoriously and frustratingly long.
This stands in stark contrast with the development timelines typical of the IT industry, which moves fast, innovating and shifting direction quickly to adapt to the market.
Data center operators can’t just be customers of utilities; they need to act as partners, fully engaged in designing and driving the increasingly diverse, distributed, and digitalized energy ecosystem. They should be exploring options like on-site generation (solar, wind, or even nuclear), and building flexibility into the system with technologies like on-site battery energy storage systems (BESS), hydrogen-ready generation, and other ‘behind-the-meter’ solutions, coupled with sophisticated energy management systems.
Furthermore, they need to consider new approaches to using energy — including techniques like load shaping or demand response — that can enable them to take advantage of off-peak power availability in ways they may not have considered before.
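At its simplest, load shaping means moving deferrable work into the hours when power is cheapest and most available. The sketch below shows the idea with a greedy scheduler; the hourly price curve and the six-hour job are invented assumptions for illustration, not a real tariff or workload.

```python
# Hypothetical sketch of load shaping: schedule a deferrable batch of
# compute-hours into the cheapest (off-peak) hours of the day. The price
# curve and job size are invented for illustration.

def shape_load(prices, hours_needed):
    """Pick the cheapest `hours_needed` hours from a 24-entry price list."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

# Invented hourly prices ($/MWh): cheap overnight, expensive in the evening.
prices = [30, 28, 25, 24, 26, 32, 45, 60, 70, 75, 72, 68,
          65, 64, 66, 70, 80, 95, 110, 100, 85, 60, 45, 35]

print(shape_load(prices, 6))  # [0, 1, 2, 3, 4, 5] -- the overnight hours
```

Real demand-response programs add constraints this sketch ignores (contiguity of training runs, notification windows, curtailment payments), but the core trade — flexibility in exchange for cheaper, more available power — is the same.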
Going modular
An increasingly attractive option for data center developers that has emerged in recent years is the use of prefabricated, modular grid infrastructure. Compact substations are now available that incorporate transformers, gas-insulated switchgear, breakers, and other key components into a single, simple-to-deploy unit. This kind of equipment can ease permitting challenges and cut deployment times, particularly in crowded urban areas where space is at a premium.
This approach has already been used effectively in a number of instances, such as in Ireland’s Castlebagot region, where a hyperscale data center operator was able to overcome particularly challenging space limitations and accelerate their timeline to develop a new campus. We are also starting to see this strategy applied in other markets, such as the U.S.
Digitalization for real-time optimization
Critical grid components like transformers and switchgear are not enough, though. They need to be coupled with digital technologies like predictive analytics, and real-time monitoring and asset management systems to bring much-needed intelligence to the network. These technologies can enable more sophisticated and dynamic approaches to managing both loads and supply — and ultimately both optimize energy flows and detect problems early, before they have a significant impact on operations.
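The "detect problems early" part of real-time monitoring often comes down to comparing each new sensor reading against a recent baseline. The sketch below illustrates that pattern with a rolling-mean check; the temperature readings, window size, and threshold are invented assumptions, not values from any real asset-management system.

```python
# Hypothetical sketch of early problem detection via real-time monitoring:
# flag transformer temperature readings that deviate sharply from a rolling
# baseline. Thresholds and readings are invented for illustration.

from collections import deque

def detect_anomalies(readings, window=5, threshold=8.0):
    """Flag indices where a reading exceeds the rolling mean by `threshold`."""
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if value - baseline > threshold:
                flagged.append(i)
        recent.append(value)
    return flagged

# Top-oil temperature readings (deg C) with one sudden excursion.
temps = [62, 63, 61, 62, 63, 62, 64, 78, 63, 62]
print(detect_anomalies(temps))  # [7] -- the 78 deg C spike
```

Production systems use far richer models than a rolling mean, but the principle — a deviation from expected behavior triggers attention before equipment fails — is what turns raw telemetry into the early warning the article describes.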
AI itself can help here, transforming how we manage and maintain grids rather than simply increasing electricity consumption. Intelligent infrastructure can help us anticipate problems, prevent or mitigate disruptions, and even enable entirely new business models.
But none of this can be accomplished without supportive policies. Data center operators, their utility partners and their technology suppliers can’t do this on their own. We need innovative policy frameworks that provide needed flexibility and remove bottlenecks to grid expansion, interconnection and planning.
The AI Action Plan announced in July by the White House represents an important step toward establishing such a framework. More cooperation between federal, state, and local governments, in collaboration with industry stakeholders, will also be needed. We have the opportunity to ensure that the data center industry can continue to flourish here in the U.S. But it will depend on the re-imagining of the power system, to create a more intelligent, more flexible, and resilient grid.
Anthony Allard is the executive VP and head of North America for Hitachi Energy. The opinions represented in this contributed article are solely those of the author, and do not reflect the views of Latitude Media or any of its staff.