The last two years have brought increasingly dramatic forecasts of load growth and of the trillions of dollars in new infrastructure that will be needed to accommodate it. In the U.S., for instance, peak load is expected to grow by more than 20% over the next decade.
But new research published today suggests that the existing power system already has substantial extra capacity, if only we can unlock it. The country could add over 100 gigawatts of new load by adopting flexibility solutions, without major capacity expansion, according to the new study from Duke University’s Nicholas Institute for Energy, Environment & Sustainability.
The study assesses the main U.S. balancing authorities’ capacity to add new load “before total load exceeds what system planners are already prepared to serve, provided the new load can be temporarily curtailed as needed.” That’s a volume of headroom that is “significantly beyond expectations,” according to Tyler Norris, who co-authored the report.
With an average load curtailment rate of 0.25%, the system could integrate around 76 GW of new load, “equivalent to 10% of the nation’s peak demand,” according to the study. Major power consumers such as data centers could achieve this by agreeing to curtail the new load they’re bringing onto the grid for the equivalent of roughly 90% of a single day per year, Norris told Latitude Media. With a 1% curtailment rate, equivalent to about 3.6 offline days in a year, the potential for new load is as high as 126 GW.
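The arithmetic behind those equivalences is simple: an average annual curtailment rate is a share of the 8,760 hours in a year. Here is a minimal sketch of the conversion, assuming nothing beyond the figures quoted above; the 76 GW and 126 GW headroom numbers are the study’s own, and only the hours-to-days translation is computed.

```python
# Convert average annual curtailment rates into hours and days per year.
# The GW headroom figures are taken from the Duke study, not derived here.

HOURS_PER_YEAR = 8760

def curtailment_hours(rate: float) -> float:
    """Annual hours of curtailment implied by an average curtailment rate."""
    return rate * HOURS_PER_YEAR

for rate, headroom_gw in [(0.0025, 76), (0.01, 126)]:
    hours = curtailment_hours(rate)
    print(f"{rate:.2%} curtailment ≈ {hours:.0f} hours "
          f"(~{hours / 24:.2f} days) per year → ~{headroom_gw} GW of headroom")
```

A 0.25% rate works out to about 22 hours, or roughly nine-tenths of a day; 1% works out to about 88 hours, a little over 3.6 days.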
Varun Sivaram, senior fellow for energy and climate at the Council on Foreign Relations, who was not involved in the study, described the findings as “seismic.”
“For this minimal amount of flexibility, existing U.S. regional power grid infrastructure and power plant capacity could support FOUR Project Stargates, or more data center capacity than the entire current U.S. nuclear fleet of 94 reactors can supply today,” he told Latitude Media in an email (emphasis his). Project Stargate, the joint venture between OpenAI, Oracle, and SoftBank to finance and build the world’s largest data center supercomputers, is estimated to need around 15 GW of power.
Data centers as flexible grid assets
The new report is a major contribution to the discussion around the potential flexibility of new load that has emerged amid the artificial intelligence boom, as new data centers flood utilities with interconnection requests and queue times get longer and longer.
Utilities and data center operators have already been experimenting with flexible loads through options like batchable workloads and load shifting between data centers. An Electric Power Research Institute-led research and development coalition known as DCFlex, for example, is conducting flexibility demonstrations at data centers around the country, with the aim of turning data centers into grid assets. The group includes utilities such as Constellation Energy and PG&E, as well as hyperscalers like Google and Meta.
Unlocking capacity through load flexibility has become more viable thanks to trends such as the rise of on-site generation and storage, which can supply data centers with power during curtailment events. Curtailing a data center’s load, in other words, doesn’t require that the facility go fully offline.
“You might only need to curtail 10% or 20% or 50% [of a data center] because that’s all the flexibility you need, right, to stay below the existing system peak,” Norris explained. “So it ends up being a larger number of hours, but in most of those hours, you’re retaining most of the data center loads. … In 85 hours of the year on average, there would need to be some amount of load curtailment. But during 73 of those, you’re retaining at least 50% of the data center load.”
The report found that the average curtailment event lasts about two hours. “And that ends up looking like a profile relevant to short-duration lithium-ion batteries,” Norris added, “especially if you don’t need to size the battery at the full size of the data center.”
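To see why the battery can be smaller than the facility, consider a rough illustration: the 100 MW facility size and 50% curtailment share below are hypothetical assumptions, while the two-hour event length comes from the report.

```python
# Rough illustration of battery sizing for a partial curtailment event.
# Facility size and curtailment share are hypothetical; the two-hour
# average event duration is the figure cited in the Duke report.

facility_mw = 100          # hypothetical data center load
curtailment_share = 0.5    # only half the load is shed during the event
event_hours = 2            # average curtailment event length per the report

power_needed_mw = facility_mw * curtailment_share
energy_needed_mwh = power_needed_mw * event_hours

print(f"Bridge power: {power_needed_mw:.0f} MW")
print(f"Battery energy for one event: {energy_needed_mwh:.0f} MWh")
# → 50 MW / 100 MWh, i.e. a battery rated at half the facility's peak draw
```

Under those assumptions, a battery rated at half the facility’s peak draw, with two hours of storage, would ride through an average event.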
Even long interconnection queues can be used to encourage load flexibility. For instance, the Northern California utility PG&E is piloting flexible service agreements — a program called Flex Connect — to get new load online faster in capacity-constrained parts of California.
Flexibility “could be the whole ballgame”
A more tailored analysis is needed to establish the precise amount of headroom for new load in each market, and comprehensive data on how many data centers are already experimenting with flexibility solutions is hard to come by.
But, Norris said, the potential of load flexibility is “demonstrated, it’s feasible, and it’s happening already.”
Of course, major investments in building new peaking capacity and in the grid are still necessary. However, the report found that “flexible load strategies can help tap existing headroom to more quickly integrate new loads, reduce the cost of capacity expansion, and enable greater focus on the highest-value investments in the electric power system.”
Costa Samaras, director of the Scott Institute for Energy Innovation at Carnegie Mellon University, told Latitude Media that even if power demand from the data center boom ends up being lower than forecasts suggest (a possibility recently thrown into the conversation by Chinese start-up DeepSeek’s release of its R1 reasoning model), “we will never regret building a more flexible grid right now. It will save emissions, make the grid more resilient, and save costs.”
In fact, Sivaram said that embracing the potential flexibility of data centers “could be the whole ballgame.”
“With data center flexibility, the United States can build out data centers for AI training and inference at a breakneck pace without waiting years or decades for power infrastructure to catch up, compete in the global race with China, and ensure the reliability of power grids around the country and their affordability,” he said, adding that with the appropriate demonstration and investment, flexibility could be implemented at major scale within a year. “This report should be a call to action to speed efforts to achieve demand flexibility of data centers.”


