The months since Duke University researcher Tyler Norris released his landmark paper on data center flexibility in February have been a whirlwind. That research argued that if large loads — namely data centers — could be flexible with their energy use, it could unlock more than 100 gigawatts of extra capacity on the grid.
Since the report’s publication, the data center flexibility ecosystem has evolved rapidly. New companies have entered the scene offering various forms of flexibility services; pilot projects have taken off; and new partnerships have been established to explore the potential of flexibility. And the interested parties now span every corner of the energy and data center worlds, from hyperscalers and investors to utilities, state and local governments, and regulatory agencies.
But — as Norris outlined when he appeared on Friday’s Latitude Dispatch — six months later, significant questions remain. The industry has to address both regulatory and market design challenges, figure out how to coordinate demand response efforts, and establish measurement protocols. And it will need to answer these questions while simultaneously aligning flexibility products with grid reliability and economic goals.
“We’re not going to turn the whole industry on a dime,” Norris told the audience last week. “It’s going to take a few years for all this to play out, but I think it’s very exciting to see these different, creative approaches now being seriously explored.”
What does the flexible data center solutions ecosystem look like right now?
Norris categorizes these emerging solutions into two buckets: those shifting computational workloads, and those ensuring workloads don’t have to shift during a grid event.
The first category of solutions is one that Google, for example, has been developing and using internally for several years, Norris noted. It’s also being pursued by Emerald AI, a startup developing software that orchestrates the shifting of batchable workloads to create windows of flexibility. The approach aims to balance performance needs with grid support by deferring or advancing certain training jobs to moments when the grid has capacity to spare.
For more on Emerald AI and the mechanics of data center flexibility, listen to founder and CEO Varun Sivaram’s discussion with Shayle Kann on the Catalyst podcast:
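To make the first bucket concrete, here is a minimal sketch of what workload time-shifting can look like. This is not Emerald AI’s or Google’s actual software; the `Job` fields, the hour-granularity scheduling, and the notion of “constrained hours” are all simplifying assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Job:
    name: str
    runtime: timedelta   # compute time the job needs
    deadline: datetime   # latest acceptable finish time
    batchable: bool      # can the job tolerate being deferred?

def schedule(jobs, constrained_hours, now):
    """Push batchable jobs out of grid-constrained hours when their
    deadlines leave enough slack; everything else runs immediately."""
    plan = []
    for job in jobs:
        start = now
        if job.batchable:
            while start.hour in constrained_hours:
                candidate = start + timedelta(hours=1)
                if candidate + job.runtime > job.deadline:
                    break  # no slack left; run during the constrained window
                start = candidate
        plan.append((job.name, start))
    return plan

# Example: a four-hour training job arriving at 5 p.m. during an evening
# peak gets deferred to 9 p.m., when the grid has capacity to spare.
jobs = [Job("train-run-a", timedelta(hours=4), datetime(2025, 9, 3, 6, 0), True)]
print(schedule(jobs, constrained_hours={17, 18, 19, 20},
               now=datetime(2025, 9, 2, 17, 0)))
```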
In the second category Norris pointed to Verrus, which emerged from stealth in mid-March after incubating inside Alphabet spinout Sidewalk Infrastructure Partners. The Verrus approach includes redesigning the data center itself to serve as a flexible grid asset: the company is pitching data centers that, using software-managed onsite batteries, can curtail up to 100% of their load within one minute of receiving a signal from the utility.
The company doesn’t have any data centers up and running quite yet, though; it conducted a demonstration in collaboration with the National Renewable Energy Laboratory in May, and hopes to break ground on its own data centers later this year.
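A rough sketch of the second bucket’s logic, assuming a simple battery model (the class, field names, and the one-hour event duration are invented for illustration, not drawn from Verrus’s actual design): on a utility signal, the controller sheds as much grid draw as the battery can sustain.

```python
from dataclasses import dataclass

@dataclass
class BatterySystem:
    state_of_charge_mwh: float   # energy currently stored
    max_discharge_mw: float      # inverter power rating

def respond_to_grid_signal(battery: BatterySystem, it_load_mw: float,
                           requested_mw: float, event_hours: float = 1.0) -> float:
    """Return how many MW of grid draw can be shed by carrying load on the
    battery instead. Curtailment is bounded by the utility's request, the
    actual IT load, the inverter rating, and the energy needed to ride
    through the assumed event duration."""
    sustainable_mw = battery.state_of_charge_mwh / event_hours
    return min(requested_mw, it_load_mw, battery.max_discharge_mw, sustainable_mw)

# Example: a 90 MW facility with a 120 MWh battery rated at 100 MW can
# shed its full grid draw for a one-hour event.
battery = BatterySystem(state_of_charge_mwh=120.0, max_discharge_mw=100.0)
print(respond_to_grid_signal(battery, it_load_mw=90.0, requested_mw=90.0))  # 90.0
```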
As for how much impact either category of solution is having on the data center industry so far, Norris said there’s still a long way to go, especially given the industry’s sheer scale.
“It’s almost worth thinking about these standardized data center architectures and designs…like designing an aircraft carrier,” he explained. “You have a whole supply chain and a whole set of standards, construction crews, equipment, all of it oriented towards these standardized designs.”
Shifting those standards will require both time and coordination, he added, and may also require AI companies to shift what they expect from a data center.
The reality is that most AI companies will prefer the second bucket of approaches because those approaches have no impact on their users. That preference is pushing companies toward onsite generation and, in some cases, energy storage, Norris explained.
“But from a speed and minimal capex impact perspective, a software-based solution that allows you to time-shift — especially for already batchable and deferrable training loads — in some ways that’s the fastest possible option you can imagine here,” he added.
How is the demand response landscape evolving?
One key question as more actors embrace flexibility is who actually runs these programs. While many models will likely surface as the market evolves, Norris said “it would seem that the large load itself should play a significant role in actually procuring those flexibility resources.”
“Maybe that is ultimately sleeved through the utility, like green tariff programs,” he added. “I’m just increasingly mindful about putting more and more burden, and more and more stress on the utilities and the transmission providers themselves, because they already have so much on their plate.”
To the extent that large loads themselves are unable or unwilling to be flexible, the notion of a “secondary market for flexibility” comes into play. In that dynamic, large loads may act more as flexibility brokers, procuring flexibility from other customers, such as existing demand response aggregators, and capturing the benefits associated with flexibility.
How consequential is PJM’s “non capacity-backed load” proposal?
The practical implications of these questions are playing out in the PJM Interconnection, the biggest regional transmission organization in the U.S.
PJM is projecting a peak load increase of around 32 GW between now and 2030, driven almost entirely by data centers. Recent capacity auctions have cleared at record prices, which many in the energy world have taken as an indicator that power markets just aren’t prepared to meet the coming surge of AI-driven demand.
In response to concerns about rising prices and consumers covering the costs for data center infrastructure buildout, PJM put forward a proposal known as the non capacity-backed load initiative, or NCBL. Under the proposal, data centers and other large loads over 50 megawatts could avoid procuring capacity at auction by agreeing to limit their power use during capacity shortfalls. One key element of the proposal, however, is that while PJM would ask for voluntary participation, large loads could be roped in involuntarily based on need.
“PJM frames it as… [curtailment] is voluntary at first and we hope people will sign up, but if not, and resource adequacy is short, it’s going to be mandatory,” Norris explained. On top of that, large loads would be subject to curtailment before PJM calls upon conventional demand response.
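As a toy model of those mechanics (the function, the load-list format, and the greedy allocation are invented purely for illustration; only the 50 MW threshold, the voluntary-then-mandatory structure, and the before-conventional-DR ordering come from the proposal as described), the dispatch logic might look something like this:

```python
NCBL_THRESHOLD_MW = 50  # size above which large loads fall under the proposal

def curtailment_calls(large_loads, conventional_dr, shortfall_mw, adequacy_short):
    """Greedily assign curtailment for a capacity shortfall: NCBL-eligible
    large loads are called before conventional demand response. Each large
    load is a (name, mw, opted_in) tuple; DR resources are (name, mw)."""
    calls, remaining = [], shortfall_mw
    # Call voluntary participants before involuntary ones.
    for name, mw, opted_in in sorted(large_loads, key=lambda load: not load[2]):
        if remaining <= 0:
            break
        # If resource adequacy is short, eligible loads can be called
        # even if they never opted in.
        if mw >= NCBL_THRESHOLD_MW and (opted_in or adequacy_short):
            cut = min(mw, remaining)
            calls.append((name, cut))
            remaining -= cut
    # Conventional demand response is only tapped afterward.
    for name, mw in conventional_dr:
        if remaining <= 0:
            break
        cut = min(mw, remaining)
        calls.append((name, cut))
        remaining -= cut
    return calls
```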
It’s hard to overstate just how consequential PJM’s efforts here are, Norris added. “We really are talking about the future of power markets in the U.S., and their design. The Independent Market Monitor…came out and said that if we don’t get this right we may just see PJM dissolve entirely.”
It’s important to recognize that PJM is in a “serious bind,” Norris said, given the load growth the region is experiencing. “Something has to happen here, and I think there is some credit to PJM that they were willing to stick their necks out and put out a challenging proposal that very few are going to like, because it is such a shift in the way that market is structured.”
How are stakeholders in the data center world responding to the proposal?
The NCBL proposal has already proved controversial.
Earlier this week, PJM published the input it had received on the initiative. Stakeholders including Amazon, Microsoft, and Constellation Energy weighed in on the proposal, pushing back especially against the mandatory nature of curtailment PJM had outlined.
One key issue, Norris said, was the lack of a “defined speed-to-power benefit” for the load. In other words, “there was no defined path to faster interconnection.” By contrast, a proposal in the Southwest Power Pool emphasizes that flexible loads could get connected to the grid in as little as 90 days.
“It’s hard to react positively to something that’s completely undefined,” Norris added. “PJM said very clearly that this is a conceptual proposal. They were seeking input and I’m sure they expected to get a lot of pushback.”
Still, Norris said he didn’t interpret the nearly 200 pages of stakeholder input as a “full-scale pushback” by the industry against flexibility.
“I counted nearly 20 separate responses, including from the Data Center Coalition, Google, Amazon, and the industrial customers…saying that they want to see modernization of the demand response program design,” Norris explained. “Google, which is one of the most forward-leaning here…noted that the existing demand response program doesn’t work well for them because…there’s no limit necessarily on how much the load can be curtailed.”
These questions and concerns are a natural part of creating something completely new to U.S. electricity markets, he added: “We’re trying to create a product that exchanges bounded flexibility for accelerated interconnection, and we’ve never done that before, especially at a transmission scale.”
That product, if done right, is the “holy grail” of the AI boom, he added. “Hopefully we can get a lot of the bright minds together to figure out that product.”