Data center flexibility dominated the power sector headlines in 2025. The Duke University study published early last year outlined an irresistible idea: unlock 100 gigawatts of grid capacity, without new transmission, by ramping down data center demand during grid stress.
The research put the spotlight on a growing collection of flexibility startups, which attracted deep-pocketed investors and high-profile boards. Hyperscalers staffed up on flexibility experts, distributed asset aggregators spun out new business models, and studies tackled how to integrate massive AI loads without compromising grid stability.
Flexibility isn’t new. For years, it was a sustainability move for hyperscalers, which routed non-urgent workloads to cleaner hours, or else a straightforward cost play via demand response.
In 2026, however, flexibility is being discussed almost exclusively in the context of bypassing bottlenecks — getting data centers online faster in a world with long interconnection queues and gigawatt-scale data centers.
Aligning on the core thesis behind flexibility is key to assessing whether the enthusiasm is more than a pipe dream, said Peter Hirschboeck, founder and managing director at energy infrastructure analysis firm Impact ECI. There’s excitement around the idea that flexible data centers will avoid the need for new transmission upgrades and increase grid utilization, ultimately bringing down costs for consumers, Hirschboeck said — but he’s not convinced. “It’s a nice story, it makes sense, but in reality nothing tends to work out the way it’s planned, and prices usually go up,” he explained.
Meanwhile, some skeptics argue that there’s a disconnect between what’s technically possible and what’s economically feasible. In other words: Are data centers likely to pursue flexibility unless forced to by policy or hard shutoffs? And are they likely to opt into long-term agreements requiring them to be flexible indefinitely?
We’re still in the land of mostly hypotheticals. Only a handful of U.S. projects have experimented with flexibility so far, and meanwhile hyperscalers are gobbling up land and power, inking deals for increasingly aspirational technologies, and even getting into the development business themselves.
When it comes to moving beyond hypotheticals and chasing scale, Hirschboeck said, it’s becoming clear that 2026 will be pivotal: “We’re going from the hype cycle to the roll-up-our-sleeves cycle, which is let’s get these ideas turned from VC pie-in-the-sky thinking to actual deployed technology.”
Internal flexibility progress
Impact ECI assesses flexibility across four key pillars: two within the walls of a data center — compute flexibility and infrastructure flexibility — and two outside it — energy infrastructure and contract-enforced flexibility.
Compute flexibility has garnered a lot of attention, dating back to before the release of the Duke study. And while much of the activity has involved modeling and projections, a handful of deployments at real data centers are underway.
In late 2024, the Electric Power Research Institute announced DCFlex, a research and development coalition including hyperscalers, ISOs, and utilities. The coalition’s focus, EPRI explained at the time, was to coordinate flexibility demonstrations at data centers around the country. In July, DCFlex shared the results of its first so-called “flexibility hub,” located at an Oracle data center in Phoenix, Arizona.
In a field test that leveraged orchestration software developed by Emerald AI (a startup that emerged from stealth mode mid-last year), the data center successfully reduced its power consumption by 25% during peak grid demand hours by choreographing clusters of Nvidia GPUs in real time.
Now, why would a hyperscaler opt to slow or pause workloads when it could run a ton of generators or massive behind-the-meter battery arrays? Emerald AI CEO Varun Sivaram put it this way: Software is cheaper and can be deployed in a matter of weeks, without new permitting. Emerald AI and its partners are now working to develop a 96-megawatt training facility in Virginia.
Whether these technologies will be adopted more widely — within the gigawatt-scale data centers hyperscalers are planning, for example — remains to be seen, acknowledged Hirschboeck, who is an advisor to Emerald AI. One key challenge is how quickly compute demand itself is changing. A few years ago the focus was on AI training, which is batchable and therefore pairs nicely with curtailing power use. Today, though, the dominant workload is inference, which can’t be slowed; it’s possible that software-based flexibility ultimately will become more of a niche feature for training-specific sites.
Perhaps the lower-hanging fruit is the second pillar of internal flexibility Hirschboeck is tracking, which he calls infrastructure flexibility. Solutions under this pillar modify a data center’s physical systems, like cooling and power distribution, both to operate more efficiently and to respond to market signals.
Direct-to-chip liquid cooling, for example, not only uses up to 18% less energy than air cooling but also transfers heat far more efficiently, meaning operators can scale cooling up and down in real time. That responsiveness, Hirschboeck explained, makes direct-to-chip a good fit for demand response programs.
Outside the data center walls
The first two buckets of solutions are seeing growth, Hirschboeck said, because they tend to constitute “small redesigns versus massive energy delivery system redesigns,” and aren’t dependent on equipment with long lead times.
But the most significant flexibility progress so far is happening via external contracts. Much of the tension has centered on PJM, where record-breaking capacity prices have sparked a reckoning over who pays for grid upgrades. In December, FERC directed PJM to adjust its rules so that data centers can connect directly to generation facilities, enabling flexibility and faster grid connection. Now, the industry is watching closely to see how PJM defines curtailment priorities, penalties, and rates.
As the largest energy market in the country, PJM has effectively become the national testbed for the flexible service models being discussed in FERC’s rulemaking on large load interconnection, initiated by the Department of Energy in October to standardize processes for co-located or curtailable loads. But the topic is a contentious one; in a separate proceeding, PJM’s market monitor dubbed flexibility a “regulatory fiction” and urged FERC to block new large loads unless they come with fully matching generation, arguing that without binding curtailment authority, there are no guarantees.
How these proceedings play out may ultimately dictate how flexibility scales, but they aren’t necessarily the only path forward. Tools like interruptible service contracts, demand response programs, or contingent connections can be created by state regulators or even by the utilities themselves, Hirschboeck noted.
In ERCOT, for example, Texas Senate Bill 6 has given the grid operator what amounts to a kill switch. Under the new law, any load of 75 MW or larger must now install equipment that allows it to be remotely disconnected during firm load shed events. Elsewhere in the country, both Oregon and Virginia have instead focused on cost allocation for infrastructure upgrades.
Energy resources
A lot of today’s real-world flexibility lives in the energy infrastructure pillar: onsite batteries and generators. Generators, powered by fossil fuels, come with emissions and usage caps and are therefore a short-term path to flexibility, not a sustainable long-term solution.
Batteries, meanwhile, are rapidly evolving from “idle insurance policies” to active grid resources, enabling peak shaving, load shifting, and eventually, renewable integration. “As new technologies continue to advance, data centers with BESS could gradually take on a new role as seasonal-balancing hubs, helping the grid better absorb and manage renewable energy,” Hirschboeck wrote in a recent Impact ECI assessment of the space.
A more nascent example is grid-interactive uninterruptible power supply (UPS) systems. Traditionally, data center UPS units wait on standby for the moments between a grid failure and backup generators kicking in. But advances in control software and battery chemistry are changing that, Hirschboeck said, pointing to major hardware vendors like Eaton, Vertiv, and ABB, all now deploying UPS systems capable of grid interactivity and participation in ancillary services. Those capabilities could turn the systems into a source of revenue.
Of course, most of the projects playing out at scale today combine solutions from several of these pillars.
Portland General Electric, for example, recently became the first major utility to accept software-driven flexibility as a substitute for new generation: Aligned Data Centers is bypassing a multi-year infrastructure queue by installing a 31 MW battery that allows PGE to remotely reduce the data center’s grid draw during peak hours. The new data center is expected to come online this year, two years ahead of schedule.
Ultimately, Hirschboeck said, the thing to remember is that building a new data center — not a tent city in Ohio, but a “truly resilient cloud or AI data center” — takes 18 to 24 months, assuming electrical equipment is available. If adding flexibility of some form changes the way that core infrastructure needs to be built, whether by redesigning rack structures or by changing power flows, that pushes flexibility at scale even further out.
“We’ve got flexibility,” he added. “Whatever you’re seeing in terms of rollout, it’s not just pilots, it’s not bench testing, but it’s still very much early days.”


