DCFlex, the research and development coalition that launched last year to help turn data centers into flexible grid assets, is sharing the first set of results from one of its flexibility “hubs,” located at an Oracle data center in Phoenix, Arizona.
In a field test that used orchestration software developed by Emerald AI, a new startup out of stealth mode today, the data center successfully reduced its power consumption by 25% during peak grid demand hours — while maintaining AI compute quality. It did so by choreographing clusters of Nvidia GPUs in real time, on a job-by-job basis.
The test was the first phase of a broader DCFlex effort to develop replicable strategies for data center flexibility. There are ten demonstration hubs, some of which will also target flexibility in cooling and backup power in addition to compute workload. The hubs bring together utilities, data center owners and operators, and tech providers.
Data centers are seeking gigawatts of additional power capacity in the next few years, and interconnection queues stretch for a decade in some parts of the country. Against that backdrop, a recent Duke University study suggested that if new AI data centers can flex through small curtailments throughout the year, the grid could absorb over 100 gigawatts of additional load without the yearslong infrastructure buildout that is holding up development.
In the Phoenix pilot, where Oracle owns the data center and Databricks manages the workloads inside, the early results demonstrate that data centers can perform those curtailments without hurting their performance, said Anuja Ratnayake, who leads the emerging technologies practice at the Electric Power Research Institute, the organization behind DCFlex. Just as importantly, the results demonstrate how big an impact small flex capabilities can make on the grid when applied at data center scale.
The fact that a data center can “demonstrate significant reduction in energy use, without taking the entire workload to zero” is a step change for a sector that has long been reluctant to adopt flexibility solutions, Ratnayake said.
“The immediate negative response [to demand response proposals] came from a sort of baggage that we’ve carried, because of the way we have done demand response previously,” she said. In the past, large loads responding to utility demand response signals may have had to go entirely offline, but “that is not what we are talking about in this iteration of flexibility,” she added. “Now, depending on the type of grid need, the reduction needs to come in different forms.”
Inside the pilot
This is where Emerald AI comes in. The startup, which also announced a $24.5 million seed round today, is backed by a slew of high-profile investors, including Google’s chief scientist Jeff Dean and chief sustainability officer Kate Brandt, and former Secretary of State John Kerry. It was initially incubated by Radical Ventures, which was also the startup’s first investor, and was part of an Nvidia accelerator program.
The Emerald AI platform continuously profiles jobs across multiple dimensions — including computational flexibility, time sensitivity, and performance tolerance — to conduct dynamic load balancing. When an incoming signal from the grid requires power reduction, the platform models thousands of optimization scenarios in seconds, predicting the effect of each on both power draw and AI performance metrics. That predictive capability is essential for making data center flexibility pencil out.
Rather than shutting down entire data halls, the platform can then slow down less critical tasks, pause batch processing that isn’t time sensitive, and reschedule flexible workloads. And the algorithms powering the platform are continuously evolving based on things like which training jobs are most tolerant to resource constraints, and how different workload mixes affect overall system efficiency.
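The orchestration described above can be sketched as a greedy optimization: rank jobs by how much flexibility they can tolerate, then throttle the most flexible ones first until the grid's requested reduction is met. The sketch below is illustrative only — the job names, numbers, and the linear throttle-to-power model are assumptions, not Emerald AI's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float     # current power draw of the job's GPU cluster
    flexibility: float  # 0.0 = must not be touched, 1.0 = fully pausable

def plan_reduction(jobs, target_kw):
    """Greedy sketch: shed power from the most flexible jobs first
    until the requested reduction is met. Returns (plan, shed) where
    plan maps job name -> new power draw in kW."""
    plan = {j.name: j.power_kw for j in jobs}
    shed = 0.0
    # Touch the least critical (most flexible) jobs first.
    for job in sorted(jobs, key=lambda j: j.flexibility, reverse=True):
        if shed >= target_kw:
            break
        # Assume a job can shed power proportional to its flexibility.
        reducible = job.power_kw * job.flexibility
        cut = min(reducible, target_kw - shed)
        plan[job.name] = job.power_kw - cut
        shed += cut
    return plan, shed

jobs = [
    Job("llm-training", 400.0, 0.5),    # tolerates slowdown
    Job("chat-inference", 300.0, 0.0),  # latency-critical, untouchable
    Job("batch-eval", 300.0, 1.0),      # deferrable batch work
]
plan, shed = plan_reduction(jobs, target_kw=250.0)  # 25% of 1,000 kW
```

In this toy run, the deferrable batch job absorbs the entire 250 kW cut while the latency-critical inference job is left untouched — the same principle, at much larger scale and with a real performance model, that the pilot demonstrated.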
While DCFlex also plans to test out technologies like energy storage, geothermal, and hydrogen, the fact that this first pilot leverages a software solution is key, Emerald AI founder and CEO Varun Sivaram told Latitude Media.
“Our software can be deployed in weeks, not years,” he said. The low cost and speed-to-deployment combined with the pilot’s ability to meet the targeted power reduction is “game changing” for grid operators, he added, who have been “struggling with how to accommodate explosive data center growth.”
In Phoenix, partners Oracle, Emerald AI, Nvidia, and Databricks, along with regional power utility Salt River Project, sought to reduce the data center’s power consumption by 25% for three hours during the system’s evening peak, by steering 256 Nvidia Tensor Core GPUs. To recreate typical AI data center workloads, they created four different combinations of jobs running simultaneously, including mixes that leaned more heavily toward training or inference, and mixes that were more evenly balanced among types of AI load.
Within each combination, individual jobs were labeled with priority tags, indicating whether they were high-priority jobs that shouldn’t be modified, or jobs that could tolerate either 10%, 25%, or 50% longer processing times. Emerald AI’s platform then choreographed each mix of jobs in order to reduce overall data center power demand by 25%.
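The priority-tag scheme can be made concrete with a back-of-the-envelope model. The tags below mirror the tolerances named in the pilot (untouchable, or 10%/25%/50% longer processing times); the assumption that allowed slowdown translates roughly one-to-one into power reduction is a deliberate simplification — real GPU power curves are nonlinear — used here only to show how tags bound the fleet-wide cut.

```python
# Map each priority tag to the maximum fraction of its power a job may
# shed. Tags are from the pilot; the linear slowdown-to-power model is
# an illustrative assumption.
TAGS = {
    "high-priority": 0.00,  # must not be modified
    "flex-10": 0.10,        # tolerates 10% longer processing time
    "flex-25": 0.25,
    "flex-50": 0.50,
}

def max_fleet_reduction(workload):
    """workload: list of (tag, power_kw) pairs. Returns the largest
    fleet-wide power cut (kW) achievable under this naive model."""
    return sum(power * TAGS[tag] for tag, power in workload)

# A hypothetical 1,000 kW mix: headroom is 20 + 50 + 100 = 170 kW (17%).
mix = [("high-priority", 400), ("flex-10", 200),
       ("flex-25", 200), ("flex-50", 200)]
headroom = max_fleet_reduction(mix)
```

One design implication falls out immediately: whether a 25% reduction is reachable depends on the mix, which is why the pilot tested four different job combinations rather than a single workload.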
Looking toward scale
In Phoenix, the DCFlex coalition only has access to computing workloads, and not to the data center’s cooling system or its backup power. But ultimately enabling a data center to ramp all the way down will require flexibility in all three areas, Ratnayake explained.
“We are trying to get at least two demonstration sites where all three of the flexibility elements are available to test,” she said. “And then we can choreograph to see if we can actually offer 100% load shed as far as the utility is concerned, while the work inside the data center actually never stops.”
In the meantime, though, there are further tests to run in Phoenix, starting with expanding beyond peak-period use cases, which come with 24 hours of notice, to emergency periods. And in the wake of the initial data, data center operators and utilities alike are more eager to get involved, she said.
When DCFlex shared the data internally with its members, they responded with “a huge amount of optimism,” she said. On the utility side, that meant interest in joining in on future iterations of testing to see how the strategy performs under different scenarios. That said, there’s still some hesitancy, she added, from grid planners who want “more real, concrete field data” on the ability of a data center to go entirely offline.
On the hyperscaler side, Sivaram said, the reaction was “equally enthusiastic” because of the test’s success in meeting performance targets during the ramped-down hours. Of course, having the chip giant Nvidia as both a tech partner and an investor doesn’t hurt either.
“What’s particularly encouraging is that partners on both sides see this as just the beginning,” Sivaram added. “We’re already in discussions about expanding these demonstrations to additional locations and more complex scenarios. The Phoenix pilot proved the concept works; now everyone wants to see how far we can scale it.”