Today a partnership coalition including Nvidia, Emerald AI, EPRI, Digital Realty, and PJM announced the world’s first power-flexible AI facility: the 96-megawatt Aurora AI Factory in Manassas, Virginia, slated to open in the first half of 2026.
Aurora is the first facility built to a new reference design and certification standard for power-flexible AI infrastructure. The vision enabled by this announcement is one of AI data centers capable of "connect and manage": interconnecting without waiting for transmission network upgrades, and flexing their load when needed, while advancing grid visibility and consumer affordability.
The story began with Emerald's Phoenix load flexibility pilot (involving Oracle, Nvidia, Emerald AI, and the utility Salt River Project) and a DCFlex flagship demonstration. The leap to the Aurora announcement, a live innovation hub, signals that the tech ecosystem is serious about getting this done: AI factories can align with grid needs to relieve peak stress and improve utilization of the power network.
It will work like this: Several software and hardware features will work together to enable a tight coordination between the grid and the data center’s controls, with Emerald AI’s platform serving as the grid-facing control layer. Grid and operator conditions feed into Emerald, which translates them for the data center building’s management systems and ultimately, the compute stack.
In tech speak, Emerald's GridLink and Conductor integrate with Nvidia's AI Enterprise stack and Mission Control to coordinate workload scheduling and power management, so the facility can dial demand down when the grid needs it while maintaining acceptable quality of service for training and inference. To validate this, EPRI's DCFlex Initiative will run demonstration testing, measuring precise, real-time responses to simulated grid-stress events like summer heatwaves or sudden drops in renewable generation.
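To make the control-layer idea concrete, here is a minimal sketch of the kind of logic a grid-facing orchestrator might apply: given a site power cap from the grid operator, pause deferrable jobs (like checkpointable training) first and protect latency-bound inference. All names, numbers, and the shedding policy are illustrative assumptions, not Emerald AI's or Nvidia's actual APIs.

```python
from dataclasses import dataclass

# Illustrative sketch only: not Emerald AI's or Nvidia's real interfaces.
@dataclass
class Workload:
    name: str
    power_kw: float
    deferrable: bool  # e.g. checkpointable training vs. latency-bound inference

def plan_response(workloads, site_cap_kw):
    """Shed deferrable jobs first until total site load fits under the grid cap."""
    total = sum(w.power_kw for w in workloads)
    actions = {}
    # Consider deferrable workloads first, so inference is shed only as a last resort.
    for w in sorted(workloads, key=lambda w: not w.deferrable):
        if total <= site_cap_kw:
            actions[w.name] = "run"
        elif w.deferrable:
            actions[w.name] = "pause"  # checkpoint and defer until the cap lifts
            total -= w.power_kw
        else:
            actions[w.name] = "run"    # protect latency-bound service
    return actions, total

# Hypothetical 85 MW site asked to drop to 40 MW during a grid-stress event.
jobs = [
    Workload("train-llm", 60_000, deferrable=True),
    Workload("inference", 25_000, deferrable=False),
]
actions, new_load_kw = plan_response(jobs, site_cap_kw=40_000)
```

In practice the real control stack also manages ramp rates and power quality on the way down, but the priority ordering above captures the core idea: the grid sees a predictable cap, and the facility decides internally which work to defer.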
Aurora will live out the concept that Nvidia CEO Jensen Huang discussed in the company’s GTC 2025 presentation yesterday: “extreme codesign.”
To the outside observer, this is a delicate choreography, reminiscent of a classical dancer coordinating gestures with rhythm, both practiced and improvised.
To the technologist observer, this is a first-principles demonstration: that AI factory loads can be standardized to the point that the grid can be agnostic to the load's real-time behavior, while the load remains grid-aware and self-managing.
As a Kathak dancer and as an advisor to Emerald AI, I observed today's announcements through the dual lens of artistry and technology, and I find this to be the embodiment of what I've long said about the importance of ensuring the grid works for everyone.
AI factories as polite dinner guests
Making an AI factory flexible requires grid etiquette: storage, compute, and power distribution moving in concert, performing feats like meeting expectations for ramp-rate limits, transient stability, harmonics and flicker, and voltage ride-through. It must listen to the grid, adjust gracefully, and continue its work — an affable member of the society of the electric grid.
A power-flexible factory’s constituent parts are device-level flexibility meeting factory-level control systems, allowing the whole site to move with the grid instead of against it.
For more on Emerald AI’s approach to load flexibility, listen to founder and CEO Varun Sivaram’s interview on the Catalyst podcast:
This is not the first time I have described the flexible factory as polite. Months ago, I spoke and wrote about AI factories being polite dinner guests. Their aim is to be a consummate participant in a shared space: predictable, plannable guests their hosts (grid operators) can count on not to leave the table abruptly with the tablecloth tucked into their belt, not to consume more than their fair share, and not to bring uninvited surprises.
This analogy is apt when we consider how to encourage more widespread load flexibility. Onboarding large electric loads has become an unnerving exercise for system planners and operators. What operators need is for these loads to reliably respond to shifting grid needs; a flexible, standardized approach to achieve this is overdue.
Predictable behavior should also earn faster interconnections for flexible loads, which is valuable to data center developers as the grid builds out on a slower timeline to serve more human needs.
How to optimize the flexible data center
A recent Nvidia white paper highlights three things that make an AI factory particularly reliable, or "polite":
- Energy storage systems: Both long- and shorter-duration storage need to be placed where they count. This includes fast, real-time compensation near the racks to catch sub-second spikes, as well as site-level batteries to shape seconds-to-minutes ramps. Together they steady the data center's rhythm and soften hard landings.
- GPU performance tuning and workload pacing: Firmware and scheduler controls smooth rapid power fluctuations, limit cycle-to-cycle ramp rates, and suppress peak-power thumps. This is the practice: intricate, precise, repeatable, and flexible to accommodate new information.
- Coordinated control strategies: Storage, compute, and the facility’s power distribution move in sync — meeting ramp-rate limits, transient stability, harmonics/flicker, and voltage ride-through expectations — so compliance can be choreographed live.
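To illustrate how these pieces divide the work, here is a toy sketch of ramp-rate limiting, assuming purely illustrative numbers: the compute stack clamps how fast grid-visible load can change, and the fast storage near the racks absorbs whatever swing remains. This is a simplified stand-in for the coordinated controls described above, not their actual implementation.

```python
def limit_ramp(raw_power, max_step):
    """Clamp step-to-step changes so the grid-visible load ramps gradually.

    The residual (raw minus limited) is the swing that fast storage
    near the racks would have to absorb. Illustrative model only.
    """
    limited = [raw_power[0]]
    for p in raw_power[1:]:
        prev = limited[-1]
        # Allow at most +/- max_step of change per interval.
        step = max(-max_step, min(max_step, p - prev))
        limited.append(prev + step)
    return limited

# Hypothetical bursty training load (MW per interval) with large swings.
raw = [50, 90, 30, 70]
smooth = limit_ramp(raw, max_step=10)            # grid sees [50, 60, 50, 60]
residual = [r - s for r, s in zip(raw, smooth)]  # served or absorbed by batteries
```

The design point this captures: the grid only ever observes the smoothed series, while the battery sizing question becomes "how large can the residual get, and for how long?"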
The result is etiquette-as-engineering: predictable, testable behavior that planners can model, operators can count on, and communities can live alongside without disruption to their experience of the grid.
The power-flexible AI factory advances technology innovation alongside the pace of human advancement and human needs. Electric grids are slated to require gigawatt-scale buildouts in light of new manufacturing and industrial production, population growth, electrification, and air conditioning load, well beyond what AI alone is projected to need.
Flexible factories — both for AI and for other purposes — are an affirmation that AI compute growth can coexist with and fundamentally enable human priorities.
Arushi Sharma Frank is a senior advisor to Emerald AI, and has worked with a large ecosystem of grid flexibility and grid responsive software and hardware companies such as Base Power and Tesla. For more of her thoughts on grid flexibility, listen to her interview on the Open Circuit podcast. Her bio is here, if you’d like to know more about her work. She is also a practitioner of classical Kathak dance. The opinions represented in this contributed article are solely those of the author, and do not reflect the views of Latitude Media or any of its staff.