Federal regulators, tasked with building a framework to allow data centers to connect to the grid more quickly, are grappling with the question of whether large loads that agree to be curtailed should get expedited access.
A new analysis published this week, sponsored by Google, offers a potential blueprint for a tariff structure that its authors argue would do just that, while also neutralizing the risk of shifting costs onto ratepayers.
The report, conducted by grid orchestration platform Camus Energy, transmission modeling firm Encoord, and Princeton University’s ZERO Lab, uses real transmission data from an anonymized utility in PJM to model hourly constraints at six hypothetical 500-megawatt data center sites.
While many early iterations of flexible interconnection tariffs offer a binary choice between firm and non-firm service, the report models a third option: data centers that receive a mix of firm (uninterruptible) and “conditional firm” (curtailable) grid service. In that scenario, the data center must bring enough accredited capacity to cover its uninterruptible load — through power purchase agreements, virtual power plants, or onsite power — while agreeing to manage the demand of its “conditional firm” load during grid constraints.
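To make that structure concrete, here is a rough sketch of how such a hybrid agreement might be represented in code. The 500 MW site size comes from the study; the firm/conditional split, the capacity figures, and all names below are illustrative assumptions, not values from the report.

```python
# Minimal sketch of the hybrid firm / "conditional firm" service described above.
# All numbers and names are illustrative; only the 500 MW site size comes from the study.
from dataclasses import dataclass

@dataclass
class HybridInterconnection:
    firm_mw: float                  # uninterruptible load, must be backed by accredited capacity
    conditional_mw: float           # curtailable ("conditional firm") load
    accredited_capacity_mw: float   # PPAs, virtual power plants, or onsite power

    def is_eligible(self) -> bool:
        # The data center must bring enough accredited capacity to cover its firm load.
        return self.accredited_capacity_mw >= self.firm_mw

    def served_load_mw(self, grid_headroom_mw: float) -> float:
        # During a constraint, firm load is always served; conditional load is
        # trimmed to whatever headroom the local grid still has.
        return self.firm_mw + min(self.conditional_mw, max(grid_headroom_mw, 0.0))

# Hypothetical 500 MW site: 200 MW treated as firm, 300 MW as conditional.
site = HybridInterconnection(firm_mw=200, conditional_mw=300, accredited_capacity_mw=220)
print(site.is_eligible())        # True: accredited capacity covers the firm block
print(site.served_load_mw(120))  # 320.0 MW served when only 120 MW of headroom remains
```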

The modeling found that accepting a flexible interconnection agreement could get those theoretical data centers online three to five years faster than under the existing interconnection process for inflexible load. The “reliability sacrifice” required across the study’s sites was minimal; grid power was available for more than 99% of all hours in the year. Bridging the remaining gap required dispatching backup sources for between 40 and 70 hours annually.
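The arithmetic behind those availability figures is simple. The sketch below uses the report’s numbers (more than 99% grid availability, 40 to 70 hours of backup dispatch); the rest is plain arithmetic added here for context.

```python
# Back-of-envelope check on the availability figures cited by the study.
HOURS_PER_YEAR = 8760

grid_availability = 0.99                    # "more than 99% of all hours"
max_curtailed_hours = HOURS_PER_YEAR * (1 - grid_availability)
print(round(max_curtailed_hours))           # ~88 hours/year upper bound on curtailment

for backup_hours in (40, 70):               # backup dispatch range the study found
    print(f"{backup_hours} backup hours = {backup_hours / HOURS_PER_YEAR:.2%} of the year")
```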
Responding to risk
The white paper also offers a counterargument to the skepticism raised by the PJM market monitor and others: that data centers are unlikely to curtail voluntarily because the revenue generated by their compute load vastly outweighs any grid penalty or payment.
The study argues the choice facing data centers isn’t between revenue and curtailment, but between revenue and waiting in line. By accessing the grid three years earlier via flexible interconnection, a single data center site stands to generate between $2.3 billion and $3.2 billion in additional earnings.
Data centers are inherently incentivized to adopt the “buy capacity to connect now” approach, despite the upfront cost, the study found, because that massive revenue gain easily covers the lifecycle cost of the onsite batteries or generators needed to make flexibility physically reliable. In short, flexibility is the only profitable path through the queue.
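To put those headline numbers in rough perspective, the sketch below spreads the reported $2.3 billion to $3.2 billion gain over a 500 MW site and a three-year head start. The per-megawatt breakdown is back-of-envelope arithmetic added here for context, not a calculation the study presents.

```python
# Illustrative arithmetic on the report's headline revenue figures.
# The gain range, 500 MW site size, and three-year head start are the report's;
# the per-MW-year and per-MWh breakdowns are assumptions for context only.
site_mw = 500
years_earlier = 3
hours_per_year = 8760

for gain in (2.3e9, 3.2e9):
    per_mw_year = gain / (site_mw * years_earlier)
    per_mwh = per_mw_year / hours_per_year   # assumes round-the-clock operation
    print(f"${gain/1e9:.1f}B ≈ ${per_mw_year/1e6:.2f}M per MW-year, ~${per_mwh:.0f}/MWh")
```

Revenue on that order dwarfs typical curtailment penalties, which is the market monitor’s point; the study’s rejoinder is that the relevant comparison is to earning nothing while waiting in the queue.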
On the consumer side, the report attempts to temper fears of cost-shifting, arguing that under the proposed framework, a data center covers between 96% and 100% of the incremental system costs required to serve it.
Flexible interconnection avoids the need for peak capacity buildout, saving around $78 million per gigawatt of data center load in system costs, the report found. On top of that, the bring-your-own-capacity (BYOC) arrangement requires the data center to procure its own accredited capacity for its firm load, effectively removing that demand from the general market pool so it doesn’t drive up capacity clearing prices for other ratepayers.
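Scaled to the sites the report models, the avoided-cost figure works out as below. The per-site extrapolation assumes the $78 million per gigawatt applies linearly; it is arithmetic added here for context, not a number the report states directly.

```python
# Rough scaling of the report's $78M/GW avoided peak-capacity cost.
savings_per_gw = 78e6    # avoided peak-capacity buildout, dollars per GW of data center load
site_size_gw = 0.5       # each modeled site is 500 MW
num_sites = 6            # the study models six hypothetical sites

per_site = savings_per_gw * site_size_gw
total = per_site * num_sites
print(f"${per_site/1e6:.0f}M per site, ${total/1e6:.0f}M across all six sites")
```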
But the report relies on a few foundational assumptions that flexibility critics warn may not align with the realities of the grid. For one thing, while the study shows that data centers can be flexible without sacrificing uptime or profitability, it leaves open the question of whether they would actually comply during a true emergency.
Another challenge, as Aurora Energy Research Head of USA East Julia Hoos explained during a recent Latitude Dispatch, is that site-specific flexibility, as modeled in the report, doesn’t account for complex knock-on effects. Taking a large generator or load offline can redirect power flows across PJM, potentially requiring transmission upgrades in entirely different parts of the footprint, she said.
FERC parallels
The hybrid solution the study proposes is closely aligned with what sponsor Google and other hyperscalers are advocating to FERC, as part of the large load interconnection proceeding kicked off by the Department of Energy earlier this fall.
The first round of filings in FERC’s docket shows widespread support for fast-tracking flexible large loads, said Camus CEO Astrid Atkinson. “Specific comments from National Grid, Google, Microsoft, Constellation, OpenAI, Critical Loop, and others are well aligned with the flexible connection and/or [bring your own] capacity approaches,” she added.
Broadly, she said, the docket reflects a “growing convergence of opinion among diverse stakeholders that flexible load capabilities should be leveraged to bridge the gap between immediate interconnection needs and the longer timelines required for traditional transmission upgrades.”
Google, for its part, urged FERC to approve expedited interconnection for loads that volunteer to be flexible and bring their own generation, which could be either onsite or “electrically proximate.”


