This year’s RE+ convention, with nearly 40,000 delegates at the Venetian in Las Vegas, yielded few surprises: CEOs and trade association leaders declared their commitment to the energy transition, and to an industry that has generated billions in revenue and millions of domestic jobs.
The mood was largely upbeat and resolute, even in the face of the Trump administration’s attack on renewables. If it’s low-cost, rapidly deployed energy and infrastructure you want, then the combination of solar and storage is delivering.
But while the administration's unpredictability dominated most conversations at RE+, an arguably bigger conversation was taking place down the road on the MGM Grand's stage.
At Yotta 2025 — the two-year-old conference focused on data centers — the energy industry’s new center of gravity was clear.
“I don’t think any of the solutions we’re seeing today truly scale to the numbers that we have to achieve,” said Keith Heyde, OpenAI’s director for infrastructure strategy and development, on the main stage at Yotta. “So whether it’s large-scale, coordinated centers of infrastructure excellence, or it’s really large-scale power plants that look quite different from what we’re seeing today, those types of solutions have to exist in the industrial base to really get to the scale we’re talking about. It’s our job to try to drive down and convert power into compute.”
That should, and does, come as music to the ears of energy developers and innovators. As someone who has attended dozens of solar trade shows since 2007, one thing became increasingly clear to me last week: The era of growth-by-tax incentive is coming to a close, just as the era of AI’s mad rush to power at scale is beginning.
Over Yotta’s three days, many themes emerged. Those that stand out to anyone sitting at the AI-energy nexus include:
Bubble? What bubble?
Dylan Patel, founder, CEO, and chief analyst at SemiAnalysis, made a cogent and rapid-fire case for why we should count on the hyperscalers to drive this market for years.
Today, we think of Google, Meta, Amazon, and Microsoft as the core of this market, but Patel noted that four other players of potentially equal importance have emerged: Oracle, CoreWeave, OpenAI, and Anthropic. These companies are directly financing and building both digital and energy infrastructure, while also investing in a startup ecosystem to diversify the market.
“A year ago, these AI companies had very little revenue,” Patel said. “But now, OpenAI will have 21 or 22 billion dollars in revenue by the end of the year. And Anthropic just skyrocketed from one billion to seven and likely 10 by the end of the year. Twenty-four months from now, these AI companies may not grow 8x, but even if they grow 2x or 3x, these companies will be humongous.” Every dollar they earn will go straight into compute, and from the compute flows the demand for power.
Patel went on to emphasize how early we are in AI’s transformation of business. Demand will only grow as companies recognize where to integrate AI into business processes.
The onsite vs. grid power debate
The conversation around how to accommodate hyperscalers’ urgent need for gigawatts made an appearance in nearly every session, a clear indication that the market is unprepared to solve the challenge at scale. Today’s dominant narrative — that the way to access reliable baseload power at scale is to build it yourself with onsite gas generation — came under a lot of scrutiny. (This is scrutiny that Cloverleaf Infrastructure’s Brian Janous echoed on this week’s episode of Catalyst.)
Vibhu Kaushik, global head of energy at AWS, made clear his preference for a grid-first approach. “Going permanently off-grid, you’re going to pay for additional redundancy and generation to provide the reliability that compute workflow requires at data centers,” he told the audience. “You’ll not have access to a grid-connected market, so the only source of consumption is onsite load, and that typically ends up increasing the cost for that solution. And decarbonizing those solutions will have limited options if you’re not grid-connected.”
Cully Cavness, co-founder, president, and COO at Crusoe Energy, on stage with notable podcaster Jigar Shah, described how the concept of “gas as a bridge to grid power” is playing out at their project in Abilene, Texas, where Oracle will host OpenAI’s Stargate.
“It’s a mixture of grid and behind-the-meter natural gas power generation. In this campus, Crusoe is building out the 350 megawatt natural gas-fired power plant to serve as a temporary baseload as the grid expands. Then, as the grid is finally expanded, it will serve as a backup,” he said. “It’s actually a replacement of the diesel engine fleet you usually see at a data center. We were able to source and build those turbines faster than we could get the substation expanded. It’s actually a pretty efficient long-term backup solution.”
Cully went on to describe Crusoe’s project with Tallgrass in Wyoming, where a massive data center complex with up to 10 gigawatts of capacity is planned. Currently, the company is mainly relying on combined-cycle gas plants for generation, but it plans to deploy carbon capture and storage as well. The challenge at this scale is sourcing the gas turbines.
Ramping manufacturing of power generation equipment is hard, and slow, Cully pointed out. Even the turbine blades themselves are hard to come by. There are only two real suppliers in the U.S., and they supply both the aerospace industry and the power industry.
Flexibility, everywhere and all at once
The most interesting conversations on and off the stage for those in the energy industry often came back to flexibility, and how it could speed time-to-power for data center operators.
There are basically two options for flexibility today. A data center, faced with the opportunity to access grid power more quickly (and effectively skip the interconnection queue) can either 1) flex demand through the use of advanced software that throttles compute, or 2) flex supply by leveraging onsite generation or batteries to fill in any gaps created by curtailments or demand response requests.
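In rough terms, a grid-aware data center controller would combine those two levers when a curtailment or demand response request arrives: throttle deferrable compute first, then cover the remainder with onsite batteries or generation. The sketch below is purely illustrative — the function name, inputs, and the "compute first" ordering are assumptions for the example, not any operator's actual dispatch logic:

```python
def respond_to_curtailment(requested_mw, flexible_compute_mw, battery_mw):
    """Split a grid curtailment request across the two flexibility levers.

    requested_mw: load reduction the grid operator is asking for
    flexible_compute_mw: deferrable compute that software can throttle
    battery_mw: onsite battery discharge capacity available to fill gaps

    Returns (compute_cut, battery_discharge, unmet) in MW.
    """
    # Lever 1: flex demand by throttling deferrable workloads.
    compute_cut = min(requested_mw, flexible_compute_mw)
    remaining = requested_mw - compute_cut

    # Lever 2: flex supply by discharging onsite batteries.
    battery_discharge = min(remaining, battery_mw)
    unmet = remaining - battery_discharge

    return compute_cut, battery_discharge, unmet


# Example: a 100 MW request against 60 MW of flexible compute
# and 30 MW of battery leaves 10 MW unmet.
print(respond_to_curtailment(100, 60, 30))
```

A real controller would weigh the revenue lost by pausing workloads against battery cycling costs rather than applying a fixed ordering, which is exactly the economic tradeoff the panelists describe below.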
Neither option is simple. Data centers, Shah pointed out, aren’t like the steel plants of old, where a grid operator or utility could phone in a request to shut down for a period when the grid was stressed. Uptime is critical, as the payback on GPU investments by hyperscalers is reliant almost entirely on 24/7 availability.
But, as Cavness notes, “there’s a lot of CapEx that goes into getting five nines of reliability, in terms of electrical infrastructure and backup power generation,” and in the end customers will have to better understand the economic tradeoffs around the cost of that fifth nine of availability.
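To put that fifth nine in perspective: each additional nine of availability cuts the permitted annual downtime tenfold, so the jump from four nines to five buys back under an hour per year. A quick back-of-the-envelope calculation:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year


def downtime_minutes(nines):
    """Allowed downtime per year, in minutes, for N nines of availability.

    'Five nines' means 99.999% uptime, i.e. an unavailability of 10^-5.
    """
    return MINUTES_PER_YEAR * 10 ** (-nines)


for n in (3, 4, 5):
    print(f"{n} nines: {downtime_minutes(n):.2f} minutes/year")
# 3 nines allows ~525.6 min/yr, 4 nines ~52.6, 5 nines ~5.26
```

The CapEx question Cavness raises is whether shaving that last ~47 minutes of annual downtime is worth the redundant electrical infrastructure and backup generation it demands.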
Emerald AI, which recently demonstrated their platform’s capabilities with Nvidia, Databricks, and Oracle along with EPRI and the utility SRP, is an early example of the kind of software that can flex compute. In conversation with Vibhu Kaushik of AWS at Yotta, the startup’s CEO Varun Sivaram said, “AI workloads — training, inference, fine tuning — can be flexed in such a way that you preserve the performance of the workloads while at the same time achieving what the grid might need at a moment of particular stress when, for example, all the air conditioners are running in the summer in Phoenix, and you’re hitting a system coincidence peak.”
Kaushik shares Sivaram’s enthusiasm, but notes the market is very much in its early days. Both utilities and data center operators are wary. “In utility planning there are two types of load,” he explained. “There’s a firm load in which typical NERC standard loss of load expectation is one day in 10 years. And then there’s a curtailable load, where you can curtail whatever you need as often as you need. All of us in this room building infrastructure for AI are scared of being that curtailable load.”
True enough, but the value of time-to-power has created an unprecedented opportunity for flexibility. Expect to see much more from this space, extending out to VPPs in the distribution grid and more creative ways to flex supply across the transmission grid. Google has already demonstrated a version of demand flexibility, and Verrus is building data centers where power management is central to their design.
2026 and beyond
These developments are in some ways the most visible collaboration between the data center world and the power sector. As we enter 2026, it’s becoming clearer that we’re leaving a period of reacting to the bewildering scale of the load growth challenge with brute force solutions — and entering one where serving these large loads with affordable, clean, and flexible power is possible.
In their session, OpenAI’s Heyde and his colleague Chris Malone made clear that they want more from energy solutions providers in the coming years.
Heyde described several bottlenecks: deploying power, acquiring equipment, and especially financing. “There’s been a lot of movement in that space to come up with financing solutions for accelerating scaled projects like this,” he said.
They want to hear from the energy industry how to solve for all of those, and though they’re designing Stargate data centers today with one model in mind, they see each site as having unique attributes that create opportunities for novel solutions. “Oh, please always pitch,” said Heyde. “Our doors and ears are always open.”
A version of this story was published in the AI-Energy Nexus newsletter on September 17. Subscribe to get pieces like this — plus expert analysis, original reporting, and curated resources — in your inbox every Wednesday.