John Belizaire spends a lot of time driving around West Texas, pulling up to wind farms where turbines sit idle. His pitch to operators starts with an observation: “Hey, I noticed that half of your wind farms aren’t spinning.”
The operators’ response is often the same: curtailment is a big problem, and the tax equity investors regularly call to ask about their returns.
“I have a solution for you,” Belizaire tells them. Their first guess is usually batteries. “We tried that. We’ve tried everything,” they say. That’s when Belizaire, CEO of Soluna Computing, makes his pitch: bring computing to the power plant.
“There’s a little secret,” he said. “Upwards of 30% to 40% of the power generated by these plants goes wasted. Globally, that’s close to about 300 terawatt-hours per year.”
Belizaire shared that anecdote on stage at Latitude Media’s Transition-AI conference in December. He was joined by Sayles Braga, a senior partner at Sidewalk Infrastructure Partners, and Chase Lochmiller, CEO of Crusoe, for a conversation about the transformation of the data center industry.
All of them detailed thorny challenges: how to source enough power, how to integrate data centers with renewables, whether to build massive centralized facilities or distributed ones, and how to turn compute loads into grid assets.
The consensus: the data center industry has transformed from a real estate business into an energy-first business, reshaping the design and location of facilities.
“For most of its history, data centers were presented as solutions to utilities. They were offering large-scale, flat offtake,” explained Sayles Braga. “That was a good thing when utilities were not bumping up against the top of their load duration curve peaks. Pretty quickly, that has flip-flopped.”
Rethinking data center design
As a partner at Sidewalk Infrastructure Partners, which was spun out of Alphabet, Braga is helping develop a data center platform called Verrus that packages hardware, software, and infrastructure to serve both utilities and hyperscalers as customers. The facilities are designed to be flexible grid assets, varying their compute loads based on utility needs.
Data centers and energy infrastructure have converged, Braga said; however, “many didn’t understand that because it wasn’t hitting them in the face like it is now.”
Today, data center developers like John Belizaire spend more time thinking about energy than any other piece of the data center stack. And that’s why he often finds himself in Texas, pitching wind farm operators on delivering energy to warehouse-scale computers.
Belizaire explained the reason for pursuing co-located projects. “Because we’re pulling power directly from the plant, that reduces the [grid] burden for delivering hundreds of megawatts or potentially gigawatts to a data center.” The approach also bypasses lengthy interconnection queues.
But executing on this vision brings formidable engineering challenges, starting with power quality. Soluna’s design process starts with a detailed “curtailment assessment,” where engineers analyze four years of generator data to optimize facility sizing, studying both seasonal and daily generation patterns. The goal: design a facility large enough to consume curtailed energy while maintaining strict power quality standards.
This requires rethinking everything from HVAC systems to power distribution units, specifically selecting components that minimize reactive power impacts, Belizaire explained. The average size of Soluna’s data centers is 200 MW.
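As a rough illustration of the sizing step described above, here is a hypothetical sketch (not Soluna’s actual assessment code): given hourly samples of curtailed power from historical generator data, find the smallest flat data-center load that would absorb a target share of the curtailed energy. All names and numbers are illustrative.

```python
def size_facility(curtailed_mw, capture_target=0.9):
    """Return the smallest flat load (MW) that absorbs at least
    capture_target of total curtailed energy, given hourly samples."""
    total = sum(curtailed_mw)
    # Candidate sizes: every distinct curtailment level observed.
    for size in sorted(set(curtailed_mw)):
        captured = sum(min(sample, size) for sample in curtailed_mw)
        if captured >= capture_target * total:
            return size
    return max(curtailed_mw)

# Toy example: a wind site curtailed between 0 and 150 MW over six hours.
samples = [0, 40, 150, 120, 80, 10]
print(size_facility(samples, 0.9))  # → 120
```

A real assessment would, as described above, also weigh seasonal and daily generation patterns and power quality constraints, which this toy calculation omits.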
The power management challenge looks quite different for hyperscale data centers. Grid capacity constraints are becoming more common as developers compete to connect their projects to the grid.
“In most markets that matter for traditional data center development,” Braga said, “any newly permitted 24/7, 365 large-scale load represents more of a challenge to the grid than an opportunity.”
The constraints vary across utility territories. Some utilities need peak shaving for just a few hundred hours per year. Others face a more complex challenge: filling regular six- to 12-hour gaps between renewable generation and evening demand. Verrus designs data centers that can respond to these specific grid conditions, varying compute loads based on grid signals.
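The load-flexing idea can be sketched minimally, assuming a normalized grid-stress signal and a split between firm and deferrable compute. The names, thresholds, and numbers below are assumptions for illustration, not Verrus’s actual control design.

```python
FIRM_FLOOR_MW = 40      # critical load that never flexes (assumed)
FLEX_CAPACITY_MW = 60   # deferrable compute, e.g. batch training (assumed)

def target_load_mw(grid_stress):
    """grid_stress in [0, 1]: 0 = abundant power, 1 = peak scarcity.
    Flexible load backs off linearly as stress rises; the firm floor stays."""
    grid_stress = min(max(grid_stress, 0.0), 1.0)
    return FIRM_FLOOR_MW + FLEX_CAPACITY_MW * (1.0 - grid_stress)

# Evening ramp: renewables fade, demand peaks, compute backs off.
for stress in (0.0, 0.5, 1.0):
    print(stress, target_load_mw(stress))
```

The same controller covers both constraint patterns mentioned above: a few hundred peak-shaving hours per year, or a sustained six- to 12-hour evening gap, depending on how the utility drives the stress signal.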
Building for scale
The scale of new AI-specific facilities magnifies these engineering challenges.
Crusoe started construction last summer on a new data center in Abilene, Texas — reportedly to serve OpenAI and Oracle — that will start with a 206-MW first phase. The company plans to expand to 2 GW of compute capacity, which could require nearly 3 GW of total power when accounting for infrastructure overhead.
These are, according to Crusoe’s Lochmiller, “really significant footprints from an energy perspective that frankly wouldn’t be possible in energy constrained markets.”
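The overhead figure above can be sanity-checked with the standard power usage effectiveness (PUE) relation, total facility power = IT load × PUE. The PUE value here is an assumption chosen to match the cited numbers, not a figure from Crusoe.

```python
# Back-of-envelope check: 2 GW of compute at an assumed facility-level
# overhead factor (PUE) of 1.5 lands at the ~3 GW total power cited.
it_load_gw = 2.0
pue = 1.5  # assumed; actual hyperscale facilities vary
total_gw = it_load_gw * pue
print(total_gw)  # → 3.0
```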
Crusoe’s approach to backup power illustrates the industry’s dilemma. The company, which touts its “climate-aligned computing” strategy for AI using renewables, is also building a 300-MW gas plant at the Abilene site that can provide both backup power and grid services.
Construction methods are evolving to match these new demands for scale. “We have this whole piece of our business called Crusoe Industries that manufactures critical electrical infrastructure,” Lochmiller explained. The company manufactures switchgear and medium-voltage electrical rooms in controlled environments, delivering them as skid-mounted units. These prefabricated electrical rooms arrive as “data center Lego blocks” that can be rapidly assembled on site.
Lochmiller made the case for massive centralization with an analogy: “If you think about a 10-carat diamond, that’s far more valuable than 10 one-carat diamonds. Similarly, if you have a GW data center, that’s far more valuable than 10 separate 100-MW data centers.”
With a giant cluster of GPUs in one location, he said, “you can actually make really meaningful scientific breakthroughs” in AI research.
Belizaire sees it differently. “Very few companies are going to be building gigawatt-level data centers for their applications,” he said. “There are only a handful of companies that can do training at that level.”
Instead, he envisions a distributed network serving enterprise AI needs. While a few companies will build massive facilities for training large language models, he believes most enterprises will need specialized facilities to tune these models with proprietary data for specific business applications.
A distributed approach also brings resilience: “Everything is distributed, our road system’s distributed, our internet infrastructure is distributed. In fact, because it’s distributed, it’s more resilient.”
This thinking shapes Soluna’s approach: building 200-MW modules that can be clustered together behind renewable generators, scaling incrementally while maintaining grid flexibility. “If you take a different approach to data centers and build it out in a distributed way,” Belizaire said, “you can now place this computing capability around the world.”
Braga takes a more measured view on scaling. Verrus builds data centers as multi-phase developments over half a decade to a decade, with 100 MW as a starting point. “Data centers aren’t built overnight and they’re not turned on overnight,” he said.
This gradual approach allows for more flexibility in power infrastructure, Braga said: “The choices you make on how you are supporting the energy feed into that data center on a primary and/or backup basis don’t have to be the same on building 10 as on building one.”
Where to get the energy?
The panelists offered varying opinions on how to source clean energy, but all agreed it poses a significant challenge. The hyperscalers’ sustainability goals are “significantly harder than they were five years ago or four years ago when a lot of them got inked or targeted,” said Braga.
And now, most of the leading data center developers are making tough choices about energy procurement on a decadal time scale. “It’s not just what I am bringing on tomorrow to solve the need,” explained Braga. Rather, it’s about serving aggregate demand over the next decade — with a new class of technologies.
“If I am bringing on net new gas, how am I incorporating CCS? What steps am I taking to get firm, carbon-free baseload?” he asked rhetorically. “I think all of those are going to be really important to incorporate into these large plans.”
Crusoe is pursuing what Lochmiller calls an “all sources” strategy — combining wind and solar with grid power as a foundation, while exploring hydro opportunities and examining new gas plants with carbon capture. This is similar to the strategy of leading tech companies.
Soluna targets at least 75% renewable electrons for each facility, focusing on direct connections to wind farms while drawing grid power when needed. The company’s modular approach allows it to optimize each site based on local renewable resources.
Braga took the longest view. “If we’re having this conversation in 10 years, we will know whether the talk of distributed SMRs is real or not — or what the actual outcome of the potential nuclear renaissance is,” he explained, adding that in the meantime it may take an “all of the above” approach to power.
These questions will shape the next chapter of the data center industry. And it’s still early days when it comes to bridging the worlds of bits and clean electrons. As Braga noted, “the data center infrastructure industry is actually still really new. At scale, even newer.”


