The entire data center industry is going after the same goal right now: more megawatts, as fast as possible. The usual playbook starts with getting the land, securing a grid connection, and then building. That approach has worked in the past, and it will need to continue if the industry is to keep up with projected demand.
Recently, though, a complementary move has been getting too little attention from capital allocators: extracting more compute from the megawatts the grid has already reserved, or, more simply, capacity efficiency.
This option is most evident in legacy data center fleets. They're inefficiently cooled, and their electrical systems are over-provisioned for worst-case scenarios that rarely, if ever, materialize. That means there is potentially capacity to spare, and it shouldn't be all that hard to access.
What PUE actually tells you
Power usage effectiveness, or PUE, is the ratio of total facility power to IT power. A site with a peak PUE of 1.7, for example, can dedicate 1 divided by 1.7, or 58.8%, of its grid power to IT. That peak figure reflects the facility's worst-case operating point (peak summer, say, when chillers and fans are running at full tilt) and sets the design IT load the site is built to host: in other words, the maximum computing capacity available on that grid connection.
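For readers who want the arithmetic explicit, here is a minimal sketch of that relationship (illustrative only; the numbers match the example above):

```python
def it_share(pue: float) -> float:
    """Fraction of a facility's grid power available for IT at a given peak PUE."""
    return 1.0 / pue

# A site with a peak PUE of 1.7 can dedicate roughly 59% of its grid power to IT.
print(f"{it_share(1.7):.1%}")  # -> 58.8%
```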
In many cases, the facility was designed conservatively, with cooling and electrical systems oversized for worst-case scenarios that rarely materialize. The result is that IT capacity, the part of the load that drives revenue, is chronically under-utilized relative to the grid connection.
Not every facility, legacy or new, is a candidate for improvement, however. The best candidates are pre-2020 air-cooled facilities, whose PUEs are usually north of 1.6 and have room to fall. Water-cooled data centers, meanwhile, use evaporative towers or direct liquid cooling to achieve lower PUEs, since water transfers heat far more efficiently than air. The trade-off is water consumption: broadly, the less power a site spends on cooling, the more water it tends to use.
Take a facility with a 100-MW grid connection operating with a PUE of 1.7. That site will deliver about 59 MW of IT load, with the remaining 41 MW reserved for cooling or other on-site electrical loads. Improving the PUE to 1.5 — through options like batteries or thermal energy storage — would increase the IT capacity by more than 13%, to closer to 67 MW. That’s 8 MW more compute, without having to invest in new land, interconnection, or permits.
Scale those improvements across an entire portfolio and the numbers are more striking. A fleet with one gigawatt of grid connections at legacy PUE levels might only deliver 550 to 600 MW of IT. If those facilities move into the mid-1.4s, they can unlock 100 to 150 MW of additional IT capacity, all without increasing total grid draw.
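A rough sketch of both calculations, assuming a legacy PUE of about 1.75 and an improved fleet PUE of about 1.45 (the exact figures will vary site by site):

```python
def it_capacity_mw(grid_mw: float, pue: float) -> float:
    """IT capacity supportable on a fixed grid connection at a given peak PUE."""
    return grid_mw / pue

# Single site: 100-MW connection, PUE improved from 1.7 to 1.5.
before = it_capacity_mw(100, 1.7)   # ~58.8 MW
after = it_capacity_mw(100, 1.5)    # ~66.7 MW
print(f"site gain: {after - before:.1f} MW ({after / before - 1:.1%})")  # ~7.8 MW, ~13.3%

# Fleet: 1 GW of grid connections, legacy PUE ~1.75 vs. mid-1.4s.
legacy = it_capacity_mw(1000, 1.75)     # ~571 MW of IT
improved = it_capacity_mw(1000, 1.45)   # ~690 MW of IT
print(f"fleet gain: {improved - legacy:.0f} MW")  # ~118 MW unlocked
```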
Why the economics are so compelling
Speed matters here as much as cost. Retrofits can deliver new megawatts of IT capacity in months, not the years required for greenfield builds.
Of course, these retrofits are not free; they can require upgrading cooling systems, replacing electrical equipment and UPS units, adding storage, and installing modern controls. But the costs are significantly lower than those of building a new facility and securing new power under standard interconnection requirements, which increasingly come with financial security obligations attached.
Much of the cost base at legacy facilities is already sunk, including the shell, site work, grid connection, staff, and security. A retrofit may add 10% to 15% more billable IT capacity on that existing base. And because retrofits generally require less capex than new construction, or can often be structured as service contracts, the returns on this newly enabled capacity are much higher. A project earning a 15% unlevered return in the base case may move into the high teens or low 20s with PUE-driven capacity gains. And with typical 50% to 80% debt ratios, levered IRRs on the right projects can climb from the low 20s into the 30s.
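A toy illustration of why the incremental capacity earns outsized returns; the base yield, capacity figures, and per-MW retrofit cost below are hypothetical assumptions, not project data:

```python
# All figures below are hypothetical assumptions, not data from the article.
base_capex = 1_000.0    # arbitrary units for the already-built legacy site
base_yield = 0.15       # 15% unlevered return in the base case
base_it_mw = 59.0       # billable IT capacity before the retrofit
added_it_mw = 8.0       # IT capacity unlocked by the PUE improvement

cash_per_mw = base_capex * base_yield / base_it_mw
# Assume retrofit capex per MW runs ~30% of the original build cost per MW.
retrofit_capex = added_it_mw * (base_capex / base_it_mw) * 0.30

incremental_return = added_it_mw * cash_per_mw / retrofit_capex
blended_return = (base_capex * base_yield + added_it_mw * cash_per_mw) / (base_capex + retrofit_capex)
print(f"return on retrofit capital: {incremental_return:.0%}")   # ~50%
print(f"blended unlevered return:   {blended_return:.1%}")       # drifts above 15%
```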
For an industry where investors scrutinize every basis point, that return profile on proven engineering measures should get attention.
How you actually get there
PUE improvements require a systematic program, not a simple swap. Cooling is typically the largest overhead consumer of power in air-cooled facilities, and the engineering playbook for making it more efficient is well established: better containment, variable-speed drives on fans and pumps, economizer upgrades, chilled-water plant modernization, and targeted liquid cooling for the densest racks.
Energy storage is another important piece of the puzzle. Adding battery storage with enough duration to clip peaks is an obvious fix, but it requires non-trivial electrical system modifications and may require utility notification and approval.
Thermal energy storage systems offer a compelling alternative. These systems chill water during off-peak hours and discharge that cooling during later peaks, shifting between 10% and 30% of a facility's peak cooling load to times when the grid is cheaper and less strained. This flattens the cooling load profile, keeps chillers operating efficiently, and, because the technology uses water rather than lithium-ion chemistry, avoids the complex fire safety permitting that can slow battery deployments. Legacy air-cooled sites that pair thermal storage with chiller upgrades can lower PUE, add IT headroom, and strengthen cooling system resilience, including during chiller startup.
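As a rough, first-order sketch of the mechanism, assume a 100-MW connection with a hypothetical split of peak load and a 25% shift of the cooling peak (both are assumptions, not measured figures):

```python
# The load split and the 25% peak shift below are assumptions for illustration.
grid_mw = 100.0
it_mw = 59.0
cooling_peak_mw = 35.0
other_overhead_mw = 6.0

pue_before = (it_mw + cooling_peak_mw + other_overhead_mw) / it_mw    # ~1.69

shifted = 0.25   # share of the peak cooling load moved off-peak via thermal storage
cooling_clipped_mw = cooling_peak_mw * (1 - shifted)
pue_after = (it_mw + cooling_clipped_mw + other_overhead_mw) / it_mw  # ~1.55

# To first order, the design IT load a fixed connection can host is grid power / peak PUE.
print(f"new IT headroom: {grid_mw / pue_after - grid_mw / pue_before:.1f} MW")  # ~5.7 MW
```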
Beyond cooling and storage, replacing aging low-efficiency transformers and power distribution systems, consolidating over-provisioned distribution, and tuning redundancy are other options that could help. And adding compute flexibility — meaning shifting delay-tolerant workloads to cooler hours or lower-PUE sites across a portfolio — also lets plants run closer to their efficient operating point instead of being sized for rare simultaneous peaks.
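As a sketch of what that compute flexibility could look like in practice, assuming a hypothetical hourly PUE forecast rather than real site telemetry:

```python
# Hypothetical hourly PUE forecast; real values would come from site telemetry.
hourly_pue = {0: 1.38, 3: 1.35, 9: 1.52, 14: 1.68, 18: 1.61, 22: 1.41}
deferrable_hours = 3   # hours of delay-tolerant work to place

# Greedy placement: run the flexible work in the lowest-PUE (coolest) hours.
best_hours = sorted(hourly_pue, key=hourly_pue.get)[:deferrable_hours]
print(f"run deferrable work at hours {sorted(best_hours)}")   # -> [0, 3, 22]
```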
The portfolio case
None of this is an argument against new builds. The projected demand for AI will necessitate a buildout that includes both greenfield capacity and efficiency-driven capacity.
But sequencing and balance matter. Retrofits to increase capacity efficiency deliver new megawatts of IT in months, not years. They require materially less capital per incremental megawatt, even after accounting for real upgrade costs. They diminish regulatory exposure by aligning with emerging efficiency mandates, like the EU Energy Efficiency Directive. And they generate returns that are difficult to match with slower, heavier greenfield investments alone.
So why isn’t everyone already doing this?
Part of it is organizational: The teams that optimize existing facilities are rarely the same teams that deploy capital into new builds, and the latter tend to get more attention from leadership. Part of it is structural: Many operators have leases and service agreements designed around current configurations, which makes retrofit planning more complex. And part of it is simply inertia: The industry has been in build-new mode for so long that harvesting existing capacity doesn’t always register as a strategic priority.
Investors weighing where to direct their next round of data center capital face a question that’s more complicated than simply greenfield versus retrofit, namely “how much stranded capacity are we sitting on, and how fast can we convert it?” The fastest and cheapest new AI capacity is hiding in the unproductive overhead of old data centers. The smartest portfolio strategies will harvest it alongside new builds, not instead of them.
Peter Hans Hirschboeck is founder and managing director of impactECI, a firm providing consulting services in the energy, climate, and infrastructure spaces. The opinions represented in this contributed article are solely those of the author, and do not reflect the views of Latitude Media or any of its staff.