Wannie Park says many data centers are leaving money on the table by only running GPUs a portion of the time in the name of reliability.
Hyperscalers may dominate the headlines, but the data center market is also populated by mid-market facilities: often colocation sites, and typically under 100 megawatts. Because they promise customers 100% uptime and face steep fees if they fail, these operators often run their GPUs at around 30% to 40% utilization, leaving significant headroom to avoid overheating or overloading the system. And Park, founder and CEO of Pado AI, sees that headroom as an opportunity for improvement.
The startup uses AI-powered software to orchestrate workloads and optimize energy and hardware infrastructure within a data center, and today will announce its $6 million seed round led by NovaWave Capital. The funding comes less than a year after Pado AI’s founding as a spin-off from LG NOVA, the Korean tech giant LG Electronics’ North American innovation arm. LG serves as the anchor investor and primary backer of the NovaWave Capital fund.
The idea for Pado AI came as Park was working with LG and looking to find the best way to have an impact in the energy markets.
“If you go back two years, the primary question was: Where are those large, crazy, volatile, peaky loads and is there anything anybody can do [about them]?” Park told Latitude Media. The team ultimately zeroed in on that data center operational headroom that leaves infrastructure underutilized.
“Our software is designed to reallocate workload to optimize it with cooling, power, pricing and things like that, so that you can increase compute without massive capex or increase in power allocation,” Park said.
The startup is part of an emerging class of software companies tackling the data center energy puzzle and increasing a facility’s flexibility by looking to unlock spare energy within it, rather than by building new generation sources outside of it. Among them is Emerald AI, which has just completed a demonstration of its platform in collaboration with NVIDIA, Portland General Electric, and EPRI, showing how AI factories can respond to utility signals while maximizing the performance of priority AI workloads.
How it works
Within the compute part of any data center sits a scheduling layer — a job coordinator and management system — where compute workloads wait in a queue. These workloads, or jobs, differ in size, intensity, and priority. For example, an inference job, such as a real-time AI query, has different requirements than a long-running model training job. But in traditional data center infrastructure, these jobs are typically processed first-come, first-served, which, according to Park, is far from the most efficient approach.
“It’s like if you have a bottle, and you’re dropping both big rocks and sand in it — and that leaves a lot of gaps,” he said. “Those gaps are the underutilization.” To fill the gaps, Pado AI takes the rocks out of the bottle, and rearranges them in terms of priority, urgency, and size, so that they get assigned to the part of the data center that is best suited for them.
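The rocks-and-sand idea can be sketched as a toy scheduler. The job fields, GPU counts, and greedy packing policy below are illustrative assumptions, not Pado AI's actual algorithm; they just show how first-come, first-served leaves gaps that reordering by priority and size can fill.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    gpus: int       # GPUs requested (illustrative)
    priority: int   # lower = more urgent (e.g., inference before training)

def fifo_schedule(jobs, capacity):
    """First-come, first-served: stop at the first job that doesn't fit,
    so smaller jobs stuck behind it leave GPUs idle — the 'gaps'."""
    placed, used = [], 0
    for job in jobs:
        if used + job.gpus > capacity:
            break  # head-of-line blocking
        placed.append(job.name)
        used += job.gpus
    return placed, used

def packed_schedule(jobs, capacity):
    """Reorder by priority, then largest-first (rocks before sand), and
    greedily place every job that still fits in the remaining headroom."""
    placed, used = [], 0
    for job in sorted(jobs, key=lambda j: (j.priority, -j.gpus)):
        if used + job.gpus <= capacity:
            placed.append(job.name)
            used += job.gpus
    return placed, used

queue = [
    Job("train-A", gpus=60, priority=2),
    Job("train-B", gpus=50, priority=2),
    Job("infer-1", gpus=8, priority=1),
    Job("infer-2", gpus=4, priority=1),
]

print(fifo_schedule(queue, capacity=100))    # train-B blocks everything behind it
print(packed_schedule(queue, capacity=100))  # inference first, then the biggest job that fits
```

On a 100-GPU pool, the FIFO pass places only the first training job (60% utilization), while the packed pass fits both inference jobs plus a training job (72%) — the same gap-filling effect the bottle analogy describes.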
For cooling, for example, which Park notes is “one of the biggest chunks of power allocation,” this would mean distributing the workload so that high-intensity, larger jobs are moved to specific zones with more aggressive or specialized cooling. This way, Pado AI’s software acts like a “connective tissue” between the compute part of the data center, known as the white space, and the supporting power and cooling infrastructure, known as the gray space.
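The zone-matching step can be sketched the same way. The zone names, cooling headroom figures, and greedy placement rule here are hypothetical; the point is simply that high-intensity jobs get routed to zones with more cooling capacity to spare.

```python
def assign_zone(job_kw, zones):
    """Place a job in the zone with the most cooling headroom that can
    absorb its heat load; large jobs land in the best-cooled zones.
    (Illustrative policy, not Pado AI's actual logic.)"""
    candidates = [z for z in zones if z["headroom_kw"] >= job_kw]
    if not candidates:
        return None  # defer, or offload to a third-party cloud
    best = max(candidates, key=lambda z: z["headroom_kw"])
    best["headroom_kw"] -= job_kw
    return best["name"]

zones = [
    {"name": "air-cooled",    "headroom_kw": 20},
    {"name": "liquid-cooled", "headroom_kw": 80},
]
print(assign_zone(70, zones))  # 70 kW training job -> "liquid-cooled"
print(assign_zone(5, zones))   # 5 kW inference job -> "air-cooled"
```

The heavy training job goes to the liquid-cooled zone, leaving the air-cooled zone free for light inference work — the "connective tissue" between white space and gray space expressed as a placement rule.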
“Our thesis is that we can probably increase the 30 to 40% utilization rate to about 55 to 60% through optimization of the workloads,” Park said.
Recently, Pado AI has also started offering a GPU-as-a-service component, offloading some of the largest, least urgent workloads from data centers that have hit power or capacity limits to third-party cloud vendors for processing.
The promise of flexibility
All these features, and the control and visibility Pado AI’s software provides over workloads, contribute to turning a data center facility into a flexible asset for the power grid. Through its software and the data it collects, the company can provide utilities or RTOs with a forecast of a facility’s load profile. This allows grid operators to manage demand more effectively, and to identify windows where the data center has the flexibility to participate in demand response programs.
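Identifying those demand-response windows amounts to scanning a load forecast for hours with enough slack under the facility's power cap. The forecast values, cap, and margin threshold below are hypothetical, purely to illustrate the idea.

```python
def flexibility_windows(forecast_mw, cap_mw, min_margin_mw):
    """Illustrative: flag the hours in a load forecast where the facility
    sits far enough under its power cap to bid into demand response."""
    return [hour for hour, load in enumerate(forecast_mw)
            if cap_mw - load >= min_margin_mw]

forecast = [40, 42, 55, 68, 70, 52, 45, 41]  # hourly load in MW, illustrative
print(flexibility_windows(forecast, cap_mw=75, min_margin_mw=25))
```

Here the early-morning and late-evening hours clear the 25 MW margin, so a grid operator could count on that capacity as flexible during those windows.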
Pado AI is part of EPRI’s DCFlex initiative, which brings together industry stakeholders to find flexibility solutions for data centers; the company currently has three demonstrations with them.
While competitors like Emerald AI are also chasing efficiency gains within the data center walls, Park notes that Pado AI focuses on the mid-market. This large swath of smaller, grid-connected data centers often lacks the sophistication of hyperscale facilities and is therefore underserved, but it could still provide valuable flexibility to the power system.
“The sub-75 megawatt data center market, which already has the power and infrastructure, is totally underserved,” Park said. “We are going to really focus on that because there are immediate returns. It provides a lot of real, pragmatic flexibility, especially with cooling.”
While its focus has so far been the U.S., the company aims to leverage its South Korean backing to be “incredibly aggressive” in markets outside of North America as well.


