In January, the Chinese company DeepSeek released a highly efficient reasoning model that rivaled the performance of OpenAI's models on some measures. It prompted a wave of alarm among U.S. artificial intelligence companies, both because it cost far less to build and train and because it ran with far less power demand.
Energy efficiency, it seemed, was within reach for the tech giants that have been competing fiercely for the infrastructure to power their ever-growing operations.
But in the months since, that alarm in the U.S. seems to have faded. Utilities have requested record rate increases. The need for baseload power to keep data centers running is causing a backlog of orders for new gas turbines.
And amid it all, training new AI models is requiring more, not less, energy. That’s one of the conclusions of a new report from the Electric Power Research Institute and Epoch AI, a think tank dedicated to the future of AI.
Their analysis found not only that the power demands of AI have increased steadily, but also that they will keep increasing. Training a single large, advanced AI model currently requires between 100 and 150 megawatts; by 2030, models are projected to require more than four gigawatts apiece.
The report's "historic baseline" assumes 2.2 times annual growth in the peak power demand of these models, roughly in line with what has been observed from 2018 through the present. It compares that baseline against higher projections that fold in developments anticipated over the next five years: greater training compute and longer training durations, along with improved hardware efficiency.
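For a sense of scale, the sketch below shows how that 2.2x baseline compounds over a five-year horizon. The 2025 starting point and the two starting values, drawn from the 100-to-150-megawatt figure above, are assumptions for illustration, not the report's own calculation.

```python
# Illustrative compounding of the report's 2.2x "historic baseline."
# Assumed: a 2025 start and the article's 100-150 MW range for today's
# largest training runs; the report's methodology may differ.

GROWTH_PER_YEAR = 2.2
YEARS = 2030 - 2025

for start_mw in (100, 150):
    projected_mw = start_mw * GROWTH_PER_YEAR ** YEARS
    print(f"{start_mw} MW today -> {projected_mw / 1000:.1f} GW by 2030")

# 100 MW today -> 5.2 GW by 2030
# 150 MW today -> 7.7 GW by 2030
```

Even the lower starting point compounds past the four-gigawatt mark the analysis projects for 2030.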
The first of those, training compute, is "the fundamental driver of the growth in power demand for AI training," the report said. As models have become more powerful, the clusters of AI chips devoted to training them have grown larger and more energy-dense. "This scale-up of clusters has outpaced hardware and algorithmic efficiency gains, driving the growth in power demand for training," it said.
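To see why compute is the dominant term, a back-of-envelope decomposition helps: average training power is total compute divided by the product of training duration and delivered hardware efficiency. Every number below is an assumption chosen to land in the frontier range the article cites, not a figure from the report.

```python
# Rough decomposition of average training power (a simplification, not the
# report's methodology):
#   power (W) = total compute (FLOP) / (duration (s) * efficiency (FLOP/J))

TOTAL_COMPUTE_FLOP = 5e26         # assumed frontier-scale training run
DURATION_S = 120 * 24 * 3600      # assumed ~120-day run
EFFICIENCY_FLOP_PER_J = 4e11      # assumed delivered FLOP per joule, cluster-wide

power_w = TOTAL_COMPUTE_FLOP / (DURATION_S * EFFICIENCY_FLOP_PER_J)
print(f"Average training power: {power_w / 1e6:.0f} MW")  # -> 121 MW
```

With duration and efficiency held roughly fixed, power scales linearly with compute, which is why efficiency gains have to keep pace with compute growth to hold power demand flat; per the report, they have not.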
However, pinpointing exactly how fast future compute will grow is a tall order. Scaling could even slow, given escalating costs and efficiency improvements like those demonstrated by DeepSeek's model.
That said, the authors said “the available evidence suggests that training compute scaling will likely continue in the near term, and it would be premature to predict a major shift in the compute growth trend.”
The implications for the energy sector are of course significant. While the report acknowledges that the power demands of electrification “ultimately could be much larger,” AI is an acute near-term challenge.
Total AI power capacity in the U.S. is estimated at 5 gigawatts today, a small fraction of overall data center power demand. By 2030, however, it could increase tenfold to 50 GW, consuming more than 5% of total U.S. power generation.
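As a rough sanity check on that 5% figure, converting 50 GW of capacity into annual consumption takes an assumed capacity factor and a total-generation figure; both of the values below are assumptions, not numbers from the report.

```python
# Sanity check on the "more than 5% of U.S. generation" projection.

AI_CAPACITY_GW = 50          # report's projected 2030 AI power capacity
CAPACITY_FACTOR = 0.8        # assumed near-continuous data center operation
US_GENERATION_TWH = 4200     # assumed approximate annual U.S. generation

ai_twh = AI_CAPACITY_GW * 8760 * CAPACITY_FACTOR / 1000
print(f"{ai_twh:.0f} TWh/yr, about {ai_twh / US_GENERATION_TWH:.0%} of U.S. generation")
# -> 350 TWh/yr, about 8% of U.S. generation
```

Under those assumptions the share lands near 8%, comfortably above the 5% threshold.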
The sheer size of this power demand will require new approaches to grid planning, permitting, and infrastructure investment. Training AI has historically required a power supply that is both large and highly localized. But as training needs grow, dedicated data centers, as the report put it, "cannot keep doubling in size forever."
Ultimately, training may need to be geographically distributed to overcome local power delivery limits. The report noted that “planning should account for both concentrated and distributed data center loads as well as the potential for real-time flexibility in training and inference workloads and from on-site generation and storage assets.”