When it comes to the infrastructure of artificial intelligence — including chips, cloud platforms, and deployments — Amazon Web Services is one of the companies leading the race. Just in the past couple of days, AWS announced it’s going to invest up to $50 billion to expand AI and high-performance computing infrastructure for U.S. government agencies, and also committed $15 billion for a new data center in Northern Indiana.
As the cloud provider behind much of the energy sector’s software, AWS also has a front-row seat to how utilities and independent power producers (IPPs) are adopting AI to improve operations. Ben Wilson, AWS director of products and solutions for the energy and utilities business, oversees everything from early customer conversations to product development and deployment.
I sat down with Wilson to understand how the power sector is changing the way it uses AI, and how he thinks the space will evolve. Below is an excerpt of our conversation, edited for brevity and clarity.
How have utilities and IPPs evolved in their implementation of AI?
Ben Wilson: Over the past 10 years, a lot of what we did was around analytics: things like getting a dashboard [to help manage analytics and operations]. Now, companies are moving from dashboards to AI agents. They had a dashboard telling a person to go and do something; now they’re using that same dashboard to let agents take that action. It’s a starting point for people in their AI journey.
When I built commercial software for control systems, we had alerts and alarms: enough alerts would escalate into an alarm. Once you have an alarm, you always know what you have to do. But why would you want a human to do it [rather than AI]? Most things in the energy space have a closed-loop system, which is deterministic. That makes it easier for the AI to determine what the next step should be.
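To make that alert-to-alarm pattern concrete, here is a minimal sketch of the escalation logic Wilson describes, assuming a simple count-within-a-window rule. The thresholds, signal names, and playbook actions are all illustrative, not drawn from any AWS or utility system.

```python
from collections import deque
import time

ALERT_WINDOW_S = 300   # look-back window for counting alerts (assumed)
ALARM_THRESHOLD = 5    # alerts inside the window that escalate to an alarm

# Hypothetical playbook: each alarm type maps to one deterministic action,
# mirroring the closed-loop behavior described above.
PLAYBOOK = {
    "transformer_overtemp": lambda: print("shedding load on feeder 12"),
    "voltage_sag":          lambda: print("switching in capacitor bank 3"),
}

class AlarmEscalator:
    """Counts alerts per signal; enough alerts inside the window become an alarm."""

    def __init__(self):
        self.alerts = {}  # signal name -> deque of alert timestamps

    def record_alert(self, signal: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        window = self.alerts.setdefault(signal, deque())
        window.append(now)
        # Drop alerts that have aged out of the look-back window.
        while window and now - window[0] > ALERT_WINDOW_S:
            window.popleft()
        if len(window) >= ALARM_THRESHOLD:
            window.clear()
            self.raise_alarm(signal)
            return True
        return False

    def raise_alarm(self, signal: str) -> None:
        # Deterministic: the alarm type fully determines the next step, which
        # is what makes it safe to hand the action to an agent, not a human.
        action = PLAYBOOK.get(signal)
        if action:
            action()
        else:
            print(f"no playbook entry for {signal}; paging an operator")

escalator = AlarmEscalator()
for _ in range(ALARM_THRESHOLD):
    escalator.record_alert("transformer_overtemp")  # fifth alert fires the alarm
```

The deterministic playbook is the point: once an alarm fires, the next step is fully determined, which is exactly the property that makes it safe to hand the action to an agent instead of a person.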
Southern California Edison, for instance, has a predictive AI [tool] to detect faults with roughly 80% accuracy, and is moving from predictive detection toward agent-driven responses. Companies are doing this kind of experimentation, tackling one problem at a time, but we’re seeing decent results so far, and this type of adoption will expand rapidly.
This summer, an MIT report found that 95% of companies that had invested in generative AI were “getting zero return.” What are you seeing with the companies you work with?
Wilson: Five years ago, an AI prototype would be a $400,000 adventure over multiple months, involving multiple people; you’d want higher degrees of certainty, so you’d spend more time doing the analysis. Now, our customers have an idea, and they start coding [right away]. The investment it takes today is so low that you should want to run 100 experiments, so that you can find the five that are really valuable, even though the remaining 95 won’t be. You can have an idea and see a meaningful outcome with a very low investment.
I would encourage companies to do even more of these types of experiments. We learn more from failure than from success. We’re going through some failures today, but we’re failing faster, and the cost is so low that the lessons far outweigh it.
What’s something that successful AI projects have in common?
Wilson: Where I see great success is in projects where you have someone who understands that business deeply, like a PhD in electrical engineering [working in the energy industry], who also understands AI. Now, it doesn’t mean they’re smarter or better. But they have more context to know what a good experiment looks like, identify the problems that need to be solved, and lead the project in a meaningful way.
Many unsuccessful projects, on the other hand, are developed by IT organizations without enough grounding in the business they’re being built for, and without someone from that business overseeing them. If you’re too removed from the company and you’re running AI experiments quickly, in three to four weeks, you don’t have the opportunity to engage with the business deeply enough.
You really need to have an electrical engineer who understands both AI and [their own industry’s] problems, and that’s mostly a generational thing. Now, we can pick an LLM and create an entire website just by prompting it. The question is, how does that translate to an energy company using agents? If I have to put a load on the grid, how do I find out if everything converges? Where does it converge? Where does the fault happen? That’s what these electrical engineers are great at, and they can write the code and the [machine-readable specification] that says how to know those things.
What are the most interesting AI applications you’re seeing in energy today?
Wilson: For high-frequency operations, TotalEnergies is using AI for trading: They monitor trading patterns so they can auto-adjust the risk limits when the markets move up and down. The reason they’re doing this is that there’s so much data you have to bring in that a human can’t understand it, while an AI tool can understand and help them make better decisions. In this case, AI intervenes in something that happens every millisecond.
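As a rough illustration of what auto-adjusting risk limits can look like, here is a sketch that scales a position limit down as realized volatility rises. The base limit, baseline volatility, and inverse-volatility rule are assumptions for the example, not TotalEnergies’ actual methodology.

```python
import statistics

def adjusted_risk_limit(base_limit: float,
                        returns: list[float],
                        baseline_vol: float) -> float:
    """Scale a position limit down as realized volatility rises above baseline.

    Illustrative only: the inverse-volatility rule and every number below are
    assumptions, not TotalEnergies' actual methodology.
    """
    realized_vol = statistics.stdev(returns)
    # Inverse-volatility scaling, capped so the limit never exceeds base.
    scale = min(1.0, baseline_vol / realized_vol)
    return base_limit * scale

# Recent per-tick returns: realized vol is ~0.0055 against a 0.0025 baseline,
# so the limit is cut to roughly 45% of base (~$450k here).
ticks = [0.004, -0.006, 0.005, -0.004, 0.006, -0.005]
print(adjusted_risk_limit(base_limit=1_000_000, returns=ticks, baseline_vol=0.0025))
```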
On the other end of the spectrum, Duke Energy is reducing residential solar connection times from months to hours.
Lastly, one of the most interesting applications is around DERMS [distributed energy resource management systems]. As we look at bringing all of this solar, wind, and battery storage online, DERMS are going to be one of the most important parts of it. We’re working with GE Vernova to accelerate decisions on how DERMS are going to impact the grid, so you can integrate these resources at speed while making sure the grid remains stable, which is the biggest challenge utilities face every day.
Which uses of AI for energy need to be explored more?
Wilson: The way Siemens uses AI to do grid simulations to find convergence at a high level of complexity is a perfect use case. You have a 22,000-node grid, loads drawing power off the grid, construction folks wanting to put power on the grid, and a huge list of customers who want to do both. How do you do that faster, more effectively, and in a way that optimizes for green energy? There’s no way a human can understand this level of complexity.
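Convergence here refers to the power-flow solution: an iterative solver either settles on a set of bus voltages that satisfy the physics, or it doesn’t, meaning the proposed loads and generation can’t coexist on that network. Below is a minimal Gauss-Seidel sketch on a two-bus toy system, with illustrative impedance and load values; production tools like the ones Wilson mentions solve the same fixed-point problem across tens of thousands of buses.

```python
# Minimal Gauss-Seidel power-flow sketch: a slack bus held at 1.0 per-unit
# voltage feeding a load bus over one line. Impedance and load values are
# illustrative, not from any real grid model.

def solve_load_bus(p_load: float, q_load: float,
                   z_line: complex = 0.01 + 0.05j,
                   tol: float = 1e-8, max_iter: int = 100):
    """Return (voltage, iterations) if the power flow converges, else None."""
    y = 1 / z_line                    # line admittance
    v_slack = 1.0 + 0.0j              # slack bus voltage, per unit
    s_inj = -(p_load + 1j * q_load)   # a load is a negative power injection
    v = 1.0 + 0.0j                    # flat start at the load bus

    for iteration in range(1, max_iter + 1):
        # Gauss-Seidel update: solve the bus current-balance equation for V.
        v_new = v_slack + s_inj.conjugate() / (y * v.conjugate())
        if abs(v_new - v) < tol:
            return v_new, iteration
        v = v_new
    return None  # no convergence: this load can't be served at this voltage

result = solve_load_bus(p_load=0.8, q_load=0.3)
if result:
    v, iterations = result
    print(f"Converged in {iterations} iterations: |V| = {abs(v):.4f} pu")
else:
    print("Did not converge: the load exceeds what the line can deliver")
```

Scaling the load up until the solver stops converging is a crude way to find where the system breaks, which is the “where does the fault happen” question Wilson raises above.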
Companies like Hitachi and Siemens, which have algorithms helping with grid convergence, are working in a very interesting space. That’s what it’s all about right now: how we bring new loads on, and how we bring new generation on board — especially when so much of that generation isn’t baseload compatible. We’re going to see a lot over the next five years that will accelerate this.
Does the energy industry have some unique features that make it more challenging for it to implement and trust AI solutions?
Wilson: The stakes in the energy industry are high. Energy is the secret thing behind everything; there’s nothing in the world that is not driven by energy. When it comes to critical infrastructure, companies make decisions step by step, which is why energy companies generally take longer to adopt newer technology. But they’re figuring out how to adopt this technology more quickly than ever before.
It’s a change similar to what we saw in the early 2000s, when the advent of the internet meant you could send bills to email addresses. People were worried the bills would get hacked, but now everyone uses e-billing. Today, AI adoption is happening more rapidly, partly because over the past 20 to 30 years, we’ve become more secure in what we do. At AWS, for example, everything is encrypted, from the Nitro security chip through storage and networking. When you have that type of security, customers trust you, and adoption is more rapid.
Do you think the potential efficiency gains enabled by AI are worth the challenges created by its energy requirements?
Wilson: Marc Andreessen has talked about how a lot of LLMs and other products are now being built with picojoules [trillionths of a joule] in mind, and how engineers are trying to design at that level. AWS’s Graviton chips have shown power savings of up to around 60% compared to previous generations.
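To see why picojoule-level design adds up, here is a back-of-the-envelope calculation. Every number in it is an illustrative assumption, not a figure from AWS or Andreessen.

```python
# Back-of-the-envelope energy arithmetic with entirely illustrative numbers:
# what designing "at the picojoule level" adds up to at serving scale.

PJ_PER_FLOP = 1.0           # assumed energy per operation, in picojoules
FLOPS_PER_TOKEN = 2 * 70e9  # rule of thumb: ~2 FLOPs per model parameter per token
TOKENS_SERVED = 1e12        # one trillion tokens

joules = PJ_PER_FLOP * 1e-12 * FLOPS_PER_TOKEN * TOKENS_SERVED
kwh = joules / 3.6e6        # 1 kWh = 3.6 million joules
print(f"{kwh:,.0f} kWh")    # ~38,889 kWh; halving pJ/FLOP halves this directly
```

At that scale, halving the energy per operation halves the bill, which is why per-operation efficiency shows up directly in the margin Wilson describes next.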
I have faith that we’re going to figure out how to reduce energy consumption on the software engineering side. For companies, it’s going to be all about the margin, and using less energy is one way to increase it.
And it’s the same on the hardware side. All the chips, whether we make them or Nvidia does, are trying to use less power, because less power means less water, less cooling, less everything. Over the last five years, chips have substantially reduced the energy they need. And with every new LLM that comes out, no matter who creates it, the amount of power and the number of chips required keep going down. We’re going to see more breakthroughs that continue to drive that.
A version of this story was published in the AI-Energy Nexus newsletter on November 26. Subscribe to get pieces like this — plus expert analysis, original reporting, and curated resources — in your inbox every Wednesday.