Image credit: Anne Bailey / Climate Change AI
Integrating artificial intelligence into the grid isn't simple. It requires applying the right tools to the right problems, and asking the right questions about when and where AI will make utilities' work easier. In short, nuance is called for, especially when a misapplied algorithm could result in a blackout.
Priya Donti, co-founder of Climate Change AI and an assistant professor at MIT, is one of the leading thinkers on this subject. She spoke with Latitude co-founder and executive editor Stephen Lacey at the Transition-AI: Boston event about the nuances of deploying automated technologies across the power system.
Their conversation has been edited for brevity and clarity.
Stephen Lacey: Let's start off by talking about what kinds of AI are potentially in use in the energy system. When we talk about intelligence or automation, what are the different use cases?
Priya Donti: What we're seeing a lot of today across the electricity sector is machine learning that learns automatically from large amounts of data. This includes machine learning geared toward forecasting things like solar power, wind power, electricity demand, or the state of the grid based on whatever information is available. There are also types of machine learning that are more geared toward controlling parts of the electricity grid: actually trying to observe the state of the system and then take actions based on it.
The basic question with machine learning is: does it give you information by understanding trends or correlations in the underlying data, or is it actually semi-autonomously trying to take an action based on analysis of the underlying data? Broadly, these are the two types of AI that we often see in the electricity system.
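To make that distinction concrete, here is a minimal sketch in Python. It is an invented illustration, not anything from the interview: a toy forecaster that only summarizes historical data, and a toy controller that observes the current state and picks an action. All function names and numbers are assumptions.

```python
# Two broad modes of ML on the grid, in toy form. Everything here is a
# hypothetical illustration, not a real grid model.

# 1) Forecasting: find patterns in historical data to predict the next
#    value. Here the "model" is simply the historical mean.
def forecast_demand(history_mw):
    """Predict the next demand value as the mean of past observations."""
    return sum(history_mw) / len(history_mw)

# 2) Control: observe the system state and take an action based on it.
#    Here a battery tries to flatten net load (demand minus supply).
def dispatch_battery(net_load_mw):
    """Return battery output in MW: discharge (positive) to cover
    positive net load, charge (negative) to absorb excess supply."""
    return net_load_mw

print(forecast_demand([98.0, 101.0, 104.0]))  # mean of history -> 101.0
print(dispatch_battery(8.0))                  # discharge 8 MW
```

A real forecaster would, of course, learn a far richer mapping than an average, and a real controller would be subject to hard physical constraints, which is exactly the issue the next question raises.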
SL: We're talking about machine learning in a complicated physical system, like the grid, where you're controlling sensitive electronics or power plants. What are the physical limitations that we need to start to consider?
PD: Machine learning works by analyzing a large amount of data and trying to find what the predominant patterns in that data are — in some sense, the "average" thing that would happen in that data. When you're doing something like predicting what an image is, if you’re right most of the time because you've gotten averages in the data right, that’s great. And if every now and then you’re wrong, that’s okay.
On a power grid — where you have to make sure that you're not asking equipment to do something that it can't actually do, or make sure that your power lines are not being asked to carry too much power — averages often aren't enough. The fact that you can get a machine learning algorithm that learns to do something nuanced from data and does it right most of the time doesn't help you in those times when something goes wrong. That one time blacks out your grid. In these extreme or anomalous cases, that's when you need to put up guardrails to make sure that your algorithm is not crossing some physical boundary.
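The guardrail idea can be sketched as a projection step: whatever the learned model proposes is clamped to the equipment's physical limits before it is applied. The following is a minimal Python sketch under an assumed, hypothetical line limit, not a real protection scheme.

```python
# Hypothetical guardrail: clamp a learned controller's proposed line
# flow to the line's physical limit before it is applied.

LINE_LIMIT_MW = 100.0  # assumed maximum power the line can carry

def apply_guardrail(proposed_flow_mw, limit_mw=LINE_LIMIT_MW):
    """Project the proposal back inside [-limit, +limit]."""
    return max(-limit_mw, min(limit_mw, proposed_flow_mw))

# The model may be right "on average", but the one anomalous proposal
# is exactly the case the guardrail exists to catch.
proposals = [80.0, 95.0, 130.0]           # 130 MW would overload the line
safe = [apply_guardrail(p) for p in proposals]
print(safe)  # [80.0, 95.0, 100.0]
```

The design choice here is that feasibility is enforced outside the learned model: the algorithm can be wrong in the anomalous case without its output ever reaching the hardware unchecked.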
SL: With extreme changes happening on the grid and extreme weather, you start to suddenly have more anomalous events as well. That brings us to this question of interpretability. How does AI change the way you work through a problem and gauge who is responsible?
PD: Historically, a lot of prediction algorithms for the power grid were rule-based systems. You try to figure out what electricity demand will look like by writing down a set of rules. Is it a weekend or a weekday? Is a very famous TV show airing, after which everyone will turn their tea kettle on? If something went wrong on the power grid, say mismanagement that was partly due to a misprediction, the regulator would ask your system operator to go back and figure out how to improve that for the future.
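A rule-based forecaster of the kind described above can be sketched in a few lines. The rules and numbers below are invented for illustration; real operator rule sets are far more elaborate.

```python
# Toy rule-based demand forecast: demand is estimated from hand-written
# rules rather than learned from data. All numbers are hypothetical.
from datetime import date

def rule_based_demand_mw(day, tv_show_just_ended=False):
    demand = 90.0                  # assumed baseline demand
    if day.weekday() >= 5:         # weekend rule: offices are closed
        demand -= 10.0
    if tv_show_just_ended:         # "TV pickup": kettles go on together
        demand += 5.0
    return demand

# A Saturday, right after a popular broadcast ends:
print(rule_based_demand_mw(date(2022, 10, 15), tv_show_just_ended=True))  # 85.0
```

The appeal of this style is exactly the auditability discussed next: when the forecast is wrong, an operator can point to the specific rule that misfired, which is much harder with a learned model.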
As we increasingly use AI and machine learning for these kinds of predictions, the same regulatory practice continues to be applied. So if there's a misprediction, the regulator will often want to know what went wrong inside your predictive model to cause it.
I think there's genuine debate about whether it is on the technology to become more interpretable, so that you can pull out the weights of the predictive algorithm and see what went wrong. Was there data that wasn't available to the algorithm? Or is there something about the output performance that we can audit? We need to get clear about what we need to know, not only to assign consequences but, more importantly, to adapt for the future. We need to make sure we understand what needs to change in the regulation, and what needs to happen to make AI and machine learning algorithms auditable in a way that fits that regulation.
SL: At Transition-AI, we have a cross-section of investors, utilities, tech companies, and startups in this space. What are some provocative questions that they need to be asking themselves or each other?
PD: I think we should be asking ourselves: what are the kinds of transformations that must happen, but that we're having difficulty getting off the ground in practice? What are the cases where there may be a mismatch between the transformation of the grid and the practical implementation of that vision?
Within that, the question is: what could be enabled by AI and analytics? What technology, policy, and social aspects are involved, and how do they interact with each other? How do we move forward in a holistic and integrated way? These are the kinds of questions we need to ask to match our practice to our vision.
The second installment of our Transition-AI conference series is happening in New York on October 19th. See the agenda here.