Like many parents, we read to our kids every night before bed. Recently, my four-year-old son decided he was going to read to us. He found his current favorite book — Paw Patrol Dinosaur Rescue! — and proceeded to “read” it nearly word-for-word. Of course, you can tell by the scare quotes that he wasn’t actually reading; he has no concept of how letters turn into sounds, sounds into words, words into concepts. But I was still impressed! His mind is a sponge, which translates into feats of memorization — even if he still has a long way to go before he can enter a library and read any book on the shelf.
Utility deployments of artificial intelligence solutions are much like my son’s “reading.” They are undeniably impressive, accomplishing feats we could only imagine a short while ago. But they are isolated and unscalable, because we as an industry have not yet done the hard work of building our equivalent of phonics: the basic building blocks and frameworks that together can safely, securely, and consistently turn data into actionable intelligence.
The basics: Data access and security
Much ink has been spilled on the challenges facing widespread AI deployment. One EPRI report, for instance, highlights widespread issues with “insufficient data quality or availability” and “data privacy or security concerns.” Secure, standardized, machine-to-machine access to data is table stakes for any modern organization, let alone one that hopes to accomplish anything meaningful with AI. I propose two key actions to address these challenges.
First, the industry must come together to create modern standards, and the open software ecosystems around them, that streamline development, integration, and security. We have some standards already, but they generally move too slowly, are designed too narrowly, and are implemented too poorly to solve the general problem of utility data access. Vendors need to stop fighting over table scraps and wasting time on data plumbing, and instead genuinely adopt industry-standardized approaches so that we can construct true multi-vendor systems built from a diverse set of best-in-class components. Utilities, in turn, must make a market for these solutions by prioritizing procurement from vendors who truly facilitate data access and interoperability.
Second, regulators and utilities must stop raising unfounded cybersecurity myths as roadblocks to progress. Regulators, in particular, need to level up their digital expertise by adding knowledgeable staff who can competently engage on these topics.
Too often, regulators and utilities cite vague, hand-wavy concerns about security. These concerns are commonly rooted in fear and a lack of understanding of how modern digital technology works. Both groups need to ground their critiques in a solid grasp of cybersecurity fundamentals and contribute to moving the industry forward in a secure way.
The AI layer: Governance, evaluation, and controls
With data access and security addressed, we can finally move on to AI itself. Bryce Yonker of Grid Forward has clearly laid out many of the missing elements. To avoid reinventing the wheel, I will borrow from his fine work.
First, we need to address safety and guardrails for AI use by creating a governance framework that captures the risks and mitigations involved in the onboarding, development, and use of AI solutions. With such a framework in place, we can see risks clearly, evaluate the trade-offs, and understand how best to proceed in a safe, timely, and well-structured way.
We also need a framework for evaluating the performance of AI solutions. These are probabilistic, not deterministic, systems; they don’t always return the same output given the same inputs. We can’t just ask for “better” or “more accurate.” Instead, we need a shared vocabulary, a set of metrics, and evaluation approaches for determining performance. This would accelerate the selection of new AI solutions and help utilities understand the impact of changes to model versions.
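To make that concrete, here is a minimal sketch, in Python, of what a repeated-sampling evaluation might look like. The `call_model` and `grade` functions are hypothetical placeholders, not part of any existing standard; the point is simply that a probabilistic system should be scored as a distribution, not as a single run.

```python
import statistics
from typing import Callable

def evaluate(call_model: Callable[[str], str],
             grade: Callable[[str], float],
             prompt: str,
             n_samples: int = 20) -> dict:
    """Score a probabilistic model as a distribution, not a single run:
    sample it repeatedly on the same input and summarize the results."""
    scores = [grade(call_model(prompt)) for _ in range(n_samples)]
    return {
        "mean": statistics.mean(scores),
        "stdev": statistics.stdev(scores) if len(scores) > 1 else 0.0,
        "worst_case": min(scores),  # often what a grid operator cares about most
    }
```

Comparing these distributions across model versions is what would let a utility see whether an “upgrade” actually regressed on its own workload.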
Last, we must create the standards and controls for AI. We don’t need to wait for NERC. Innovative industry leaders can come together right now to share their experience and start building out the security standards and controls that should govern the use of AI with grid operations data. These standards can then inform more formal efforts — perhaps with NERC’s help — down the line.
Guiding examples
Fortunately, there are several guiding examples to accelerate this progress.
For data access standards, LF Energy CDS defines a set of specifications for secure, standardized access and sharing of energy-related data. Developed in collaboration with utility data experts and the most innovative, privacy-obsessed tech companies, it is designed specifically for modern machine-to-machine communication and AI.
The specifications are broadly applicable, capturing use cases common to all utilities, while remaining adaptable for customization and evolution. They also incorporate industry best practices to ensure the protection of sensitive data.
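The specifications themselves define the details, but purely to illustrate the machine-to-machine pattern they standardize, here is a rough Python sketch built on a generic OAuth2 client-credentials flow. The endpoint URLs and scope name are invented for this example, not taken from CDS.

```python
import requests

# Invented endpoints and scope, for illustration only; real values would
# come from a utility's CDS-conformant implementation.
TOKEN_URL = "https://utility.example.com/oauth/token"
DATA_URL = "https://utility.example.com/data/meter-readings"

def fetch_meter_readings(client_id: str, client_secret: str) -> list:
    """Machine-to-machine access: a registered application authenticates
    with credentials and a least-privilege scope, not a human login."""
    token = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "meter-readings:read",  # request only what is needed
    }, timeout=10).json()["access_token"]

    # Standardized, auditable data access over HTTPS + JSON.
    resp = requests.get(DATA_URL,
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```

The design point is that a registered application, not a person with a password, authenticates under a narrowly scoped grant and pulls data through a standardized, auditable interface.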
Regulators, meanwhile, can look to the UK’s Ofgem for inspiration on how to provide guardrails for security and privacy while removing barriers to innovation. In 23 concise pages, its Data Best Practice Guidance lays out a set of common-sense principles that “ensure data is treated as an asset and used effectively for the benefit of consumers, stakeholders and the Public Interest.” One principle holds that data must be presumed open, meaning it is made available to all unless the data holder provides specific evidence to justify withholding it. Regulators should study these best practices and adapt them to their jurisdictions.
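The “presumed open” principle is simple enough to express as triage logic. The sketch below is my own illustration of the posture, not Ofgem’s wording: disclosure is the default, and it is withholding, not sharing, that requires documented evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    # Under "presumed open," it is withholding that needs documented
    # evidence, not sharing.
    withholding_justifications: list[str] = field(default_factory=list)

def disclosure_decision(ds: Dataset) -> str:
    """Default to publication; the burden of proof sits with the data holder."""
    if not ds.withholding_justifications:
        return "OPEN"
    return "WITHHELD pending review of: " + "; ".join(ds.withholding_justifications)
```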
The Model Context Protocol likewise serves as an example of the standardized controls we need, acting as an “airlock” that decouples AI reasoning from direct grid operations access. For example, it specifies protocol-level “human-in-the-loop” authorization, allowing systems to pause for explicit operator approval. This approach transforms the AI from an unpredictable, unsupervised agent into a predictable, auditable assistant.
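The underlying pattern can be sketched without any of the protocol machinery. What follows is an illustration of the airlock idea in Python, not MCP’s actual interfaces or wire format: every AI-initiated action is logged, and anything consequential blocks until an operator explicitly approves it.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("airlock")

# Low-risk, read-only actions that policy has pre-approved.
PRE_APPROVED = {"read_telemetry"}

def execute(action: str, params: dict) -> str:
    # Placeholder for the real, tightly scoped operations interface.
    return f"executed {action} with {params}"

def airlock(action: str, params: dict,
            operator_approves: Callable[[str, dict], bool]) -> str:
    """Gate between AI reasoning and grid operations: every request is
    logged, and anything consequential pauses for a human decision."""
    log.info("AI requested action=%s params=%s", action, params)
    if action in PRE_APPROVED:
        return execute(action, params)
    if operator_approves(action, params):  # explicit human-in-the-loop step
        log.info("operator approved %s", action)
        return execute(action, params)
    log.warning("operator denied %s", action)
    return "denied"
```

Because low-risk, read-only actions can be pre-approved by policy while anything touching operations waits for a human, the AI gets useful autonomy without unsupervised access to the grid.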
For a governance framework, we can look to our friends in the highly regulated, cybersecurity-sensitive finance sector. The FINOS AI Governance Framework consists of Risk and Mitigation Catalogues. The Risk Catalogue helps identify potential risks in AI implementations across operational, security, and regulatory dimensions, while the Mitigation Catalogue helps discover preventative and detective controls for identified risks.
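Structurally, the two catalogues amount to a cross-referenced mapping, which is easy to see in a sketch. The IDs and entries below are invented for illustration; the real FINOS catalogues carry their own identifiers and far more detail.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Risk:
    id: str
    description: str

@dataclass(frozen=True)
class Mitigation:
    id: str
    description: str
    addresses: tuple[str, ...]  # IDs of the risks this control covers

# Invented entries, loosely in the spirit of the catalogues.
RISKS = [
    Risk("R-01", "Model output drives an unsafe control action"),
    Risk("R-02", "Sensitive customer data leaks into prompts"),
]
MITIGATIONS = [
    Mitigation("M-01", "Human approval gate on all control actions", ("R-01",)),
    Mitigation("M-02", "Prompt-side data redaction and access logging", ("R-02",)),
]

def unmitigated(risks: list, mitigations: list) -> list:
    """Governance check: which identified risks have no mapped control?"""
    covered = {rid for m in mitigations for rid in m.addresses}
    return [r for r in risks if r.id not in covered]
```

A governance review then becomes a concrete, repeatable question: which identified risks have no mapped control?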
Finance also offers an evaluation framework. AI systems are non-deterministic and tasks rarely have a single “correct” answer, so existing benchmarks often fail to address the complexity, risks, and compliance needs of financial services. The FINOS framework links specific financial use cases to both risks and metrics. By doing so, it bridges technical benchmarking with real business value, helping reduce compliance risk and improve trust in AI deployments.
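That linkage, too, can be pictured as a small mapping from use case to both risks and metrics. The entries here are illustrative stand-ins adapted to utilities (reusing the invented risk IDs from the sketch above), not FINOS’s actual content.

```python
# Illustrative only: each use case names the risks it must govern and the
# metrics on which candidate AI systems are scored, reusing the invented
# risk IDs from the catalogue sketch above.
EVAL_MATRIX = {
    "operator-facing report summarization": {
        "risks": ["R-01"],
        "metrics": ["factual consistency", "worst-case error"],
    },
    "customer-billing assistant": {
        "risks": ["R-02"],
        "metrics": ["PII leakage rate", "resolution accuracy"],
    },
}
```

Tying each metric to the specific risks it retires is what connects technical benchmarking to business and compliance value, instead of leaving evaluation as an abstract leaderboard exercise.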
We don’t need to reinvent the wheel; we just need to adapt it. By borrowing from this existing work, utilities can accelerate the transition from AI pilots to a fully realized, intelligent grid.
AI offers a revolutionary tool that could help deliver affordable, safe, reliable, clean energy. The challenges to leveraging the power of this tool, and digitalization overall, are well-documented. It’s time to move on to solutions. We can meet this moment — but only if we embrace concrete collaboration on open, fast-moving, practical standards and technology that lowers barriers to adoption and contribution.
Alex Thornton is the executive director of LF Energy. The opinions represented in this contributed article are solely those of the author, and do not reflect the views of Latitude Media or any of its staff.