In the United States, grid congestion already costs a staggering $20 billion every year and threatens to slow economic growth. Even so, a promising solution to enhance transmission has remained largely untapped for decades: dynamic line rating, or DLR. McKinsey recently underscored DLR’s potential as a grid-enhancing technology with “real benefits” that can “improve the grid today.” But at least in the U.S., the technology has barely been rolled out.
The main reason for this is that today’s DLR options are largely unreliable and expensive. In fact, the DLR technology with the potential to truly revolutionize our grid looks fundamentally different from sensor-based versions that have struggled to scale. The real breakthrough isn’t about adding more hardware; it’s about eliminating it entirely.
Multiple grid operators in Europe have quietly begun to opt for weather forecast-based, sensorless DLR systems. TenneT in Germany, for example, runs DLR on half of its lines, a system that grew out of an employee’s PhD thesis. This approach is both scalable and cost-effective, and it is an example of the kind of DLR with global relevance.
Problems with hardware
DLR’s potential has been known since the 1990s. Early research and pilots in the mid-2000s proved that we could optimize power flow based on actual weather conditions. So why aren’t we using it everywhere?
Well, the first generation of sensors, released a decade ago, simply failed to survive long in the field. The Estonian TSO tested sensor-based DLR in the early 2010s, and the sensors stopped working within a year. The project managers far outlasted the DLR sensors, and they remained suspicious of later sensor manufacturers who claimed to have solved the issues.
More importantly, DLR sensor measurements are unreliable. Sensors measure sag, tension, or conductor temperature, but translating any of these into an accurate line rating is difficult.
Sag in one span can be significantly influenced by tower twist and isolator movement from adjacent spans. As highlighted in a 2011 Oncor study, careful initial configuration is required for accuracy.
Tension sensors, meanwhile, capture only the average conditions across multiple spans, so they can miss the critical hotspots that set the line’s true limit.
Temperature sensors, on the other hand, measure only the spot where they are installed. The conductor can run ten degrees hotter in short sections of the line that are sheltered from the wind, and weather patterns, terrain, and loading conditions vary widely along a transmission line. Without visibility into every span, operators are working from piecemeal data and risking safety problems.
What’s often overlooked is that most of DLR’s potential value lies in reducing energy prices on day-ahead markets. But the capacity a line will have tomorrow cannot be measured today. The real value of DLR comes from predicting ratings in advance, and that necessarily calls for weather-based forecasting.
The challenge — and opportunity — of wind prediction
Predicting wind accurately is the key challenge for DLR: even a light wind of six feet per second adds one third more capacity to an overhead line. Wind speeds are also strongly shaped by the local landscape. Existing numerical weather forecasts typically have a resolution of about six square miles, so they tell you the average conditions over a wide area, not the wind speed at the conductor.
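The physics behind that one-third figure can be sketched with a simplified steady-state heat balance in the spirit of IEEE Std 738: resistive heating (I²R) must equal convective plus radiative cooling, and convective cooling grows roughly with the square root of wind speed. The constants below are illustrative placeholders, not real conductor data, so treat the numbers as a toy model rather than an actual rating calculation.

```python
import math

def ampacity_amps(wind_mps, t_cond=75.0, t_amb=25.0,
                  r_ohm_per_m=7e-5, diameter_m=0.028):
    """Toy steady-state heat balance: I^2 * R = q_conv + q_rad.

    Illustrative constants only; real ratings follow IEEE Std 738.
    """
    d_t = t_cond - t_amb
    surface_per_m = math.pi * diameter_m  # conductor surface area per meter
    # Convective cooling, growing roughly with sqrt(wind speed) (toy coefficient)
    q_conv = (1.0 + 4.0 * math.sqrt(wind_mps)) * d_t * surface_per_m  # W/m
    # Radiative cooling via Stefan-Boltzmann, emissivity ~0.8
    sigma, eps = 5.67e-8, 0.8
    q_rad = eps * sigma * surface_per_m * (
        (t_cond + 273.15) ** 4 - (t_amb + 273.15) ** 4)  # W/m
    # Allowable current: solve I^2 * R = total cooling
    return math.sqrt((q_conv + q_rad) / r_ohm_per_m)

still = ampacity_amps(0.0)
breeze = ampacity_amps(1.8)  # ~6 ft/s
print(f"capacity gain from a light breeze: {(breeze - still) / still:.0%}")
```

With these toy constants the light breeze adds roughly a third more current-carrying capacity, which is why per-span wind knowledge is worth so much to a rating.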
New machine learning methods are capable of predicting wind at a much more localized level. Through the analysis of historical weather data, software can predict wind and weather conditions for individual spans with sufficient accuracy and resolution for DLR.
A machine learning system doesn’t just generate a single line rating number; it provides a range and a confidence level based on the underlying data quality, historical model performance, and forecast volatility. Accuracy keeps improving as the models retrain on new data.
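One way to picture that range-plus-confidence output: run an ensemble of wind scenarios for a span through a rating curve and read off quantiles, using a conservative low percentile as the secure rating. Everything here (the toy rating curve, the Gaussian scenario ensemble, the percentile choices) is a hypothetical sketch, not any vendor's actual method.

```python
import random

def rating_from_wind(wind_mps, base_amps=600.0):
    """Toy monotone mapping from span wind speed to line rating (illustrative)."""
    return base_amps * (1.0 + 0.35 * wind_mps ** 0.5)

# Hypothetical ensemble: the model's wind-speed scenarios for one span, one hour
random.seed(7)
scenarios = [max(0.0, random.gauss(2.5, 0.8)) for _ in range(500)]
ratings = sorted(rating_from_wind(w) for w in scenarios)

# Report a range, with the 10th percentile as a conservative "secure" rating:
# the line is expected to support at least that rating in ~90% of scenarios.
p10 = ratings[len(ratings) // 10]
p50 = ratings[len(ratings) // 2]
p90 = ratings[9 * len(ratings) // 10]
print(f"secure rating (P10): {p10:.0f} A, median: {p50:.0f} A, P90: {p90:.0f} A")
```

The spread between the percentiles is the confidence signal: a tight band means a trustworthy forecast, while a wide band tells the operator to rate the line conservatively.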
The resulting DLR system is immediately scalable to entire grids. That matters because applying DLR to a single line simply pushes congestion to the next-weakest line; covering every line is what actually increases the capacity of the grid.
While legacy technology has limitations, a software-only approach is capable of dramatically reducing costs, accelerating deployment timelines and eliminating maintenance risks. As experience in Europe shows, this approach is scalable.
The way forward
Regulators and utilities need to recognize that the DLR landscape has fundamentally changed. The hardware-centric approaches that have stalled for decades are being leapfrogged by software solutions that are cheaper, faster to deploy, and more comprehensive.
DLR doesn’t need a regulatory mandate. Europe has no equivalent to the Federal Energy Regulatory Commission’s Order 881 or its advance notice of proposed rulemaking on DLR; grid operators there have quietly gotten on with it and switched on software-based DLR in system operation. U.S. utilities can do the same.
This matters because we don’t need to wait for massive infrastructure builds or hardware deployments to meet load growth. The digital transformation of our power system can begin today with software-first DLR — if we’re willing to leave outdated approaches behind.
Georg Rute is the CEO of Gridraven, a software provider for dynamic line ratings based on precision weather forecasting available globally. Prior to Gridraven, Rute founded Sympower, a virtual power plant company, and was the head of smart grid development at Elering, Estonia’s Transmission System Operator. The opinions represented in this article are solely those of the author and do not reflect the views of Latitude Media or any of its staff.