Now more than 100 years old, the U.S. electrical grid is showing its age. In fact, 70% of transmission and distribution lines are more than 25 years old, according to the Department of Energy. Utility systems historically have been installed underground in high-density urban and suburban areas to preserve aesthetics and increase system longevity and safety. On the downside, undergrounding has made it more difficult to locate, access and repair failures in these densely populated areas.
San Diego Gas & Electric Co. was faced with this situation. Making up more than 60% of its total distribution grid, the utility’s underground assets are now approaching 40 years in age. Over the last 10 years, T-splices have accounted for more than one-third of the growth in the utility’s asset failures. SDG&E manages more than 10,500 miles (16,900 km) of underground distribution lines and an estimated 150,000 T-splices on 700 underground circuits. The utility needed a better way to predict which of these assets would fail next.
Simple electrical components that join mainline underground cables, T-splices are like other assets in that they fail routinely, causing unplanned outages. These outages can be hard to locate immediately because T-splices are underground and, as minor assets, are not directly monitored by the control center. In an era of digital transformation, it may seem astonishing that an electrical part costing less than US$60 can trigger outages that disrupt customers’ daily lives and cost utilities tens of thousands of dollars to resolve.
To fix this issue, SDG&E set out to create a solution for identifying high-risk T-splices prior to failure. The utility partnered with PA Consulting and Toumetis, a predictive analytics provider for the industrial internet of things, to develop a machine learning (ML) solution that could predict asset failures with not only a high level of accuracy but also sufficient warning. The eventual solution, called iPredict, was introduced recently to the market by PA Consulting for other utilities and extends to asset classes beyond T-splices.
A Deeper Look At Data
Data is at the heart of all utilities. However, comprehensive integration of operational data sets — for example, from the outage management system (OMS), supervisory control and data acquisition (SCADA) system and geographic information system (GIS) — is not easy to put in place. Although standard information technology/operational technology (IT/OT) data integration of these systems’ connectivity data can be challenging and complex, such integration is becoming more common. This comprehensive integration can be helpful for use cases such as identifying outage causes, diagnosing unhealthy assets and setting up alarms for operational staff to analyze.
It is much more difficult and less common to integrate system performance operational data with power quality data, from the same sources noted previously, including OMS, GIS and SCADA, as well as relay and protection systems. However, this data can be used for much more granular analysis of assets to provide improved predictive insights.
The project team found this more granular, high-frequency data needed to be correlated with other system data for the algorithms to predict T-splice failures with high confidence. To identify critical asset faults, algorithms needed to recognize and interpret anomalies that last less than two 60-Hz voltage and current waveform cycles. These data portray the spectrum of normal cycles versus those that indicate incipient/precursor signals, a predictive signature of asset failure.
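The idea of spotting anomalies that last less than two 60-Hz cycles can be illustrated with a small sketch. The approach below — splitting a captured waveform into whole cycles, computing per-cycle RMS, and flagging cycles that deviate sharply from the baseline — is a simplified stand-in, not SDG&E’s actual algorithm; the function names, the 1.2x threshold and the synthetic data are illustrative assumptions.

```python
import numpy as np

SAMPLES_PER_CYCLE = 128  # PQ meter resolution cited in the article
FREQ_HZ = 60

def cycle_rms(waveform, samples_per_cycle=SAMPLES_PER_CYCLE):
    """Split a voltage/current waveform into whole cycles and return per-cycle RMS."""
    n_cycles = len(waveform) // samples_per_cycle
    cycles = waveform[: n_cycles * samples_per_cycle].reshape(n_cycles, samples_per_cycle)
    return np.sqrt((cycles ** 2).mean(axis=1))

def flag_anomalous_cycles(waveform, threshold=1.2):
    """Flag cycles whose RMS exceeds the median cycle RMS by more than `threshold`x.

    A deviation confined to one or two cycles is the kind of short-lived
    precursor signature the article describes.
    """
    rms = cycle_rms(waveform)
    baseline = np.median(rms)
    return np.where(rms > threshold * baseline)[0]

# Synthetic example: 10 cycles of a clean 60-Hz sine with a brief spike in cycle 4.
t = np.arange(10 * SAMPLES_PER_CYCLE) / (SAMPLES_PER_CYCLE * FREQ_HZ)
wave = np.sin(2 * np.pi * FREQ_HZ * t)
wave[4 * SAMPLES_PER_CYCLE : 5 * SAMPLES_PER_CYCLE] *= 2.0  # transient over-current
print(flag_anomalous_cycles(wave))  # → [4]
```

In practice, flagged cycles would then be matched against a catalog of labeled failure signatures rather than a simple threshold, but the cycle-level segmentation is the common starting point.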
The project team discovered the accuracy of asset failure predictions could be increased by using sub-cycle data — available from sources such as power quality (PQ) meters — to identify the fault anomalies and resulting predictive electrical signatures. In SDG&E’s case, these data previously had not been analyzed beyond voltage monitoring and regulation. This was a pivotal discovery on the journey to identify essential data in populating ML algorithms for predicting T-splice failure. It was a case of discovering previously unleveraged data that had not been integrated with the standard set of utility IT/OT data sources.
High-Sample Rate Data
The PQ meters at SDG&E’s substation bus level provide data at a high sampling rate of 128 samples per 60-Hz cycle, which enabled the ML algorithm to develop fault signatures that could help to predict end-of-life asset failures. The project team spent about 18 months aggregating, cataloging and integrating the high-sample rate data to prepare it for use in the algorithm. This discovery process identified previously untapped data, integrated it with other data sources, and demonstrated the strength of ML applied to utility engineering, asset management analytics and operations.
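The cited meter resolution implies a substantial data volume, which helps explain the 18-month integration effort. The arithmetic below works that out; the 16-bit sample size and three-channel capture are illustrative assumptions, not specifications of SDG&E’s meters.

```python
SAMPLES_PER_CYCLE = 128   # per the article
LINE_FREQ_HZ = 60

sample_rate = SAMPLES_PER_CYCLE * LINE_FREQ_HZ  # samples per second per channel
bytes_per_sample = 2      # assumed 16-bit ADC value; actual meter format may differ
channels = 3              # assumed three-phase capture, for illustration

daily_bytes = sample_rate * bytes_per_sample * channels * 86_400
print(sample_rate)              # 7680 samples/s per channel
print(round(daily_bytes / 1e9, 2))  # ≈ 3.98 GB per meter per day under these assumptions
```

Even under conservative assumptions, continuous capture at this rate produces gigabytes per meter per day, which is why such data is typically event-triggered or heavily downsampled before integration with OMS and GIS records.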
Developing and training the algorithm required a high degree of utility engineering and operations subject matter expertise. This synergy of data science and deep knowledge of engineering principles and asset behavior enabled the code to be cracked, so to speak, for the ML solution.
To categorize assets as either normal or indicative of a forthcoming asset failure, the team had to correlate the circuits’ PQ data with outage data to gather enough of the desired data (fault/incident versus normal conditions) and then coordinate time stamps between the OMS reports and high-sample rate data. It was a classic reverse-engineering scenario where the data was used to create the desired waveform signatures for the failure as well as the signatures leading up to the failure. The algorithm was trained with this data to recognize the waveforms.
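The labeling step described above — matching PQ capture timestamps against OMS outage records to separate fault/incident examples from normal conditions — can be sketched with a time-tolerant join. This is a minimal illustration assuming pandas and made-up schemas (`outage_time`, `capture_time`, `circuit`, a 30-minute window); the real integration would involve far more fields and care with clock alignment across systems.

```python
import pandas as pd

# Hypothetical OMS outage log and PQ capture index; real schemas will differ.
outages = pd.DataFrame({
    "outage_time": pd.to_datetime(["2023-06-01 14:03", "2023-06-07 02:45"]),
    "circuit": ["C12", "C12"],
})
pq_events = pd.DataFrame({
    "capture_time": pd.to_datetime(
        ["2023-06-01 13:58", "2023-06-03 09:00", "2023-06-07 02:40"]),
    "circuit": ["C12", "C12", "C12"],
})

# Attach to each PQ capture the next outage on the same circuit, if one
# occurred within a 30-minute window; those captures become "incipient"
# (fault/incident) training examples, the rest "normal".
labeled = pd.merge_asof(
    pq_events.sort_values("capture_time"),
    outages.sort_values("outage_time"),
    left_on="capture_time", right_on="outage_time",
    by="circuit", direction="forward",
    tolerance=pd.Timedelta("30min"),
)
labeled["label"] = labeled["outage_time"].notna().map(
    {True: "incipient", False: "normal"})
print(labeled["label"].tolist())  # → ['incipient', 'normal', 'incipient']
```

The "reverse engineering" the article describes then runs on the waveforms behind the incipient-labeled captures, mining them for the signatures that precede failure.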
In addition, the team sought to leverage existing installed hardware, such as the PQ meter — rather than simply purchasing new hardware alternatives — to reduce the overall cost of the solution. The team also considered additional data samples and asset failure incidents to determine whether they could be exploited to enable the program to predict other types of asset failures.
Identification of high-frequency failure data for more asset types has resulted in additional algorithms (yet to be fine-tuned) for several other critical assets, such as oil switches, load break elbows and transformers. These algorithms will provide predictive insights and enable SDG&E to identify other types of critical asset outages. Processing additional high-frequency sub-cycle data and cataloging predictive signatures matched to asset failures will expand the ability to identify and predict additional asset failures.
Extending The Analytics
SDG&E is seeing success with how this ML solution can integrate and analyze data from multiple sources as well as identify asset locations more definitively. PA Consulting is looking to bring the iPredict solution to other U.S. utilities.
Applying the discoveries of this team, utilities can develop more robust data integration architecture solutions to increase their overall visibility of specific asset health. The solution can help utilities to identify probable failures of cables, elbows, overhead and underground splices, and junctions weeks in advance.
Other utilities can build on SDG&E’s process by identifying the right data sources required by the algorithms as well as streamlining the data collection and integration process. This innovation can empower crews to identify near-term asset failures and schedule repairs at the best time to minimize customer impact, maximize public and employee safety, and reduce environmental harm.
Gregg Edeson is the utility reliability lead at PA Consulting. Tom Bialek is the chief engineer at San Diego Gas & Electric Co (SDG&E).