
Can we trust decisions made by artificial intelligence?

Photo credit: Unsplash

Machine learning (ML) is something we have all heard about, and ML research has applications in many domains, including power systems. Yet despite the impressive results these tools achieve in research settings, they have not been widely adopted in practice. Why?

One of the main reasons is the lack of interpretability of ML models: we do not understand how they reach their decisions. This erodes trust from the user's point of view, and users typically prefer to stick with proven tools they understand.

A solution from XAI

Researchers, including Juri Belikov, an assistant professor at the Department of Software Science at Tallinn University of Technology, now offer a solution to this problem.

Their recent paper “Measuring Explainability and Trustworthiness of Power Quality Disturbances Classifiers Using XAI – Explainable Artificial Intelligence”, published in IEEE Transactions on Industrial Informatics, shows one example of how XAI can be useful in power systems, specifically in monitoring Power Quality Disturbances (PQDs). According to the PQD classification, the main types of disturbances are voltage drops, rises, distortion, flicker, frequency changes, and interruptions.
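To make these disturbance classes concrete, here is a minimal sketch (not taken from the paper) that generates synthetic examples of a few of them; the sampling rate, window length, and disturbance parameters are arbitrary illustrative choices.

```python
import numpy as np

FS = 3_200          # sampling rate (Hz): 64 samples per 50 Hz cycle (assumed)
F0 = 50.0           # nominal grid frequency (Hz)
t = np.arange(0, 0.2, 1 / FS)   # ten fundamental cycles

def normal(t):
    """Undisturbed sinusoid at nominal amplitude."""
    return np.sin(2 * np.pi * F0 * t)

def sag(t, depth=0.5, start=0.06, stop=0.14):
    """Voltage drop: amplitude reduced inside a time window."""
    amp = np.where((t >= start) & (t < stop), 1.0 - depth, 1.0)
    return amp * np.sin(2 * np.pi * F0 * t)

def harmonics(t, h3=0.15, h5=0.10):
    """Distortion: 3rd and 5th harmonics added to the fundamental."""
    return (np.sin(2 * np.pi * F0 * t)
            + h3 * np.sin(2 * np.pi * 3 * F0 * t)
            + h5 * np.sin(2 * np.pi * 5 * F0 * t))

signals = {"normal": normal(t), "sag": sag(t), "harmonics": harmonics(t)}
```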

Power quality is a measure of the degree to which voltage and current waveforms comply with established specifications. Common power quality measures include harmonic distortion, variations in peak or RMS voltage, spikes and sags in voltage and current, and variations in frequency.
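Two of these measures are straightforward to express in code. The sketch below computes the RMS value and a simple FFT-based estimate of total harmonic distortion (THD); it assumes the sampled record contains a whole number of fundamental cycles, so each harmonic lands exactly on an FFT bin.

```python
import numpy as np

def rms(x):
    """Root-mean-square value of a sampled waveform."""
    return np.sqrt(np.mean(x ** 2))

def thd(x, fs, f0=50.0, n_harmonics=10):
    """Total harmonic distortion: ratio of harmonic magnitude to the
    fundamental, estimated from the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)

    def mag_at(f):
        # Magnitude of the FFT bin closest to frequency f.
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag_at(f0)
    harmonic = np.sqrt(sum(mag_at(k * f0) ** 2
                           for k in range(2, n_harmonics + 1)))
    return harmonic / fundamental
```

Applied to the synthetic distorted signal from the earlier sketch, `thd(signals["harmonics"], FS)` returns roughly 0.18, consistent with the injected 15% and 10% harmonic amplitudes.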

The term XAI was coined by the Defense Advanced Research Projects Agency (DARPA) to group together techniques for explainability. These techniques make an ML model interpretable, allowing the user to understand how it reached its decisions. The concept is the subject of active research in critical systems where trust is essential, such as defence, medicine, and autonomous driving.
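As a concrete illustration of one such technique (not necessarily one used in the paper), the sketch below applies scikit-learn's model-agnostic permutation importance to a toy classifier: features whose shuffling hurts test accuracy the most are the ones the model actually relies on. The dataset is a hypothetical stand-in.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: 3 informative features followed by 3 noise features.
n = 1000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure
# how much the test score drops -- a simple model-agnostic explanation.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {imp:.3f}")
```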

The new method has great potential

Juri Belikov, co-author of the article, shows that the presented XAI method can be applied to a variety of problems in the field of power systems, with a focus on fault and abnormal event classification tasks.

“The most interesting part of this method is that, unlike many other classification tasks, we were able to clearly define what a correct explanation is for power quality disturbances. By doing so, we were able to overcome one of the biggest difficulties in XAI: how to measure and evaluate the explanation for the classifiers’ decisions. Accordingly, we developed an evaluation process to measure an explainability score for each XAI technique and classifier,” states Belikov.

“We believe that the method and the results provided in this paper have significant potential to explain the decisions of machine learning algorithms, which are increasingly being explored nowadays in the power system community,” he adds.
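The paper's actual metric is not reproduced here, but the idea of scoring an explanation against a known ground truth can be sketched. If the disturbance's location in the signal is taken as the correct explanation, one plausible overlap-style score is the fraction of the attribution mass that falls inside that window; all names and numbers below are hypothetical.

```python
import numpy as np

def explainability_score(attribution, true_window):
    """Fraction of total attribution mass that falls inside the samples
    known to contain the disturbance (a simple overlap-style score)."""
    attribution = np.abs(attribution)
    total = attribution.sum()
    if total == 0:
        return 0.0
    return attribution[true_window].sum() / total

# Toy example: attributions over 640 samples; the sag occupies 192:448.
rng = np.random.default_rng(1)
attr = rng.random(640) * 0.1
attr[192:448] += 1.0          # a "good" explanation concentrates here
mask = np.zeros(640, dtype=bool)
mask[192:448] = True
print(f"score = {explainability_score(attr, mask):.2f}")  # close to 1.0
```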

In the past decade, power quality monitoring tools have become a necessity. One reason for their popularity is the continuing integration of nonlinear generators and loads into power grids; most of these are based on power electronics technology and may inject high-order voltage and current harmonics into the grid.

In addition to Juri Belikov, the research group included Ram Machlev, Michael Perl, Kfir Levy, and Yoash Levron from the Technion – Israel Institute of Technology.

The article was originally published by Tallinn University of Technology.

