Transmission Failure Prediction Using AI and Structural Modeling Informed by Distribution Outages

Understanding and quantifying the impact of severe weather events on the electric transmission and distribution system is crucial for ensuring its resilience as climate change increases the frequency and intensity of extreme weather events. While weather impact models for the distribution system have been widely developed over the past decade, transmission system impact models have lagged behind because of the scarcity of data. This study demonstrates a weather impact model for predicting the probability of failure of transmission lines. It builds upon a recently developed model and focuses on reducing model bias through multi-model integration, feature engineering, and the development of a storm index that leverages distribution system data to aid the prediction of transmission risk. We explored three methods for integrating machine learning with mechanistic models: (a) creating a linear combination of the outputs of the two modeling approaches, (b) including fragility curves as additional inputs to machine learning models, and (c) developing a new machine learning model that uses the outputs of the weather-based machine learning model, fragility curve estimates, and wind data to make new predictions. Moreover, because historical failures in transmission networks are scarce, we developed a storm index that leverages a dataset of distribution outages to learn about storm behavior and improve model skill. In the current version of the model, we reduced by a factor of 10 the overestimation in the sum of predicted transmission line failure probabilities that was present in the previously published model, lowering model bias from 3352% to 14.46–15.43%. The model with the integrated approach and storm index substantially improves the estimation of the probability of failure of transmission lines and their ranking by risk level: it captures 60% of the failures within the top 22.5% of the ranked power lines, compared to 34.9% for the previous model. With an estimate of the probability of failure of transmission lines ahead of storms, power system planning and maintenance engineers will have critical information to make informed decisions, create better mitigation plans, and minimize power disruptions. In the long term, this model can assist with resilience investments because it highlights the areas of the system most susceptible to damage.
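As an illustration of integration method (a), a linear combination of the two models' outputs might look like the following minimal sketch; the weight, function name, and example values are assumptions for demonstration, not the authors' code.

```python
# A minimal sketch of integration method (a): blending the machine-learning
# and fragility-curve failure-probability estimates per transmission line.
import numpy as np

def combined_failure_probability(p_ml, p_fragility, w=0.5):
    """Blend the two model outputs; w would be tuned on validation storms."""
    p = w * np.asarray(p_ml) + (1.0 - w) * np.asarray(p_fragility)
    return np.clip(p, 0.0, 1.0)          # keep results in valid probability range

# Hypothetical per-line probabilities from the two models for three lines.
p_ml = [0.02, 0.10, 0.35]                # weather-based machine-learning model
p_frag = [0.05, 0.08, 0.50]              # wind-based fragility-curve estimate
print(combined_failure_probability(p_ml, p_frag, w=0.6))
```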

View this article on IEEE Xplore


BERT-NAR-BERT: A Non-Autoregressive Pre-Trained Sequence-to-Sequence Model Leveraging BERT Checkpoints

We introduce BERT-NAR-BERT (BnB), a pre-trained non-autoregressive sequence-to-sequence model that employs BERT as the backbone for both the encoder and the decoder for natural language understanding and generation tasks. During pre-training and fine-tuning with BERT-NAR-BERT, we address two challenging aspects of non-autoregressive generation by adopting length classification and connectionist temporal classification (CTC) to control the output length of BnB. We evaluate it on the standard natural language understanding benchmark GLUE and three generation tasks: abstractive summarization, question generation, and machine translation. Our results show substantial improvements in inference speed (on average 10x faster) with only a small loss in output quality compared to our direct autoregressive baseline, a BERT2BERT model. Our code is publicly released on GitHub ( https://github.com/aistairc/BERT-NAR-BERT ) under the Apache 2.0 License.
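The CTC-based length control could, in principle, be set up as in the following minimal PyTorch sketch; the shapes, blank token, and stand-in decoder output are illustrative assumptions rather than the released BnB implementation.

```python
# A minimal sketch of using connectionist temporal classification (CTC) so a
# non-autoregressive decoder with a fixed output length can emit shorter
# sequences, with CTC marginalizing over all valid alignments.
import torch
import torch.nn as nn

vocab_size, blank_id = 32000, 0
T, N = 48, 4                                                 # decoder length, batch size

log_probs = torch.randn(T, N, vocab_size).log_softmax(-1)   # stand-in for decoder output
targets = torch.randint(1, vocab_size, (N, 20))             # reference token ids (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```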

View this article on IEEE Xplore


EXplainable Artificial Intelligence (XAI)—From Theory to Methods and Applications

Intelligent applications supported by Machine Learning have achieved remarkable performance for a wide range of tasks in many domains. However, understanding why a trained algorithm makes a particular decision remains problematic. Given the growing interest in the application of learning-based models, concerns arise when they are deployed in sensitive environments that may impact users' lives. The complex nature of these models' decision mechanisms makes them so-called "black boxes," whose logic behind automated decision-making is not trivial for humans to understand. Furthermore, the reasoning that leads a model to a specific prediction can be more important than performance metrics, which introduces a trade-off between interpretability and model accuracy. Explaining intelligent computer decisions can be regarded as a way to justify their reliability and establish trust. In this sense, explanations are critical tools for verifying predictions and discovering errors and biases previously hidden within the models' complex structures, opening up vast possibilities for more responsible applications. In this review, we provide the theoretical foundations of Explainable Artificial Intelligence (XAI), clarifying diffuse definitions and identifying research objectives, challenges, and future research lines related to turning opaque machine learning outputs into more transparent decisions. We also present a careful overview of state-of-the-art explainability approaches, with a particular analysis of methods based on feature importance, such as the well-known LIME and SHAP. Finally, we highlight practical applications in which XAI has been used successfully.
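As a concrete illustration of the feature-importance explainers the review analyzes, a minimal SHAP sketch for a tree ensemble might look like the following; the dataset and model choice are assumptions for demonstration only.

```python
# A minimal sketch of post-hoc feature-importance explanation with SHAP,
# assuming scikit-learn and the `shap` package are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)               # explainer specialized for tree ensembles
shap_values = explainer.shap_values(X.iloc[:100])   # per-feature contribution to each prediction
shap.summary_plot(shap_values, X.iloc[:100])        # global view of which features drive outputs
```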

View this article on IEEE Xplore


AMS Circuit Design Optimization Technique Based on ANN Regression Model With VAE Structure

Designing an advanced analog mixed-signal (AMS) circuit that meets the required performance metrics while operating robustly under process-voltage-temperature (PVT) variations is not simple. Even commercial products demand stringent specifications while maintaining the system's performance. The main objectives of this study are to increase the efficiency of the design optimization process by configuring the design process in multiple regression modeling stages, to characterize our target circuit into a regression model that includes PVT variations, and to enable a search for co-optimum design points while simultaneously checking performance sensitivity. We used an artificial neural network (ANN) to develop a regression model and divided the ANN modeling process into coarse and fine simulation steps. In addition, we applied a variational autoencoder (VAE) structure to the ANN model to reduce the training error caused by insufficient input samples. With the proposed algorithm, the AMS circuit designer can quickly search for the co-optimum point, which yields the best performance with the least sensitive operation, because the design process uses a regression model instead of launching heavy SPICE simulations. In this study, a voltage-controlled oscillator (VCO) is selected to validate the proposed algorithm. Under various design conditions (CMOS 180 nm, 65 nm, and 45 nm processes), we run the proposed design flow to obtain the best performance score, evaluated by a figure-of-merit (FoM). As a result, the proposed regression model-based design flow achieves twice the accuracy of the conventional single-step design flow.
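A minimal sketch of what an ANN regression model with a VAE-style structure could look like is shown below, assuming PyTorch; the layer sizes, latent dimension, and loss weighting are illustrative, not the paper's configuration.

```python
# A minimal sketch: design parameters are encoded into a latent distribution,
# and performance metrics (e.g., a VCO FoM) are regressed from sampled latents.
import torch
import torch.nn as nn

class VAERegressor(nn.Module):
    def __init__(self, n_design, n_latent=8, n_perf=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_design, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.reg = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_perf))          # regression head

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.reg(z), mu, logvar

def loss_fn(pred, target, mu, logvar, beta=1e-3):
    mse = nn.functional.mse_loss(pred, target)                   # regression error
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + beta * kld   # KL term regularizes the model when samples are scarce
```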

View this article on IEEE Xplore


Software Fault-Proneness Analysis based on Composite Developer-Module Networks

Existing software fault-proneness analysis and prediction models can be categorized into software-metrics and visualized approaches. However, studies of software metrics rely solely on quantified data, while visualized approaches fail to reflect the human aspect, which has proven to be a main cause of failures in various domains. In this paper, we propose a new analysis model based on an improved software network called the Composite Developer-Module Network. The network links developers to software modules and software modules to one another, reflecting the characteristics of developers and the interactions between them. After the networks of the studied projects are built, several sub-graphs are derived from them; analyzing the structures of these sub-graphs reveals which are more fault-prone and further determines whether the software development is in a bad structure, thus predicting fault-proneness. Our research shows not only that the different sub-structures are a factor in fault-proneness, but also that the complexity of a sub-structure can affect the production of bugs.
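A minimal sketch of building such a composite network follows, assuming the networkx library; the node names and the simple fault-proneness signal are hypothetical illustrations, not the paper's exact model.

```python
# A minimal sketch of a composite developer-module network: developer-to-module
# contribution edges plus module-to-module dependency edges in one graph.
import networkx as nx

G = nx.Graph()
# Developer-to-module edges (who changed what).
G.add_edges_from([("dev_a", "mod_core"), ("dev_a", "mod_ui"),
                  ("dev_b", "mod_core"), ("dev_c", "mod_db")])
# Module-to-module edges (structural dependencies).
G.add_edges_from([("mod_core", "mod_ui"), ("mod_core", "mod_db")])

# One simple illustrative signal: modules touched by many developers inside
# dense sub-structures are candidates for higher fault-proneness.
for m in [n for n in G if n.startswith("mod_")]:
    devs = [n for n in G.neighbors(m) if n.startswith("dev_")]
    print(m, "developers:", len(devs), "clustering:", round(nx.clustering(G, m), 2))
```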

*Published in the IEEE Reliability Society Section within IEEE Access.

View this article on IEEE Xplore


Dynamic Network Slice Scaling Assisted by Attention-Based Prediction in 5G Core Network

Network slicing is a key technology in fifth-generation (5G) networks that allows network operators to create multiple logical networks over a shared physical infrastructure to meet the requirements of diverse use cases. Among the core functions needed to implement network slicing, resource management and scaling are difficult challenges. Network operators must ensure the Service Level Agreement (SLA) requirements for latency, bandwidth, resources, etc., for each network slice while utilizing the limited resources efficiently, i.e., through optimal resource assignment and dynamic resource scaling for each network slice. Existing resource scaling approaches can be classified as reactive or proactive. The former makes a resource scaling decision when the resource usage of virtual network functions (VNFs) exceeds a predefined threshold; the latter forecasts the future resource usage of VNFs in network slices using classical statistical models or deep learning models. However, both involve a trade-off between assurance and efficiency. For instance, a lower threshold in the reactive approach or a larger prediction margin in the proactive approach can meet the requirements more reliably, but may cause unnecessary resource wastage. To overcome this trade-off, we first propose a novel and efficient proactive resource forecasting algorithm. The proposed algorithm introduces an attention-based encoder-decoder model for multivariate time series forecasting to achieve high short-term and long-term prediction accuracy. It helps network slices scale up and down effectively and reduces the costs of SLA violations and resource overprovisioning. Using the attention mechanism, the model attends to every hidden state of the sequential input at every time step to select the most important time steps affecting the prediction results. We also design an automated resource configuration mechanism responsible for monitoring resources and automatically adding or removing VNF instances.
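A minimal sketch of an attention-based encoder-decoder forecaster of this kind follows, assuming PyTorch; the architecture details, dimensions, and names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of attention-based encoder-decoder forecasting for
# multivariate time series (e.g., per-slice VNF CPU/memory usage).
import torch
import torch.nn as nn

class AttnForecaster(nn.Module):
    def __init__(self, n_features, hidden=64, horizon=12):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden * 2, 1)          # scores decoder state vs. each encoder step
        self.decoder = nn.GRUCell(n_features, hidden)
        self.out = nn.Linear(hidden * 2, n_features)  # predicts the next multivariate sample
        self.horizon = horizon

    def forward(self, x):                             # x: (batch, seq_len, n_features)
        enc_out, h = self.encoder(x)                  # enc_out: (batch, seq_len, hidden)
        h = h.squeeze(0)
        step = x[:, -1, :]                            # last observed sample seeds decoding
        preds = []
        for _ in range(self.horizon):
            h = self.decoder(step, h)
            # attend over every encoder hidden state at each decoding step
            scores = self.attn(torch.cat(
                [enc_out, h.unsqueeze(1).expand_as(enc_out)], dim=-1))
            weights = torch.softmax(scores, dim=1)    # importance of each past time step
            context = (weights * enc_out).sum(dim=1)  # weighted summary of the history
            step = self.out(torch.cat([context, h], dim=-1))
            preds.append(step)
        return torch.stack(preds, dim=1)              # (batch, horizon, n_features)
```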

View this article on IEEE Xplore


Dengue Epidemics Prediction: A Survey of the State-of-the-Art based on Data Science Processes

Dengue infection is a mosquito-borne disease caused by dengue viruses, which are carried by several species of mosquito of the genus Aedes, principally Ae. aegypti. Dengue outbreaks are endemic in tropical and sub-tropical regions of the world, mainly in urban and sub-urban areas, and dengue is among the top ten diseases causing the most deaths worldwide. According to the World Health Organization (WHO), dengue infection has increased 30-fold globally over the past five decades. About 50 to 100 million new infections occur annually in more than 80 countries. Many researchers are working on measures to prevent and control the spread. One avenue of research is collaboration between computer science and epidemiology researchers in developing methods of predicting potential outbreaks of dengue infection. An important research objective is to develop models that enable, or enhance, forecasting of dengue outbreaks, giving medical professionals the opportunity to develop plans for handling an outbreak well in advance. Researchers have been gathering and analyzing data to better identify the relational factors driving the spread of the disease, and developing a variety of predictive modelling methods using statistical and mathematical analysis and Machine Learning. In this substantial review of the literature on the state of the art of research over the past decades, we identified six main issues to be explored and analyzed: (1) available data sources, (2) data preparation techniques, (3) data representations, (4) forecasting models and methods, (5) approaches for evaluating dengue forecasting models, and (6) future challenges and possibilities in forecasting modelling of dengue outbreaks. Our comprehensive exploration of these issues provides a valuable foundation for new researchers in this important area of public health research and epidemiology.
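As a purely illustrative example of one common modelling approach covered by such surveys, a Poisson regression of weekly case counts on lagged climate features might look like the sketch below; the data, features, and coefficients are invented for demonstration.

```python
# A minimal, illustrative sketch of count-based dengue forecasting:
# regress weekly case counts on lagged climate features with a Poisson model.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_weeks = 200
X = np.column_stack([
    rng.normal(28, 2, n_weeks),    # lagged mean temperature (°C)
    rng.normal(150, 40, n_weeks),  # lagged rainfall (mm)
    rng.normal(75, 8, n_weeks),    # lagged relative humidity (%)
])
# Synthetic case counts with a plausible climate dependence.
y = rng.poisson(lam=np.exp(0.05 * X[:, 0] + 0.004 * X[:, 1] - 1.0))

model = PoissonRegressor(alpha=1e-3).fit(X[:-4], y[:-4])  # hold out the last 4 weeks
print(model.predict(X[-4:]))       # forecast case counts for the held-out weeks
```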

View this article on IEEE Xplore