Transmission Failure Prediction Using AI and Structural Modeling Informed by Distribution Outages
Understanding and quantifying the impact of severe weather events on the electric transmission and distribution system is crucial for ensuring its resilience as extreme weather events grow in frequency and intensity under climate change. While weather impact models for the distribution system have been widely developed over the past decade, transmission system impact models have lagged behind because of the scarcity of data. This study demonstrates a weather impact model for predicting the probability of failure of transmission lines. It builds upon a recently developed model and focuses on reducing model bias through multi-model integration, feature engineering, and the development of a storm index that leverages distribution system data to aid the prediction of transmission risk. We explored three methods for integrating machine learning with mechanistic models: (a) creating a linear combination of the outputs of the two modeling approaches, (b) including fragility curves as additional inputs to the machine learning models, and (c) developing a new machine learning model that uses the outputs of the weather-based machine learning model, fragility curve estimates, and wind data to make new predictions. Moreover, because transmission networks record few historical failures, a storm index was developed that leverages a dataset of distribution outages to learn about storm behavior and improve model skill. In the current version of the model, we reduced by a factor of 10 the overestimation in the sum of predicted transmission line probabilities of failure that was present in the previously published model, lowering the model bias from 3352% to 14.46–15.43%. The model with the integrated approach and storm index demonstrates substantial improvements in estimating the probability of failure of transmission lines and in ranking them by risk level: the improved model captures 60% of the failures within the top 22.5% of the ranked power lines, compared with 34.9% for the previous model. With an estimate of the probability of failure of transmission lines ahead of storms, power system planning and maintenance engineers will have critical information to make informed decisions, create better mitigation plans, and minimize power disruptions. In the long term, this model can also guide resilience investments, as it highlights the areas of the system most susceptible to damage.
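To make integration method (a) concrete, here is a minimal sketch that blends the weather-based ML prediction with a wind-based fragility-curve estimate through a single mixing weight fitted on held-out storms; the lognormal fragility form, its parameters, and the Brier-score fitting criterion are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import lognorm

def fragility_pof(wind_speed, median=40.0, beta=0.25):
    """Lognormal fragility curve: P(failure) at a given gust speed (m/s).
    The median capacity and dispersion here are placeholder values."""
    return lognorm.cdf(wind_speed, s=beta, scale=median)

def blended_pof(ml_prob, wind_speed, alpha):
    """Method (a): linear combination of the ML model's probability of
    failure and the mechanistic fragility-curve estimate."""
    return alpha * ml_prob + (1.0 - alpha) * fragility_pof(wind_speed)

def fit_alpha(ml_prob, wind_speed, failed):
    """Choose the mixing weight that minimizes the Brier score
    (mean squared error of probabilities) on held-out storms."""
    alphas = np.linspace(0.0, 1.0, 101)
    scores = [np.mean((blended_pof(ml_prob, wind_speed, a) - failed) ** 2)
              for a in alphas]
    return alphas[int(np.argmin(scores))]
```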
EXplainable Artificial Intelligence (XAI)—From Theory to Methods and Applications
Intelligent applications supported by machine learning have achieved remarkable performance for a wide range of tasks in many domains. However, understanding why a trained algorithm makes a particular decision remains problematic. Given the growing interest in learning-based models, concerns arise when they are deployed in sensitive environments that may impact users’ lives. The complex nature of these models’ decision mechanisms makes them so-called “black boxes,” in which it is not trivial for humans to understand the logic behind automated decisions. Furthermore, the reasoning that leads a model to a specific prediction can matter more than performance metrics, which introduces a trade-off between interpretability and model accuracy. Explaining intelligent computer decisions can be regarded as a way to justify their reliability and establish trust. In this sense, explanations are critical tools for verifying predictions and uncovering errors and biases previously hidden within the models’ complex structures, opening up vast possibilities for more responsible applications. In this review, we provide the theoretical foundations of Explainable Artificial Intelligence (XAI), clarifying diffuse definitions and identifying research objectives, challenges, and future research directions related to turning opaque machine learning outputs into more transparent decisions. We also present a careful overview of state-of-the-art explainability approaches, with a particular analysis of methods based on feature importance, such as the well-known LIME and SHAP. Finally, we highlight practical applications where XAI has been used successfully.
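As a concrete taste of the feature-importance methods the review surveys, the following sketch applies SHAP's TreeExplainer to a generic scikit-learn classifier; the dataset and model are stand-ins chosen for illustration, not examples drawn from the review itself.

```python
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

# Train an opaque tree-ensemble "black box" on a standard dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view: mean absolute Shapley value per feature.
shap.summary_plot(shap_values, X.iloc[:100])
```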
Bending Strength Prediction of the Cu-Sn Alloy Through a Visual Quantization Model Integrated With Microstructure Characterization and Machine Learning
The multimodal properties of the grinding wheel matrix significantly impact grinding performance, yet research on the interactions among these properties remains notably limited. To investigate the latent relationship between the microstructure and the bending strength of the bronze matrix, a visual quantization model based on the microstructure of Cu-Sn alloy samples was established. The proposed model integrates an image segmentation network module, a quantitative characterization module, and a multivariate prediction module. The enhancement of the segmentation network is based on the synergistic combination of full-scale feature fusion with an attention mechanism. Quantitative characterization parameters for metallographic microstructure features are proposed, and the most prominent intercorrelations among these parameters are studied from multiple dimensions. The results show that the modified image segmentation network outperforms U-Net, as evidenced by a 3% increase in Mean Intersection over Union (MIoU). The optimized output strategy (ηDd-PSO-SVR) improves the model’s prediction accuracy for material bending strength (MSE = 23.558, R² = 0.934). Finally, this work shows that microscopic information adapts well to machine learning models for predicting bending strength.
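Because the segmentation improvement is reported as a gain in Mean Intersection over Union, a minimal reference implementation of that metric may be useful; this is the standard definition, not code from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union between integer-labeled predicted and
    ground-truth masks; classes absent from both masks are skipped."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class c appears in neither mask
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```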
Effect of Data Characteristics Inconsistency on Medium and Long-Term Runoff Forecasting by Machine Learning
In medium and long-term runoff forecasting, machine learning faces problems such as high learning costs, limited computing resources, and difficulty in satisfying statistical assumptions about the data in some regions, which makes it difficult to popularize in the hydrology industry. When only a few data are available, analyzing the consistency of data characteristics is one way to address the problem. This paper analyzes the statistical hypotheses of machine learning alongside runoff data characteristics such as periodicity and mutation. Targeting the effect of data characteristic inconsistency on three representative machine learning models (multiple linear regression, random forest, and backpropagation neural network), a simple correction/improvement method suitable for engineering is proposed. The model results were verified in the Danjiangkou area, China. The results show that the errors of the three models share the distribution of the periodic characteristics of the runoff, and that corrections/improvements based on periodicity and mutation characteristics can improve the forecasting accuracy of all three models. The backpropagation neural network is the most sensitive to the consistency of data characteristics.
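As one plausible reading of a periodicity-based correction, the sketch below estimates a per-month mean forecast error on the training data and subtracts that seasonal bias from new forecasts; the monthly grouping and the function name are our assumptions, since the exact procedure is not reproduced here.

```python
import numpy as np
import pandas as pd

def periodic_bias_correction(train_pred, train_obs, train_months,
                             test_pred, test_months):
    """Estimate the mean forecast error separately for each calendar month
    on the training set, then subtract that seasonal bias from new forecasts.
    Months with no training data receive no correction."""
    resid = pd.Series(np.asarray(train_pred) - np.asarray(train_obs))
    monthly_bias = resid.groupby(pd.Series(train_months).values).mean()
    correction = pd.Series(test_months).map(monthly_bias).fillna(0.0).to_numpy()
    return np.asarray(test_pred) - correction
```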
Efficiency Optimization Design That Considers Control of Interior Permanent Magnet Synchronous Motors Based on Machine Learning for Automotive Application
Interior permanent magnet synchronous motors are widely used as traction motors in environmentally friendly vehicles. These motors have a high degree of design freedom, and time-consuming finite element analysis is required to analyze their characteristics, which results in a long design period. Here, we propose a method for fast efficiency-maximization design that uses a machine-learning-based surrogate model. The surrogate model predicts motor parameters and iron loss with the same accuracy as finite element analysis but in a much shorter time. Furthermore, using the current and speed conditions in addition to geometry information as inputs to the surrogate model enables design optimization that considers motor control. The proposed method completed, in a few hours, a multi-objective multi-constraint optimization over multi-dimensional geometric parameters that would be prohibitively time-consuming with finite element analysis. The proposed shapes reduced losses under a vehicle test cycle compared with the initial shape, and the method was applied to motors with three rotor topologies to verify its generality.
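The following sketch shows the general shape of such a surrogate: a regressor trained on sampled (geometry, current, speed) → iron-loss pairs that stands in for finite element analysis inside an optimization loop. The feature layout, the synthetic data, and the gradient-boosting choice are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns (all normalized, hypothetical): magnet width, magnet angle,
# bridge thickness, i_d, i_q, speed. The response stands in for
# FEA-computed iron loss.
X = rng.uniform(size=(2000, 6))
y = 50 * X[:, 0] + 30 * X[:, 5] ** 2 + 10 * X[:, 3] * X[:, 4] \
    + rng.normal(0, 1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_tr, y_tr)
# Once trained, each surrogate evaluation costs microseconds instead of
# a full FEA run, making multi-objective optimization tractable.
print("R^2 against held-out FEA samples:", surrogate.score(X_te, y_te))
```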
Published in the IEEE Vehicular Technology Society Section
An Intelligent IoT Sensing System for Rail Vehicle Running States Based on TinyML
Real-time identification of the running state is a key technology for smart rail vehicles. However, accurately sensing the complex running states of a rail vehicle in real time on an Internet-of-Things (IoT) edge device is challenging. Traditional systems usually upload large amounts of real-time data from the vehicle to the cloud for identification, which is laborious and inefficient. In this paper, an intelligent identification method for the rail vehicle running state is proposed based on Tiny Machine Learning (TinyML) technology, and a small, low-energy IoT system is developed. The system uses a Micro-Electro-Mechanical System (MEMS) sensor to collect acceleration data for machine learning training. A neural network for recognizing the running state of rail vehicles is built and trained as a running-state classification model. The trained model is deployed to the IoT edge device on the vehicle side, and an offset time window method is used for real-time state sensing. In addition, the sensing results are uploaded to the IoT server for visualization. Experiments on a subway vehicle showed that the system could identify six complex running states in real time with over 99% accuracy using only one IoT microcontroller. The model trained on three axes converges faster than the model trained on one. Recognition accuracy remained above 98% under different installation positions on the rail vehicle and above 95% under the zero-drift phenomenon of the MEMS acceleration sensor. The presented method and system can also be extended to edge-aware applications for equipment such as automobiles and ships.
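One plausible reading of the offset time window method is sketched below: overlapping windows that advance by a small offset, so a new state estimate is produced well before a full window of fresh samples has accumulated. The window and offset sizes, and the `model.predict` interface, are our assumptions.

```python
import numpy as np

def offset_windows(accel, window=256, offset=64):
    """Slice a 3-axis acceleration stream (shape [n, 3]) into overlapping
    windows whose starts advance by `offset` samples, so classifications
    are issued 4x more often than with non-overlapping windows."""
    starts = range(0, len(accel) - window + 1, offset)
    return np.stack([accel[s:s + window] for s in starts])

def classify_stream(model, accel, window=256, offset=64):
    """Run a trained running-state classifier on each offset window."""
    batch = offset_windows(accel, window, offset)
    return model.predict(batch.reshape(len(batch), -1)).tolist()
```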
Code Generation Using Machine Learning: A Systematic Review
Recently, machine learning (ML) methods have been used to create powerful language models for a broad range of natural language processing tasks. An important subset of this field is the generation of programming-language code for automatic software development. This review provides a broad and detailed overview of studies of code generation using ML. We selected 37 publications indexed in the arXiv and IEEE Xplore databases that train ML models on programming-language data to generate code. The three paradigms of code generation we identified in these studies are description-to-code, code-to-description, and code-to-code; the most popular applications within these paradigms are code generation from natural language descriptions, documentation generation, and automatic program repair, respectively. The most frequently used ML models in these studies include recurrent neural networks, transformers, and convolutional neural networks; other neural network architectures, as well as non-neural techniques, were also observed. We summarize the applications, models, datasets, results, limitations, and future work of the 37 publications, and we discuss topics general to the reviewed literature, including comparisons of model types and tokenizers, the volume and quality of the data used, and methods for evaluating synthesized code. Furthermore, we provide three suggestions for future work on code generation using ML.
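For one family of evaluation methods the review discusses, functional correctness, here is a minimal test-based harness; this specific harness is our illustration only, and it omits the sandboxing and time limits a real evaluation would require.

```python
def passes_tests(code, tests, func_name):
    """Execute generated code in a fresh namespace and check it against
    (args, expected) pairs. A real harness would sandbox and time-limit
    the execution rather than trusting the generated code."""
    ns = {}
    try:
        exec(code, ns)
        func = ns[func_name]
        return all(func(*args) == expected for args, expected in tests)
    except Exception:
        return False

generated = "def add(a, b):\n    return a + b\n"
print(passes_tests(generated, [((1, 2), 3), ((0, 0), 0)], "add"))  # True
```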
Combining Citation Network Information and Text Similarity for Research Article Recommender Systems
Researchers often need to gather a comprehensive set of papers on a focused topic, but this is difficult and time-consuming with existing search methods. For example, keyword searching struggles with synonyms and words that have multiple meanings. While some automated research-paper recommender systems exist, they typically depend on either a researcher’s entire library or a single paper, resulting in a search that is either quite broad or quite narrow. With these issues in mind, we built a new research-paper recommender system that uses both citation information and the textual similarity of abstracts to provide a highly focused set of relevant results. The input to the system is a set of one or more related papers, and the system searches for papers that are closely related to the entire set. This framework helps researchers gather papers closely related to a particular topic of interest and allows control over which cross-section of the literature is located. We show the effectiveness of the recommender system by using it to recreate the reference lists of review papers, and we show its utility as a general similarity metric between scientific articles by performing unsupervised clustering on sets of articles. We release an implementation, ExCiteSearch (bitbucket.org/mmmontemore/excitesearch), to allow researchers to apply this framework to locate relevant scientific articles.
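A minimal sketch of the two-signal idea follows, assuming TF-IDF cosine similarity for abstracts and Jaccard-normalized reference overlap for the citation side; the 50/50 weighting and these specific choices are our assumptions, not necessarily how ExCiteSearch combines its signals.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def combined_score(abstract_a, abstract_b, refs_a, refs_b, w=0.5):
    """Blend abstract text similarity with citation overlap. In practice
    the TF-IDF vectorizer would be fitted on a large corpus, not two docs."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(
        [abstract_a, abstract_b])
    text_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    a, b = set(refs_a), set(refs_b)
    # Bibliographic coupling, Jaccard-normalized: shared references.
    cite_sim = len(a & b) / len(a | b) if (a | b) else 0.0
    return w * text_sim + (1 - w) * cite_sim
```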
Novel Multi-Center and Threshold Ternary Pattern Based Method for Disease Detection Using Voice
Smart health is one of the most popular and important components of smart cities. It is a relatively new context-aware healthcare paradigm influenced by several fields of expertise, such as medical informatics, communications and electronics, bioengineering, and ethics, to name a few. Smart health improves healthcare by providing services such as patient monitoring and early diagnosis of disease. Artificial neural networks (ANNs), support vector machines (SVMs), and deep learning models, especially convolutional neural networks (CNNs), are the most commonly used machine learning approaches, and they have proven performant in most cases. Voice disorders are spreading rapidly and, despite the development of medical diagnostic systems, are often underestimated. Smart health systems can offer easy and fast support for voice pathology detection, but an algorithm that discriminates between pathological and healthy voices with higher accuracy is needed to obtain a smart and precise mobile health system. The main contribution of this paper is a multiclass pathological voice classification that uses a novel multilevel textural feature extraction with an iterative feature selector. Our approach is a simple and efficient voice-based algorithm built on a multi-center and multi-threshold ternary pattern (MCMTTP). More compact multilevel features are then obtained by sample-based discretization techniques, and Neighborhood Component Analysis (NCA) is applied to select features iteratively. These features are finally integrated with MCMTTP to achieve accurate voice-based disease detection. Experimental results with six classifiers on three diagnostic conditions (frontal resection, cordectomy, and spastic dysphonia) show that the fused features are well suited to describing voice-based disease detection.
Published in the IEEE Electronics Packaging Society Section within IEEE Access
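To illustrate the family of features involved, here is the classic local ternary pattern applied to a 1-D signal, coding each neighbor as +1/0/-1 relative to a thresholded center; the paper's MCMTTP extends this idea with multiple centers and multiple thresholds, so this sketch is not the authors' exact operator.

```python
import numpy as np

def ternary_pattern_1d(signal, center_idx, radius, threshold):
    """Generic local ternary pattern on a 1-D signal: each neighbor within
    `radius` of the center sample is coded +1, 0, or -1 depending on whether
    it exceeds, stays within, or falls below the center +/- threshold."""
    center = signal[center_idx]
    neighbors = np.concatenate([signal[center_idx - radius:center_idx],
                                signal[center_idx + 1:center_idx + radius + 1]])
    codes = np.zeros_like(neighbors, dtype=int)
    codes[neighbors > center + threshold] = 1
    codes[neighbors < center - threshold] = -1
    return codes

# Histograms of such codes, computed over all window positions, yield the
# textural feature vector that is then fed to the classifiers.
```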
Machine Learning Empowered Spectrum Sharing in Intelligent Unmanned Swarm Communication Systems: Challenges, Requirements and Solutions
The unmanned swarm system (USS) is seen as a promising technology that will play an extremely important role in both military and civilian fields, such as military strikes, disaster relief, and transportation. As the “nerve center” of the USS, the unmanned swarm communication system (USCS) provides the information transmission medium needed to ensure system stability and mission implementation. However, challenges caused by multiple tasks, distributed collaboration, high dynamics, ultra-dense deployment, and jamming threats make it hard for the USCS to manage limited spectrum resources. To tackle these problems, this paper introduces machine learning (ML) empowered intelligent spectrum management. First, based on the challenges of spectrum resource management in the USCS, the requirements of spectrum sharing are analyzed from the perspectives of spectrum collaboration and spectrum confrontation. We find that suitable multi-agent collaborative decision making is promising for realizing effective spectrum sharing from both perspectives. We therefore propose a multi-agent learning framework that contains mobile-computing-assisted and distributed structures, provide case studies based on the framework, and discuss future research directions.
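As a toy illustration of multi-agent decision making for spectrum sharing, the sketch below has each agent run stateless Q-learning over channel choices and learn to avoid both its teammates and a randomly jammed channel; all sizes, rewards, and the independent-learner setup are invented for illustration and are not the paper's framework.

```python
import numpy as np

n_agents, n_channels, episodes = 4, 6, 5000
rng = np.random.default_rng(0)
Q = np.zeros((n_agents, n_channels))  # per-agent value of each channel
eps, lr = 0.1, 0.1                    # exploration rate, learning rate

for _ in range(episodes):
    jammed = rng.integers(n_channels)  # one channel jammed at random
    choices = [rng.integers(n_channels) if rng.random() < eps
               else int(np.argmax(Q[i])) for i in range(n_agents)]
    for i, c in enumerate(choices):
        # Reward 1 only if the channel is free of teammates and the jammer.
        collision = choices.count(c) > 1 or c == jammed
        reward = 0.0 if collision else 1.0
        Q[i, c] += lr * (reward - Q[i, c])  # stateless Q-learning update

print("Learned channel preferences:", np.argmax(Q, axis=1))
```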