On the Cyber-Physical Needs of DER-Based Voltage Control/Optimization Algorithms in Active Distribution Network

With the increasing penetration of distributed energy resources (DERs) and the extensive use of information and communications technology (ICT) in decision-making, the mechanisms used to control and optimize transmission and distribution grid voltage are expected to undergo a paradigm shift. Given the introduction of inverter-based DERs with vastly different dynamics, real-world characterization of the cyber-physical system (CPS) in terms of dynamic performance, scalability, robustness, and resiliency under the new control algorithms requires precise algorithmic classification and suitable metrics. It is identified that classical controller definitions, together with three interdisciplinary domains, namely (i) power systems, (ii) optimization, control, and decision-making, and (iii) networking and cyber-security, provide a systematic basis for developing an extended metric for algorithmic performance evaluation while also yielding a taxonomy. Furthermore, the majority of these control algorithms operate on multiple time scales; therefore, algorithmic time decomposition facilitates a new approach to performance analysis. An extended discussion of communication requirements, focused on the architectural subtleties of the algorithms, is expected to identify the real-world deployment challenges of voltage control/optimization algorithms in the presence of cyber vulnerabilities, along with mitigation mechanisms that affect controller performance with DERs. Finally, the detailed discussion provided in this paper identifies the CPS modeling requirements for real-world deployment, specific to voltage control, facilitating the development of a unified test-bed.

View this article on IEEE Xplore

Published in the IEEE Power & Energy Society Section within IEEE Access

An Intelligent IoT Sensing System for Rail Vehicle Running States Based on TinyML

Real-time identification of the running state is a key technology for smart rail vehicles. However, accurately sensing the complex running states of a rail vehicle in real time on an Internet-of-Things (IoT) edge device is challenging. Traditional systems usually upload large amounts of real-time data from the vehicle to the cloud for identification, which is laborious and inefficient. In this paper, an intelligent identification method for rail vehicle running states is proposed based on Tiny Machine Learning (TinyML) technology, and a small, low-energy IoT system is developed. The system uses a Micro-Electro-Mechanical System (MEMS) sensor to collect acceleration data for machine learning training. A neural network model for recognizing the running state of rail vehicles is built and trained by defining a machine-learning running-state classification model. The trained recognition model is deployed to the IoT edge device on the vehicle side, and an offset time window method is utilized for real-time state sensing. In addition, the sensing results are uploaded to the IoT server for visualization. Experiments on a subway vehicle showed that the system could identify six complex running states in real time with over 99% accuracy using only one IoT microcontroller. The model trained on three axes converges faster than the model trained on one. Recognition accuracy remained above 98% under different installation positions on the rail vehicle, and above 95% under the zero-drift phenomenon of the MEMS acceleration sensor. The presented method and system can also be extended to edge-aware applications for equipment such as automobiles and ships.
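The offset time window method described above can be illustrated with a minimal sketch: windows of acceleration samples overlap, with each new window starting a fixed offset after the previous one, so a fresh state estimate is produced every offset step rather than once per full window. The `window_size` and `offset` values and the threshold-based `classify_window` stand-in are illustrative assumptions, not the paper's trained neural network.

```python
import numpy as np

def offset_windows(samples, window_size, offset):
    """Yield overlapping windows of acceleration samples.

    Each window starts `offset` samples after the previous one, so
    consecutive windows share `window_size - offset` samples and a
    new classification can be made every `offset` samples.
    """
    for start in range(0, len(samples) - window_size + 1, offset):
        yield samples[start:start + window_size]

def classify_window(window):
    # Placeholder for the trained neural-network inference step;
    # a trivial mean-magnitude threshold stands in for the real model.
    return "moving" if np.abs(window).mean() > 0.5 else "stationary"

# Simulated 3-axis acceleration stream (rows = samples, cols = x/y/z).
rng = np.random.default_rng(0)
stream = rng.normal(0.0, 1.0, size=(200, 3))

states = [classify_window(w)
          for w in offset_windows(stream, window_size=50, offset=10)]
print(len(states))  # one state estimate per offset step
```

On a microcontroller the same loop would run over a ring buffer of raw sensor readings, invoking the deployed model once per offset step instead of the threshold rule.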

View this article on IEEE Xplore


Most Popular Article of 2017: Machine Learning With Big Data: Challenges and Approaches

The Big Data revolution promises to transform how we live, work, and think by enabling process optimization, empowering insight discovery, and improving decision making. Realizing this grand potential relies on the ability to extract value from such massive data through data analytics; machine learning is at its core because of its ability to learn from data and provide data-driven insights, decisions, and predictions. However, traditional machine learning approaches were developed in a different era and thus rest on multiple assumptions, such as the data set fitting entirely into memory, which unfortunately no longer hold in this new context. These broken assumptions, together with the characteristics of Big Data, create obstacles for traditional techniques. Consequently, this paper compiles, summarizes, and organizes machine learning challenges with Big Data. In contrast to other research that discusses challenges, this work highlights the cause-effect relationship by organizing challenges according to the Big Data V, or dimension, that instigated each issue: volume, velocity, variety, or veracity. Moreover, emerging machine learning approaches and techniques are discussed in terms of how they handle the various challenges, with the ultimate objective of helping practitioners select appropriate solutions for their use cases. Finally, a matrix relating the challenges and approaches is presented. Through this process, the paper provides a perspective on the domain, identifies research gaps and opportunities, and provides a strong foundation and encouragement for further research in the field of machine learning with Big Data.

View this article on IEEE Xplore