Explainable Artificial Intelligence (XAI)—From Theory to Methods and Applications

Intelligent applications supported by machine learning have achieved remarkable performance for a wide range of tasks in many domains. However, understanding why a trained algorithm makes a particular decision remains problematic. Given the growing interest in learning-based models, concerns arise when they are applied in sensitive environments that may impact users’ lives. The complex nature of these models’ decision mechanisms makes them so-called “black boxes,” in which the logic behind automated decision-making is not easily understood by humans. Furthermore, the reasoning that leads a model to a specific prediction can be more important than performance metrics, which introduces a trade-off between interpretability and model accuracy. Explaining intelligent computer decisions can be regarded as a way to justify their reliability and establish trust. In this sense, explanations are critical tools for verifying predictions, uncovering errors and biases previously hidden within the models’ complex structures, and opening up vast possibilities for more responsible applications. In this review, we provide the theoretical foundations of Explainable Artificial Intelligence (XAI), clarifying diffuse definitions and identifying research objectives, challenges, and future research directions related to turning opaque machine learning outputs into more transparent decisions. We also present a careful overview of state-of-the-art explainability approaches, with a particular analysis of methods based on feature importance, such as the well-known LIME and SHAP. Finally, we highlight practical applications in which XAI has been used successfully.
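The core idea behind LIME-style feature-importance explanations, fitting a locally weighted linear surrogate around a single instance of a black-box model, can be sketched in a few lines of NumPy. This is an illustrative toy, not the actual LIME library: the black-box `predict` function, the Gaussian perturbation scale, and the kernel width are all assumptions.

```python
import numpy as np

def lime_style_explanation(predict, x, n_samples=1000, width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    predict: black-box function mapping an (n, d) array to (n,) scores.
    Returns per-feature weights approximating the model's behavior near x.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise to probe the local decision surface.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = predict(Z)
    # Weight each sample by proximity to x (RBF kernel), as LIME does.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width**2)
    # Weighted least squares: solve for the linear surrogate's coefficients.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)
    return coef[:-1]  # feature weights (intercept dropped)

# Toy black box: linear in feature 0, mildly nonlinear in feature 1.
f = lambda Z: 3 * Z[:, 0] + np.sin(Z[:, 1])
weights = lime_style_explanation(f, np.array([0.0, 0.0]))
```

Near the origin the surrogate recovers a weight close to 3 for feature 0 and close to 1 for feature 1 (the local slope of the sine), which is exactly the kind of per-feature attribution LIME reports.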

View this article on IEEE Xplore


AMS Circuit Design Optimization Technique Based on ANN Regression Model With VAE Structure

Designing an advanced analog mixed-signal (AMS) circuit is no simple task: it must meet the required performance metrics while operating robustly under process-voltage-temperature (PVT) variations. Even commercial products demand stringent specifications while maintaining system performance. The main objectives of this study are to increase the efficiency of the design optimization process by structuring it as multiple regression-modeling stages, to characterize the target circuit as a regression model that includes PVT variations, and to enable a search for co-optimum design points while simultaneously checking performance sensitivity. We use an artificial neural network (ANN) to develop the regression model and divide the ANN modeling process into coarse and fine simulation steps. In addition, we apply a variational autoencoder (VAE) structure to the ANN model to reduce the training error caused by insufficient input samples. With the proposed algorithm, the AMS circuit designer can quickly search for the co-optimum point, which yields the best performance with the least sensitive operation, because the design process uses a regression model instead of launching heavy SPICE simulations. In this study, a voltage-controlled oscillator (VCO) is selected to validate the proposed algorithm. Under various design conditions (CMOS 180 nm, 65 nm, and 45 nm processes), we follow the proposed design flow to obtain the best performance score, evaluated by a figure-of-merit (FoM). As a result, the proposed regression-model-based design flow achieves results twice as accurate as those of the conventional single-step design flow.
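The surrogate-assisted search described above can be illustrated with a minimal sketch: replace SPICE runs with a cheap regression model, score candidate design points by FoM, and penalize sensitivity. Everything here is a hypothetical stand-in, assuming a toy response surface for the ANN model and a finite-difference gradient norm as a proxy for the PVT sensitivity check.

```python
import numpy as np

# Hypothetical surrogate standing in for the paper's ANN regression model:
# maps two design variables to a figure-of-merit (FoM), peak near (0.6, 0.4).
def surrogate_fom(x):
    return -(x[..., 0] - 0.6) ** 2 - 2 * (x[..., 1] - 0.4) ** 2

def co_optimum_search(model, grid, eps=1e-3, alpha=1.0):
    """Pick the design point maximizing FoM minus a sensitivity penalty.

    Sensitivity is approximated by the finite-difference gradient norm,
    a stand-in for checking robustness to PVT variations.
    """
    fom = model(grid)
    # Finite-difference gradient at each candidate point (sensitivity proxy).
    g = np.stack([(model(grid + eps * np.eye(2)[i]) - fom) / eps
                  for i in range(2)], axis=-1)
    sens = np.linalg.norm(g, axis=-1)
    score = fom - alpha * sens  # trade raw performance against sensitivity
    return grid[np.argmax(score)]

xs = np.linspace(0, 1, 21)
grid = np.array([[a, b] for a in xs for b in xs])
best = co_optimum_search(surrogate_fom, grid)
```

Because the surrogate is cheap to evaluate, the whole grid is scored in one pass, which is the efficiency argument the abstract makes against per-candidate SPICE simulation.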

View this article on IEEE Xplore


Software Fault-Proneness Analysis Based on Composite Developer-Module Networks

Existing software fault-proneness analysis and prediction models can be categorized into software-metrics-based and visualized approaches. However, studies based on software metrics rely solely on quantified data, while the latter fail to reflect the human aspect, which has proven to be a main cause of many failures in various domains. In this paper, we propose a new analysis model built on an improved software network called the Composite Developer-Module Network. The network links both developers to software modules and software modules to one another, reflecting the characteristics of developers and the interactions between them. After the networks of the research objects are built, several sub-graphs are derived from them; analyzing the structures of these sub-graphs identifies those that are more fault-prone and further determines whether the software development is in a poor structural state, thus predicting fault-proneness. Our research shows not only that the different sub-structures are a factor in fault-proneness, but also that the complexity of a sub-structure can affect the production of bugs.
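A composite network of this kind can be illustrated with only the standard library: developer-to-module edit edges combined with module-to-module dependency edges, and a simple sub-structure score. The scoring rule below (developer fan-in times structural coupling) is a hypothetical stand-in for the paper's sub-graph analysis, not its actual model.

```python
from collections import defaultdict

# Toy composite network: developer->module edits plus module->module dependencies.
dev_edits = {"alice": {"auth", "db"}, "bob": {"auth", "ui"}, "carol": {"ui"}}
deps = {("auth", "db"), ("ui", "auth")}

def fault_proneness(dev_edits, deps):
    """Score each module by combining developer fan-in with structural coupling.

    Proxy heuristic: modules edited by many developers AND coupled to many
    other modules form the more fault-prone sub-structures.
    """
    devs_per_module = defaultdict(set)
    for dev, modules in dev_edits.items():
        for m in modules:
            devs_per_module[m].add(dev)
    coupling = defaultdict(int)
    for a, b in deps:          # treat dependencies as undirected coupling
        coupling[a] += 1
        coupling[b] += 1
    return {m: len(devs) * (1 + coupling[m])
            for m, devs in devs_per_module.items()}

scores = fault_proneness(dev_edits, deps)
```

Here "auth" scores highest: it is edited by two developers and coupled to two other modules, the kind of dense sub-structure the paper associates with bug production.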

Published in the IEEE Reliability Society Section within IEEE Access.

View this article on IEEE Xplore

 

Dynamic Network Slice Scaling Assisted by Attention-Based Prediction in 5G Core Network

Network slicing is a key technology in fifth-generation (5G) networks that allows network operators to create multiple logical networks over a shared physical infrastructure to meet the requirements of diverse use cases. Among the core functions needed to implement network slicing, resource management and scaling are difficult challenges. Network operators must ensure the Service Level Agreement (SLA) requirements for latency, bandwidth, resources, etc., for each network slice while utilizing the limited resources efficiently, i.e., optimal resource assignment and dynamic resource scaling for each network slice. Existing resource scaling approaches can be classified into reactive and proactive types. The former makes a resource scaling decision when the resource usage of virtual network functions (VNFs) exceeds a predefined threshold, and the latter forecasts the future resource usage of VNFs in network slices using classical statistical models or deep learning models. However, both face a trade-off between assurance and efficiency. For instance, a lower threshold in the reactive approach or a larger prediction margin in the proactive approach can meet the requirements more reliably, but may cause unnecessary resource wastage. To overcome this trade-off, we first propose a novel and efficient proactive resource forecasting algorithm. The proposed algorithm introduces an attention-based encoder-decoder model for multivariate time series forecasting to achieve high short-term and long-term prediction accuracy. It helps network slices scale up and down effectively and reduces the costs of SLA violations and resource overprovisioning. Using the attention mechanism, the model attends to every hidden state of the sequential input at every time step to select the most important time steps affecting the prediction results.
We also design an automated resource configuration mechanism responsible for monitoring resources and automatically adding or removing VNF instances.
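The attention step described above, weighting every encoder hidden state when producing a prediction, can be sketched as scaled dot-product attention in NumPy. The toy encoder output `H` and decoder query `q` are assumptions; the paper's actual model wraps this inside a trained encoder-decoder network.

```python
import numpy as np

def attention_context(hidden_states, query):
    """Scaled dot-product attention over encoder hidden states.

    hidden_states: (T, d) array, one vector per input time step.
    query: (d,) decoder state. Returns (context, weights); the weights
    reveal which time steps most influence the prediction.
    """
    scores = hidden_states @ query / np.sqrt(hidden_states.shape[1])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ hidden_states          # weighted sum of hidden states
    return context, weights

# Toy encoder output for 4 time steps of a multivariate resource-usage series.
H = np.array([[0.1, 0.0], [0.9, 0.1], [0.2, 0.8], [0.0, 0.2]])
q = np.array([1.0, 0.0])   # decoder state most aligned with time step 1
context, w = attention_context(H, q)
```

The weights sum to 1 and peak at the time step whose hidden state best matches the query, which is how the model "selects the most important time steps" at every decoding step.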

View this article on IEEE Xplore

 

Dengue Epidemics Prediction: A Survey of the State-of-the-Art Based on Data Science Processes

Dengue infection is a mosquito-borne disease caused by dengue viruses, which are carried by several mosquito species of the genus Aedes, principally Ae. aegypti. Dengue outbreaks are endemic in tropical and sub-tropical regions of the world, mainly in urban and sub-urban areas, and dengue is among the top ten diseases causing the most deaths worldwide. According to the World Health Organization (WHO), dengue infection has increased 30-fold globally over the past five decades. About 50 to 100 million new infections occur annually in more than 80 countries. Many researchers are working on measures to prevent and control the spread. One avenue of research is collaboration between computer science and epidemiology researchers to develop methods of predicting potential outbreaks of dengue infection. An important research objective is to develop models that enable, or enhance, forecasting of dengue outbreaks, giving medical professionals the opportunity to develop plans for handling an outbreak well in advance. Researchers have been gathering and analyzing data to better identify the relational factors driving the spread of the disease, and developing a variety of predictive modelling methods using statistical and mathematical analysis and machine learning. In this substantial review of the literature on the state of the art of research over the past decades, we identified six main issues to be explored and analyzed: (1) the available data sources, (2) data preparation techniques, (3) data representations, (4) forecasting models and methods, (5) evaluation approaches for dengue forecasting models, and (6) future challenges and possibilities in forecasting modelling of dengue outbreaks. Our comprehensive exploration of these issues provides a valuable information foundation for new researchers in this important area of public health research and epidemiology.

View this article on IEEE Xplore