StrikeNet: Deep Convolutional LSTM-Based Road Lane Reconstruction With Spatiotemporal Inference for Lane Keeping Control

This paper presents a Spatio-Temporal Road Inference for a KEeping NETwork (StrikeNet), aimed at enhancing Road Lane Reconstruction (RLR) and lateral motion control in Autonomous Vehicles (AVs) using deep neural networks. Accurate road lane model coefficients are essential for an effective Lane Keeping System (LKS), but traditional vision systems often fail when lane markers are absent or too faint to be recognized. To overcome this, a driving dataset was restructured, combining road information from a vision system with forward images for spatial training of the RLR. The sequential spatial learning outputs were then processed together with in-vehicle sensor data for temporal inference via a Long Short-Term Memory (LSTM) network. StrikeNet was rigorously tested in both typical and uncertain driving environments, and comprehensive statistical and visualization analyses were conducted to evaluate the performance of various RLR methods and lateral motion control strategies. Notably, the RLR derived reliable road coefficients even in the absence of lane markers. Compared with four alternative techniques, the proposed method yields the lowest error and variance between the human steering input and the control input. Specifically, under high and low lane-quality conditions, it reduced the control input error by up to 72% and 66%, respectively, and decreased the variance by 54% and 94%, respectively. These findings highlight StrikeNet’s effectiveness in bolstering the fail-operational performance and reliability of lane keeping and lane departure warning systems in autonomous driving, thereby enhancing control continuity and mitigating traffic accidents caused by path deviation.
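
As a rough illustration of the kind of spatiotemporal pipeline the abstract describes (a convolutional stage encoding each forward image, followed by an LSTM that fuses the sequence of spatial features with in-vehicle sensor signals to regress lane-model coefficients), the PyTorch sketch below may help. All layer sizes, tensor shapes, and names such as SpatioTemporalLaneNet are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch only: a CNN encoder per frame feeding an LSTM that also
# consumes in-vehicle sensor data and regresses road-lane polynomial coefficients.
# Layer sizes, input shapes, and the class name are assumptions.
import torch
import torch.nn as nn

class SpatioTemporalLaneNet(nn.Module):
    def __init__(self, sensor_dim=4, hidden_dim=128, n_coeffs=4):
        super().__init__()
        # Spatial stage: encode each forward image into a compact feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal stage: fuse image features with in-vehicle sensor data over time.
        self.lstm = nn.LSTM(64 + sensor_dim, hidden_dim, batch_first=True)
        # Regress lane-model coefficients (e.g., a cubic road model).
        self.head = nn.Linear(hidden_dim, n_coeffs)

    def forward(self, images, sensors):
        # images:  (batch, time, 3, H, W) forward-camera frames
        # sensors: (batch, time, sensor_dim) in-vehicle signals (speed, yaw rate, ...)
        b, t = images.shape[:2]
        feats = self.cnn(images.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([feats, sensors], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])  # coefficients inferred from the last time step

# Example: 8 clips of 10 frames at 96x160 resolution with 4 sensor channels.
coeffs = SpatioTemporalLaneNet()(torch.randn(8, 10, 3, 96, 160), torch.randn(8, 10, 4))
print(coeffs.shape)  # torch.Size([8, 4])
```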

View this article on IEEE Xplore


Software Fault-Proneness Analysis based on Composite Developer-Module Networks

Existing software fault-proneness analysis and prediction models can be categorized into software-metric and visualized approaches. However, metric-based studies rely solely on quantified data, while visualized approaches fail to reflect the human aspect, which has proven to be a main cause of failures in many domains. In this paper, we propose a new analysis model built on an improved software network called the Composite Developer-Module Network. The network links developers to software modules and software modules to one another, so that it reflects both the characteristics of developers and the interactions between them and the code. After the networks of the studied projects are built, several sub-graphs are derived from them; analyzing the structures of the more fault-prone sub-graphs makes it possible to determine whether the software development is organized in a problematic structure and thus to predict fault-proneness. Our research shows not only that the different sub-structures are a factor in fault-proneness, but also that the complexity of a sub-structure can affect the introduction of bugs.
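
To make the structure of such a composite network concrete, the sketch below builds a small graph containing both developer-to-module edges (who changed what) and module-to-module edges (which modules depend on each other), then extracts simple sub-graphs around each module. It only illustrates the idea; the edge data, the networkx representation, and the helper name are assumptions, not the authors' implementation.

```python
# Illustrative sketch: a composite network with developer->module and
# module->module edges, plus a helper that extracts the sub-graph around a
# module. Node names and edge lists are made-up assumptions.
import networkx as nx

g = nx.DiGraph()
# Developer-to-module edges: which developer modified which module.
g.add_edges_from([("dev_a", "core.py"), ("dev_a", "io.py"),
                  ("dev_b", "io.py"), ("dev_c", "ui.py")],
                 kind="develops")
# Module-to-module edges: structural dependencies between modules.
g.add_edges_from([("core.py", "io.py"), ("ui.py", "core.py")],
                 kind="depends")

def module_subgraph(graph, module):
    """Sub-graph containing a module, its developers, and its direct dependencies."""
    nodes = {module}
    nodes.update(graph.predecessors(module))  # developers and dependent modules
    nodes.update(graph.successors(module))    # modules it depends on
    return graph.subgraph(nodes)

# A denser, more interconnected sub-graph around a module may indicate a more
# fault-prone structure (e.g., many developers touching one shared module).
for m in ("core.py", "io.py", "ui.py"):
    sub = module_subgraph(g, m)
    print(m, "nodes:", sub.number_of_nodes(), "edges:", sub.number_of_edges())
```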

Published in the IEEE Reliability Society Section within IEEE Access.

View this article on IEEE Xplore


Exponential Loss Minimization for Learning Weighted Naive Bayes Classifiers

The naive Bayesian classification method has received significant attention in the field of supervised learning. The method rests on the unrealistic assumption that all attributes are equally important. Attribute weighting is one way to relax this assumption and thereby improve the performance of naive Bayes classification. This study, with a focus on nonlinear optimization problems, proposes four attribute weighting methods obtained by minimizing four different loss functions. The proposed loss functions belong to a family of exponential functions that makes the optimization problems more straightforward to solve, provides analytical properties of the trained classifier, and allows for simple modifications of the loss function so that the naive Bayes classifier becomes robust to noisy instances. The research begins with the typical exponential loss, which is sensitive to noise, and then presents a series of modifications that make naive Bayes classifiers more robust to noisy instances. Based on numerical experiments conducted using 28 datasets from the UCI machine learning repository, we confirm that the proposed scheme successfully determines optimal attribute weights and improves classification performance.
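
The sketch below shows, under stated assumptions, one way to read the abstract's idea in code: a binary attribute-weighted naive Bayes classifier whose weights scale the per-attribute log-likelihood ratios, trained by minimizing an exponential loss over those weights. The discretized-data setup, the specific decision function, and the use of scipy.optimize are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch: attribute-weighted naive Bayes for labels y in {-1, +1},
# with weights found by minimizing an exponential loss exp(-y * f(x)).
# The decision function, smoothing, and optimizer choice are assumptions.
import numpy as np
from scipy.optimize import minimize

def fit_weighted_nb(X, y):
    """X: (n, d) array of categorical attribute codes; y: labels in {-1, +1}."""
    n, d = X.shape
    prior = np.log(np.mean(y == 1)) - np.log(np.mean(y == -1))
    # Per-attribute log-likelihood ratios log P(x_j|+1) - log P(x_j|-1), Laplace-smoothed.
    llr = np.zeros_like(X, dtype=float)
    for j in range(d):
        vals = np.unique(X[:, j])
        for v in vals:
            p_pos = (np.sum((X[:, j] == v) & (y == 1)) + 1) / (np.sum(y == 1) + len(vals))
            p_neg = (np.sum((X[:, j] == v) & (y == -1)) + 1) / (np.sum(y == -1) + len(vals))
            llr[X[:, j] == v, j] = np.log(p_pos) - np.log(p_neg)

    def exp_loss(w):
        # Weighted decision function f(x) = prior + sum_j w_j * llr_j(x).
        f = prior + llr @ w
        return np.mean(np.exp(-y * f))

    res = minimize(exp_loss, x0=np.ones(d), method="L-BFGS-B")
    return res.x, prior, llr

# Tiny synthetic example with 2 binary attributes and 10% label noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 2))
y = np.where(X[:, 0] == 1, 1, -1)
y[rng.random(200) < 0.1] *= -1
w, prior, llr = fit_weighted_nb(X, y)
print("learned attribute weights:", w)  # attribute 0 should receive the larger weight
```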

View this article on IEEE Xplore


A Data Compression Strategy for the Efficient Uncertainty Quantification of Time-Domain Circuit Responses

This paper presents an innovative modeling strategy for constructing efficient and compact surrogate models for the uncertainty quantification of time-domain responses of digital links. The proposed approach relies on a two-step methodology. First, the initial dataset of available training responses is compressed via principal component analysis (PCA). Then, the compressed dataset is used to train compact surrogate models for the reduced PCA variables using advanced techniques for uncertainty quantification and parametric macromodeling. Specifically, sparse polynomial chaos expansion and least-squares support-vector machine regression are used in this work, although the proposed methodology is general and applicable to any surrogate modeling strategy. The preliminary compression limits the number and complexity of the surrogate models, thereby substantially improving efficiency. The feasibility and performance of the proposed approach are investigated on two digital link designs with 54 and 115 uncertain parameters, respectively.
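
A minimal sketch of the two-step idea, under assumptions: compress an ensemble of time-domain training responses with PCA, fit one surrogate per retained principal component as a function of the uncertain parameters, and reconstruct full responses from the surrogate predictions. Kernel ridge regression stands in here for the sparse polynomial chaos and LS-SVM surrogates of the paper; the synthetic data, shapes, and names are illustrative.

```python
# Illustrative sketch: compress time-domain training responses with PCA, then
# train one compact surrogate per retained principal component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n_train, n_params, n_time = 100, 54, 500           # e.g., 54 uncertain parameters
params = rng.uniform(-1.0, 1.0, size=(n_train, n_params))
# Synthetic stand-in for simulated time-domain responses y(t; params).
t = np.linspace(0.0, 1.0, n_time)
responses = np.sin(2 * np.pi * (t + 0.1 * params[:, :1])) * (1 + 0.05 * params[:, 1:2])

# Step 1: compress the training responses; only a few PCA variables are kept.
pca = PCA(n_components=0.999)   # keep components explaining 99.9% of the variance
z_train = pca.fit_transform(responses)
print("retained PCA variables:", pca.n_components_)

# Step 2: one surrogate per reduced variable, mapping parameters -> PCA coordinate.
surrogates = [KernelRidge(kernel="rbf", alpha=1e-3).fit(params, z_train[:, k])
              for k in range(pca.n_components_)]

def predict_response(new_params):
    """Predict full time-domain responses for new parameter samples."""
    z = np.column_stack([s.predict(new_params) for s in surrogates])
    return pca.inverse_transform(z)

y_hat = predict_response(rng.uniform(-1.0, 1.0, size=(5, n_params)))
print(y_hat.shape)  # (5, 500): five reconstructed time-domain responses
```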

Published in the IEEE Electronics Packaging Society Section within IEEE Access.

View this article on IEEE Xplore