AD-VILS: Implementation and Reliability Validation of Vehicle-in-the-Loop Simulation Platform for Evaluating Autonomous Driving Systems

Vehicle-in-the-loop simulation (VILS) is a vehicle-testing technique that integrates high-fidelity simulation environments with real-world vehicles. Among existing simulation approaches for evaluating autonomous driving systems (ADS), VILS is particularly noteworthy because it faithfully reflects the dynamic characteristics of real-world vehicles and ensures repeatable and reproducible testing in diverse virtual scenarios. Although researchers strive to implement VILS platforms that closely approximate real-world vehicle-testing environments, vehicle performance observed in VILS testing may differ from that observed in real-world testing, depending on the platform's reliability. Therefore, methods must be established to validate the reliability of VILS platforms. Herein, we present the essential components of a VILS platform for evaluating ADS (AD-VILS) and propose a methodology to validate the reliability of the implemented AD-VILS platform. This methodology includes scenario definition, techniques for VILS testing and real-world vehicle testing, and procedures for evaluating consistency and correlation based on statistical and mathematical comparisons between the datasets from virtual and real-world tests. Moreover, we empirically derive reliability evaluation criteria through iterative testing. This methodology aims to enhance the precision and reliability of ADS evaluations conducted on AD-VILS platforms.

View this article on IEEE Xplore
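The consistency and correlation evaluation the abstract describes, comparing paired datasets from virtual and real-world tests, can be illustrated with a minimal sketch. The specific metrics below (RMSE for consistency and Pearson correlation for trend agreement over a paired signal such as yaw rate) are illustrative assumptions, not necessarily the paper's exact reliability criteria.

```python
import numpy as np

def consistency_metrics(real_signal, virtual_signal):
    """Compare a paired vehicle signal (e.g., yaw rate) recorded in a
    real-world test and in a VILS test at matching timestamps.
    RMSE captures consistency (absolute agreement); Pearson r captures
    correlation (trend agreement). These particular metrics are
    illustrative assumptions, not the paper's stated criteria."""
    real = np.asarray(real_signal, dtype=float)
    virt = np.asarray(virtual_signal, dtype=float)
    rmse = float(np.sqrt(np.mean((real - virt) ** 2)))
    r = float(np.corrcoef(real, virt)[0, 1])
    return rmse, r

# Hypothetical yaw-rate traces over a 10 s maneuver, sampled identically
t = np.linspace(0.0, 10.0, 101)
real = np.sin(t)
virt = np.sin(t) + 0.05 * np.cos(3 * t)   # small simulation discrepancy
rmse, r = consistency_metrics(real, virt)
```

In practice, such metrics would be computed per scenario and compared against empirically derived thresholds to decide whether the platform's behavior is sufficiently close to the real vehicle's.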


StrikeNet: Deep Convolutional LSTM-Based Road Lane Reconstruction With Spatiotemporal Inference for Lane Keeping Control

This paper presents a Spatio-Temporal Road Inference for a KEeping NETwork (StrikeNet), aimed at enhancing Road Lane Reconstruction (RLR) and lateral motion control in Autonomous Vehicles (AVs) using deep neural networks. Accurate road lane model coefficients are essential for an effective Lane Keeping System (LKS), but traditional vision systems often fail when lane markers are absent or faint and cannot be properly recognized. To overcome this, a driving dataset was restructured, combining road information from a vision system and forward images for spatial training of RLR. Sequential spatial learning outputs were then processed with in-vehicle sensor data for temporal inference via Long Short-Term Memory (LSTM). StrikeNet was rigorously tested in both typical and uncertain driving environments. Comprehensive statistical and visualization analyses were conducted to evaluate the performance of various RLR methods and lateral motion control strategies. Remarkably, the RLR demonstrated its capability to derive reliable road coefficients even in the absence of lane markers. In a performance comparison with four alternative techniques, our method yielded the lowest error and variance between human steering inputs and the control input. Specifically, under high and low lane quality conditions, the proposed method reduced the control input error by up to 72% and 66%, respectively, and decreased the variance by 54% and 94%, respectively. The findings highlight StrikeNet's effectiveness in bolstering the fail-operational performance and reliability of lane-keeping and lane departure warning systems in autonomous driving, thereby enhancing control continuity and mitigating path deviation-induced traffic accidents.

View this article on IEEE Xplore
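The temporal-inference stage the abstract describes, spatial lane features fused with in-vehicle sensor data and smoothed through an LSTM to yield lane-model coefficients, can be sketched minimally as follows. All dimensions, feature names, and the plain NumPy LSTM cell are assumptions for illustration; they are not the authors' architecture or code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input/forget/output gates and candidate state
    are computed from the fused input x and previous hidden state h;
    c is the cell state carrying temporal memory across frames."""
    z = W @ x + U @ h + b               # stacked pre-activations, shape (4*H,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 8, 16                            # fused-input and hidden sizes (assumed)
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0.0, 0.1, (4, H))    # maps hidden state to 4 assumed
                                        # lane-model polynomial coefficients

h, c = np.zeros(H), np.zeros(H)
for _ in range(5):                      # a short driving sequence
    spatial_feat = rng.normal(size=5)   # stand-in for CNN lane features
    sensor = rng.normal(size=3)         # e.g., speed, yaw rate, steering angle
    x = np.concatenate([spatial_feat, sensor])
    h, c = lstm_step(x, h, c, W, U, b)
lane_coeffs = W_out @ h                 # temporally smoothed lane coefficients
```

Because the cell state persists across frames, the output coefficients can remain plausible over short stretches where lane markers are faint or absent, which is the fail-operational behavior the paper targets.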