RL-Based Cache Replacement: A Modern Interpretation of Belady’s Algorithm With Bypass Mechanism and Access Type Analysis

Belady’s algorithm is widely regarded as the optimal cache replacement policy. It has served as the foundation of numerous recent studies on cache replacement, most of which treat it as an upper bound. Despite this widespread adoption, we found opportunities to unlock additional headroom by accounting for cache access types and incorporating cache bypass. In this study, we propose Stormbird, a cache replacement policy that synergistically combines extensions of Belady’s algorithm with the power of reinforcement learning. Reinforcement learning is well suited to the cache replacement problem owing to its ability to interact dynamically with the environment, adapt to changing access patterns, and maximize cumulative rewards. Stormbird uses a small set of selected features from the reinforcement learning model to improve instructions-per-cycle efficiency while keeping hardware area overhead low. It also considers cache access types and integrates dynamic set dueling to further improve cache performance. For a 2 MB last-level cache per core, Stormbird achieves an average instructions-per-cycle improvement of 0.13% over the previous state of the art on a single-core system and 0.02% on a four-core system, while reducing hardware overhead by 62.5%. Stormbird incurs a hardware overhead of only 10.5 KB per 2 MB of last-level cache and can be implemented without using program counter values.
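As a rough illustration of the kind of mechanism the abstract describes, the sketch below pairs a learned per-way eviction decision with an explicit bypass action and a simple Q-style update. The feature set, reward shaping, and constants are hypothetical assumptions for illustration only; they are not the paper's actual Stormbird design.

```python
# Hypothetical sketch: RL-style cache replacement with a bypass action.
# Feature encoding, reward values, and hyperparameters are illustrative
# assumptions, not the Stormbird implementation from the paper.

import random
from collections import defaultdict

NUM_WAYS = 16          # ways per set in the last-level cache
BYPASS = -1            # sentinel action: do not insert the incoming line

class RLReplacer:
    def __init__(self, epsilon=0.05, alpha=0.1):
        self.q = defaultdict(float)   # value estimate per (state, action) pair
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate

    def features(self, access_type, way_ages):
        # Access type (demand load, prefetch, writeback) plus coarse
        # recency information for each way; a real feature set is richer.
        return (access_type, tuple(age // 4 for age in way_ages))

    def choose(self, access_type, way_ages):
        # Epsilon-greedy choice: evict one of the ways, or bypass the fill.
        state = self.features(access_type, way_ages)
        actions = list(range(NUM_WAYS)) + [BYPASS]
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        # Example reward shaping: +1 if the inserted line later hits,
        # -1 if an evicted line is re-referenced soon afterward.
        key = (state, action)
        self.q[key] += self.alpha * (reward - self.q[key])
```

In dynamic set dueling, a small number of leader sets would run a policy like this while other leader sets run a baseline such as SRRIP, and a saturating counter selects the better-performing policy for the remaining follower sets.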


Collision Avoidance in Pedestrian-Rich Environments With Deep Reinforcement Learning

Collision avoidance algorithms are essential for safe and efficient robot operation among pedestrians. This work proposes deep reinforcement learning (RL) as a framework for modeling the complex interactions and cooperation with nearby decision-making agents, such as pedestrians and other robots. Existing RL-based works assume homogeneous agent properties, use specific motion models over short timescales, or lack a principled way to handle a large and possibly varying number of agents. This work therefore develops an algorithm that learns collision avoidance among a variety of heterogeneous, non-communicating, dynamic agents without assuming they follow any particular behavior rules. It extends our previous work by introducing a strategy based on Long Short-Term Memory (LSTM) that enables the algorithm to use observations of an arbitrary number of other agents rather than a small, fixed number of neighbors. The proposed algorithm is shown to outperform a classical collision avoidance algorithm and another deep RL-based algorithm, and to scale better with the number of agents (fewer collisions, shorter time to goal) than our previously published learning-based approach. Analysis of the LSTM provides insight into how observations of nearby agents affect its hidden state and quantifies the performance impact of various agent-ordering heuristics. The learned policy generalizes to several applications beyond the training scenarios: formation control (arrangement into letters), demonstrations on a fleet of four multirotors, and a fully autonomous robotic vehicle capable of traveling at human walking speed among pedestrians.
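As a rough illustration of the LSTM strategy described above, the sketch below (PyTorch) encodes a variable number of neighbor observations into a fixed-size hidden state before a policy head. The layer sizes, input dimensions, and closest-last ordering are illustrative assumptions, not the paper's exact network.

```python
# Hypothetical sketch: feeding an arbitrary number of observed agents
# through an LSTM so the policy input has a fixed size. Dimensions and
# the ordering heuristic are assumptions for illustration only.

import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    def __init__(self, ego_dim=6, agent_dim=7, hidden=64, num_actions=11):
        super().__init__()
        self.lstm = nn.LSTM(agent_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(ego_dim + hidden, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, ego_state, agent_states):
        # agent_states: (batch, n_agents, agent_dim); n_agents may vary
        # between calls. Feeding agents closest-last lets the final hidden
        # state emphasize the nearest neighbors (one possible heuristic).
        _, (h_n, _) = self.lstm(agent_states)
        joint = torch.cat([ego_state, h_n[-1]], dim=-1)
        return self.head(joint)   # per-action values or policy logits

# Example: one robot observing 3 other agents.
policy = LSTMPolicy()
q_values = policy(torch.zeros(1, 6), torch.zeros(1, 3, 7))
```

Collapsing the neighbor list into a single hidden vector is what removes the fixed-neighbor-count restriction: the same policy head can be reused whether two or twenty agents are visible.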
