Replay buffer                              Machine Learning

noun phrase

Definition: A memory buffer used in reinforcement learning to store previously collected experience (typically transitions or trajectories) so that samples can be drawn from it later for training updates. PyTorch’s RL documentation describes replay buffers as a central part of off-policy RL algorithms, and its DQN tutorial defines replay memory as a cyclic buffer that stores transitions and supports random sampling for training [PyTorch].

Examples in context: “The important changes are similar to our discussion with the ADMIRAL-DM case, where the algorithm uses two networks and a replay buffer for training.” [Subramanian et al. 2022]

“We use a standard experience replay buffer (Mnih et al., 2013) to store and resample tuples ⟨s, a, s′, r⟩ for training the DQN network.” [Prasad et al. 2025]
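Illustrative sketch: the cyclic-buffer behavior from the definition above can be approximated in a few lines of plain Python. This is a minimal, hypothetical implementation (the class and method names are our own, not from any library), using a `deque` so that the oldest transitions are evicted once capacity is reached, and uniform random sampling as in the DQN setting.

```python
import random
from collections import deque


class ReplayBuffer:
    """A cyclic buffer of (s, a, s_next, r) transitions with uniform random sampling.

    Illustrative sketch only; real RL libraries provide more featureful versions.
    """

    def __init__(self, capacity):
        # deque with maxlen gives cyclic behavior: the oldest transition
        # is silently dropped once the buffer is full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, next_state, reward):
        """Store one transition tuple."""
        self.buffer.append((state, action, next_state, reward))

    def sample(self, batch_size):
        """Draw a uniform random batch; random sampling breaks the
        temporal correlation between consecutive transitions."""
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


# Usage: fill past capacity, then sample a training batch
buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.push(t, 0, t + 1, 1.0)  # toy transitions
batch = buf.sample(2)
```

Here the buffer holds only the 3 most recent transitions after 5 pushes, and each `sample` call returns a batch drawn uniformly from what remains.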

Synonym: experience replay buffer; replay memory

Related terms: experience replay, trajectory buffer, prioritized replay buffer, off-policy learning, transition tuple

 
