SVL: Goal-Conditioned Reinforcement Learning as Survival Learning
Published in arXiv preprint, 2026
Recommended citation: Nguimatsia Tiofack, F., Schramm, F., Le Hellard, T., & Carpentier, J. (2026). "SVL: Goal-Conditioned Reinforcement Learning as Survival Learning." arXiv:2604.17551. https://arxiv.org/abs/2604.17551
Standard approaches to goal-conditioned reinforcement learning (GCRL) that rely on temporal-difference learning can be unstable and sample-inefficient due to bootstrapping. While recent work has explored contrastive and supervised formulations to improve stability, we present a probabilistic alternative, survival value learning (SVL), which reframes GCRL as a survival learning problem by modeling the time-to-goal from each state as a probability distribution. This structured, distributional Monte Carlo perspective yields a closed-form identity expressing the goal-conditioned value function as a discounted sum of survival probabilities, which enables value estimation with a hazard model trained by maximum likelihood on both event and right-censored trajectories. We introduce three practical value estimators: a finite-horizon truncation and two binned infinite-horizon approximations that capture long-horizon objectives. Experiments on offline GCRL benchmarks show that SVL combined with hierarchical actors matches or surpasses strong hierarchical TD and Monte Carlo baselines, excelling on complex, long-horizon tasks.
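To make the identity concrete, here is one plausible form under the common shortest-path convention of a reward of -1 per step until the goal is reached (an assumption on our part; the paper's exact reward convention may differ). Let T be the random time-to-goal from state s under goal g, with discrete-time hazard h(k | s, g) = P(T = k | T >= k, s, g):

\[
V(s,g) \;=\; -\,\mathbb{E}\!\left[\sum_{t=0}^{T-1}\gamma^{t}\right]
\;=\; -\sum_{t=0}^{\infty}\gamma^{t}\,S(t\mid s,g),
\qquad
S(t\mid s,g) \;=\; P(T>t\mid s,g) \;=\; \prod_{k=0}^{t}\bigl(1-h(k\mid s,g)\bigr).
\]

Truncating the sum at a horizon H gives a finite-horizon estimator of the kind the abstract mentions; the binned infinite-horizon variants presumably handle the remaining tail of the sum.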

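As a rough illustration of the maximum-likelihood training described above, below is a minimal PyTorch sketch of a discrete-time hazard model, its censored negative log-likelihood, and the finite-horizon truncated value estimate. All names (HazardNet, hazard_nll, value_estimate), the architecture, and the data conventions are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HazardNet(nn.Module):
    """Hypothetical discrete-time hazard model: maps a (state, goal) pair
    to per-step hazards h(k) = P(T = k | T >= k) for k = 0..H-1."""
    def __init__(self, state_dim, goal_dim, horizon, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon),
        )

    def forward(self, state, goal):
        return torch.sigmoid(self.net(torch.cat([state, goal], dim=-1)))

def hazard_nll(hazards, time, observed):
    """Negative log-likelihood of discrete-time survival data.

    hazards:  (B, H) per-step hazards, in (0, 1).
    time:     (B,) long tensor; event time T < H if observed, otherwise the
              number of steps seen without reaching the goal (so T >= time).
    observed: (B,) float tensor; 1.0 if the goal was reached, 0.0 if censored.
    """
    B, H = hazards.shape
    steps = torch.arange(H, device=hazards.device).expand(B, H)
    # Every step strictly before `time` is a survival (goal not yet reached).
    survived = (steps < time.unsqueeze(-1)).float()
    log_surv = (torch.log1p(-hazards) * survived).sum(-1)
    # Observed events additionally contribute log h(T). The index is clamped
    # for censored rows, whose event term is masked out by `observed` below.
    idx = time.clamp(max=H - 1).unsqueeze(-1)
    log_event = torch.log(hazards.gather(-1, idx).squeeze(-1))
    return -(log_surv + observed * log_event).mean()

def value_estimate(hazards, gamma=0.99):
    """Finite-horizon truncated value: V(s, g) ~= -sum_t gamma^t S(t),
    with S(t) = prod_{k<=t} (1 - h(k)) the survival probability."""
    surv = torch.cumprod(1.0 - hazards, dim=-1)
    disc = gamma ** torch.arange(hazards.shape[-1], device=hazards.device)
    return -(disc * surv).sum(-1)
```

The key point of the censored likelihood is that truncated trajectories which never reach the goal still contribute the log-survival term, so they inform the value estimate rather than being discarded.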