Publications

PROXQP: an Efficient and Versatile Quadratic Programming Solver for Real-Time Robotics Applications and Beyond

Antoine Bambade, Fabian Schramm, Sarah El Kazdadi, Stéphane Caron, Adrien Taylor, Justin Carpentier

Published in IEEE Transactions on Robotics, 2025

This paper presents ProxQP, an efficient and versatile quadratic programming (QP) solver designed for real-time robotics applications and beyond.
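
For readers who want to try the solver, here is a minimal usage sketch, assuming ProxQP is installed through its open-source proxsuite Python package; the problem data below are purely illustrative:

```python
import numpy as np
import proxsuite

# Toy QP: minimize 0.5 x'Hx + g'x  subject to  Ax = b  and  l <= Cx <= u
H = np.array([[4.0, 1.0], [1.0, 2.0]])  # positive definite cost Hessian
g = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])              # one equality constraint
b = np.array([1.0])
C = np.array([[1.0, 0.0]])              # one double-sided inequality constraint
l = np.array([0.0])
u = np.array([0.7])

# Dense ProxQP instance: 2 variables, 1 equality, 1 inequality.
qp = proxsuite.proxqp.dense.QP(2, 1, 1)
qp.init(H, g, A, b, C, l, u)
qp.solve()

print("primal solution:", qp.results.x)  # approximately [0.25, 0.75]
```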

Recommended citation: Bambade et al. (2025). "PROXQP: an Efficient and Versatile Quadratic Programming Solver for Real-Time Robotics Applications and Beyond." IEEE Transactions on Robotics. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=11027562

Leveraging Randomized Smoothing for Optimal Control of Nonsmooth Dynamical Systems

Quentin Le Lidec, Fabian Schramm, Louis Montaut, Cordelia Schmid, Ivan Laptev, Justin Carpentier

Published in Nonlinear Analysis: Hybrid Systems, an International Federation of Automatic Control (IFAC) journal, 2024

This paper presents randomized smoothing techniques to tackle the nonsmoothness issues commonly encountered in optimal control, and provides key insights into the interplay between reinforcement learning and optimal control.
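
As an aside, the generic randomized-smoothing idea is easy to illustrate: replace a nonsmooth function f by the Gaussian-smoothed surrogate f_sigma(x) = E[f(x + sigma * eps)] and estimate its gradient by Monte Carlo. The NumPy sketch below shows only this basic zeroth-order estimator, not the paper's optimal-control pipeline:

```python
import numpy as np

def smoothed_grad(f, x, sigma=0.1, n_samples=2000, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    surrogate f_sigma(x) = E[f(x + sigma * eps)], eps ~ N(0, I).
    Well defined even when f itself is nonsmooth."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n_samples, x.size))
    values = np.array([f(x + sigma * e) for e in eps])
    # Zeroth-order (score-function) estimator: (1 / sigma) E[f(x + sigma*eps) eps]
    return (values[:, None] * eps).mean(axis=0) / sigma

# Example: |x| is nonsmooth at 0, yet the smoothed gradient is close to 0 there.
f = lambda x: np.abs(x).sum()
print(smoothed_grad(f, np.array([0.0])))
```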

Recommended citation: Le Lidec et al. (2024). "Leveraging Randomized Smoothing for Optimal Control of Nonsmooth Dynamical Systems." Nonlinear Analysis: Hybrid Systems. https://arxiv.org/abs/2203.03986

Leveraging augmented-Lagrangian techniques for differentiating over infeasible quadratic programs in machine learning

Antoine Bambade, Fabian Schramm, Adrien Taylor, Justin Carpentier

Published in the Twelfth International Conference on Learning Representations (ICLR), 2024

This paper presents primal-dual augmented Lagrangian techniques for computing derivatives of both feasible and infeasible QPs.
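
To give a flavor of what differentiating through a QP means, the sketch below applies plain implicit differentiation of the KKT system to a simple equality-constrained QP and checks the result against finite differences; it is not the paper's method, whose augmented Lagrangian machinery additionally handles inequality-constrained and infeasible problems:

```python
import numpy as np

# Equality-constrained QP:  minimize 0.5 x'Hx + g'x  subject to  Ax = b
H = np.array([[4.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])  # KKT matrix

def solve_x(g):
    """Primal solution x(g) from the KKT system K [x; y] = [-g; b]."""
    return np.linalg.solve(K, np.concatenate([-g, b]))[:n]

# Implicit differentiation: K d[x;y]/dg = [-I; 0]  =>  dx/dg = top n rows.
dxy_dg = np.linalg.solve(K, -np.vstack([np.eye(n), np.zeros((m, n))]))
dx_dg = dxy_dg[:n]

# Sanity check against central finite differences.
eps = 1e-6
fd = np.column_stack([(solve_x(g + eps * e) - solve_x(g - eps * e)) / (2 * eps)
                      for e in np.eye(n)])
print(np.allclose(dx_dg, fd, atol=1e-6))  # True
```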

Recommended citation: Bambade et al. (2024). "Leveraging augmented-Lagrangian techniques for differentiating over infeasible quadratic programs in machine learning." International Conference on Learning Representations (ICLR). https://hal.laas.fr/PRAIRIE-IA/hal-04133055v1

Reactive Stepping for Humanoid Robots using Reinforcement Learning: Application to Standing Push Recovery on the Exoskeleton Atalante

Alexis Duburcq, Fabian Schramm, Guilhem Boéris, Nicolas Bredeche, Yann Chevaleyre

Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022

This paper presents a reinforcement learning framework that learns robust standing push recovery for bipedal robots and transfers to the real robot out of the box, requiring only instantaneous proprioceptive observations.

Recommended citation: Duburcq et al. (2022). "Reactive Stepping for Humanoid Robots using Reinforcement Learning: Application to Standing Push Recovery on the Exoskeleton Atalante." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://arxiv.org/abs/2203.01148