Leveraging augmented-Lagrangian techniques for differentiating over infeasible quadratic programs in machine learning

Published in the Twelfth International Conference on Learning Representations (ICLR), 2024

Recommended citation: Bambade et al. (2024). "Leveraging augmented-Lagrangian techniques for differentiating over infeasible quadratic programs in machine learning." ICLR 2024. https://hal.laas.fr/PRAIRIE-IA/hal-04133055v1

Optimization layers within neural network architectures have become increasingly popular for their ability to solve a wide range of machine learning tasks and to model domain-specific knowledge. However, designing optimization layers requires careful consideration, as the underlying optimization problems might be infeasible during training. Motivated by applications in learning, control, and robotics, this work focuses on convex quadratic programming (QP) layers. The specific structure of this type of optimization layer can be efficiently exploited for faster computations while still allowing rich modeling capabilities. We leverage primal-dual augmented Lagrangian techniques for computing derivatives of both feasible and infeasible QPs. As a byproduct, not requiring feasibility allows more flexibility in the QP to be learned. The effectiveness of our approach is demonstrated on a few standard learning experiments, achieving computations three to ten times faster than alternative state-of-the-art methods while being more accurate and numerically robust. Along with these contributions, we provide an open-source C++ software package, QPLayer, which efficiently differentiates convex QPs and can be interfaced with modern learning frameworks.
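To make the idea of a differentiable QP layer concrete, below is a minimal PyTorch sketch for the simplest case: an equality-constrained QP, whose solution is given in closed form by its KKT system, so gradients with respect to the QP data flow through `torch.linalg.solve` by standard autodiff. This is an illustration only, not the paper's algorithm: QPLayer handles general inequality-constrained and even infeasible convex QPs via augmented Lagrangian techniques, and the class name `EqQPLayer` here is hypothetical.

```python
import torch

class EqQPLayer(torch.nn.Module):
    """Differentiable equality-constrained QP layer (illustrative sketch):
        minimize    0.5 * z^T Q z + p^T z
        subject to  A z = b
    Solved via the KKT system [[Q, A^T], [A, 0]] [z; lam] = [-p; b],
    so gradients w.r.t. (Q, p, A, b) come from differentiating the solve.
    """
    def forward(self, Q, p, A, b):
        n, m = Q.shape[0], A.shape[0]
        # Assemble the KKT matrix; index assignment keeps the autograd graph.
        K = torch.zeros(n + m, n + m, dtype=Q.dtype)
        K[:n, :n] = Q
        K[:n, n:] = A.T
        K[n:, :n] = A
        rhs = torch.cat([-p, b])
        sol = torch.linalg.solve(K, rhs)
        return sol[:n]  # primal solution z*

# Usage: gradients of a downstream loss reach the QP data.
n, m = 3, 1
L = torch.randn(n, n)
Q = (L @ L.T + torch.eye(n)).requires_grad_(True)  # positive definite
p = torch.randn(n, requires_grad=True)
A = torch.ones(m, n)  # full row rank, so the KKT matrix is nonsingular
b = torch.ones(m)
z = EqQPLayer()(Q, p, A, b)
z.sum().backward()
print(p.grad)
```

With inequality constraints there is no such closed form, and the solution map is only piecewise differentiable; that harder setting, including QPs that become infeasible during training, is what the paper's augmented Lagrangian approach addresses.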

Download paper here