We introduce a machine-learning-based approach to solving convex optimization problems, inspired by the bundle method. While bundle methods generally converge faster than gradient descent, they require manual parameter tuning to achieve good algorithmic behavior; our framework eliminates this hyperparameter tuning, addressing a key limitation of the bundle method. The predictions generated by our framework can serve either as approximations of the solutions produced by iterative algorithms or as informed starting points for them. We present numerical results for solving the Lagrangian dual arising from the Lagrangian relaxation of a mixed-integer linear program (MILP), requiring no manual hyperparameter tuning and achieving performance comparable to the classic approach.
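The learned framework itself is not specified above; as background, the following is a minimal 1-D sketch of the classic proximal bundle method it builds on. The toy objective |x - 3|, the proximal weight `mu`, and all function names are illustrative assumptions, not part of the original work; the master problem is solved here by ternary search, which is valid because the model-plus-proximal-term objective is convex in one dimension.

```python
def f(x):          # toy nonsmooth convex objective (assumption): f(x) = |x - 3|
    return abs(x - 3.0)

def subgrad(x):    # a subgradient of f at x
    return 1.0 if x >= 3.0 else -1.0

def bundle_method(x0=0.0, mu=1.0, iters=30):
    """Minimal proximal bundle sketch: accumulate cutting planes
    f(x) >= f(x_i) + g_i * (x - x_i) and, at each iteration, minimize
    the piecewise-linear model plus a proximal term (mu/2)(x - center)^2."""
    center = x0
    cuts = [(f(x0), subgrad(x0), x0)]          # bundle of (f_i, g_i, x_i)
    for _ in range(iters):
        def model(x):
            return max(fi + gi * (x - xi) for fi, gi, xi in cuts) \
                   + 0.5 * mu * (x - center) ** 2
        # Master problem: the objective is convex in 1-D, so ternary search works.
        lo, hi = center - 10.0, center + 10.0
        for _ in range(200):
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if model(m1) < model(m2):
                hi = m2
            else:
                lo = m1
        x_new = 0.5 * (lo + hi)
        cuts.append((f(x_new), subgrad(x_new), x_new))
        if f(x_new) < f(center):               # serious step: move the center
            center = x_new
    return center

print(round(bundle_method(), 4))               # converges to the minimizer x = 3
```

A learned variant, as the abstract suggests, would replace hand-tuned quantities such as `mu` (or the serious-step rule) with predictions, or use the prediction directly as the starting point `x0`.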

