Title:
Learning in Inverse Optimization: Incenter Cost, Augmented Suboptimality Loss, and Algorithms.
Authors:
Zattoni Scroccaro, Pedro1 (AUTHOR) P.ZattoniScroccaro@tudelft.nl, Atasoy, Bilge2 (AUTHOR) b.atasoy@tudelft.nl, Mohajerin Esfahani, Peyman1 (AUTHOR) P.MohajerinEsfahani@tudelft.nl
Source:
Operations Research. Sep/Oct2025, Vol. 73 Issue 5, p2661-2679. 19p.
Database:
Business Source Elite

Further Information

Enhancing the Efficiency and Accuracy of Inverse Optimization

Inverse optimization (IO) is used to model the behavior of decision-making agents who solve optimization problems in response to external signals. Inspired by the geometry of IO problems, in "Learning in Inverse Optimization: Incenter Cost, Augmented Suboptimality Loss, and Algorithms," Zattoni Scroccaro, Atasoy, and Mohajerin Esfahani propose the "incenter" concept for solving IO problems, which, unlike previously proposed approaches, can be used to derive computationally tractable solutions to this modeling problem. They also propose a novel loss function for IO problems, together with an optimization algorithm tailored to it. Extensive numerical experiments showcase the improved efficiency and accuracy of the proposed IO formulations and algorithm.

In inverse optimization (IO), an expert agent solves an optimization problem parametric in an exogenous signal. From a learning perspective, the goal is to learn the expert's cost function given a data set of signals and corresponding optimal actions. Motivated by the geometry of the IO set of consistent cost vectors, we introduce the "incenter" concept, a new notion akin to the recently proposed circumcenter concept. Discussing the geometric and robustness interpretations of the incenter cost vector, we develop corresponding tractable convex reformulations; in contrast, we show that the circumcenter is equivalent to an intractable optimization program. We further propose a novel loss function, called the augmented suboptimality loss (ASL), as a relaxation of the incenter concept for problems with inconsistent data. Exploiting the structure of the ASL, we propose a novel first-order algorithm, which we name stochastic approximate mirror descent. This algorithm combines stochastic and approximate subgradient evaluations with mirror descent update steps and is provably efficient for IO problems with discrete feasible sets of high cardinality. We implement the IO approaches developed in this paper as a Python package called InvOpt. Our numerical experiments are reproducible, and the underlying source code is available as examples in the InvOpt package. [ABSTRACT FROM AUTHOR]
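To make the abstract's pipeline concrete, the following is a minimal sketch of an IO learning loop of the kind described: a linear hypothesis cost over a discrete feasible set, an ASL-style loss, and stochastic mirror descent updates. It uses only NumPy; all names (phi, asl_and_subgradient, mirror_descent_step, hamming) are illustrative assumptions, not the InvOpt API. The loss is written in a structured-hinge form and the mirror descent step as an exponentiated-gradient update on the simplex, both plausible readings of the abstract's high-level description rather than the authors' exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def phi(s, x):
        """Illustrative bilinear feature map for signal s and action x."""
        return np.outer(s, x).ravel()

    def asl_and_subgradient(theta, s, x_hat, feasible, dist):
        """Evaluate an ASL-style loss and one subgradient.

        The loss is the largest 'augmented gap' between the expert action
        x_hat and any feasible action x:
            max_x  <theta, phi(s, x_hat) - phi(s, x)> + dist(x_hat, x).
        This mirrors a structured hinge loss; the paper's exact ASL may differ.
        """
        f_hat = phi(s, x_hat)
        best_val, best_feat = -np.inf, None
        for x in feasible:
            f = phi(s, x)
            val = theta @ (f_hat - f) + dist(x_hat, x)
            if val > best_val:
                best_val, best_feat = val, f
        # best_val is nonnegative: x = x_hat gives zero gap and zero distance.
        g = f_hat - best_feat  # subgradient with respect to theta
        return best_val, g

    def mirror_descent_step(theta, g, eta):
        """Exponentiated-gradient update (mirror descent with the entropy
        mirror map), keeping theta on the probability simplex."""
        w = theta * np.exp(-eta * g)
        return w / w.sum()

    # Toy problem: discrete feasible set of all 0/1 action vectors.
    dim_s, dim_x = 3, 4
    feasible = [np.array(b) for b in np.ndindex(*(2,) * dim_x)]

    # Synthetic expert: minimizes a hidden linear cost <theta_true, phi(s, x)>.
    theta_true = rng.random(dim_s * dim_x)
    theta_true /= theta_true.sum()  # the expert's argmin is scale invariant
    dataset = []
    for _ in range(30):
        s = rng.random(dim_s) - 0.5  # centered signals give nontrivial optima
        x_hat = min(feasible, key=lambda x: theta_true @ phi(s, x))
        dataset.append((s, x_hat))

    hamming = lambda a, b: float(np.sum(a != b))

    # Stochastic mirror descent on the average ASL-style loss.
    theta = np.full(dim_s * dim_x, 1.0 / (dim_s * dim_x))
    for t in range(300):
        s, x_hat = dataset[rng.integers(len(dataset))]  # stochastic sample
        _, g = asl_and_subgradient(theta, s, x_hat, feasible, hamming)
        theta = mirror_descent_step(theta, g, eta=0.5 / np.sqrt(t + 1))

    avg = np.mean([asl_and_subgradient(theta, s, x, feasible, hamming)[0]
                   for s, x in dataset])
    print(f"average ASL-style loss after training: {avg:.3f}")

In this toy, the inner maximization over the feasible set is done exactly by enumeration; the appeal of the paper's stochastic approximate mirror descent is precisely that such evaluations may be stochastic and approximate, which matters when the discrete feasible set is too large to enumerate.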

Copyright of Operations Research is the property of INFORMS: Institute for Operations Research & the Management Sciences and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

Full text is not available via guest access.