2020-01-10 | Jonathan Yu-Meng Li: Learning the choice function: a non-parametric inverse optimization approach
One common assumption behind many decision models is that a choice function, i.e., a function used to rank prospects, is readily available. Specifying a choice function, however, is not necessarily straightforward in many applications. For instance, in risk minimization problems, the function must accurately represent an individual's preference over random variables, yet that preference is often not directly observable and can only be partially known through the decisions the individual is observed to make. The question of how to infer a choice function from observed decisions leads to the study of inverse optimization, whose goal is to determine a choice function that renders the observed decisions (approximately) optimal. Most existing studies rely on parametric assumptions about the choice function to establish the tractability of the inverse optimization problem. This, unfortunately, can lead to a biased estimate of the choice function. In this talk, I will start by presenting a general inverse optimization framework and then show how the problem can be efficiently solved as a convex optimization problem without resorting to any parametric assumption. Our method exploits the theory of conjugate duality, which provides the necessary characterization of a function from both primal and dual perspectives. Finally, we stress the "data-driven" aspect of our approach and demonstrate, through an example of learning risk measures, the convergence behavior of the learning process.
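To make the inverse-optimization idea concrete, here is a minimal, hypothetical sketch. Unlike the non-parametric method described in the talk, it deliberately assumes a *linear* choice function (the simplest parametric case) so that the core loss is visible: given observed choices from known feasible sets, we seek weights w that minimize the total suboptimality of those choices, a convex function of w, via projected subgradient descent. All names (`w_true`, `feasible_sets`, `suboptimality`, the simplex projection) are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the decision maker ranks prospects with an unknown
# linear choice function x -> w_true @ x; we only see which prospect they
# pick from each feasible set.
w_true = np.array([0.6, 0.3, 0.1])
feasible_sets = [rng.normal(size=(8, 3)) for _ in range(50)]
observed = [X[np.argmax(X @ w_true)] for X in feasible_sets]

def suboptimality(w):
    """Convex loss: sum_i [ max_{x in X_i} w.x - w.x_i_obs ] >= 0."""
    return sum((X @ w).max() - x_obs @ w
               for X, x_obs in zip(feasible_sets, observed))

def project_simplex(v):
    """Euclidean projection onto the probability simplex (rules out w = 0)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Projected subgradient descent with a decaying step; keep the best iterate.
w = np.full(3, 1.0 / 3.0)
w_best, loss_best = w, suboptimality(w)
for t in range(500):
    # A subgradient of each max-term is (maximizer - observed choice).
    g = np.mean([X[np.argmax(X @ w)] - x_obs
                 for X, x_obs in zip(feasible_sets, observed)], axis=0)
    w = project_simplex(w - 0.2 / np.sqrt(t + 1) * g)
    loss = suboptimality(w)
    if loss < loss_best:
        w_best, loss_best = w, loss

consistent = sum(np.argmax(X @ w_best) == np.argmax(X @ w_true)
                 for X in feasible_sets)
print(f"suboptimality {loss_best:.4f}; "
      f"{consistent}/50 observed choices rendered optimal")
```

The talk's contribution is precisely to drop the linear (parametric) assumption made here: conjugate duality characterizes a general choice function from primal and dual sides, so the same "render observed decisions approximately optimal" principle can be posed as a convex problem over a non-parametric function class.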
Jonathan Yu-Meng Li is an associate professor in the Telfer School of Management at the University of Ottawa, Canada. He received his Ph.D. in operations research from the University of Toronto. His research focuses on the interplay between optimization theory (stochastic, robust, and inverse optimization, as well as hybrids thereof) and decision theory, and on its applications in finance and operations management, with a particular emphasis on risk management.