Stanford University, USA

Mathematical Optimization for AI 

Abstract

This talk presents several mathematical optimization and mathematical programming problems, models, and algorithms for AI, such as LLM training, tuning, and inference. In particular, we describe how classic optimization models and theories, such as online resource allocation and gradient conditioning, may be applied to accelerate and improve the training, tuning, and inference processes popularly used in LLMs.
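As a rough illustration of the gradient-conditioning idea mentioned above (this sketch is not from the talk itself; the problem and step sizes are invented for illustration), a diagonal preconditioner can dramatically speed up gradient descent on an ill-conditioned objective. Here is a minimal self-contained example on a quadratic f(x) = 0.5 * (a1*x1^2 + a2*x2^2):

```python
# Illustrative sketch: plain vs. preconditioned gradient descent on an
# ill-conditioned quadratic f(x) = 0.5 * (a[0]*x[0]**2 + a[1]*x[1]**2).
# All constants here are hypothetical, chosen only to show the effect.

def grad(x, a):
    # Gradient of the quadratic: df/dx_i = a[i] * x[i].
    return [a[i] * x[i] for i in range(len(x))]

def gd(x0, a, lr, steps):
    # Plain gradient descent with a single scalar step size.
    x = list(x0)
    for _ in range(steps):
        g = grad(x, a)
        x = [x[i] - lr * g[i] for i in range(len(x))]
    return x

def preconditioned_gd(x0, a, steps):
    # Diagonal preconditioning: scale each coordinate's step by its
    # inverse curvature 1/a[i], equalizing progress across coordinates.
    x = list(x0)
    for _ in range(steps):
        g = grad(x, a)
        x = [x[i] - (1.0 / a[i]) * g[i] for i in range(len(x))]
    return x

a = [100.0, 1.0]   # condition number 100
x0 = [1.0, 1.0]

# The scalar step size is limited by the largest curvature (1/100 here),
# so the flat coordinate x[1] converges slowly.
plain = gd(x0, a, lr=0.01, steps=50)
precond = preconditioned_gd(x0, a, steps=50)
```

On a quadratic, the preconditioned step exactly inverts the curvature and reaches the minimizer immediately, while plain gradient descent still carries visible error in the flat direction after 50 iterations; adaptive methods used in LLM training (e.g., Adagrad- and Adam-style updates) apply the same principle with estimated, rather than exact, curvature.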

Biography

Yinyu Ye, the 2009 John von Neumann Theory Prize recipient and formerly the K.T. Li Professor at Stanford University, is now a Visiting Professor at Corvinus and Professor Emeritus at Stanford. His current research topics include Continuous and Discrete Optimization, Data Science and Applications, Numerical Algorithm Design and Analysis, Algorithmic Game/Market Equilibrium, Operations Research and Management Science, etc. He was one of the pioneers of Interior-Point Methods, Conic Linear Programming, Distributionally Robust Optimization, Online Linear Programming and Learning, Algorithm Analysis for Reinforcement Learning/Markov Decision Processes, Nonconvex Optimization, etc. He and his students have received numerous scientific awards; he himself has received the 2006 INFORMS Farkas Prize (inaugural recipient) for fundamental contributions to optimization, the 2009 John von Neumann Theory Prize for fundamental sustained contributions to theory in Operations Research and the Management Sciences, the inaugural 2012 ISMP Tseng Lectureship Prize for outstanding contributions to continuous optimization (awarded every three years), the 2014 SIAM Optimization Prize (awarded every three years), etc.