{"id":1352,"date":"2025-09-01T07:04:28","date_gmt":"2025-09-01T07:04:28","guid":{"rendered":"https:\/\/scitope.com\/ait25\/?page_id=1352"},"modified":"2025-09-09T19:47:32","modified_gmt":"2025-09-09T19:47:32","slug":"prof-yinyu-ye","status":"publish","type":"page","link":"https:\/\/scitope.com\/ait25\/?page_id=1352","title":{"rendered":"Prof. Yinyu Ye"},"content":{"rendered":"<p>[vc_row][vc_column][vc_single_image image=&#8221;1353&#8243; alignment=&#8221;center&#8221; style=&#8221;vc_box_circle_2&#8243;][vc_column_text]<\/p>\n<h5 style=\"text-align: center;\"><span style=\"font-size: 20px;\"><em>Stanford University, USA<\/em><\/span><\/h5>\n<p>[\/vc_column_text][vc_column_text]<\/p>\n<h2 style=\"text-align: center;\"><span data-olk-copy-source=\"MessageBody\">Mathematical Optimization for AI\u00a0<\/span><\/h2>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<span style=\"font-size: 15px;\"><strong>Abstract<\/strong><\/span><\/p>\n<p><span data-olk-copy-source=\"MessageBody\">This talk presents several mathematical optimization and mathematical programming problems, models, and algorithms for AI, such as LLM training, tuning, and inference. In particular, we describe how classic optimization models\/theories, such as online resource allocation and gradient-conditioning, may be applied to accelerate and improve the training\/tuning\/inference processes widely used in LLMs.<\/span><\/p>\n<p><span style=\"font-size: 15px;\"><strong>Biography<\/strong><\/span><\/p>\n<p>Yinyu Ye, the 2009 John von Neumann Theory Prize recipient and formerly the K.T. Li Professor at Stanford University, is now a Visiting Professor at Corvinus University and Professor Emeritus at Stanford. His current research topics include Continuous and Discrete Optimization, Data Science and Applications, Numerical Algorithm Design and Analyses, Algorithmic Game\/Market Equilibrium, and Operations Research and Management Science. 
He was one of the pioneers of Interior-Point Methods, Conic Linear Programming, Distributionally Robust Optimization, Online Linear Programming and Learning, Algorithm Analyses for Reinforcement Learning\/Markov Decision Processes, and Nonconvex Optimization. He and his students have received numerous scientific awards; he himself has received the 2006 INFORMS Farkas Prize (inaugural recipient) for fundamental contributions to optimization, the 2009 John von Neumann Theory Prize for fundamental sustained contributions to theory in Operations Research and the Management Sciences, the inaugural 2012 ISMP Tseng Lectureship Prize (awarded every three years) for outstanding contributions to continuous optimization, and the 2014 SIAM Optimization Prize (awarded every three years), among others.[\/vc_column_text][vc_column_text]<\/p>\n<h3 style=\"text-align: center;\"><a href=\"https:\/\/scholar.google.hu\/citations?user=BgOXDogAAAAJ&amp;hl=hu&amp;oi=ao\"><span style=\"color: #db931b;\">Scholar Profile<\/span><\/a><\/h3>\n<p>[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_single_image image=&#8221;1353&#8243; alignment=&#8221;center&#8221; style=&#8221;vc_box_circle_2&#8243;][vc_column_text] Stanford University, USA [\/vc_column_text][vc_column_text] Mathematical Optimization for AI\u00a0 [\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]Abstract This talk presents several mathematical optimization and mathematical programming problems, models, and algorithms for AI, such as LLM training, tuning, and inference. 
In particular, we describe how classic optimization models\/theories, such as online resource allocation and gradient-conditioning, may be [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"footnotes":""},"class_list":["post-1352","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages\/1352","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1352"}],"version-history":[{"count":5,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages\/1352\/revisions"}],"predecessor-version":[{"id":1361,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages\/1352\/revisions\/1361"}],"wp:attachment":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1352"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}