Optimization, Control and Reinforcement Learning Session
9:00am-12:00pm, February 24 in person at CSL B02
In the past decade, many exciting research results have emerged from optimization, control, and reinforcement learning. A host of challenges arise when using data to solve real-time problems, which demands new thinking from both engineering and theoretical perspectives. This has spurred further development of, and intersections among, both classical and emerging disciplines, including classical control, optimization, and reinforcement learning.
The CSL Student Conference provides opportunities for students to gather, discuss their work, and collaborate on the advancement of their fields. This year, the Optimization, Control, and Reinforcement Learning session features a keynote speech by Prof. Nicolas Loizou, a prominent researcher in the area, and an invited talk by Jingqi Li, a rising star. We invite submissions of work related (but not restricted) to optimization, control, and reinforcement learning.
“Large-Scale Optimization for Machine Learning and Data Science”
Keynote Speech by Prof. Nicolas Loizou
Time: 11:00 am – 12:00 pm, February 24
Talk Abstract: Stochastic gradient descent (SGD) is the workhorse for training modern large-scale supervised machine learning models. In this talk, we will discuss recent developments in the convergence analysis of SGD and propose efficient and practical variants for faster convergence. We will start by presenting a general yet simple theoretical analysis describing the convergence of SGD under the arbitrary sampling paradigm. The proposed analysis describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form minibatches. The result holds under the weakest possible assumptions, providing for the first time the best combination of step-size and optimal minibatch size. We will also present a novel adaptive (no tuning needed) learning rate for SGD. We will introduce a stochastic variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method and explain why the proposed stochastic Polyak step-size (SPS) is an attractive choice for setting the learning rate of SGD. We will provide theoretical convergence guarantees for the new method in different settings, including strongly convex, convex, and non-convex functions, and demonstrate the strong performance of SGD with SPS compared to state-of-the-art optimization methods when training over-parameterized models.
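As a concrete illustration of the stochastic Polyak step-size mentioned in the abstract, the sketch below runs SGD with an SPS-style learning rate on a toy least-squares problem. This is an illustrative reconstruction, not the speaker's code: the capped step rule, the constants c and gamma_max, and the choice f_i* = 0 (reasonable in the noiseless, interpolating regime) are assumptions made for the example.

```python
import numpy as np

def sgd_sps(grad_f, f, X, y, x0, f_star=0.0, c=0.5, gamma_max=1.0,
            n_steps=2000, rng=None):
    """SGD with a stochastic Polyak step-size (SPS) -- illustrative sketch.

    Step rule (assumed form): gamma_t = min((f_i(x) - f_i*) / (c*||g||^2), gamma_max),
    where i is the sampled data point and g its stochastic gradient.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float)
    for _ in range(n_steps):
        i = rng.integers(len(y))           # sample one data point
        g = grad_f(x, X[i], y[i])          # stochastic gradient
        gnorm2 = g @ g
        if gnorm2 == 0.0:                  # already optimal on this sample
            continue
        gamma = min((f(x, X[i], y[i]) - f_star) / (c * gnorm2), gamma_max)
        x -= gamma * g
    return x

# Toy usage: per-sample loss f_i(x) = 0.5*(a_i^T x - y_i)^2, noiseless data,
# so the interpolation condition holds and f_i* = 0 for every sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = X @ x_true
f = lambda x, a, b: 0.5 * (a @ x - b) ** 2
grad_f = lambda x, a, b: (a @ x - b) * a
x_hat = sgd_sps(grad_f, f, X, y, x0=np.zeros(3))
```

Note that on this least-squares example with c = 0.5 and f_i* = 0, the SPS step reduces to a randomized-Kaczmarz-style projection onto the sampled equation, which is one intuition for why no tuning is needed.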
Biography: Nicolas Loizou is an Assistant Professor in the Department of Applied Mathematics and Statistics and the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University, where he leads the Optimization and Machine Learning Lab.
Prior to this, he was a Postdoctoral Research Fellow at Mila – Quebec Artificial Intelligence Institute and the Université de Montréal, from September 2019 to December 2021. He completed his Ph.D. studies in Optimization and Operational Research at the University of Edinburgh, School of Mathematics, in 2019. Before that, he received his undergraduate degree in Mathematics from the National and Kapodistrian University of Athens in 2014, and in 2015 obtained his M.Sc. degree in Computing from Imperial College London. During the fall of 2018, he was a research intern at Facebook AI Research, Montreal, Canada.
His research interests include large-scale optimization, machine learning, randomized numerical linear algebra, distributed and decentralized algorithms, game theory, and deep learning. His current research focuses on the theory and applications of convex and non-convex optimization in large-scale machine learning and data science problems. He has received several awards and fellowships, including the OR Society’s 2019 Doctoral Award (runner-up) for the “Most Distinguished Body of Research leading to the Award of a Doctorate in the field of Operational Research”, the IVADO Postdoctoral Fellowship, and the COAP 2020 Best Paper Award.
“Accommodating Intention Uncertainty in Dynamic Games”
Invited Talk by Jingqi Li
Time: 10:20 am – 11:00 am, February 24
Talk Abstract: In multi-agent dynamic games, the optimal strategy of each agent is determined by its cost function and the information pattern of the game. However, the cost of each agent may be unavailable to the other agents. This uncertainty poses a challenge to strategy design in safety-critical systems. In the first part of this talk, we will present our recent work on inferring the unknown and possibly nonconvex cost functions of the players in nonlinear feedback general-sum games, using only partial state observations and incomplete trajectory data. More specifically, we first propose an inverse feedback game loss function whose minimizer yields a feedback Nash equilibrium state trajectory closest to the observed data. Given the difficulty of obtaining the exact gradient, our main contribution in this part is an efficient gradient approximator, which enables a novel inverse feedback game solver that minimizes the loss using first-order optimization. In the second part of the talk, we focus on a class of games in robot motion planning where some agents’ cost functions are hard to infer. We model those uncertain agents as adversarial disturbances and propose a zero-sum game formulation. We then develop a novel deep RL-based Hamilton-Jacobi reachability method to compute the optimal strategy against the worst-case disturbance. In thorough empirical evaluations, we demonstrate that our methods converge reliably and accommodate intention uncertainty, either by inferring appropriate cost functions that capture agents’ intentions or by learning robust controllers that ensure safety even under the worst-case disturbance.
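To give a flavor of the worst-case reachability computation mentioned in the abstract, here is a heavily simplified grid-based Hamilton-Jacobi sketch using classical dynamic programming, not the deep RL method from the talk. The 1D dynamics, input bounds, unsafe set, grid, and horizon are all invented for illustration.

```python
import numpy as np

# 1D zero-sum avoid game:  x' = u + d,  |u| <= 0.5 (controller),
# |d| <= 1.0 (adversarial disturbance).  The controller must avoid the
# unsafe set |x| < 0.2; the disturbance tries to force the state into it.
# Discrete backward "avoid" update over N steps of length dt:
#     V <- min( l(x), max_u min_d V(x + dt*(u + d)) ),
# where l(x) = |x| - 0.2 is positive exactly on the safe states.  After the
# recursion, V(x) > 0 marks states that remain safe over the N*dt horizon
# even under the worst-case disturbance.
u_max, d_max, dt, N = 0.5, 1.0, 0.05, 20
xs = np.linspace(-1.0, 1.0, 201)
l = np.abs(xs) - 0.2
V = l.copy()

def V_at(V, x):
    # Linear interpolation of the value grid, clamped at the boundary.
    return np.interp(x, xs, V)

for _ in range(N):
    V_next = np.empty_like(V)
    for k, x in enumerate(xs):
        # Bang-bang inputs suffice for these control-affine dynamics.
        best_u = max(
            min(V_at(V, x + dt * (u + d)) for d in (-d_max, d_max))
            for u in (-u_max, u_max)
        )
        V_next[k] = min(l[k], best_u)
    V = V_next
```

Here the disturbance out-muscles the controller by a net 0.5 per unit time, so over the 1.0 s horizon the zero level set of V retreats: the guaranteed-safe region shrinks from |x| > 0.2 to roughly |x| > 0.7, which is the kind of robust safety certificate the talk's method scales to high dimensions.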
Biography: Jingqi Li is a Ph.D. student in Electrical Engineering and Computer Sciences at the University of California, Berkeley, advised by Prof. Claire Tomlin and Prof. Somayeh Sojoudi. His research interests include control, game theory, and reinforcement learning. Specifically, he aims to develop theoretical and algorithmic tools for controlling safety-critical systems, with applications in multi-agent control, human-robot interaction, and safe robot learning. Previously, he obtained an M.S. degree in Electrical Engineering from the University of Pennsylvania in 2019, where he received an Outstanding Research Award for his contribution to graph-theoretic tools for large-scale network controllability analysis. He earned a B.S. degree in Aerospace Engineering from Beijing University of Aeronautics and Astronautics, China, in 2016.
Time: 9:00 am – 10:20 am, February 24
“Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations”
Karan Suresh Jagdale
“Optimal Routing of Modular Agents on a Graph”
“Personalized Pricing with Group Fairness Constraint”
“Nontrivial Holonomy in Gossip Networks”
For more information, please contact the session chair, Xingang Guo (firstname.lastname@example.org).