2:00pm-5:00pm, February 23 in person at CSL B02
Long positioned as a solution to labor shortages and hazardous work, robotics has been integrated into confined and controlled industrial settings for decades. However, as robots move into less structured environments, new models and methods are necessary to ensure that they operate effectively and safely. Because robotics is a multidisciplinary field, this requires innovation in perception and control algorithms, as well as novel decision-making strategies. Further, as robots take on more collaborative and interactive roles with humans, understanding the dynamics behind their interactions becomes increasingly relevant.
The Robotics session, taking place on February 23 from 2:00 pm to 5:00 pm, will feature a keynote speech by Prof. Deepak Pathak of CMU. The session will cover topics including, but not limited to: (1) robot learning algorithms, both from demonstrations and through reinforcement learning; (2) environment representations for efficient robotic decision-making; (3) human-robot interaction; and (4) generalizability and adaptability to novel tasks and environments.
“Robot Learning In The Wild: Continually Improving by Watching and Practicing”
Time: 2:00 pm to 3:00 pm, February 23
Talk Abstract: How can we train a robot that can generalize to perform thousands of tasks in thousands of environments? This poses a chicken-and-egg problem: to train robots for generalization, we need large amounts of robotic data from diverse environments, but it is impractical to collect such data unless we can deploy robots that generalize. Passive human videos on the internet can help alleviate this issue by providing diverse scenarios to pretrain robotic skills. However, just watching humans is not enough; the robot needs to learn and improve by autonomously practicing in the real world and adapting its learning to new scenarios. We will unify these three mechanisms — learning by watching others (passive learning), practicing by exploration (curiosity), and adapting already learned skills in real time (adaptation) — to define a continually adaptive robotic framework. I will demonstrate the potential of this framework for scaling up robot learning via case studies of controlling dexterous robotic hands from monocular vision, dynamic legged robots walking from vision on unseen challenging hikes, and mobile manipulators performing many diverse manipulation tasks in the wild.
Biography: Deepak Pathak is a faculty member in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. from UC Berkeley, and his research spans computer vision, machine learning, and robotics. He is a recipient of the Okawa Research Award, the IIT Kanpur Young Alumnus Award, a CoRL Paper Award, and faculty awards from Google, Samsung, Sony, and GoodAI. Deepak’s research has been featured in popular press outlets including The Economist, The Wall Street Journal, Forbes, Quanta Magazine, The Washington Post, CNET, Wired, and MIT Technology Review, among others. Earlier, he received his Bachelor’s degree from IIT Kanpur with a Gold Medal in Computer Science. He co-founded VisageMap Inc., later acquired by FaceFirst Inc.
“Towards Robust and Adaptable Real-World Reinforcement Learning”
Abstract: The past decade has witnessed rapid development of reinforcement learning (RL) techniques. However, there is still a gap between employing RL in simulators and applying RL models to challenging and diverse real-world systems. On the one hand, existing RL approaches have been shown to be fragile under perturbations in the environment, making it risky to deploy RL models in real-world applications where unexpected noise and interference exist. On the other hand, most RL methods focus on learning a policy in a fixed environment and need to re-train the policy if the environment changes. For real-world environments whose specifications and dynamics can be ever-changing, these methods become less practical, as they require a large amount of data and computation to adapt to a changed environment. This talk focuses on these two challenges and introduces a series of solutions to improve the robustness and adaptability of RL methods. For robustness, the proposed approaches explore the vulnerability of RL agents from multiple aspects and achieve state-of-the-art performance in robustifying RL policies. For adaptability, the proposed transfer learning and pretraining frameworks address challenging multi-task learning problems that are important yet rarely studied.
Biography: Yanchao Sun is a fifth-year Ph.D. student at the University of Maryland, College Park, advised by Dr. Furong Huang. Her research interests lie in reinforcement learning, adversarial learning, representation learning, transfer learning, and their intersections. Yanchao’s Ph.D. thesis focuses on improving the robustness and adaptability of reinforcement learning algorithms for sequential decision-making. Yanchao has published over 10 papers at top conferences and has won a best paper award for her work on robust reinforcement learning.
“Beyond RGB: Scene-Property Synthesis With Neural Radiance Fields”
“One-shot Visual Imitation via Attributed Waypoints and Demonstration Augmentation”
“Towards Systems for Adaptive Modes of Communication between Drivers and their Advanced Driver Assistive Systems”
“FAST: Few-shot Adaptation for Scooping Tasks”
For more information, please contact the session chairs, João Marques (email@example.com) and Shivani Kamtikar (firstname.lastname@example.org).