Robotics is a rapidly evolving field with great potential to benefit society. Robots enhance efficiency, safety, and precision across industries, contribute to scientific exploration, improve medical practice, and support education and research. Consequently, developing adaptive, resilient, and safe robotic systems requires multi-disciplinary synergy across broad areas of research, including but not limited to control, perception, and decision-making. We are now accepting abstract submissions to present at the Robotics Session and engage in a meaningful exchange of views with fellow researchers. We encourage robotics research in software or hardware. Topics of discussion include:

1. Robot learning algorithms, whether from demonstrations or through reinforcement learning.
2. Novel robot designs that enable new robot capabilities.
3. Robot perception using multi-sensory data.
4. Representation learning for efficient robotic decision-making.
5. Safety of autonomous robotic systems.

Keynote Speaker – Dr Andy Zeng, Google DeepMind

“From words to actions”

Time: 11 AM–12 PM, February 15

Abstract: The rise of recent Foundation models (and applications, e.g., ChatGPT and Gemini) offers an exciting glimpse into the capabilities of large deep networks trained on Internet-scale data. They hint at a possible blueprint for building generalist robot brains that can do anything, anywhere, for anyone. Nevertheless, robot data is expensive – and until we can bring robots out into the world (already) doing useful things in unstructured places, it may be challenging to match the same amount of diverse data being used to train, e.g., large language models today. In this talk, I will briefly discuss some of the lessons we’ve learned while scaling real robot data collection, how we’ve been thinking about Foundation models, and how we might bootstrap off of them (and modularity) to make our robots useful sooner.

Biography: Andy Zeng is a Staff Research Scientist at Google DeepMind working on machine learning and robotics. He received his Bachelor’s in Computer Science and Mathematics at UC Berkeley, and his PhD in Computer Science at Princeton. He is interested in building algorithms that enable machines to intelligently interact with the world and improve themselves over time. Andy received Outstanding Paper Awards from CoRL ’23, ICRA ’23, T-RO ’20, and RSS ’19, and has been a finalist for paper awards at RSS ’23, CoRL ’20–’22, ICRA ’20, RSS ’19, and IROS ’18. He led perception as part of Team MIT-Princeton at the Amazon Picking Challenge ’16 and ’17. Andy is a recipient of the Princeton SEAS Award for Excellence, Japan Foundation Paper Award, NVIDIA Fellowship ’18, and Gordon Y.S. Wu Fellowship in Engineering and Wu Prize. His work has been featured in the press, including the New York Times, BBC, and Wired.

Invited Student Speaker – Jiayuan Mao, Massachusetts Institute of Technology

“Building Generalist Robots with Integrated Learning and Planning”

Time: 9–9:40 AM, February 15

Abstract: In this talk, I will discuss an integrated learning and planning approach for flexible and general robotic manipulators. I will primarily focus on the technical idea of task and motion planning with compositional abstract representations. In essence, I will discuss two important spatio-temporal structures in decision-making: factorization and sparsity structures in state representations (the physical state can be represented as a collection of object states and their relationships), and hierarchical structures in plans (a high-level goal can be decomposed into subgoals). I will talk about the design of such representations and the overall architecture in the context of robot manipulation, present methods for learning them automatically from data, and showcase various types of generalization enabled by such frameworks.

Biography: Jiayuan Mao is a PhD candidate at MIT EECS, advised by Prof. Josh Tenenbaum and Prof. Leslie Kaelbling. Previously, she obtained her Bachelor’s degree from the Yao Class, Tsinghua University. Jiayuan’s research goal is to build machines that can continually learn concepts (e.g., properties, relations, skills, rules, and algorithms) from their experiences and apply them for reasoning and planning in the physical world. The central theme of Jiayuan’s research is to decompose the learning problem into learning a vocabulary of neuro-symbolic concepts. The symbolic part describes their structures and how different concepts can be composed; the neural part handles grounding in perception and physics.

Time: 9:45-10:50 AM, February 15

Albert Zhai

Shuijing Liu

Arjun Gupta

Aamir Hasan