Inverse Optimal Control & Robotic Learning from Demonstration

A proposed workshop in conjunction with Robotics: Science and Systems 2013.

In many robotic domains, it is much easier to demonstrate appropriate behavior (through, e.g., teleoperation, haptic feedback, or motion capture) than it is to program a controller that produces the same behavior. Driven by this observation, research in learning from demonstration and inverse optimal control has grown rapidly in recent years. This paradigm recasts reinforcement learning problems as supervised learning tasks, in which advances in machine learning enable robots to learn the desired policy, utility function, and/or dynamics of the robotic domain directly and efficiently from observed behavior. For example, inverse optimal control aims to identify the unknown objective function or policy that produces a given solution of an optimal control problem. Input data can come from measurements related to the system's state, e.g., from motion capture, IMUs, or force plates. The identified function can then be used to generate optimal motions for robots. An important goal of this workshop is to present and discuss the state of the art in solution methods for this challenging class of problems.
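To make the inverse optimal control idea concrete, here is a minimal, purely illustrative sketch of maximum-entropy inverse reinforcement learning on a hypothetical 5-state chain MDP. The MDP, the one-hot state features, the demonstrations, and the learning rate are all assumptions chosen for illustration; they are not part of the workshop program or any speaker's method. The recovered reward is the one whose soft-optimal policy matches the expert's state-visitation counts.

```python
import numpy as np

# Hypothetical 5-state chain MDP: action 0 moves left, action 1 moves right.
n_states, n_actions, T = 5, 2, 7  # T = trajectory length in states

def step(s, a):
    """Deterministic dynamics on the chain."""
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

# Assumed expert demonstrations: state sequences that walk toward state 4.
expert = [[0, 1, 2, 3, 4, 4, 4],
          [1, 2, 3, 4, 4, 4, 4]]

def empirical_counts(trajs):
    """Average state-visit counts under the expert (one-hot features)."""
    f = np.zeros(n_states)
    for tr in trajs:
        for s in tr:
            f[s] += 1.0
    return f / len(trajs)

def soft_policy(w):
    """Finite-horizon soft value iteration; returns pi[s, a]."""
    V = np.zeros(n_states)
    for _ in range(T):
        Q = np.array([[w[s] + V[step(s, a)] for a in range(n_actions)]
                      for s in range(n_states)])
        V = np.logaddexp(Q[:, 0], Q[:, 1])  # soft max over actions
    return np.exp(Q - V[:, None])

def expected_counts(pi, start):
    """Expected state-visit counts of pi over a length-T rollout."""
    D, total = start.copy(), start.copy()
    for _ in range(T - 1):
        D_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                D_next[step(s, a)] += D[s] * pi[s, a]
        D = D_next
        total += D
    return total

# Start-state distribution taken from the demonstrations.
start = np.zeros(n_states)
for tr in expert:
    start[tr[0]] += 1.0 / len(expert)

# Gradient ascent on the max-ent likelihood:
# grad = expert feature counts - learned policy's expected counts.
f_expert = empirical_counts(expert)
w = np.zeros(n_states)
for _ in range(500):
    pi = soft_policy(w)
    w += 0.05 * (f_expert - expected_counts(pi, start))

print("recovered reward peaks at state", int(np.argmax(w)))
```

With one weight per state, the learned reward ends up highest at the state the expert drives toward, and the soft-optimal policy under that reward reproduces the demonstrated behavior.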

In this workshop, via a mix of invited talks, posters, and discussion, we seek to bring together experts in system identification, reinforcement learning, and inverse optimal control to explore the theoretical and applied aspects of learning from demonstration and inverse optimal control. We plan to discuss open problems, state-of-the-art solution methods, and interesting applications.

Important Dates:

  • May 10th: Submission deadline
  • May 17th: Author notification
  • June 27th: Oral/poster presentations
Submission email: send extended abstracts and papers (1–8 pages) to:


Workshop Schedule:

0900 Katja Mombaur - Inverse optimal control methods and applications to human movement analysis
0930 Byron Boots - Spectral Approaches to Learning Dynamic Models from Data
1000 Sergey Levine - Learning from Demonstration for Simulation of Human Behaviors
1030 < coffee >
1100 Aude Billard - Learning what is important to imitate - estimating the objective function
1130 Jan Peters - Towards Robot Skill Learning: From Simple Tasks to Table Tennis
1200 Tim Bretl - Inverse optimal control and the importance of problem formulation
1230 < lunch >
1400 Brian Ziebart - Predictive inverse optimal control via maximum causal entropy
1430 Stefan Schaal - Inverse Reinforcement Learning for Manipulation
1500 Sidd Srinivasa - Inverse optimal control in human robot interaction
1530 < coffee >
1600 posters

  • Local Path Integral Inverse Reinforcement Learning for Autonomous Robotic Manipulation - Mrinal Kalakrishnan, Peter Pastor, Ludovic Righetti, and Stefan Schaal
  • Learning Navigation Policies from Human Demonstrations - Henrik Kretzschmar, Markus Kuderer, and Wolfram Burgard
  • Inverse Optimal Control for Humanoid Locomotion - Taesung Park and Sergey Levine
  • Human-like Navigation: Socially Adaptive Path Planning in Dynamic Environments - Beomjoon Kim and Joelle Pineau
  • A Geometry-Based Approach for Learning from Demonstrations for Manipulation - John D. Schulman, Jonathan Ho, Cameron Lee, Pieter Abbeel
  • Reward-Regularized Classification for Apprenticeship Learning - Bilal Piot, Matthieu Geist, Olivier Pietquin
Workshop Organizers:
  • Byron Boots, University of Washington
  • Tim Bretl, University of Illinois at Urbana-Champaign
  • Katja Mombaur, Ruprecht-Karls-Universität Heidelberg
  • Brian Ziebart, University of Illinois at Chicago