Aerospace Controls Laboratory

Robust and Interpretable RL for Navigation in Pedestrian Crowds

Björn Lütjens, Michael Everett

Most autonomous vehicles rely heavily on black-box predictions from deep neural networks (DNNs). However, DNNs can produce unpredictably poor predictions on test data that lies far from the training distribution, and such models tend to be overconfident in their predictions on unseen data.

Predictions that are robust to a distributional shift from training to test time are essential for safety-critical applications, such as collision avoidance in pedestrian crowds.

Measures of model uncertainty can be used to identify data that the model has not been trained on. Our work “Safe Reinforcement Learning with Model Uncertainty Estimates” embeds model uncertainty estimates of neural networks in a Safe Reinforcement Learning framework. The resulting collision avoidance policy is sensitive to pedestrians that are observed in a previously unseen manner (e.g. runner, sheep, sensor failure, …). It identifies the regions around these pedestrians as novel and takes cautious navigation actions to avoid them.
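
The idea can be illustrated with a minimal PyTorch sketch (not the code from the publications below): a collision-prediction network keeps dropout active at test time (MC-dropout), so repeated stochastic forward passes give both a mean collision probability and a spread; the spread serves as a model-uncertainty proxy, and candidate actions are penalized by it so the policy behaves cautiously near unfamiliar observations. The network architecture, the helpers `mc_dropout_collision` and `cautious_action`, and the penalty weight `lam` are illustrative assumptions, not the implementation used in the paper.

```python
# Minimal sketch: MC-dropout uncertainty for a collision predictor,
# plus uncertainty-penalized action selection.
import torch
import torch.nn as nn

class CollisionPredictor(nn.Module):
    """Predicts a collision probability for a (state, action) pair."""
    def __init__(self, state_dim=8, action_dim=2, hidden=64, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Dropout(p_drop),               # kept active at test time (MC-dropout)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def mc_dropout_collision(model, state, action, n_samples=20):
    """Mean and std of collision probability over stochastic dropout passes."""
    model.train()                             # keep dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(state, action) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)        # std acts as the model-uncertainty proxy

def cautious_action(model, state, candidate_actions, lam=2.0):
    """Pick the candidate action with the lowest uncertainty-penalized risk."""
    best, best_cost = None, float("inf")
    for a in candidate_actions:
        p_coll, sigma = mc_dropout_collision(model, state, a)
        cost = p_coll + lam * sigma           # inflate risk for unfamiliar inputs
        if cost < best_cost:
            best, best_cost = a, cost.item()
    return best

# Example: choose among a few velocity commands for one observed state.
state = torch.randn(1, 8)                     # hypothetical pedestrian observation
candidates = [torch.tensor([[vx, vy]]) for vx in (0.0, 0.5, 1.0) for vy in (-0.5, 0.0, 0.5)]
model = CollisionPredictor()
print(cautious_action(model, state, candidates))
```

Bootstrapped ensembles of networks are a common alternative to MC-dropout for obtaining the same kind of uncertainty estimate, and they can be swapped in without changing the action-selection logic.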

Video

Related Publications

  • Lütjens, B., Everett, M., and How, J. P., “Safe Reinforcement Learning with Model Uncertainty Estimates,” IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, 2019.
  • Lütjens, B., Everett, M., and How, J. P., “Safe Reinforcement Learning with Model Uncertainty Estimates,” Machine Learning in Robot Motion Planning Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.