Aerospace Controls Laboratory

Decentralized Control of Multi-Robot Partially Observable Markov Decision Processes using Belief Space Macro-actions

Shayegan Omidshafiei, Ali-akbar Agha-mohammadi, Christopher Amato, Shih-Yuan Liu, Miao Liu


This work focuses on solving multi-robot planning problems in continuous spaces with partial observability given a high-level domain description. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems. However, representing and solving Dec-POMDPs is often intractable for large problems.
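To make the model concrete, the sketch below lays out the components of the standard Dec-POMDP tuple ⟨I, S, {Aᵢ}, T, R, {Ωᵢ}, O⟩ with a toy two-robot coordination problem. The class and example are illustrative assumptions, not code or notation from the paper itself.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative container for the standard Dec-POMDP tuple
# <I, S, {A_i}, T, R, {Omega_i}, O>; field names are ours.
@dataclass
class DecPOMDP:
    agents: List[str]                   # I: the set of agents
    states: List[str]                   # S: world states
    actions: Dict[str, List[str]]       # A_i: per-agent primitive action sets
    observations: Dict[str, List[str]]  # Omega_i: per-agent observation sets
    transition: Callable[[str, Tuple[str, ...]], Dict[str, float]]           # T(s' | s, a)
    observe: Callable[[str, Tuple[str, ...]], Dict[Tuple[str, ...], float]]  # O(o | s', a)
    reward: Callable[[str, Tuple[str, ...]], float]                          # R(s, a)

# Toy coordination problem: two robots each wait or move; the joint
# action ("move", "move") reaches the goal and earns the only reward.
def transition(s, a):
    return {"goal": 1.0} if a == ("move", "move") else {s: 1.0}

def observe(s_next, a):
    obs = ("at_goal", "at_goal") if s_next == "goal" else ("start", "start")
    return {obs: 1.0}

def reward(s, a):
    return 1.0 if a == ("move", "move") else 0.0

model = DecPOMDP(
    agents=["robot1", "robot2"],
    states=["start", "goal"],
    actions={"robot1": ["wait", "move"], "robot2": ["wait", "move"]},
    observations={"robot1": ["start", "at_goal"], "robot2": ["start", "at_goal"]},
    transition=transition,
    observe=observe,
    reward=reward,
)
```

Even in this two-state example, a Dec-POMDP policy must map each robot's private observation history to actions, which is why exact solutions scale so poorly.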

This work extends the Dec-POMDP model to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP) to take advantage of high-level representations that are natural for multi-robot problems and to facilitate scalable solutions to large discrete and continuous problems. The Dec-POSMDP formulation uses task macro-actions created from lower-level primitive actions that allow asynchronous decision-making, which is crucial in multi-robot domains. We also present algorithms for solving Dec-POSMDPs, which are more scalable than previous methods since they can incorporate closed-loop belief space macro-actions in planning. The proposed algorithms are then evaluated on a complex multi-robot package delivery problem under uncertainty, showing that our approach can naturally represent realistic domains and provide high-quality solutions for large-scale problems.
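The key building block above is the belief space macro-action: a closed-loop local policy that a robot executes over its own belief until a belief-based termination condition fires, so different robots finish (and re-decide) at different times. The following minimal sketch, with hypothetical names and a toy belief filter, illustrates that structure; it is not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hedged sketch of a belief space macro-action: a local policy over the
# robot's belief plus a belief-based termination condition.
@dataclass
class MacroAction:
    name: str
    policy: Callable[[Dict[str, float]], str]       # belief -> primitive action
    terminates: Callable[[Dict[str, float]], bool]  # belief-based stopping test

def run_macro_action(ma: MacroAction,
                     belief: Dict[str, float],
                     belief_update: Callable[[Dict[str, float], str], Dict[str, float]],
                     max_steps: int = 100) -> Tuple[Dict[str, float], int]:
    """Execute primitive actions closed-loop until the termination
    condition holds. The variable number of steps taken is what makes
    decision-making asynchronous across robots."""
    for step in range(max_steps):
        if ma.terminates(belief):
            return belief, step
        action = ma.policy(belief)
        belief = belief_update(belief, action)
    return belief, max_steps

# Toy example: "go to base" terminates once the robot believes it has
# reached the base with probability >= 0.95.
go_to_base = MacroAction(
    name="go_to_base",
    policy=lambda b: "move_toward_base",
    terminates=lambda b: b.get("at_base", 0.0) >= 0.95,
)

def belief_update(b: Dict[str, float], action: str) -> Dict[str, float]:
    # Hypothetical filter: each move shifts half the remaining
    # probability mass onto the base state.
    p = b.get("at_base", 0.0)
    return {"at_base": p + 0.5 * (1.0 - p)}

final_belief, steps = run_macro_action(go_to_base, {"at_base": 0.0}, belief_update)
```

A high-level Dec-POSMDP policy then chooses among such macro-actions whenever one terminates, rather than choosing a primitive action at every tick.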


Related Publications

  • Omidshafiei, S., Agha-Mohammadi, A.-A., Amato, C., Liu, S.-Y., and How, J. P., “Decentralized Control of Multi-Robot Partially Observable Markov Decision Processes using Belief Space Macro-actions,” International Journal of Robotics Research, vol. 36, 2017, pp. 231–258.
  • Liu, M., Amato, C., Liao, X., How, J. P., and Carin, L., “Stick-Breaking Policy Learning in Dec-POMDPs,” Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina: 2015.