Our paper on multi-agent reinforcement learning was accepted to IROS 2020!
September 21, 2020
Our paper “Cooperative Control of Mobile Robots with Stackelberg Learning” was accepted to the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems! This was joint work with Guohui Ding.
The paper proposes Stackelberg Learning in Cooperative Control (SLiCC), a method for cooperative control of multi-agent systems in partially observable settings.
SLiCC is based on an asymmetric prosocial–introspective cooperation framework that links state perception with agents’ decision-making strategies.
This framework allows agents to have different observation scopes, with prosocial and introspective behaviors assigned based on the completeness of each agent's state perception.
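To make the idea concrete, here is a minimal sketch of one possible role assignment under this criterion. The `Agent` structure and the completeness test are illustrative assumptions on our part, not the paper's formulation: agents whose observations cover the full state take the prosocial role, and the rest behave introspectively.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    observed_vars: set  # state variables this agent can perceive

def assign_roles(agents, full_state_vars):
    """Hypothetical role assignment: an agent that perceives the complete
    state behaves prosocially (optimizes the shared objective); an agent
    with only partial perception behaves introspectively."""
    roles = {}
    for agent in agents:
        complete = agent.observed_vars >= full_state_vars  # superset check
        roles[agent.name] = "prosocial" if complete else "introspective"
    return roles

# Toy bi-robot example: robot_b cannot observe robot_a's pose.
state_vars = {"pose_a", "pose_b", "object_pose"}
agents = [Agent("robot_a", {"pose_a", "pose_b", "object_pose"}),
          Agent("robot_b", {"pose_b", "object_pose"})]
print(assign_roles(agents, state_vars))
# {'robot_a': 'prosocial', 'robot_b': 'introspective'}
```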
IROS 2020
Cooperative Control of Mobile Robots with Stackelberg Learning
Multi-robot cooperation requires agents to make decisions that are consistent with the shared goal without disregarding action-specific preferences that might arise from asymmetry in capabilities and individual objectives. To accomplish this goal, we propose a method named SLiCC: Stackelberg Learning in Cooperative Control. SLiCC models the problem as a partially observable stochastic game composed of Stackelberg bimatrix games, and uses deep reinforcement learning to obtain the payoff matrices associated with these games. Appropriate cooperative actions are then selected with the derived Stackelberg equilibria. Using a bi-robot cooperative object transportation problem, we validate the performance of SLiCC against centralized multi-agent Q-learning and demonstrate that SLiCC achieves better combined utility.
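For readers unfamiliar with the equilibrium concept in the abstract, here is a minimal NumPy sketch of computing a pure-strategy Stackelberg equilibrium of a bimatrix game: the follower best-responds to each leader action, and the leader commits to the action whose induced response maximizes its own payoff. This is not the paper's implementation; in SLiCC the payoff matrices are obtained with deep reinforcement learning, whereas the matrices below are illustrative only.

```python
import numpy as np

def stackelberg_equilibrium(A, B):
    """Pure-strategy Stackelberg equilibrium of a bimatrix game.

    A[i, j]: leader payoff when the leader plays i and the follower plays j.
    B[i, j]: follower payoff for the same joint action.
    """
    # Follower's best response to each leader action i
    best_responses = np.argmax(B, axis=1)
    # Leader payoff for each committed action, given the induced response
    leader_payoffs = A[np.arange(A.shape[0]), best_responses]
    i_star = int(np.argmax(leader_payoffs))
    j_star = int(best_responses[i_star])
    return i_star, j_star

# Illustrative 2x2 payoff matrices (not from the paper)
A = np.array([[3.0, 1.0],
              [4.0, 0.0]])
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(stackelberg_equilibrium(A, B))  # (leader action, follower response)
```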