Multi-Task Reinforcement Learning with Soft Modularization

  • Yang, Ruihan*; Xu, Huazhe; Wu, Yi; Wang, Xiaolong
  • Spotlight talk
  • [PDF] [Slides] [Join poster session]
    Poster session from 15:00 to 16:00 EAT and from 20:45 to 21:45 EAT
    Obtain the Zoom password from ICLR

Abstract

Multi-task learning is a challenging problem in reinforcement learning. While training multiple tasks jointly allows the policies to share parameters across different tasks, the optimization problem becomes non-trivial: it is unclear which parameters in the network should be reused across tasks, and the gradients from different tasks may interfere with each other. Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on the policy representation to alleviate this optimization issue. Given a base policy network, we design a routing network which estimates different routing strategies to reconfigure the base network for each task. Moreover, instead of creating a concrete route for each task, our task-specific policy is represented by a soft combination of all possible routes. We name this approach soft modularization. We conduct experiments on multiple robotics manipulation tasks in simulation and show that our method improves sample efficiency by a large margin while achieving performance on par with individual policies trained for each task.
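The core idea in the abstract — a routing network that soft-combines module outputs of a base policy network per task — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sizes (`n_layers`, `n_modules`, `hidden`), the tanh nonlinearity, the random placeholder weights, and the mean-pooling output head are all assumptions made here for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
state_dim, task_dim, hidden, action_dim = 8, 4, 16, 2
n_layers, n_modules = 2, 3  # layers of the base network, modules per layer

# Base policy network: input encoder, n_layers layers of n_modules parallel
# modules, and an output head (weights here are random placeholders).
W_in = 0.1 * rng.standard_normal((state_dim, hidden))
W_mod = 0.1 * rng.standard_normal((n_layers, n_modules, hidden, hidden))
W_out = 0.1 * rng.standard_normal((hidden, action_dim))
# Routing network: maps (state, task embedding) to one weight per
# module-to-module connection between consecutive layers.
W_route = 0.1 * rng.standard_normal(
    (state_dim + task_dim, n_layers * n_modules * n_modules))

def forward(state, task_emb):
    # p[l, i, j]: soft weight with which module j of layer l feeds module i
    # of the next layer; softmax over j makes each row a convex combination,
    # so the task policy is a soft mix of all possible routes rather than
    # one hard route per task.
    logits = np.concatenate([state, task_emb]) @ W_route
    p = softmax(logits.reshape(n_layers, n_modules, n_modules), axis=-1)

    h = np.tile(np.tanh(state @ W_in), (n_modules, 1))  # all modules see the encoded state
    for l in range(n_layers):
        out = np.tanh(np.einsum('mh,mho->mo', h, W_mod[l]))  # each module's output
        h = p[l] @ out                                       # soft re-wiring for this task
    return h.mean(axis=0) @ W_out  # pool modules, project to an action

action = forward(np.ones(state_dim), np.ones(task_dim))
```

Because the routing weights depend on the task embedding, the same base-network parameters are shared across all tasks while each task effectively gets its own differentiable wiring, which is what allows joint training without a hard per-task architecture choice.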
