Planning to Explore via Latent Disagreement

  • Sekar, Ramanan; Rybkin, Oleh*; Daniilidis, Kostas; Abbeel, Pieter; Hafner, Danijar; Pathak, Deepak
  • Accepted abstract
  • Poster session from 15:00 to 16:00 EAT and from 20:45 to 21:45 EAT

Abstract

To solve complex tasks, intelligent agents first need to explore their environments. However, manually providing feedback to agents during exploration can be challenging. We focus on self-supervised exploration, where an agent explores a visual environment without yet knowing the tasks it will later be asked to solve. While current methods often learn reactive exploration behaviors that maximize retrospective novelty, we learn a world model from images and use it to plan for expected surprise. Novelty is efficiently estimated as ensemble disagreement in the latent space of the world model. By exploring and learning the world model without rewards, our approach, Plan2Explore, efficiently adapts to a variety of control tasks with high-dimensional image inputs.
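The key quantity described in the abstract is the intrinsic reward used for planning: the disagreement of an ensemble of one-step latent dynamics predictors. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the linear stand-in models, and the dimensions are assumptions made for the example.

```python
import numpy as np

def ensemble_disagreement(latent, action, ensemble):
    """Intrinsic reward sketch: variance of the ensemble's next-latent predictions.

    latent:   current latent state, shape (latent_dim,)
    action:   action taken, shape (action_dim,)
    ensemble: list of callables, each mapping (latent, action) to a predicted
              next-latent mean of shape (latent_dim,)
    """
    preds = np.stack([f(latent, action) for f in ensemble])  # (K, latent_dim)
    # Disagreement = variance across ensemble members, averaged over latent dimensions.
    return preds.var(axis=0).mean()

# Toy usage with random linear "models" standing in for trained predictors.
rng = np.random.default_rng(0)
latent_dim, action_dim, K = 32, 4, 5
ensemble = [
    (lambda W, B: (lambda z, a: z @ W + a @ B))(
        rng.normal(size=(latent_dim, latent_dim)) * 0.1,
        rng.normal(size=(action_dim, latent_dim)) * 0.1,
    )
    for _ in range(K)
]
z = rng.normal(size=latent_dim)
a = rng.normal(size=action_dim)
print("intrinsic reward (disagreement):", ensemble_disagreement(z, a, ensemble))
```

In the full method this disagreement is evaluated on imagined trajectories from the world model, so the planner can seek out states whose dynamics the ensemble has not yet learned; the linear models above merely stand in for those learned predictors.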
