Connections to previous workshops

This is the first edition of the BeTR-RL workshop: Beyond “tabula rasa” in reinforcement learning: agents that remember, adapt, and generalize. The workshop aims to foster discussion and connect ideas across multi-task, meta-, transfer, and continual learning, with a special focus on reinforcement learning.

Several workshops on related topics have been organized in the past, including:

  • Workshop on Multi-Task and Lifelong Reinforcement Learning (ICML 2019)
  • Continual Learning (NeurIPS 2018)
  • Meta-Learning (NeurIPS 2018)
  • Task-Agnostic Reinforcement Learning (ICLR 2019)
  • Structure and Priors in Reinforcement Learning (ICLR 2019)

This workshop focuses on all RL methods that transfer knowledge from previous experience to perform well in new situations. Structure and priors are one approach to this goal, but unlike the earlier workshop on that topic, we are interested in them in the context of multiple and changing tasks and environments. Similarly, meta-learning is included as one possible methodology for transfer, alongside other approaches. In contrast to these prior workshops, this one is uniquely positioned to bring together researchers who study diverse methodologies and perspectives on generalization and adaptability in sequential decision making.