Beyond “Tabula Rasa” in Reinforcement Learning (BeTR-RL): Agents that remember, adapt, and generalize
- (2020/02/06) The deadline has been extended to Tuesday, the 11th of February!
- (2020/01/10) Submissions open!
Recent work has demonstrated that current reinforcement learning methods can master complex tasks given enough resources. However, these successes have mainly been confined to single, unchanging environments. By contrast, the real world is both complex and dynamic, making it impossible to anticipate every new scenario, and many standard learning approaches require tremendous data and compute to re-train. Yet learning also offers the potential to develop versatile agents that adapt and continue to learn as environments, goals, and feedback shift. To achieve this, agents must be able to apply knowledge gained from past experience to the situation at hand.

The BeTR-RL workshop brings together researchers from different backgrounds, and research areas that offer different perspectives on extracting and applying such knowledge, united by a common interest in extending current reinforcement learning algorithms to operate across changing environments and tasks. The workshop aims to further develop these research directions while identifying their similarities and trade-offs.
| Louis Kirsch | Ignasi Clavera | Kate Rakelly |
| --- | --- | --- |
| The Swiss AI Lab IDSIA | UC Berkeley | UC Berkeley |
| Jane Wang | Chelsea Finn | Jeff Clune |
| DeepMind | Stanford University | UberAI / OpenAI |