Theme

The theme of this year's workshop is Putting it all together: Integrated Architectures for Reinforcement Learning.

Much work in reinforcement learning focuses on specific algorithms that address individual issues (e.g., exploration, function approximation, planning). In this workshop, we want to explore ways of pulling together multiple aspects of RL, whether for building AI systems or for understanding natural intelligence. We are interested in algorithms that address more than a single aspect of RL in isolation, and that speak to our ability to build AI architectures in which RL is an important component.

Potentially relevant sub-topics include (but are not limited to):

  • Integrating acting, learning, planning and representation change
  • Connections between model-based and model-free reinforcement learning
  • Connections to real systems (whether artificial or natural)
  • Deep learning and reinforcement learning
  • Learning representations while making decisions
  • Integrating reinforcement learning and other cognitive functions
  • Neural implementations of RL architectures
  • Cognitive architectures and RL
  • RL architectures that scale up to large problems
  • Exploration in an RL architecture

Finally, although it is good to have a theme each year, there is always residual interest in previous years' themes. Some themes from past years that keep recurring are life-long learning, perceptual learning and representational change, state estimation, function approximation, real-time learning, and temporal abstraction. Echoes of these themes would be entirely welcome at this year's meeting.