Networked reinforcement learning

Makito Oku*, Kazuyuki Aihara

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recently, many reinforcement learning models with hierarchical or modular structures have been proposed. These models decompose a task into simpler sub-tasks and solve them with multiple agents; however, the topological relations among the agents are severely restricted. By relaxing these restrictions, we propose networked reinforcement learning, in which each agent in a network acts in parallel as if the other agents were part of the environment. Although convergence to an optimal policy is no longer guaranteed, numerical simulations show that our model performs well at least in some simple situations.

Original language: English
Title of host publication: Proceedings of the 13th International Symposium on Artificial Life and Robotics, AROB 13th'08
Pages: 469-472
Number of pages: 4
State: Published - 2008
Event: 13th International Symposium on Artificial Life and Robotics, AROB 13th'08 - Oita, Japan
Duration: 2008/01/31 → 2008/02/02

Publication series

Name: Proceedings of the 13th International Symposium on Artificial Life and Robotics, AROB 13th'08

Conference

Conference: 13th International Symposium on Artificial Life and Robotics, AROB 13th'08
Country/Territory: Japan
City: Oita
Period: 2008/01/31 → 2008/02/02

Keywords

  • Hierarchical reinforcement learning
  • Modular reinforcement learning
  • Partially observable Markov decision process

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
