Accelerating Q-learning through Kalman filter estimations applied in a RoboCup SSL simulation

Gabriel A. Ahumada, Cristóbal J. Nettle, Miguel A. Solis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Speed of convergence is an important problem for reinforcement learning methods, especially when the agent interacts with adversarial environments such as RoboCup Soccer domains. If the agent's learning rate is too small, the algorithm needs too many iterations to learn the task successfully, which would likely mean losing the game before the agent has learnt its optimal policy. We attempt to overcome this problem by using partial state estimations, when some of the involved dynamics are known or easy to model, to accelerate Q-learning convergence, illustrating the results in a RoboCup SSL simulation.
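The abstract's idea of combining a model-based state estimator with tabular Q-learning can be illustrated with a minimal sketch. This is not the paper's implementation; the scalar Kalman model, the discretization grid, and all parameter values below are illustrative assumptions. The filter smooths a noisy position measurement, and the filtered estimate (rather than the raw observation) is discretized into the state index used by the Q-learning update.

```python
import numpy as np

def kalman_step(x, P, z, A=1.0, Qn=0.01, H=1.0, R=0.5):
    """One predict/update cycle for a scalar linear-Gaussian model.

    x, P : prior state estimate and its variance
    z    : noisy measurement
    """
    # Predict step
    x_pred = A * x
    P_pred = A * P * A + Qn
    # Update step: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

def discretize(x, lo=0.0, hi=10.0, n_bins=20):
    """Map a continuous estimate to a tabular state index."""
    x = min(max(x, lo), hi - 1e-9)
    return int((x - lo) / (hi - lo) * n_bins)

def q_update(Q_table, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning update."""
    td_target = r + gamma * np.max(Q_table[s_next])
    Q_table[s, a] += alpha * (td_target - Q_table[s, a])

# Usage: filter a noisy observation before feeding it to the Q-update.
rng = np.random.default_rng(0)
Q_table = np.zeros((20, 4))            # 20 states, 4 actions
x_est, P = 5.0, 1.0                    # initial estimate and variance
s = discretize(x_est)
z = 5.0 + rng.normal(0.0, 0.5)         # noisy position measurement
x_est, P = kalman_step(x_est, P, z)    # variance P shrinks after the update
s_next = discretize(x_est)
q_update(Q_table, s, a=0, r=1.0, s_next=s_next)
```

The intended benefit, as the abstract suggests, is that a less noisy state signal reduces spurious Q-value updates, so fewer iterations are wasted on estimation error.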

Original language: English
Title of host publication: Proceedings - 2013 IEEE Latin American Robotics Symposium, LARS 2013
Publisher: IEEE Computer Society
Pages: 112-117
Number of pages: 6
ISBN (Print): 9780769551395
DOIs
Publication status: Published - 1 Jan 2013
Externally published: Yes
Event: 2013 10th IEEE Latin American Robotics Symposium, LARS 2013 - Arequipa, Peru
Duration: 21 Oct 2013 - 24 Oct 2013

Publication series

Name: Proceedings - 2013 IEEE Latin American Robotics Symposium, LARS 2013

Conference

Conference: 2013 10th IEEE Latin American Robotics Symposium, LARS 2013
Country/Territory: Peru
City: Arequipa
Period: 21/10/13 - 24/10/13

Keywords

  • Reinforcement learning
  • RoboCup
  • Soccer
  • SSL

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
