Accelerating Q-learning through Kalman filter estimations applied in a RoboCup SSL simulation

Gabriel A. Ahumada, Cristóbal J. Nettle, Miguel A. Solis

Research output: Contribution to book/report › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Speed of convergence is an important problem for reinforcement learning methods, especially when the agent interacts with adversarial environments such as RoboCup Soccer domains. If the agent's learning rate is too small, the algorithm needs too many iterations to learn the task successfully, which would probably lead to losing the game before the agent has learned its optimal policy. We attempt to overcome this problem by using partial state estimations, when some of the involved dynamics are known or easy to model, to accelerate Q-learning convergence, illustrating the results in a RoboCup SSL simulation.
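The idea in the abstract can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: a Kalman filter with an assumed constant-velocity ball model supplies a filtered position estimate, and a tabular Q-learning agent updates on the discretized estimate rather than on raw noisy measurements. All model matrices, reward logic, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Kalman filter for a 1-D constant-velocity ball model (assumed dynamics)
A = np.array([[1.0, 1.0],    # position += velocity each step
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # only position is measured
Q_proc = 1e-3 * np.eye(2)    # process noise covariance (assumed)
R_meas = np.array([[0.5]])   # measurement noise covariance (assumed)

x_est = np.zeros((2, 1))     # state estimate [position, velocity]
P = np.eye(2)                # estimate covariance

def kalman_step(z):
    """Standard predict/update cycle; returns the filtered position."""
    global x_est, P
    x_pred = A @ x_est                        # predict
    P_pred = A @ P @ A.T + Q_proc
    S = H @ P_pred @ H.T + R_meas             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_est = x_pred + K @ (z - H @ x_pred)     # update
    P = (np.eye(2) - K @ H) @ P_pred
    return float(x_est[0, 0])

# --- Tabular Q-learning over the discretized filtered position
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def discretize(pos, lo=-5.0, hi=5.0):
    return int(np.clip((pos - lo) / (hi - lo) * n_states, 0, n_states - 1))

def q_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Toy episode: the true ball drifts right; reward for choosing action 1
# while the (filtered) ball position lies in the right half of the field.
true_pos, true_vel = -4.0, 0.2
s = discretize(kalman_step(np.array([[true_pos + rng.normal(0, 0.7)]])))
for _ in range(200):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    true_pos += true_vel
    z = np.array([[true_pos + rng.normal(0, 0.7)]])  # noisy measurement
    s_next = discretize(kalman_step(z))
    r = 1.0 if (a == 1 and s_next >= n_states // 2) else 0.0
    q_update(s, a, r, s_next)
    s = s_next

print(Q.shape)  # Q-table indexed by filtered, discretized states
```

Because the filter smooths measurement noise, consecutive updates for the same underlying state land in the same table cell more consistently, which is the mechanism by which a good partial state estimate can speed up tabular Q-learning.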

Original language: English
Title of host publication: Proceedings - 2013 IEEE Latin American Robotics Symposium, LARS 2013
Publisher: IEEE Computer Society
Pages: 112-117
Number of pages: 6
ISBN (Print): 9780769551395
DOI
State: Published - 1 Jan 2013
Externally published
Event: 2013 10th IEEE Latin American Robotics Symposium, LARS 2013 - Arequipa, Peru
Duration: 21 Oct 2013 - 24 Oct 2013

Publication series

Name: Proceedings - 2013 IEEE Latin American Robotics Symposium, LARS 2013

Conference

Conference: 2013 10th IEEE Latin American Robotics Symposium, LARS 2013
Country/Territory: Peru
City: Arequipa
Period: 21/10/13 - 24/10/13

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
