Stabilizing dynamic state feedback controller synthesis: A reinforcement learning approach

Miguel A. Solis, Manuel Olivares, Héctor Allende

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

State feedback controllers are appealing due to their structural simplicity. Nevertheless, when stabilizing a given plant, the plant dynamics may force the static feedback gain to take larger values than desired. A dynamic state feedback controller, on the other hand, can achieve the same or even better performance by introducing additional parameters into the model to be designed. In this paper, the Linear Quadratic Tracking problem is tackled with a (linear) dynamic state feedback controller whose parameters are chosen by means of reinforcement learning techniques, which have proven especially useful when the model of the plant to be controlled is unknown or inaccurate.
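The core idea of choosing feedback gains by reinforcement learning when the plant model is unavailable can be illustrated with Q-learning-style policy iteration on a plain LQ regulation problem. The sketch below is an assumption-laden simplification, not the paper's method: the plant matrices, initial gain, and batch sizes are invented for illustration, the controller is static rather than dynamic, and the tracking formulation (which would augment the state with reference dynamics) is omitted. The learner never reads `A` and `B`; it only fits a quadratic Q-function to sampled transitions and improves the gain greedily.

```python
import numpy as np

# Hypothetical discrete-time plant standing in for the unknown system; it is
# used only to generate transition samples, never read by the learner.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)          # state-weighting matrix of the quadratic cost
Rc = np.array([[1.0]])  # input-weighting matrix

rng = np.random.default_rng(0)
n, m = 2, 1
iu = np.triu_indices(n + m)
wts = np.where(iu[0] == iu[1], 1.0, 2.0)  # so phi(z) @ theta == z' H z

def phi(x, u):
    """Quadratic features of the state-input pair z = (x, u)."""
    z = np.concatenate([x, u])
    return wts * np.outer(z, z)[iu]

def improve(theta):
    """Greedy gain from the learned quadratic Q-function z' H z."""
    H = np.zeros((n + m, n + m))
    H[iu] = theta
    H = H + H.T - np.diag(np.diag(H))
    Hux, Huu = H[n:, :n], H[n:, n:]
    return -np.linalg.solve(Huu, Hux)

K = np.array([[-1.0, -2.0]])  # an initially stabilizing guess
for _ in range(10):
    Phi, c = [], []
    for _ in range(40):  # one batch of exploratory transitions
        x = rng.normal(size=n)
        u = K @ x + rng.normal(size=m)  # exploration noise
        xn = A @ x + B @ u
        # Bellman identity for the current policy: Q(x,u) = cost + Q(x', Kx')
        Phi.append(phi(x, u) - phi(xn, K @ xn))
        c.append(x @ Qc @ x + u @ Rc @ u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    K = improve(theta)

print("learned gain:", K)
print("closed-loop spectral radius:",
      max(abs(np.linalg.eigvals(A + B @ K))))
```

Since the simulated dynamics are deterministic, the Bellman identity holds exactly on every sample, so the least-squares step recovers the policy's Q-function and the iteration converges to the optimal LQ gain without ever identifying the plant, which is the model-free property the abstract highlights.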

Original language: English
Pages (from-to): 245-254
Number of pages: 10
Journal: Studies in Informatics and Control
Volume: 25
Issue number: 2
Publication status: Published - 1 Jan 2016
Externally published: Yes

Keywords

  • Adaptive control
  • Furuta pendulum
  • Reinforcement learning

ASJC Scopus subject areas

  • Computer Science (all)
  • Electrical and Electronic Engineering

