Learning for safety
- Authors:
-
Montenegro González, Carlos Andrés
- Resource type:
- Publication date:
- 2021
- Institution:
- Universidad de los Andes
- Repository:
- Séneca: Uniandes repository
- Language:
- spa
- OAI Identifier:
- oai:repositorio.uniandes.edu.co:1992/50930
- Online access:
- http://hdl.handle.net/1992/50930
- Keywords:
- Algorithms (Computers)
Reinforcement learning (Machine learning)
Feedback control systems
Lyapunov functions
Robotics
Engineering
- Rights
- openAccess
- License
- https://repositorio.uniandes.edu.co/static/pdf/aceptacion_uso_es.pdf
Summary: Modern nonlinear control theory seeks to endow systems with stability and safety properties, and such controllers have been deployed successfully in multiple domains. Despite this success, model uncertainty remains a significant challenge in synthesizing safe and stable controllers, leading to degraded performance. Reinforcement learning (RL) algorithms, on the other hand, have succeeded in controlling systems with no model at all, but their use beyond simulated applications is limited; one main reason is the absence of safety and stability guarantees during the learning process. To address this issue, we augment a controller architecture that combines a model-free RL-based controller with model-based controllers built on control Lyapunov and control barrier functions (CLFs and CBFs, respectively) and online learning of the unknown system dynamics, so as to guarantee stability and safety during learning.
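To illustrate the safety-filter idea the summary describes, here is a minimal sketch, not the thesis's actual architecture: a control barrier function filter that minimally modifies an RL action so a safe set stays invariant. The system (a single integrator), the barrier h(x) = x, and all names (`cbf_safety_filter`, `u_rl`, `alpha`) are illustrative assumptions; in general the filter is a quadratic program, which collapses to a closed form in this one-dimensional toy case.

```python
def cbf_safety_filter(x, u_rl, alpha=1.0):
    """Project an RL action onto the safe action set implied by a CBF.

    Toy single integrator x_dot = u with barrier h(x) = x, so the safe
    set is {x >= 0}. The CBF condition dh/dx * u >= -alpha * h(x)
    reduces to u >= -alpha * x, giving the closed-form filter below.
    All names here are hypothetical, for illustration only.
    """
    u_min = -alpha * x       # lower bound on u from the CBF condition
    return max(u_rl, u_min)  # closest admissible action to the RL proposal

# Simulate an RL policy that always pushes toward the unsafe region x < 0.
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_safety_filter(x, u_rl=-5.0, alpha=2.0)
    x += dt * u
print(x >= 0.0)  # True: the filtered trajectory never leaves the safe set
```

Near the boundary the filter overrides the learned policy (here u = -alpha * x, so x decays exponentially toward 0 but never crosses it), while far from the boundary the RL action passes through unchanged; this is the minimally invasive behavior that lets learning proceed without violating safety.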