Controlled Markov chains: some stability problems
"Along this work we study some stability-related problems in the context of Controlled Markov chains. As a first problem, we consider a division of the state space, and the goal is to construct a control policy such that the chain stabilize as much as possible in each portlon of the division wi...
- Authors:
- Avila Girardot, Daniel Felipe
- Resource type:
- Publication date:
- 2016
- Institution:
- Universidad de los Andes
- Repository:
- Séneca: Uniandes repository
- Language:
- eng
- OAI Identifier:
- oai:repositorio.uniandes.edu.co:1992/13901
- Online access:
- http://hdl.handle.net/1992/13901
- Keywords:
- Markov processes
- Stochastic processes
- Dynamic programming
- Mathematics
- Rights:
- openAccess
- License:
- http://creativecommons.org/licenses/by-nc-sa/4.0/
Summary: | "Along this work we study some stability-related problems in the context of Controlled Markov chains. As a first problem, we consider a division of the state space, and the goal is to construct a control policy such that the chain stabilize as much as possible in each portlon of the division without leaving it. As a second problem, we characterize the domain of attraction and escape set of a controlled Markov chain via a function v, which happens to be the solution of a Bellman's equation. The interpretation of v as the solutlon of a Bellman's equation also provides a way to calculate such function via a linear program. Finally, under some assumptions, we find a policy that maximize the probability of reaching certain set A. Our approach uses certain cost functions and dynamical programming, so it can be solved using a linear program." |