Intelligent Control Architecture For Motion Learning in Robotics Applications


Authors:
Beltrán Pardo, Jaime Eduardo
Resource type:
Publication date:
2013
Institution:
Universidad Nacional de Colombia
Repositorio:
Universidad Nacional de Colombia
Language:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/21929
Online access:
https://repositorio.unal.edu.co/handle/unal/21929
http://bdigital.unal.edu.co/12935/
Keywords:
0 Generalities / Computer science, information and general works
62 Engineering and allied operations / Engineering
Robot
Platform
Hardware
Architecture
Control
Artificial intelligence
Learning
Fuzzy
Genetic algorithm
Neural network
Rights
openAccess
License
Attribution-NonCommercial 4.0 International
Description
Summary: Abstract: The investigation of this thesis focused on how motion abilities can be learned by a robot. The main goal was to design and test a control architecture capable of learning how to properly move different simulated robots through the use of Artificial Intelligence (AI) methods. For this purpose, a simulation environment and a set of simulated robots were created in order to test the control architecture. The robots were constructed with a simple geometry using links and joints. A fuzzy controller was designed to control the motor positions. The control architecture design was based on subsumption and on AI methods that allowed the simulated robot to find and learn a set of motions based on targets. These methods were a genetic algorithm (GA) and a set of artificial neural networks (ANNs). The GA was used to find adequate robot movements for a specific target, while the ANNs were used to learn and perform such movements efficiently. The advantage of this approach is that no knowledge of the environment or of the robot model is needed: the robot learns how to move its own body in order to achieve a given task. In addition, the learned motions can be reused for complex movement execution in further research. A set of experiments was performed in the simulator to show the performance of the control architecture at each of its stages. The results showed that the proposed architecture was able to learn and perform basic movements of a robot independently of the environment or the robot's defined structure.
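
To make the GA stage of the described pipeline concrete, the following minimal Python sketch evolves joint angles for a simple two-link arm so that its end effector reaches a target point. It is an illustration only, under assumed link lengths, target, fitness function, and GA parameters; it is not the thesis implementation, and the ANN learning stage is only indicated in a closing comment.

# Illustrative sketch: a toy genetic algorithm that searches joint angles
# for a 2-link planar arm so its end effector reaches a target point.
# The arm model, fitness, and GA parameters are assumptions for this example.
import math
import random

LINK_1, LINK_2 = 1.0, 1.0          # assumed link lengths
TARGET = (1.2, 0.8)                # assumed reach target (x, y)
POP_SIZE, GENERATIONS = 60, 200
MUTATION_STD = 0.15

def forward_kinematics(genome):
    """Map two joint angles (radians) to the end-effector position."""
    a1, a2 = genome
    x = LINK_1 * math.cos(a1) + LINK_2 * math.cos(a1 + a2)
    y = LINK_1 * math.sin(a1) + LINK_2 * math.sin(a1 + a2)
    return x, y

def fitness(genome):
    """Negative distance to the target: larger is better."""
    x, y = forward_kinematics(genome)
    return -math.hypot(x - TARGET[0], y - TARGET[1])

def mutate(genome):
    return [a + random.gauss(0.0, MUTATION_STD) for a in genome]

def crossover(parent_a, parent_b):
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def run_ga():
    population = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[:POP_SIZE // 5]          # keep the best fifth
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(POP_SIZE - len(elite))]
        population = elite + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = run_ga()
    print("best joint angles:", [round(a, 3) for a in best])
    print("end effector:", [round(c, 3) for c in forward_kinematics(best)])
    # In the architecture described in the abstract, motions found by the GA
    # would then serve as training examples for artificial neural networks
    # that learn to reproduce them efficiently.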