On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
- Authors:
- Segura, Enrique Carlos
- Resource type:
- Journal article
- Publication date:
- 2013
- Institution:
- Corporación Universidad de la Costa
- Repository:
- REDICUC - Repositorio CUC
- Language:
- eng
- OAI Identifier:
- oai:repositorio.cuc.edu.co:11323/2631
- Online access:
- https://hdl.handle.net/11323/2631
https://repositorio.cuc.edu.co/
- Keywords:
- Neural network
Robotic manipulator
Multilayer perceptron
Stochastic learning
Inverse dynamics
- Rights
- openAccess
- License
- http://purl.org/coar/access_right/c_abf2
Summary: The SAGA algorithm is used to approximate the inverse dynamics of a robotic manipulator with two rotational joints. SAGA (Simulated Annealing Gradient Adaptation) is a stochastic strategy for the additive construction of an artificial neural network of the two-layer perceptron type, based on three essential elements: a) updating the network weights by means of gradient information for the cost function; b) accepting or rejecting the proposed change through a classical simulated annealing technique; and c) growing the neural network progressively as its structure proves insufficient, using a conservative strategy for adding units to the hidden layer. Experiments are performed and efficiency is analyzed in terms of the relation between mean relative errors (in the training and testing sets), network size, and computation time. The ability of the proposed technique to produce good approximations while minimizing the complexity of the network's architecture, and hence the required computational memory, is emphasized. Moreover, the evolution of the minimization process as the cost surface is modified is also discussed.
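The summary describes SAGA's three elements in prose only; below is a minimal NumPy sketch of how such a training loop could be organized. The MSE cost, tanh hidden units, the geometric cooling schedule, and the stall-counter rule for adding hidden units (parameters `lr`, `T0`, `cooling`, `patience`) are all illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """Two-layer perceptron: one tanh hidden layer, linear output."""
    return {"W1": rng.normal(0.0, 0.5, (n_hidden, n_in)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0.0, 0.5, (n_out, n_hidden)),
            "b2": np.zeros(n_out)}

def forward(net, X):
    H = np.tanh(X @ net["W1"].T + net["b1"])      # hidden activations
    return H @ net["W2"].T + net["b2"], H

def cost(net, X, Y):
    Yhat, _ = forward(net, X)
    return np.mean((Yhat - Y) ** 2)               # MSE cost (assumed)

def gradients(net, X, Y):
    """Backpropagated gradients of the MSE cost."""
    Yhat, H = forward(net, X)
    dY = 2.0 * (Yhat - Y) / Y.size
    dH = (dY @ net["W2"]) * (1.0 - H ** 2)        # tanh derivative
    return {"W2": dY.T @ H, "b2": dY.sum(0),
            "W1": dH.T @ X, "b1": dH.sum(0)}

def saga_train(X, Y, n_hidden=2, lr=0.05, T0=0.1, cooling=0.999,
               max_hidden=20, patience=500, epochs=20000):
    net = init_net(X.shape[1], n_hidden, Y.shape[1])
    E, T, stall = cost(net, X, Y), T0, 0
    for _ in range(epochs):
        # (a) the gradient proposes the weight update
        g = gradients(net, X, Y)
        trial = {k: v - lr * g[k] for k, v in net.items()}
        E_new = cost(trial, X, Y)
        # (b) classical simulated-annealing (Metropolis) acceptance
        if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
            net, E, stall = trial, E_new, 0
        else:
            stall += 1
        T *= cooling                              # geometric cooling (assumed)
        # (c) conservative growth: add one hidden unit when progress stalls
        if stall > patience and net["W1"].shape[0] < max_hidden:
            net["W1"] = np.vstack([net["W1"], rng.normal(0, 0.1, (1, X.shape[1]))])
            net["b1"] = np.append(net["b1"], 0.0)
            net["W2"] = np.hstack([net["W2"], rng.normal(0, 0.1, (Y.shape[1], 1))])
            E, stall = cost(net, X, Y), 0
    return net, E

if __name__ == "__main__":
    # Toy stand-in for inverse-dynamics data: in the paper's setting the inputs
    # would be joint positions/velocities/accelerations and the targets torques.
    X = rng.uniform(-1.0, 1.0, (200, 2))
    Y = np.sin(X) @ np.ones((2, 1))
    net, err = saga_train(X, Y)
    print(f"final MSE: {err:.4f}, hidden units: {net['W1'].shape[0]}")
```

In this sketch the gradient only proposes each step; the Metropolis rule decides whether to keep it, which is what lets the optimizer occasionally move uphill and escape poor minima while the temperature is still high, and the network starts small and grows only when progress stalls, matching the abstract's emphasis on minimizing architecture complexity.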