Artificial Neural Networks: A Bayesian Approach Using Parallel Computing

Authors:
Guzmán, Eduardo
Vázquez, Mario
Del Valle, David
Pérez-Rodríguez, Paulino
Resource type:
Journal article
Publication date:
2018
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/66483
Online access:
https://repositorio.unal.edu.co/handle/unal/66483
http://bdigital.unal.edu.co/67511/
Keywords:
51 Matemáticas / Mathematics
31 Colecciones de estadística general / Statistics
Empirical Bayes
Nonlinear models
Parallel processing
Bayes empírico
modelos no lineales
procesamiento en paralelo
Rights
openAccess
License
Attribution-NonCommercial 4.0 International
Description
Summary: An Artificial Neural Network (ANN) is a learning and automatic-processing paradigm inspired by the biological behavior of neurons and the structure of the brain. The brain is a complex system; its basic processing units are the neurons, which are massively distributed throughout the brain and share multiple connections among themselves. ANNs try to emulate some human characteristics and can be thought of as intelligent systems that perform certain tasks in a different way than a conventional computer does. ANNs can be applied to complex activities, for example: pattern recognition and classification, weather prediction, and prediction of genetic values. The algorithms used to train an ANN are in general complex, so there is a need for alternatives that lead to a significant reduction in the time required for training. In this work, we present an algorithm based on the "divide and conquer" strategy that trains an ANN with a single hidden layer. Some of the subproblems of the general training algorithm are solved using parallel computing techniques, which improves the performance of the resulting application. The proposed algorithm was implemented in the C++ programming language using the Open MPI and ScaLAPACK libraries. We present some application examples and assess the application's performance. The results show that the time needed to run the program that implements the ANN training algorithm can be reduced significantly.
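
The summary states only that parts of the training problem are solved in parallel with Open MPI and ScaLAPACK; it gives no implementation details. The following is a minimal, hypothetical C++ sketch using plain MPI (no ScaLAPACK) of one data-parallel reading of the "divide and conquer" idea for a single-hidden-layer network: each rank processes its own shard of the data and the per-rank gradient pieces are combined with MPI_Allreduce. All names, sizes, the toy data, and the gradient-descent step are illustrative assumptions, not the authors' algorithm.

// Hypothetical sketch (not the paper's code): one data-parallel training step
// for a single-hidden-layer network. Each MPI rank evaluates the network on its
// local data shard and accumulates a local gradient for the output weights;
// MPI_Allreduce sums the contributions so every rank can apply the same update.
#include <mpi.h>
#include <cmath>
#include <cstdio>
#include <vector>
#include <random>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int n_local = 256;   // records held by this rank (assumed shard size)
    const int p = 4;           // input features (assumed)
    const int h = 8;           // hidden units (assumed)

    // Shared initial weights: same seed on every rank so all ranks hold an
    // identical network; the data shard differs per rank.
    std::mt19937 gen_w(42);
    std::normal_distribution<double> dist(0.0, 1.0);
    std::vector<double> W(h * p), b(h, 0.0), v(h);   // input->hidden, bias, hidden->output
    for (auto& w : W) w = dist(gen_w);
    for (auto& o : v) o = dist(gen_w);

    std::mt19937 gen_x(1000 + rank);
    std::vector<double> X(n_local * p), y(n_local);
    for (auto& x : X) x = dist(gen_x);
    for (int i = 0; i < n_local; ++i) y[i] = X[i * p];   // toy target

    // Local gradient of the squared error with respect to the output weights v.
    std::vector<double> grad_local(h, 0.0), grad(h, 0.0);
    double sse_local = 0.0, sse = 0.0;
    for (int i = 0; i < n_local; ++i) {
        std::vector<double> z(h);
        for (int k = 0; k < h; ++k) {
            double a = b[k];
            for (int j = 0; j < p; ++j) a += W[k * p + j] * X[i * p + j];
            z[k] = std::tanh(a);                          // hidden activation
        }
        double yhat = 0.0;
        for (int k = 0; k < h; ++k) yhat += v[k] * z[k];
        double e = yhat - y[i];
        sse_local += e * e;
        for (int k = 0; k < h; ++k) grad_local[k] += 2.0 * e * z[k];
    }

    // Combine per-rank pieces: every rank receives the global sums.
    MPI_Allreduce(grad_local.data(), grad.data(), h, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(&sse_local, &sse, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    const double lr = 1e-4;                               // illustrative step size
    for (int k = 0; k < h; ++k) v[k] -= lr * grad[k] / (n_local * nprocs);

    if (rank == 0) std::printf("ranks=%d  global SSE=%.4f\n", nprocs, sse);

    MPI_Finalize();
    return 0;
}

Compiled with mpicxx and launched with, say, mpirun -np 4, each rank would contribute the gradient of its own shard, so adding processes divides the per-rank work while the reduced quantities stay identical to a serial run; the paper's reported speed-ups presumably arise from this kind of distribution of the training subproblems, with ScaLAPACK handling the distributed linear algebra.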