Extreme learning machine adapted to noise based on optimization algorithms

The extreme learning machine for single-hidden-layer feedforward neural networks randomly assigns the input weights and analytically determines the output weights by means of the Moore-Penrose inverse. This algorithm tends to provide an extremely fast learning speed while preserving the levels of fit achieved by classifiers such as the multilayer perceptron and the support vector machine. However, the Moore-Penrose inverse loses precision when the training data contain additive noise. For this reason, this paper proposes a method to make the extreme learning machine robust to additive noise. The method consists of computing the weights of the output layer with unconstrained non-linear optimization algorithms. Tests are performed with the gradient descent optimization algorithm and with the Levenberg-Marquardt algorithm. The implementation shows that these algorithms achieve smaller errors than those obtained with the Moore-Penrose inverse.
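
As a rough illustration of the approach summarized in the abstract, the sketch below builds a single-hidden-layer ELM in NumPy and computes the output weights in two ways: with the Moore-Penrose inverse (np.linalg.pinv) and with plain unconstrained gradient descent on the squared error. The sigmoid activation, learning rate, network size, and all function names (hidden_layer, output_weights_pinv, output_weights_gd) are illustrative assumptions, not details taken from the paper; a Levenberg-Marquardt variant could be obtained in the same spirit with, for example, scipy.optimize.least_squares.

import numpy as np

def hidden_layer(X, W, b):
    # Hidden-layer activations H for randomly assigned input weights W and biases b.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def output_weights_pinv(H, T):
    # Classical ELM step: beta = pinv(H) @ T via the Moore-Penrose inverse.
    return np.linalg.pinv(H) @ T

def output_weights_gd(H, T, lr=0.05, n_iter=5000):
    # Alternative step: unconstrained gradient descent on the mean squared error of H @ beta - T.
    beta = np.zeros((H.shape[1], T.shape[1]))
    for _ in range(n_iter):
        grad = H.T @ (H @ beta - T) / H.shape[0]
        beta -= lr * grad
    return beta

# Toy data with additive noise on the targets (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
T = np.sin(X.sum(axis=1, keepdims=True)) + 0.1 * rng.normal(size=(200, 1))
W = rng.normal(size=(3, 50))   # random input weights, never trained
b = rng.normal(size=(1, 50))   # random hidden biases
H = hidden_layer(X, W, b)
beta_pinv = output_weights_pinv(H, T)
beta_gd = output_weights_gd(H, T)
print(np.mean((H @ beta_pinv - T) ** 2), np.mean((H @ beta_gd - T) ** 2))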

Authors:
Vásquez, A
Mora, M
Salazar, E
Gelvez, E
Resource type:
Publication date:
2020
Institution:
Universidad Simón Bolívar
Repository:
Repositorio Digital USB
Language:
eng
OAI Identifier:
oai:bonga.unisimon.edu.co:20.500.12442/6380
Online access:
https://hdl.handle.net/20.500.12442/6380
https://iopscience.iop.org/article/10.1088/1742-6596/1514/1/012006/pdf
Keywords:
Optimization algorithm
Moore-Penrose
Learning
Rights
openAccess
License
Attribution-NonCommercial-NoDerivatives 4.0 International
id USIMONBOL2_fe0ad17f06e0b9d8fb5a3eaf14dd5981
oai_identifier_str oai:bonga.unisimon.edu.co:20.500.12442/6380
network_acronym_str USIMONBOL2
network_name_str Repositorio Digital USB
repository_id_str
dc.title.eng.fl_str_mv Extreme learning machine adapted to noise based on optimization algorithms
title Extreme learning machine adapted to noise based on optimization algorithms
spellingShingle Extreme learning machine adapted to noise based on optimization algorithms
Optimization algorithm
Moore-Penrose
Learning
title_short Extreme learning machine adapted to noise based on optimization algorithms
title_full Extreme learning machine adapted to noise based on optimization algorithms
title_fullStr Extreme learning machine adapted to noise based on optimization algorithms
title_full_unstemmed Extreme learning machine adapted to noise based on optimization algorithms
title_sort Extreme learning machine adapted to noise based on optimization algorithms
dc.creator.fl_str_mv Vásquez, A
Mora, M
Salazar, E
Gelvez, E
dc.contributor.author.none.fl_str_mv Vásquez, A
Mora, M
Salazar, E
Gelvez, E
dc.subject.eng.fl_str_mv Optimization algorithm
Moore-Penrose
Learning
topic Optimization algorithm
Moore-Penrose
Learning
description The extreme learning machine for single-hidden-layer feedforward neural networks randomly assigns the input weights and analytically determines the output weights by means of the Moore-Penrose inverse. This algorithm tends to provide an extremely fast learning speed while preserving the levels of fit achieved by classifiers such as the multilayer perceptron and the support vector machine. However, the Moore-Penrose inverse loses precision when the training data contain additive noise. For this reason, this paper proposes a method to make the extreme learning machine robust to additive noise. The method consists of computing the weights of the output layer with unconstrained non-linear optimization algorithms. Tests are performed with the gradient descent optimization algorithm and with the Levenberg-Marquardt algorithm. The implementation shows that these algorithms achieve smaller errors than those obtained with the Moore-Penrose inverse.
publishDate 2020
dc.date.accessioned.none.fl_str_mv 2020-08-27T23:34:51Z
dc.date.available.none.fl_str_mv 2020-08-27T23:34:51Z
dc.date.issued.none.fl_str_mv 2020
dc.type.coarversion.fl_str_mv http://purl.org/coar/version/c_970fb48d4fbd8a85
dc.type.coar.fl_str_mv http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.driver.eng.fl_str_mv info:eu-repo/semantics/article
dc.type.spa.spa.fl_str_mv Artículo científico
dc.identifier.issn.none.fl_str_mv 17426588
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12442/6380
dc.identifier.url.none.fl_str_mv https://iopscience.iop.org/article/10.1088/1742-6596/1514/1/012006/pdf
identifier_str_mv 17426588
url https://hdl.handle.net/20.500.12442/6380
https://iopscience.iop.org/article/10.1088/1742-6596/1514/1/012006/pdf
dc.language.iso.eng.fl_str_mv eng
language eng
dc.rights.*.fl_str_mv Attribution-NonCommercial-NoDerivatives 4.0 Internacional
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.uri.*.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights.accessrights.eng.fl_str_mv info:eu-repo/semantics/openAccess
rights_invalid_str_mv Attribution-NonCommercial-NoDerivatives 4.0 Internacional
http://creativecommons.org/licenses/by-nc-nd/4.0/
http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv openAccess
dc.format.mimetype.eng.fl_str_mv pdf
dc.publisher.eng.fl_str_mv IOP Publishing
dc.source.eng.fl_str_mv Journal of Physics: Conference Series
dc.source.none.fl_str_mv Vol. 1514 No. 1 (2020)
institution Universidad Simón Bolívar
bitstream.url.fl_str_mv https://bonga.unisimon.edu.co/bitstreams/8680aee8-26f0-48a9-babc-dac98dbc883d/download
https://bonga.unisimon.edu.co/bitstreams/21af1537-b5ed-464a-ac75-fad43d59b2f3/download
https://bonga.unisimon.edu.co/bitstreams/e93182d7-74a8-4b99-9ddb-2fa6bb7d6b8d/download
https://bonga.unisimon.edu.co/bitstreams/2811ecff-01dd-4a72-99a2-a06176e22a5d/download
https://bonga.unisimon.edu.co/bitstreams/2f8b86ea-60db-4d9d-b2ce-40cb99f3fad0/download
bitstream.checksum.fl_str_mv c09575f25cb6a6c9ad86f1dbe78f9eef
4460e5956bc1d1639be9ae6146a50347
733bec43a0bf5ade4d97db708e29b185
a9be7d42c2759cfe49ce50ef8246853a
c1d0b0fbe66fdee34bba200bd929ebc6
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Digital Universidad Simón Bolívar
repository.mail.fl_str_mv repositorio.digital@unisimon.edu.co
_version_ 1814076170944118784