On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm


Authors:
Segura, Enrique Carlos
Resource type:
Journal article
Publication date:
2013
Institution:
Corporación Universidad de la Costa
Repository:
REDICUC - Repositorio CUC
Language:
eng
OAI Identifier:
oai:repositorio.cuc.edu.co:11323/2631
Online access:
https://hdl.handle.net/11323/2631
https://repositorio.cuc.edu.co/
Keywords:
Neural network
Robotic manipulator
Multilayer perceptron
Stochastic learning
Inverse dynamics
Rights
openAccess
License
http://purl.org/coar/access_right/c_abf2
id RCUC2_9b7b539f62be8327cdcf5d934337c20d
oai_identifier_str oai:repositorio.cuc.edu.co:11323/2631
network_acronym_str RCUC2
network_name_str REDICUC - Repositorio CUC
repository_id_str
dc.title.spa.fl_str_mv On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
dc.title.translated.eng.fl_str_mv On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
title On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
spellingShingle On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
Neural network
Robotic manipulator
Multilayer perceptron
Stochastic learning
Inverse dynamics
title_short On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
title_full On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
title_fullStr On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
title_full_unstemmed On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
title_sort On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
dc.creator.fl_str_mv Segura, Enrique Carlos
dc.contributor.author.spa.fl_str_mv Segura, Enrique Carlos
dc.subject.spa.fl_str_mv Neural network
Robotic manipulator
Multilayer perceptron
Stochastic learning
Inverse dynamics
topic Neural network
Robotic manipulator
Multilayer perceptron
Stochastic learning
Inverse dynamics
dc.subject.eng.fl_str_mv Neural network
Robotic manipulator
Multilayer perceptron
Stochastic learning
Inverse dynamics
description The SAGA algorithm is used to approximate the inverse dynamics of a robotic manipulator with two rotational joints. SAGA (Simulated Annealing Gradient Adaptation) is a stochastic strategy for the additive construction of an artificial neural network of the two-layer perceptron type, based on three essential elements: a) updating the network weights by means of gradient information from the cost function; b) accepting or rejecting the proposed change through a classical simulated annealing technique; and c) progressively growing the neural network as its structure proves insufficient, using a conservative strategy for adding units to the hidden layer. Experiments are performed and efficiency is analyzed in terms of the relation between mean relative errors -in the training and testing sets-, network size, and computation time. The ability of the proposed technique to obtain good approximations while minimizing the complexity of the network's architecture and, hence, the required computational memory, is emphasized. Moreover, the evolution of the minimization process as the cost surface is modified is also discussed.
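The three elements in the abstract can be sketched as a toy training loop in the spirit of SAGA. This is an illustrative sketch only, not the paper's implementation: the synthetic 2-input target stands in for the manipulator's inverse-dynamics data, and the learning rate, noise scale, cooling rate, and growth trigger are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one joint's torque map (NOT the paper's data):
# 2 inputs -> 1 output, smooth and nonlinear.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def init_net(h):
    """Two-layer perceptron with h hidden tanh units."""
    return {"W1": 0.5 * rng.standard_normal((2, h)), "b1": np.zeros(h),
            "W2": 0.5 * rng.standard_normal(h), "b2": np.zeros(1)}

def forward(p):
    a = np.tanh(X @ p["W1"] + p["b1"])           # hidden activations
    return a @ p["W2"] + p["b2"], a

def loss(p):
    out, _ = forward(p)
    return float(np.mean((out - y) ** 2))

def propose(p, lr=0.05, noise=0.01):
    """Element (a): gradient-based weight update, plus a little noise
    so the proposal stays stochastic."""
    out, a = forward(p)
    e = 2.0 * (out - y) / len(y)                 # dL/d(out)
    da = np.outer(e, p["W2"]) * (1.0 - a ** 2)   # backprop through tanh
    g = {"W1": X.T @ da, "b1": da.sum(axis=0),
         "W2": a.T @ e, "b2": e.sum()}
    return {k: p[k] - lr * g[k] + noise * rng.standard_normal(np.shape(p[k]))
            for k in p}

def grow(p):
    """Element (c): conservatively add one hidden unit."""
    return {"W1": np.hstack([p["W1"], 0.1 * rng.standard_normal((2, 1))]),
            "b1": np.append(p["b1"], 0.0),
            "W2": np.append(p["W2"], 0.0),       # new unit starts silent
            "b2": p["b2"]}

p = init_net(2)
L = L0 = loss(p)
T, stall = 1.0, 0
for step in range(3000):
    cand = propose(p)
    Lc = loss(cand)
    # Element (b): classical Metropolis acceptance rule -- always accept
    # improvements, occasionally accept uphill moves while T is warm.
    if Lc < L or rng.random() < np.exp((L - Lc) / T):
        p, L, stall = cand, Lc, 0
    else:
        stall += 1
    if stall > 50:                               # structure proves insufficient
        p = grow(p)
        L, stall = loss(p), 0
    T *= 0.999                                   # geometric cooling schedule
```

Tying growth to a stall counter mirrors the abstract's "conservative strategy": units are added only when repeated proposals fail, so the hidden layer (and memory footprint) stays as small as the task allows.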
publishDate 2013
dc.date.issued.none.fl_str_mv 2013-12-31
dc.date.accessioned.none.fl_str_mv 2019-02-19T21:46:12Z
dc.date.available.none.fl_str_mv 2019-02-19T21:46:12Z
dc.type.spa.fl_str_mv Artículo de revista
dc.type.coar.fl_str_mv http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.coar.spa.fl_str_mv http://purl.org/coar/resource_type/c_6501
dc.type.content.spa.fl_str_mv Text
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/article
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/ART
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
format http://purl.org/coar/resource_type/c_6501
status_str acceptedVersion
dc.identifier.citation.spa.fl_str_mv Segura, E. (2013). On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm. INGE CUC, 9(2), 39-43. Recuperado a partir de https://revistascientificas.cuc.edu.co/ingecuc/article/view/4
dc.identifier.issn.spa.fl_str_mv 0122-6517, 2382-4700 electrónico
dc.identifier.uri.spa.fl_str_mv https://hdl.handle.net/11323/2631
dc.identifier.eissn.spa.fl_str_mv 2382-4700
dc.identifier.instname.spa.fl_str_mv Corporación Universidad de la Costa
dc.identifier.pissn.spa.fl_str_mv 0122-6517
dc.identifier.reponame.spa.fl_str_mv REDICUC - Repositorio CUC
dc.identifier.repourl.spa.fl_str_mv https://repositorio.cuc.edu.co/
identifier_str_mv Segura, E. (2013). On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm. INGE CUC, 9(2), 39-43. Recuperado a partir de https://revistascientificas.cuc.edu.co/ingecuc/article/view/4
0122-6517, 2382-4700 electrónico
2382-4700
Corporación Universidad de la Costa
0122-6517
REDICUC - Repositorio CUC
url https://hdl.handle.net/11323/2631
https://repositorio.cuc.edu.co/
dc.language.iso.none.fl_str_mv eng
language eng
dc.relation.ispartofseries.spa.fl_str_mv INGE CUC; Vol. 9, Núm. 2 (2013)
dc.relation.ispartofjournal.spa.fl_str_mv INGE CUC
INGE CUC
dc.relation.references.spa.fl_str_mv [1] V. I. Arnold, “On Functions of three Variables”, Dokl. Akad. Nauk, no.114, pp. 679-681, 1957.
[2] G. Cybenko, “Approximation by superpositions of a sigmoidal function”, Math. Control, Signals and Systems, vol.2, no.4, pp. 303-314, 1989.
[3] K. Funahashi, “On the approximate realization of continuous mappings by neural networks”, Neural Networks, vol.2, no.3, pp. 183-192, 1989.
[4] S. Haykin, Neural Networks and Learning Machines. Upper Saddle River, Pearson-Prentice Hall, 2009.
[5] Y. Ito, “Extension of Approximation Capability of Three Layered Neural Networks to Derivatives”, Proc. IEEE Int. Conf. Neural Networks, San Francisco, 1993, pp. 377-381.
[6] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by Simulated Annealing”, Science, vol. 220, pp. 671-680, 1983.
[7] A. N. Kolmogorov, “On the Representation of Functions of Many Variables by Superposition of Continuous Functions of one Variable and Addition” (1957), Am. Math. Soc. Tr., vol.28, pp. 55-59, 1963.
[8] P. J. Van Laarhoven and E. H. Aarts, Simulated Annealing: Theory and Applications. Dordrecht: Kluwer, 2010.
[9] M. Leshno, V. Y. Lin, A. Pinkus and S. Schocken, “Multilayer Feedforward Networks with a Nonpolynomial Activation Function Can Approximate Any Function”, Neural Networks, vol.6, no 6, pp. 861-867, 1993.
[10] A. B. Martínez, R. M. Planas, and E. C. Segura, “Disposición anular de cámaras sobre un robot móvil”, en Actas XVII Jornadas de Automática Santander96, Santander, 1996.
[11] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. H. Teller, and E. Teller, “Equation of State Calculations by Fast Computing Machines”, J. Chem. Phys., vol. 21, no 6, pp. 1087-1092, 1953.
[12] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors”, Nature, vol. 323, pp. 533-536, 1986.
[13] P. Salamon, P. Sibani, and R. Frost, Facts, Conjectures and Improvements for Simulated Annealing. SIAM Monographs on Mathematical Modeling and Computation, 2002.
[14] E. C. Segura, “A non-parametric method for video camera calibration using a neural network”, Int. Symp. Multi-Technology Information Processing, Hsinchu, Taiwan, 1996.
[15] E. C. Segura, “Optimisation with Simulated Annealing through Regularisation of the Target Function”, Proc. XII Congreso Arg. de Ciencias de la Computación, Potrero de los Funes, 2006.
[16] D. A. Sprecher, “On the Structure of Continuous Functions of Several Variables”, Tr. Am. Math. Soc., vol.115, pp. 340-355, 1963.
dc.relation.ispartofjournalabbrev.spa.fl_str_mv INGE CUC
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.coar.spa.fl_str_mv http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv openAccess
rights_invalid_str_mv http://purl.org/coar/access_right/c_abf2
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Corporación Universidad de la Costa
dc.source.spa.fl_str_mv INGE CUC
institution Corporación Universidad de la Costa
dc.source.url.spa.fl_str_mv https://revistascientificas.cuc.edu.co/ingecuc/article/view/4
bitstream.url.fl_str_mv https://repositorio.cuc.edu.co/bitstreams/5891acee-5567-4d97-8602-e6f937a0c617/download
https://repositorio.cuc.edu.co/bitstreams/1d44db71-939a-46f4-869a-a7364245e81e/download
https://repositorio.cuc.edu.co/bitstreams/4a4d43bf-a1ab-45f1-a521-f43986e52135/download
https://repositorio.cuc.edu.co/bitstreams/4bae33eb-0697-418d-9558-4b244a60e232/download
repository.name.fl_str_mv Repositorio de la Universidad de la Costa CUC
repository.mail.fl_str_mv repdigital@cuc.edu.co