Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos
This thesis presents a contribution to the modeling of dynamic systems using deep learning generative models, specifically autoencoders and variational autoencoders. First, a review of the intersection between system identification and deep learning is carried out in order to propose current methods and architectures for system modeling with deep neural networks. An autoencoder architecture based on MLP neural networks is then proposed for modeling linear and nonlinear systems; the idea of dimensionality reduction is used to obtain a compact representation of the system's time response, which helps prediction when the signals are affected by noise. In addition, a modified variational autoencoder architecture is proposed exclusively for modeling nonlinear systems: the encoder is an MLP network, while the decoder is based on a neural NARX structure. The proposed architectures are validated on benchmark dynamic systems selected from a literature review, and the results are contrasted with those obtained with classical artificial neural network modeling architectures.
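As a rough illustration of the autoencoder-based approach described in the abstract above, the sketch below builds an MLP autoencoder that compresses a window of a system's time response into a low-dimensional latent code and reconstructs it, the kind of compact, noise-robust representation the abstract refers to. This is only a minimal sketch under assumed choices (TensorFlow/Keras, window length, latent size, layer widths, and the toy training data are not taken from the thesis); it is not the author's implementation, and the modified variational autoencoder with a neural NARX decoder is not reproduced here.

```python
# Minimal sketch (assumptions, not the thesis code): an MLP autoencoder that maps a
# window of a system's time response to a compact latent code and back, illustrating
# the dimensionality-reduction idea described in the abstract.
import numpy as np
from tensorflow.keras import layers, Model

WINDOW = 100   # assumed number of response samples per training window
LATENT = 8     # assumed size of the compact latent representation

# Encoder: MLP that compresses the window to the latent code.
inputs = layers.Input(shape=(WINDOW,))
h = layers.Dense(64, activation="relu")(inputs)
z = layers.Dense(LATENT, name="latent")(h)

# Decoder: MLP that reconstructs the window from the latent code.
h = layers.Dense(64, activation="relu")(z)
outputs = layers.Dense(WINDOW)(h)

autoencoder = Model(inputs, outputs, name="mlp_autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")

# Illustrative training data: windows of a simulated first-order step response with noise.
t = np.linspace(0.0, 10.0, WINDOW)
clean = 1.0 - np.exp(-t)                              # unit step response, tau = 1 s
x_clean = np.tile(clean, (256, 1)).astype("float32")
x_noisy = x_clean + 0.05 * np.random.randn(*x_clean.shape).astype("float32")

# Reconstructing the clean response from noisy windows exercises the noise-robust
# compact representation mentioned in the abstract.
autoencoder.fit(x_noisy, x_clean, epochs=20, batch_size=32, verbose=0)
```

A plain MLP encoder/decoder pair is used here for simplicity; the thesis's second architecture instead pairs an MLP encoder with a NARX-based decoder inside a variational autoencoder, which is not sketched here to avoid guessing its internals.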
- Authors: Paniagua Jaramillo, José Luis
- Resource type: Doctoral thesis
- Publication date: 2022
- Institution: Universidad Autónoma de Occidente
- Repository: RED: Repositorio Educativo Digital UAO
- Language: spa
- OAI identifier: oai:red.uao.edu.co:10614/15466
- Online access: https://hdl.handle.net/10614/15466; https://red.uao.edu.co/
- Keywords: Doctorado en Ingeniería; Identificación de sistemas; Aprendizaje profundo; Modelos generativos; Sistemas dinámicos no lineales; System identification; Deep learning; Generative modeling; Nonlinear dynamic system
- Rights: openAccess
- License: Derechos reservados - Universidad Autónoma de Occidente, 2022
| Field | Value |
|---|---|
| id | REPOUAO2_10fd77b862bc06149259f89d3511816d |
| oai_identifier_str | oai:red.uao.edu.co:10614/15466 |
| network_acronym_str | REPOUAO2 |
| network_name_str | RED: Repositorio Educativo Digital UAO |
| repository_id_str | |
| dc.title.spa.fl_str_mv | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| title | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| spellingShingle | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos; Doctorado en Ingeniería; Identificación de sistemas; Aprendizaje profundo; Modelos generativos; Sistemas dinámicos no lineales; System identification; Deep learning; Generative modeling; Nonlinear dynamic system |
| title_short | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| title_full | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| title_fullStr | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| title_full_unstemmed | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| title_sort | Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos |
| dc.creator.fl_str_mv | Paniagua Jaramillo, José Luis |
| dc.contributor.advisor.none.fl_str_mv | López Sotelo, Jesús Alfonso |
| dc.contributor.author.none.fl_str_mv | Paniagua Jaramillo, José Luis |
| dc.contributor.corporatename.spa.fl_str_mv | Universidad Autónoma de Occidente |
| dc.contributor.jury.none.fl_str_mv | Romero Cano, Víctor; Peña, Carlos |
| dc.subject.proposal.spa.fl_str_mv | Doctorado en Ingeniería; Identificación de sistemas; Aprendizaje profundo; Modelos generativos; Sistemas dinámicos no lineales |
| topic | Doctorado en Ingeniería; Identificación de sistemas; Aprendizaje profundo; Modelos generativos; Sistemas dinámicos no lineales; System identification; Deep learning; Generative modeling; Nonlinear dynamic system |
| dc.subject.proposal.eng.fl_str_mv | System identification; Deep learning; Generative modeling; Nonlinear dynamic system |
| description | En este trabajo de tesis se presenta un aporte al modelado de sistemas dinámicos usando modelos generativos de aprendizaje profundo, específicamente Autocodificadores y Autocodificadores Variacionales. En primer lugar, se realiza una revisión acerca de la intersección entre las temáticas de identificación de sistemas y el aprendizaje profundo. Esto con el fin de proponer métodos y arquitecturas actuales de modelado de sistemas usando redes neuronales profundas. Se propone una arquitectura de Autocodificador basada en redes neuronales MLP para el modelado de sistemas lineales y no lineales. Además, se introduce el concepto de reducción de dimensionalidad para lograr una representación compacta de la respuesta temporal del sistema, lo cual ayuda a la predicción ante señales afectadas por ruido. Por otra parte, se plantea una arquitectura de Autocodificador Variacional modificada para el modelado de sistemas no lineales exclusivamente. La parte del codificador consiste en una red MLP, mientras que el decodificador se basa en una estructura NARX neuronal. Las arquitecturas propuestas son validadas con sistemas dinámicos de referencia seleccionados a partir de una revisión de literatura. Además, los resultados son contrastados con los obtenidos con arquitecturas clásicas de modelado con redes neuronales artificiales. |
| publishDate | 2022 |
| dc.date.issued.none.fl_str_mv | 2022-11-08 |
| dc.date.accessioned.none.fl_str_mv | 2024-02-29T16:03:52Z |
| dc.date.available.none.fl_str_mv | 2024-02-29T16:03:52Z |
| dc.type.spa.fl_str_mv | Trabajo de grado - Doctorado |
| dc.type.coarversion.fl_str_mv | http://purl.org/coar/version/c_970fb48d4fbd8a85 |
| dc.type.coar.none.fl_str_mv | http://purl.org/coar/resource_type/c_db06 |
| dc.type.content.none.fl_str_mv | Text |
| dc.type.driver.none.fl_str_mv | info:eu-repo/semantics/doctoralThesis |
| dc.type.redcol.none.fl_str_mv | http://purl.org/redcol/resource_type/TD |
| dc.type.version.none.fl_str_mv | info:eu-repo/semantics/publishedVersion |
| format | http://purl.org/coar/resource_type/c_db06 |
| status_str | publishedVersion |
| dc.identifier.citation.spa.fl_str_mv | Paniagua Jaramillo, J. L. (2022). Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos. (Tesis). Universidad Autónoma de Occidente. Cali. Colombia. https://hdl.handle.net/10614/15466 |
| dc.identifier.uri.none.fl_str_mv | https://hdl.handle.net/10614/15466 |
| dc.identifier.instname.spa.fl_str_mv | Universidad Autónoma de Occidente |
| dc.identifier.reponame.spa.fl_str_mv | Repositorio Educativo Digital UAO |
| identifier_str_mv | Paniagua Jaramillo, J. L. (2022). Aportes al modelado de sistemas dinámicos no lineales usando modelos generativos. (Tesis). Universidad Autónoma de Occidente. Cali. Colombia. https://hdl.handle.net/10614/15466; Universidad Autónoma de Occidente; Repositorio Educativo Digital UAO |
| url | https://hdl.handle.net/10614/15466; https://red.uao.edu.co/ |
| dc.language.iso.spa.fl_str_mv | spa |
| language | spa |
| dc.rights.spa.fl_str_mv | Derechos reservados - Universidad Autónoma de Occidente, 2022 |
| dc.rights.coar.fl_str_mv | http://purl.org/coar/access_right/c_abf2 |
| dc.rights.uri.none.fl_str_mv | https://creativecommons.org/licenses/by-nc-nd/4.0/ |
| dc.rights.accessrights.eng.fl_str_mv | info:eu-repo/semantics/openAccess |
| dc.rights.creativecommons.none.fl_str_mv | Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0) |
| rights_invalid_str_mv | Derechos reservados - Universidad Autónoma de Occidente, 2022; https://creativecommons.org/licenses/by-nc-nd/4.0/; Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0); http://purl.org/coar/access_right/c_abf2 |
| eu_rights_str_mv | openAccess |
| dc.format.extent.spa.fl_str_mv | 189 páginas |
| dc.format.mimetype.none.fl_str_mv | application/pdf |
| dc.publisher.spa.fl_str_mv | Universidad Autónoma de Occidente |
| dc.publisher.program.spa.fl_str_mv | Doctorado en Ingeniería |
| dc.publisher.faculty.spa.fl_str_mv | Facultad de Ingeniería |
| dc.publisher.place.spa.fl_str_mv | Cali |
| institution | Universidad Autónoma de Occidente |
| bitstream.url.fl_str_mv | https://red.uao.edu.co/bitstreams/f4618bd9-a6d4-44cd-aac0-e06b1fb80e9d/download; https://red.uao.edu.co/bitstreams/4c4ffe80-bd7f-43fc-a960-905601588797/download; https://red.uao.edu.co/bitstreams/6d97cfa2-902a-44d3-93e1-64feca2b1462/download; https://red.uao.edu.co/bitstreams/1fd6d3b9-cb4e-414f-970d-f3f482ed3b67/download; https://red.uao.edu.co/bitstreams/7fc0d066-b44f-4d69-abf0-3c4bc15ee793/download; https://red.uao.edu.co/bitstreams/c7ba2a1c-a065-4998-884d-f3fe0b2ae3b8/download; https://red.uao.edu.co/bitstreams/54833503-42b9-4ad7-b457-6bbcb9da341b/download |
| bitstream.checksum.fl_str_mv | 6b541d1e81f8836d414a1bca6b3e9174; fae3ab4dc125da71a0540810ab08afa8; 6987b791264a2b5525252450f99b10d1; 9ad64484e8fb5dff374a17cbbf0da900; 091c80ea762646bd723bd0c6308966b9; 01482431ddfd07036b5c1f3ad4c721e8; 3369c48bd580b526d1dadcac68c0b142 |
| bitstream.checksumAlgorithm.fl_str_mv | MD5; MD5; MD5; MD5; MD5; MD5; MD5 |
| repository.name.fl_str_mv | Repositorio Digital Universidad Autónoma de Occidente |
| repository.mail.fl_str_mv | repositorio@uao.edu.co |
| _version_ | 1814259881255895040 |