Scalable kernel methods using randomized numerical linear algebra
Master's thesis document
- Authors:
- Castellanos Martinez, Ivan Yesid
- Resource type:
- Trabajo de grado - Maestría (master's thesis)
- Publication date:
- 2021
- Institution:
- Universidad Nacional de Colombia
- Repository:
- Universidad Nacional de Colombia
- Language:
- eng
- OAI Identifier:
- oai:repositorio.unal.edu.co:unal/80695
- Keywords:
- 000 - Computer science, information and general works
Machine Learning
Kernel Methods
Budget Method
Randomized Numerical Linear Algebra
Distance Based Hashing
Approximated Methods
Aprendizaje maquinal
Métodos de kernel
Método de budget
Álgebra Lineal Numérica Aleatorizada
Hashing basado en distancias
Métodos Aproximados
- Rights
- openAccess
- License
- Reconocimiento 4.0 Internacional
id |
UNACIONAL2_07553c137724bf32443e2a93227a306b |
oai_identifier_str |
oai:repositorio.unal.edu.co:unal/80695 |
network_acronym_str |
UNACIONAL2 |
network_name_str |
Universidad Nacional de Colombia |
repository_id_str |
|
dc.title.eng.fl_str_mv |
Scalable kernel methods using randomized numerical linear algebra |
dc.title.translated.spa.fl_str_mv |
Métodos de kernel escalables utilizando álgebra lineal numérica aleatorizada |
dc.creator.fl_str_mv |
Castellanos Martinez, Ivan Yesid |
dc.contributor.advisor.none.fl_str_mv |
Gonzalez Osorio, Fabio Augusto |
dc.contributor.author.none.fl_str_mv |
Castellanos Martinez, Ivan Yesid |
dc.contributor.researchgroup.spa.fl_str_mv |
MindLab |
dc.subject.ddc.spa.fl_str_mv |
000 - Ciencias de la computación, información y obras generales |
dc.subject.proposal.eng.fl_str_mv |
Machine Learning; Kernel Methods; Budget Method; Randomized Numerical Linear Algebra; Distance Based Hashing; Approximated Methods |
dc.subject.proposal.spa.fl_str_mv |
Aprendizaje maquinal; Métodos de kernel; Método de budget; Álgebra Lineal Numérica Aleatorizada; Hashing basado en distancias; Métodos Aproximados |
description |
Master's thesis document (illustrations, tables; intended for a general audience). Abstract: Kernel methods are a family of machine learning algorithms that use a kernel function to represent data implicitly in a high-dimensional space, where linear optimization problems capture non-linear relationships in the original data space, making it possible to find complex patterns in the data. The main disadvantage of these methods is their poor scalability, as most kernel-based algorithms must compute a matrix whose size is quadratic in the number of data samples. This limitation has led practitioners to avoid kernel methods on large-scale datasets and to use approaches such as deep learning instead. However, kernel methods are still relevant for better understanding deep learning methods and can improve them through hybrid settings that combine the best of both worlds. The main goal of this thesis is to explore efficient ways to use kernel methods without a large loss in accuracy. To this end, different approaches are presented and formulated, among which we propose the learning-on-a-budget strategy, developed in detail from a theoretical perspective and including a novel budget-selection procedure. In the experimental evaluation, this strategy shows competitive performance and improvements over the standard learning-on-a-budget method, especially when smaller approximations are selected, which are the most useful in large-scale environments. |
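The abstract's scalability argument can be made concrete with a short sketch. The Python example below is my illustration, not code from the thesis: the function names, parameter values, and the uniform landmark sampling are assumptions. It contrasts the exact n × n RBF kernel matrix with a Nyström-style budget approximation (Williams and Seeger, 2001, cited in the references below) built from m landmark points, which reduces the quadratic cost to roughly O(nm² + nmd):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """Exact RBF kernel: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * np.maximum(sq, 0.0))

def nystrom_features(X, m=50, gamma=0.1, seed=0):
    """Map X to m-dimensional features whose inner products approximate K."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # uniform "budget" selection
    landmarks = X[idx]
    C = rbf_kernel(X, landmarks, gamma)          # n x m slice of the kernel matrix
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
    # K is approximated by C W^{-1} C^T; factor it as Phi Phi^T with
    # Phi = C W^{-1/2}, computed from the eigendecomposition of W.
    vals, vecs = np.linalg.eigh(W)
    vals = np.maximum(vals, 1e-12)               # guard against round-off negatives
    return C @ (vecs * vals**-0.5) @ vecs.T

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))                  # 1000 samples, 20 features
Phi = nystrom_features(X, m=50)
K_exact = rbf_kernel(X, X)                       # the O(n^2) object avoided at scale
err = np.linalg.norm(K_exact - Phi @ Phi.T) / np.linalg.norm(K_exact)
print(f"relative approximation error: {err:.3f}")
```

Uniform sampling is only the simplest budget-selection rule; the thesis proposes a more refined selection procedure, which this sketch does not attempt to reproduce.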
dc.date.accessioned.none.fl_str_mv |
2021-11-18T04:30:09Z |
dc.date.available.none.fl_str_mv |
2021-11-18T04:30:09Z |
dc.date.issued.none.fl_str_mv |
2021 |
dc.type.spa.fl_str_mv |
Trabajo de grado - Maestría |
dc.type.driver.spa.fl_str_mv |
info:eu-repo/semantics/masterThesis |
dc.type.version.spa.fl_str_mv |
info:eu-repo/semantics/acceptedVersion |
dc.type.content.spa.fl_str_mv |
Text |
dc.type.redcol.spa.fl_str_mv |
http://purl.org/redcol/resource_type/TM |
dc.identifier.uri.none.fl_str_mv |
https://repositorio.unal.edu.co/handle/unal/80695 |
dc.identifier.instname.spa.fl_str_mv |
Universidad Nacional de Colombia |
dc.identifier.reponame.spa.fl_str_mv |
Repositorio Institucional Universidad Nacional de Colombia |
dc.identifier.repourl.spa.fl_str_mv |
https://repositorio.unal.edu.co/ |
dc.language.iso.spa.fl_str_mv |
eng |
dc.relation.references.spa.fl_str_mv |
Ahuja, S. and Angra, S. (2017). Machine learning and its applications: A review.
Baveye, Y., Dellandréa, E., Chamaret, C., and Chen, L. (2015). Deep learning vs. kernel methods: Performance for emotion prediction in videos. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pages 77–83.
Belkin, M., Ma, S., and Mandal, S. (2018). To understand deep learning we need to understand kernel learning.
Bengio, Y., Delalleau, O., and Le Roux, N. (2005). The curse of dimensionality for local kernel machines. Techn. Rep, 1258:12.
Borgwardt, K. M. (2011). Kernel Methods in Bioinformatics, pages 317–334. Springer Berlin Heidelberg, Berlin, Heidelberg.
Bousquet, O. and Herrmann, D. J. (2003). On the complexity of learning the kernel matrix. Advances in Neural Information Processing Systems, pages 415–422.
Boutsidis, C., Mahoney, M. W., and Drineas, P. (2009). An Improved Approximation Algorithm for the Column Subset Selection Problem, pages 968–977.
Chen, D., Jacob, L., and Mairal, J. (2020). Convolutional kernel networks for graph-structured data. In III, H. D. and Singh, A., editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1576–1586. PMLR.
Chitta, R., Jin, R., Havens, T. C., and Jain, A. K. (2014). Scalable kernel clustering: Approximate kernel k-means. CoRR, abs/1402.3849.
Chitta, R., Jin, R., and Jain, A. K. (2012). Efficient kernel clustering using random Fourier features. In 2012 IEEE 12th International Conference on Data Mining, pages 161–170. IEEE.
Trokicic, A. and Todorovic, B. (2020). On expected error of randomized Nyström kernel regression. Filomat, 34(11):3871–3884.
Vanegas, J. A., Escalante, H. J., and González, F. A. (2018). Semi-supervised online kernel semantic embedding for multi-label annotation. In Mendoza, M. and Velastín, S., editors, Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, pages 693–701, Cham. Springer International Publishing.
Wang, D. E. J. (2006). Fast approximation of centrality. Graph Algorithms and Applications, 5(5):39.
Wang, J., Cao, B., Yu, P., Sun, L., Bao, W., and Zhu, X. (2018). Deep learning towards mobile applications. In 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), pages 1385–1393.
Wang, J., Shen, H. T., Song, J., and Ji, J. (2014). Hashing for similarity search: A survey. CoRR, abs/1408.2927.
Wang, S., Gittens, A., and Mahoney, M. W. (2019). Scalable kernel k-means clustering with Nyström approximation: relative-error bounds. The Journal of Machine Learning Research, 20(1):431–479.
Wang, S., Luo, L., and Zhang, Z. (2016). SPSD matrix approximation via column selection: Theories, algorithms, and extensions.
Wang, S. and Zhang, Z. (2013). Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. The Journal of Machine Learning Research, 14(1):2729–2769.
Wang, Y., Liu, X., Dou, Y., Lv, Q., and Lu, Y. (2017). Multiple kernel learning with hybrid kernel alignment maximization. Pattern Recognition, 70:104–111.
Wang, Z., Crammer, K., and Vucetic, S. (2012). Breaking the curse of kernelization: Budgeted stochastic gradient descent for large-scale SVM training. Journal of Machine Learning Research, 13(100):3103–3131.
Williams, C. K. I. and Seeger, M. (2001). Using the Nyström method to speed up kernel machines. In Leen, T. K., Dietterich, T. G., and Tresp, V., editors, Advances in Neural Information Processing Systems 13, pages 682–688. MIT Press.
Witten, R. and Candes, E. (2015). Randomized algorithms for low-rank matrix factorizations: sharp performance bounds. Algorithmica, 72(1):264–281.
Wu, L., Chen, P.-Y., Yen, I. E.-H., Xu, F., Xia, Y., and Aggarwal, C. (2018). Scalable spectral clustering using random binning features. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2506–2515.
Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms.
Yang, M.-H. (2002). Kernel eigenfaces vs. kernel Fisherfaces: Face recognition using kernel methods. In FGR, volume 2, page 215.
Yu, F. X., Suresh, A. T., Choromanski, K., Holtmann-Rice, D. N., and Kumar, S. (2016). Orthogonal random features. CoRR, abs/1610.09072.
Zhang, D., Wang, J., Cai, D., and Lu, J. (2010). Self-taught hashing for fast similarity search. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 18–25.
Zhang, Q., Filippi, S., Gretton, A., and Sejdinovic, D. (2017). Large-scale kernel methods for independence testing. Statistics and Computing, 28(1):113–130. |
dc.rights.coar.fl_str_mv |
http://purl.org/coar/access_right/c_abf2 |
dc.rights.license.spa.fl_str_mv |
Reconocimiento 4.0 Internacional |
dc.rights.uri.spa.fl_str_mv |
http://creativecommons.org/licenses/by/4.0/ |
dc.rights.accessrights.spa.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.format.extent.spa.fl_str_mv |
xv, 58 páginas |
dc.format.mimetype.spa.fl_str_mv |
application/pdf |
dc.publisher.spa.fl_str_mv |
Universidad Nacional de Colombia |
dc.publisher.program.spa.fl_str_mv |
Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación |
dc.publisher.department.spa.fl_str_mv |
Departamento de Ingeniería de Sistemas e Industrial |
dc.publisher.faculty.spa.fl_str_mv |
Facultad de Ingeniería |
dc.publisher.place.spa.fl_str_mv |
Bogotá, Colombia |
dc.publisher.branch.spa.fl_str_mv |
Universidad Nacional de Colombia - Sede Bogotá |
bitstream.url.fl_str_mv |
https://repositorio.unal.edu.co/bitstream/unal/80695/1/license.txt
https://repositorio.unal.edu.co/bitstream/unal/80695/3/1032463787.2021.pdf
https://repositorio.unal.edu.co/bitstream/unal/80695/4/1032463787.2021.pdf.jpg |
bitstream.checksum.fl_str_mv |
8153f7789df02f0a4c9e079953658ab2
bcfbc522c5a4ef7c44a4688b51dd8525
9025f17b276f8b25f0f01f10ce8aed6c |
bitstream.checksumAlgorithm.fl_str_mv |
MD5
MD5
MD5 |
repository.name.fl_str_mv |
Repositorio Institucional Universidad Nacional de Colombia |
repository.mail.fl_str_mv |
repositorio_nal@unal.edu.co |
_version_ |
1814090201329303552 |