Predicción y selección de variables con bosques aleatorios en presencia de variables correlacionadas

This thesis addresses the problem of variable selection using the random forest method when the underlying model for the response variable is linear. To this end, simulated data sets with different characteristics are configured; the methodology is then applied, and the prediction error is measured each time a variable is eliminated. This serves, first, to evaluate the selection algorithm, which is found to be efficient when the data sets contain groups of predictor variables of size less than 8, and second, to evaluate the random forest method itself, for which the total number of predictor variables is identified as the factor that most strongly impacts performance.
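The procedure described in the abstract — simulate predictors in correlated groups under a linear model, then repeatedly drop the least important variable and record the held-out prediction error — can be sketched as follows. This is a minimal illustration, not the thesis code: ordinary least squares with a coefficient-based importance ranking stands in for the random forest and its permutation importance, and the names (`make_grouped_predictors`, `fit_and_error`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_grouped_predictors(n, groups, rho, rng):
    """Simulate predictors in correlated groups: variables in the same
    group share a latent factor, giving pairwise correlation ~ rho."""
    cols = []
    for size in groups:
        latent = rng.standard_normal(n)
        for _ in range(size):
            noise = rng.standard_normal(n)
            cols.append(np.sqrt(rho) * latent + np.sqrt(1 - rho) * noise)
    return np.column_stack(cols)

n = 500
groups = [4, 4, 4]          # three groups of 4 correlated predictors
X = make_grouped_predictors(n, groups, rho=0.7, rng=rng)
beta = np.zeros(X.shape[1])
beta[:4] = 2.0              # only the first group drives the response
y = X @ beta + rng.standard_normal(n)

def fit_and_error(X_tr, y_tr, X_te, y_te):
    """Least-squares fit on the training half, MSE on the held-out half
    (a dependency-free stand-in for the random forest's prediction error)."""
    coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    resid = y_te - X_te @ coef
    return coef, float(np.mean(resid ** 2))

# Backward elimination: drop the least important variable each round
# and record the prediction error, as in the evaluation design above.
half = n // 2
active = list(range(X.shape[1]))
errors = []
while len(active) > 1:
    coef, mse = fit_and_error(X[:half, active], y[:half],
                              X[half:, active], y[half:])
    errors.append(mse)
    # proxy importance: |coef| * sd(x); eliminate the smallest
    importance = np.abs(coef) * X[:half, active].std(axis=0)
    active.pop(int(np.argmin(importance)))

print(len(errors), active)
```

Plotting `errors` against the number of remaining variables is the kind of diagnostic the thesis uses to judge when elimination starts to hurt prediction.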

Author:
Cardona Alzate, Néstor Iván
Resource type:
Working paper
Publication date:
2019
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
Spanish (spa)
OAI Identifier:
oai:repositorio.unal.edu.co:unal/75561
Online access:
https://repositorio.unal.edu.co/handle/unal/75561
Keywords:
Matemáticas::Probabilidades y matemáticas aplicadas
Prediction
Predictor variables
Regression analysis
Simulation methods
Rights:
Open access
License:
Atribución-NoComercial-SinDerivadas 4.0 Internacional
Advisors:
Ospina Arango, Juan David
Correa Morales, Juan Carlos
Date available in repository:
2020-02-07
Version:
Accepted version
Rights statement:
Derechos reservados - Universidad Nacional de Colombia
License URL:
http://creativecommons.org/licenses/by-nc-nd/4.0/
Department:
Escuela de estadística
Campus:
Universidad Nacional de Colombia - Sede Medellín
Extent:
53 pages (application/pdf)
Full text:
https://repositorio.unal.edu.co/bitstream/unal/75561/1/8063120.2019.pdf
Degree:
Maestría en Ciencias - Estadística (Master of Science in Statistics)