Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos

Illustrations

Authors:
Betancur Rodríguez, Daniel
Resource type:
Trabajo de grado - Maestría (master's thesis)
Publication date:
2024
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/85925
Online access:
https://repositorio.unal.edu.co/handle/unal/85925
https://repositorio.unal.edu.co/
Keywords:
510 - Matemáticas::519 - Probabilidades y matemáticas aplicadas
Análisis de series de tiempo
Procesos de Poisson
Redes neuronales (computadores)
Aprendizaje automático (inteligencia artificial)
Modelos lineales generalizados
predicción
datos de conteos
regresión Poisson
series de tiempo
redes neuronales recurrentes
transformers
Generalized linear models
Prediction
Count data
Poisson regression
State space models
Time series
Neural networks
Recurrent neural networks
Transformers
Rights
openAccess
License
Atribución-NoComercial 4.0 Internacional
id UNACIONAL2_59f5ea09057f69fa2524ba8a7878e46b
oai_identifier_str oai:repositorio.unal.edu.co:unal/85925
network_acronym_str UNACIONAL2
network_name_str Universidad Nacional de Colombia
repository_id_str
dc.title.spa.fl_str_mv Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
dc.title.translated.eng.fl_str_mv Comparative analysis of forecasting methodologies for multiple time series of counts
title Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
spellingShingle Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
510 - Matemáticas::519 - Probabilidades y matemáticas aplicadas
Análisis de series de tiempo
Procesos de Poisson
Redes neuronales (computadores)
Aprendizaje automático (inteligencia artificial)
Modelos lineales generalizados
predicción
datos de conteos
regresión Poisson
series de tiempo
redes neuronales recurrentes
transformers
Generalized linear models
Prediction
Count data
Poisson regression
State space models
Time series
Neural networks
Recurrent neural networks
Transformers
title_short Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
title_full Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
title_fullStr Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
title_full_unstemmed Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
title_sort Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos
dc.creator.fl_str_mv Betancur Rodríguez, Daniel
dc.contributor.advisor.none.fl_str_mv Cabarcas Jaramillo, Daniel
Gonzáles Alvarez, Nelfi Gertrudis
dc.contributor.author.none.fl_str_mv Betancur Rodríguez, Daniel
dc.subject.ddc.spa.fl_str_mv 510 - Matemáticas::519 - Probabilidades y matemáticas aplicadas
topic 510 - Matemáticas::519 - Probabilidades y matemáticas aplicadas
Análisis de series de tiempo
Procesos de Poisson
Redes neuronales (computadores)
Aprendizaje automático (inteligencia artificial)
Modelos lineales generalizados
predicción
datos de conteos
regresión Poisson
series de tiempo
redes neuronales recurrentes
transformers
Generalized linear models
Prediction
Count data
Poisson regression
State space models
Time series
Neural networks
Recurrent neural networks
Transformers
dc.subject.lemb.none.fl_str_mv Análisis de series de tiempo
Procesos de Poisson
Redes neuronales (computadores)
Aprendizaje automático (inteligencia artificial)
dc.subject.proposal.spa.fl_str_mv Modelos lineales generalizados
predicción
datos de conteos
regresión Poisson
series de tiempo
redes neuronales recurrentes
transformers
dc.subject.proposal.ita.fl_str_mv Generalized linear models
dc.subject.proposal.eng.fl_str_mv Prediction
Count data
Poisson regression
State space models
Time series
Neural networks
Recurrent neural networks
Transformers
description Ilustraciones
publishDate 2024
dc.date.accessioned.none.fl_str_mv 2024-04-16T15:44:15Z
dc.date.available.none.fl_str_mv 2024-04-16T15:44:15Z
dc.date.issued.none.fl_str_mv 2024-04-16
dc.type.spa.fl_str_mv Trabajo de grado - Maestría
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/masterThesis
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.content.spa.fl_str_mv Text
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TM
status_str acceptedVersion
dc.identifier.uri.none.fl_str_mv https://repositorio.unal.edu.co/handle/unal/85925
dc.identifier.instname.spa.fl_str_mv Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv https://repositorio.unal.edu.co/
url https://repositorio.unal.edu.co/handle/unal/85925
https://repositorio.unal.edu.co/
identifier_str_mv Universidad Nacional de Colombia
Repositorio Institucional Universidad Nacional de Colombia
dc.language.iso.spa.fl_str_mv spa
language spa
dc.relation.indexed.spa.fl_str_mv LaReferencia
dc.relation.references.spa.fl_str_mv Aghababaei Jazi, M., & Alamatsaz, M. (2012). Two new thinning operators and their applications. Global Journal of Pure and Applied Mathematics, 8, 13-28
Allende, H., Moraga, C., & Salas, R. (2002). Artificial neural networks in time series forecasting: a comparative analysis. Kybernetika, 38(6), 685-707
Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer Normalization
Bahdanau, D., Cho, K., & Bengio, Y. (2016a). Neural Machine Translation by Jointly Learning to Align and Translate
Bahdanau, D., Cho, K., & Bengio, Y. (2016b). Neural Machine Translation by Jointly Learning to Align and Translate
Bandara, K., Shi, P., Bergmeir, C., Hewamalage, H., Tran, Q., & Seaman, B. (2019). Sales Demand Forecast in E-commerce Using a Long Short-Term Memory Neural Network Methodology. In T. Gedeon, K. W. Wong & M. Lee (Eds.), Neural Information Processing (pp. 462-474). Springer International Publishing
Byrd, R. H., Schnabel, R. B., & Shultz, G. A. (1987). A Trust Region Algorithm for Nonlinearly Constrained Optimization. SIAM Journal on Numerical Analysis, 24(5), 1152-1170. Retrieved May 7, 2023, from http://www.jstor.org/stable/2157645
Chollet, F. (2017). Deep Learning with Python (1st). Manning Publications Co
Christou, V., & Fokianos, K. (2015). On count time series prediction. Journal of Statistical Computation and Simulation, 85(2), 357-373. https://doi.org/10.1080/00949655.2013.823612
Davis, R. A., Fokianos, K., Holan, S. H., Joe, H., Livsey, J., Lund, R., Pipiras, V., & Ravishanker, N. (2021). Count Time Series: A Methodological Review. Journal of the American Statistical Association, 116, 1533-1547. https://doi.org/10.1080/01621459.2021.1904957
Dufour, J.-M. (2008). Estimation of ARMA models by maximum likelihood. https://jeanmariedufour.github.io/ResE/Dufour_2008_C_TS_ARIMA_Estimation.pdf
Dunsmuir, W. T. (2016). Generalized Linear Autoregressive Moving Average Models. In R. A. Davis, S. H. Holan, R. Lund & N. Ravishanker (Eds.). CRC Press
Excoffier, M., Gicquel, C., & Jouini, O. (2016). A joint chance-constrained programming approach for call center workforce scheduling under uncertain call arrival forecasts. Computers & Industrial Engineering, 96, 16-30. https://doi.org/10.1016/j.cie.2016.03.013
Farsani, R., Pazouki, E., & Jecei, J. (2021). A Transformer Self-Attention Model for Time Series Forecasting. Journal of Electrical and Computer Engineering Innovations, 9, 1-10. https://doi.org/10.22061/JECEI.2020.7426.391
Fearnhead, P. (2011). MCMC for State Space Models. In S. Brooks, A. Gelman, G. Jones & X.-L. Meng (Eds.). Chapman & Hall/CRC
Feng, C., Li, L., & Sadeghpour, A. (2020). A comparison of residual diagnosis tools for diagnosing regression models for count data. BMC Medical Research Methodology, 20, 1-21. https://doi.org/10.1186/s12874-020-01055-2
Ferland, R., Latour, A., & Oraichi, D. (2006). Integer-Valued GARCH Process. Journal of Time Series Analysis, 27(6), 923-942. https://doi.org/10.1111/j.1467-9892.2006.00496.x
Fokianos, K. (2012). Count Time Series Models. Handbook of Statistics, 30, 315-347. https://doi.org/10.1016/B978-0-444-53858-1.00012-0
Fokianos, K., Rahbek, A., & Tjøstheim, D. (2009). Poisson Autoregression. Journal of the American Statistical Association, 104(488), 1430-1439. Retrieved March 15, 2023, from http://www.jstor.org/stable/40592351
Fokianos, K., & Tjøstheim, D. (2011). Log-linear Poisson autoregression. Journal of Multivariate Analysis, 102(3), 563-578. https://doi.org/10.1016/j.jmva.2010.11.002
Gamerman, D., Abanto-Valle, C., Silva, R., & Martins, T. (2016). Dynamic Bayesian Models for Discrete-Valued Time Series. In R. A. Davis, S. H. Holan, R. Lund & N. Ravishanker (Eds.). CRC Press
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning [http://www.deeplearningbook.org]. MIT Press
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778. https://doi.org/10.1109/CVPR.2016.90
Hewamalage, H., Bergmeir, C., & Bandara, K. (2022). Global models for time series forecasting: A Simulation study. Pattern Recognition, 124, 108441. https://doi.org/10.1016/j.patcog.2021.108441
Hoppe, R. W. (2006). Chapter 4: Sequential Quadratic Programming. https://www.math.uh.edu/~rohop/fall_06/Chapter4.pdf
Hyndman, R. J. (2021). Focused Workshop: Synthetic Data: Generating time series [Video]. https://www.youtube.com/watch?v=F3lWECtFa44&ab_channel=AustralianDataScienceNetwork
Hyndman, R. J., & Athanasopoulos, G. (2021). Forecasting: Principles and Practice (3rd). OTexts
Hyndman, R. J., Kang, Y., Montero-Manso, P., O’Hara-Wild, M., Talagala, T., Wang, E., & Yang, Y. (2023). tsfeatures: Time Series Feature Extraction [https://pkg.robjhyndman.com/tsfeatures/, https://github.com/robjhyndman/tsfeatures]
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688. https://doi.org/10.1016/j.ijforecast.2006.03.001
Jia, Y. (2018). Some Models for Count Time Series (Doctoral dissertation). Clemson University. 105 Sikes Hall, Clemson, SC 29634, United States. https://tigerprints.clemson.edu/all_dissertations/2213
Kang, Y., Hyndman, R. J., & Li, F. (2020). GRATIS: GeneRAting TIme Series with diverse and controllable characteristics. Statistical Analysis and Data Mining: The ASA Data Science Journal, 13(4), 354-376. https://doi.org/10.1002/sam.11461
Liboschik, T., Fokianos, K., & Fried, R. (2017). tscount: An R Package for Analysis of Count Time Series Following Generalized Linear Models. Journal of Statistical Software, 82(5), 1-51. https://doi.org/10.18637/jss.v082.i05
Lund, R., & Livsey, J. (2016). Renewal-Based Count Time Series. In R. A. Davis, S. H. Holan, R. Lund & N. Ravishanker (Eds.). CRC Press
Makridakis, S. (1993). Accuracy measures: theoretical and practical concerns. International Journal of Forecasting, 9(4), 527-529. https://doi.org/10.1016/0169-2070(93)90079-3
Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2018). Statistical and Machine Learning forecasting methods: Concerns and ways forward. PLoS ONE. https://doi.org/10.1371/journal.pone.0194889
Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2020). The M4 Competition: 100,000 time series and 61 forecasting methods [M4 Competition]. International Journal of Forecasting, 36(1), 54-74. https://doi.org/10.1016/j.ijforecast.2019.04.014
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Jia, Y., Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, ... Xiaoqiang Zheng. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems [Software available from tensorflow.org]. https://www.tensorflow.org/
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5. https://doi.org/10.1007/BF02478259
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space
Montero-Manso, P., & Hyndman, R. J. (2021). Principles and algorithms for forecasting groups of time series: Locality and globality. International Journal of Forecasting, 37(4), 1632-1653. https://doi.org/10.1016/j.ijforecast.2021.03.004
Nariswari, R., & Pudjihastuti, H. (2019). Bayesian Forecasting for Time Series of Count Data [The 4th International Conference on Computer Science and Computational Intelligence (ICCSCI 2019): Enabling Collaboration to Escalate Impact of Research Results for Society]. Procedia Computer Science, 157, 427-435. https://doi.org/10.1016/j.procs.2019.08.235
Nelder, J. A., & Wedderburn, R. W. M. (1972). Generalized Linear Models. Journal of the Royal Statistical Society. Series A (General), 135(3), 370-384. Retrieved January 13, 2024, from http://www.jstor.org/stable/2344614
Ng, A. Y., Katanforoosh, K., & Mourri, Y. B. (2023). Neural Networks and Deep Learning [MOOC]. Coursera. https://www.coursera.org/learn/neural-networks-deep-learning
Nie, Y., Nguyen, N. H., Sinthong, P., & Kalagnanam, J. (2023). A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
Nielsen, M. A. (2015). Neural Networks and Deep Learning. Determination Press
New York City Department of Transportation. (2017). Bicycle Counts for East River Bridges (Historical) [Daily total of bike counts conducted monthly on the Brooklyn Bridge, Manhattan Bridge, Williamsburg Bridge, and Queensboro Bridge]. https://data.cityofnewyork.us/Transportation/Bicycle-Counts-for-East-River-Bridges-Historical-/gua4-p9wg
Parr, T., & Howard, J. (2018). The Matrix Calculus You Need For Deep Learning
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825-2830
Phuong, M., & Hutter, M. (2022). Formal Algorithms for Transformers
R Core Team. (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, Austria. https://www.R-project.org/
Ruder, S. (2016). An overview of gradient descent optimization algorithms. CoRR, abs/1609.04747. http://arxiv.org/abs/1609.04747
Rue, H., Martino, S., & Chopin, N. (2009). Approximate Bayesian Inference for Latent Gaussian models by using Integrated Nested Laplace Approximations. Journal of the Royal Statistical Society Series B: Statistical Methodology, 71(2), 319-392. https://doi.org/10.1111/j.1467-9868.2008.00700.x
Sathish, V., Mukhopadhyay, S., & Tiwari, R. (2020). ARMA Models for Zero Inflated Count Time Series. https://doi.org/10.48550/ARXIV.2004.10732
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117. https://doi.org/10.1016/j.neunet.2014.09.003
Seabold, S., & Perktold, J. (2010). statsmodels: Econometric and statistical modeling with python. 9th Python in Science Conference
Shenstone, L., & Hyndman, R. J. (2005). Stochastic models underlying Croston’s method for intermittent demand forecasting. Journal of Forecasting, 24(6), 389-402. https://doi.org/10.1002/for.963
Shmueli, G., Bruce, P. C., Yahav, I., Patel, N. R., & Lichtendahl Jr., K. C. (2018). Data mining for business analytics. Wiley
Shmueli, G., & Lichtendahl, K. C. (2018). Practical time series forecasting with R. Axelrod Schnall Publishers
Shrivastava, S. (2020). Cross Validation in Time Series. https://medium.com/@soumyachess1496/cross-validation-in-time-series-566ae4981ce4
Smith, T. G. (2017). pmdarima: ARIMA estimators for Python
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(56), 1929-1958. http://jmlr.org/papers/v15/srivastava14a.html
Terven, J., Cordova-Esparza, D. M., Ramirez-Pedraza, A., & Chavez-Urbiola, E. A. (2023). Loss Functions and Metrics in Deep Learning
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 5998-6008. http://arxiv.org/abs/1706.03762
Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., ... SciPy 1.0 Contributors. (2020). SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17, 261-272. https://doi.org/10.1038/s41592-019-0686-2
Wen, Q., Zhou, T., Zhang, C., Chen, W., Ma, Z., Yan, J., & Sun, L. (2023). Transformers in Time Series: A Survey
Zeng, A., Chen, M.-H., Zhang, L., & Xu, Q. (2022). Are Transformers Effective for Time Series Forecasting? AAAI Conference on Artificial Intelligence. https://api.semanticscholar.org/CorpusID:249097444
Zhang, A., Lipton, Z. C., Li, M., & Smola, A. J. (2021). Dive into Deep Learning. arXiv preprint arXiv:2106.11342
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv Atribución-NoComercial 4.0 Internacional
dc.rights.uri.spa.fl_str_mv http://creativecommons.org/licenses/by-nc/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
rights_invalid_str_mv Atribución-NoComercial 4.0 Internacional
http://creativecommons.org/licenses/by-nc/4.0/
http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv openAccess
dc.format.extent.spa.fl_str_mv 1 recursos en línea (167 páginas)
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv Medellín - Ciencias - Maestría en Ciencias - Estadística
dc.publisher.faculty.spa.fl_str_mv Facultad de Ciencias
dc.publisher.place.spa.fl_str_mv Medellín, Colombia
dc.publisher.branch.spa.fl_str_mv Universidad Nacional de Colombia - Sede Medellín
institution Universidad Nacional de Colombia
bitstream.url.fl_str_mv https://repositorio.unal.edu.co/bitstream/unal/85925/1/license.txt
https://repositorio.unal.edu.co/bitstream/unal/85925/3/1152456210.2024.pdf
https://repositorio.unal.edu.co/bitstream/unal/85925/4/1152456210.2024.pdf.jpg
bitstream.checksum.fl_str_mv eb34b1cf90b7e1103fc9dfd26be24b4a
4c0201ec036c147f883503770c2e8d78
a19e8cf205a6c1f8e28a665691dfb85e
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv repositorio_nal@unal.edu.co
_version_ 1814089940437303296
spelling
Abstract: Forecasting time series of counts, with support on the non-negative integers, is a particular case of interest for optimal job assignment and inventory allocation according to expected demand, among other applications. To address this problem, statistical models such as autoregressive models for count data or dynamic generalized models have been proposed. On the other hand, methodologies based on machine learning algorithms have been applied, leveraging increasing computational power: recurrent neural networks, LSTM architectures, and architectures based on attention mechanisms, called Transformers. This study explores the problem of forecasting multiple time series of counts in parallel, applying statistical and machine learning methodologies to various simulation scenarios in which forecasting performance, the computational time demanded, and the effort to adapt each methodology to real cases are compared.
Degree: Magíster en Estadística (master's program). Research lines: Analítica; Procesos estocásticos. Curricular area: Estadística
Intended audience: administrators, students, researchers, teachers
Bitstreams: license.txt (text/plain; charset=utf-8, 5879 bytes, MD5 eb34b1cf90b7e1103fc9dfd26be24b4a); 1152456210.2024.pdf (Tesis Maestría en Ciencias - Estadística, application/pdf, 3388421 bytes, MD5 4c0201ec036c147f883503770c2e8d78); 1152456210.2024.pdf.jpg (generated thumbnail, image/jpeg, 4343 bytes, MD5 a19e8cf205a6c1f8e28a665691dfb85e)