Análisis de dos algoritmos de Reinforcement Learning aplicados a OpenAi Gym Retro

This document describes the reinforcement learning algorithms PPO and DQN. Although the two are compared using OpenAI's Gym Retro framework (with the NES game Ice Climber), each admits so many optimizations and variants that the scope of the comparison is limited.
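The abstract centers the comparison on PPO and DQN. As a rough, self-contained illustration of PPO's core idea only, here is a minimal sketch of the clipped surrogate objective from Schulman et al. (2017), cited in the record's references; the function name and toy numbers are illustrative, not taken from the thesis:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from Schulman et al. (2017).

    ratio:     pi_new(a|s) / pi_old(a|s) for one sampled action
    advantage: estimated advantage A(s, a)
    eps:       clip range (0.2 is the paper's default)
    """
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    # Take the pessimistic (minimum) of the unclipped and clipped terms,
    # which removes the incentive to push the ratio far outside
    # [1 - eps, 1 + eps] in a single policy update.
    return min(ratio * advantage, clipped * advantage)

# With a positive advantage, gains are capped once the ratio exceeds 1 + eps:
print(ppo_clip_objective(1.5, 2.0))   # 2.4  (clipped at 1.2 * 2.0)
# With a negative advantage, the pessimistic unclipped term dominates:
print(ppo_clip_objective(1.5, -2.0))  # -3.0
```

In practice this per-sample term is averaged over a batch and maximized by gradient ascent; the sketch only shows the pointwise objective.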

Full description

Authors:
González Oviedo, Rodrigo José
Resource type:
Undergraduate thesis
Publication date:
2023
Institution:
Universidad de los Andes
Repository:
Séneca: repositorio Uniandes
Language:
spa
OAI Identifier:
oai:repositorio.uniandes.edu.co:1992/69749
Online access:
http://hdl.handle.net/1992/69749
Keywords:
Reinforcement learning
Nintendo entertainment system
Game agents
Artificial intelligence
IceClimber
DQN
PPO
Gym retro
Stable baselines
Engineering
Rights
openAccess
License
Attribution 4.0 International (CC BY 4.0)
id UNIANDES2_62839b20792d15b2508cd88e824eebe4
dc.title.none.fl_str_mv Análisis de dos algoritmos de Reinforcement Learning aplicados a OpenAi Gym Retro
dc.creator.fl_str_mv González Oviedo, Rodrigo José
dc.contributor.advisor.none.fl_str_mv Takahashi Rodríguez, Silvia
dc.contributor.author.none.fl_str_mv González Oviedo, Rodrigo José
dc.contributor.jury.none.fl_str_mv Takahashi Rodríguez, Silvia
dc.subject.keyword.none.fl_str_mv Reinforcement learning
Nintendo entertainment system
Game agents
Artificial intelligence
IceClimber
DQN
PPO
Gym retro
Stable baselines
dc.subject.themes.es_CO.fl_str_mv Ingeniería
description This document describes the reinforcement learning algorithms PPO and DQN. Although the two are compared using OpenAI's Gym Retro framework (with the NES game Ice Climber), each admits so many optimizations and variants that the scope of the comparison is limited.
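DQN builds on Watkins' Q-learning, which is cited in the references of this record. Purely as an illustration, and assuming a tabular setting rather than the deep network and Gym Retro environment the thesis uses, the underlying update can be sketched as:

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step (Watkins, 1989): a sample-based Bellman
    backup. DQN approximates this same target with a neural network,
    stabilized by experience replay (Lin, 1992) and a target network."""
    # Value of the greedy action in the next state (0 if unvisited).
    best_next = max(q[next_state].values()) if q[next_state] else 0.0
    td_target = reward + gamma * best_next
    # Move Q(s, a) a fraction alpha toward the TD target.
    q[state][action] += alpha * (td_target - q[state][action])

# Toy usage: one transition with reward 1.0 into an unvisited state.
q = defaultdict(lambda: defaultdict(float))
q_update(q, "s0", "jump", 1.0, "s1")
print(q["s0"]["jump"])  # 0.1
```

The state and action names here are placeholders; in the thesis's setting, states are preprocessed game frames and actions are NES controller inputs.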
publishDate 2023
dc.date.accessioned.none.fl_str_mv 2023-08-16T13:44:18Z
dc.date.available.none.fl_str_mv 2023-08-16T13:44:18Z
dc.date.issued.none.fl_str_mv 2023-08-15
dc.type.es_CO.fl_str_mv Trabajo de grado - Pregrado
dc.type.driver.none.fl_str_mv info:eu-repo/semantics/bachelorThesis
dc.type.version.none.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.coar.none.fl_str_mv http://purl.org/coar/resource_type/c_7a1f
dc.type.content.es_CO.fl_str_mv Text
dc.type.redcol.none.fl_str_mv http://purl.org/redcol/resource_type/TP
dc.identifier.uri.none.fl_str_mv http://hdl.handle.net/1992/69749
dc.identifier.instname.es_CO.fl_str_mv instname:Universidad de los Andes
dc.identifier.reponame.es_CO.fl_str_mv reponame:Repositorio Institucional Séneca
dc.identifier.repourl.es_CO.fl_str_mv repourl:https://repositorio.uniandes.edu.co/
dc.language.iso.es_CO.fl_str_mv spa
dc.relation.references.es_CO.fl_str_mv Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. Journal of artificial intelligence research, 4, 237-285.
Gym.openai.com. 2016. Gym: A toolkit for developing and comparing reinforcement learning algorithms. [online] Available at: <https://gym.openai.com/docs/> [Accessed 23 December 2021].
Pfau, V., Nichol, A., Hesse, C., Schiavo, L., Schulman, J. and Klimov, O., 2018. Gym Retro. [online] OpenAI. Available at: <https://openai.com/blog/gym-retro/> [Accessed 25 December 2021].
Lipovetzky, N., & Sardina, S. (2018). Pacman capture the flag in AI courses. IEEE Transactions on Games, 11(3), 296-299.
Nichol, A., Pfau, V., Hesse, C., Klimov, O., & Schulman, J. (2018). Gotta learn fast: A new benchmark for generalization in rl. arXiv preprint arXiv:1804.03720.
Alemán de León, C. D. Agente Sonic. Deep Reinforcement Learning
LeBlanc, D. G., & Lee, G. General Deep Reinforcement Learning in NES Games.
Bellman, R.E. 1957. Dynamic Programming. Princeton University Press, Princeton, NJ. Republished 2003: Dover
Dusparic, I. and Cardozo, N., 2021. ISIS 4222 RL Markov Decision Processes.
Dusparic, I. and Cardozo, N., 2021. ISIS 4222 RL Q learning.
Watkins, C.J.C.H. (1989). Learning from Delayed Rewards. PhD thesis, Cambridge University, Cambridge, England
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
Odemakinde, E., 2022. Model-Based and Model-Free Reinforcement Learning: Pytennis Case Study - neptune.ai. [online] neptune.ai. Available at: <https://neptune.ai/blog/model-based-and-model-free-reinforcement-learning-pytennis-case-study> [Accessed 8 June 2022].
Spinningup.openai.com. 2018. Part 2: Kinds of RL Algorithms - Spinning Up documentation. [online] Available at: <https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html> [Accessed 7 June 2022].
Watkins, Christopher J. C. H., and Peter Dayan. "Q-learning." Machine learning 8.3-4 (1992): 279-292.
Lin, L.-J. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293-321, 1992
Fedus, W., Ramachandran, P., Agarwal, R., Bengio, Y., Larochelle, H., Rowland, M., & Dabney, W. (2020, November). Revisiting fundamentals of experience replay. In International Conference on Machine Learning (pp. 3061-3071). PMLR.
Li, Y. (2017). Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., & Moritz, P. (2015, June). Trust region policy optimization. In International conference on machine learning (pp. 1889-1897). PMLR.
Estes, R. (2020) Rjalnev - DQN, GitHub. Available at: https://github.com/rjalnev (Accessed: 15 December 2021).
dc.rights.license.*.fl_str_mv Atribución 4.0 Internacional
dc.rights.uri.*.fl_str_mv http://creativecommons.org/licenses/by/4.0/
dc.rights.accessrights.none.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.coar.none.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.format.extent.es_CO.fl_str_mv 38 páginas
dc.format.mimetype.es_CO.fl_str_mv application/pdf
dc.publisher.es_CO.fl_str_mv Universidad de los Andes
dc.publisher.program.es_CO.fl_str_mv Ingeniería de Sistemas y Computación
dc.publisher.faculty.es_CO.fl_str_mv Facultad de Ingeniería
dc.publisher.department.es_CO.fl_str_mv Departamento de Ingeniería Sistemas y Computación
institution Universidad de los Andes
repository.name.fl_str_mv Repositorio institucional Séneca
repository.mail.fl_str_mv adminrepositorio@uniandes.edu.co
description (continued) This work explains in detail how the reinforcement learning algorithms DQN and PPO operate. Additionally, the two algorithms are compared using OpenAI's Gym Retro framework, training a game agent based on each of them.