Deep reinforcement learning for optimal gameplay in Street Fighter III: a resource-constrained approach
This bachelor's thesis investigates the performance of reinforcement learning (RL) algorithms in the context of fighting games, specifically Street Fighter III Third Strike, under heavy resource constraints. The research focuses on four distinct RL algorithms: Proximal Policy Optimization (PPO), Asynchronous Advantage Actor-Critic (A3C), Advantage Actor-Critic (A2C), and Deep Q-Network (DQN). Each algorithm is trained under the same restricted resource conditions to allow a fair comparison of their final or estimated performances. The training process combines batch training with a replay buffer for the off-policy DQN algorithm. The RL agents are trained with a reward function based on the game's health values: damage inflicted on the opponent yields a positive reward, and damage suffered by the agent yields a negative reward. The primary objective of this research is to determine which RL algorithm performs best under resource constraints and to identify the optimal training conditions for each; a secondary focus is to explore strategies that could improve the algorithms' performance by modifying the agent's reward and thereby its behavior. The research also explores the potential of a meta-agent that selects the best-performing agent based on the current game state, aiming to improve overall performance. The results of this project aim to contribute to the understanding and advancement of reinforcement learning in complex, dynamic, and discrete environments such as fighting games. The research also lays the groundwork for future investigations into the development of a meta-agent and the formulation of more effective reward functions.
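To make the health-based reward described in the abstract concrete, the following is a minimal sketch, not taken from the thesis itself, of how such a per-step reward could be computed. It assumes the environment exposes the current health values of both fighters at each step; the function name and signature are hypothetical.

```python
# Minimal sketch of a health-difference reward, assuming per-step access to both
# fighters' health bars (names and signature are illustrative, not from the thesis).

def health_based_reward(prev_agent_hp: float, agent_hp: float,
                        prev_opponent_hp: float, opponent_hp: float) -> float:
    """Reward = damage dealt to the opponent minus damage taken by the agent."""
    damage_dealt = prev_opponent_hp - opponent_hp  # opponent health lost this step
    damage_taken = prev_agent_hp - agent_hp        # agent health lost this step
    return damage_dealt - damage_taken
```

Under this scheme the reward is zero on steps where neither fighter takes damage, positive when the agent trades favorably, and negative when it absorbs more damage than it inflicts, which matches the sign convention stated in the abstract.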
- Authors: Zambrano Huertas, Daniel Ernesto; Díaz Salamanca, Jhoan Sebastián
- Advisor: Cardozo Álvarez, Nicolás
- Resource type: Undergraduate degree project (trabajo de grado de pregrado)
- Publication date: 2023 (issued 2023-08-16)
- Institution: Universidad de los Andes
- Program: Ingeniería de Sistemas y Computación, Facultad de Ingeniería
- Repository: Séneca: repositorio Uniandes
- Language: English (eng)
- Extent: 115 pages
- OAI identifier: oai:repositorio.uniandes.edu.co:1992/70987
- Online access: https://hdl.handle.net/1992/70987
- Keywords: Deep Reinforcement Learning; Machine Learning; VideoGames; FightGames; Discrete Spaces; Constrained Resources; RL Agent Policies; Ingeniería
- Rights: openAccess
- License: https://repositorio.uniandes.edu.co/static/pdf/aceptacion_uso_es.pdf