A programming language for reinforcement learning

The increasing complexity of reinforcement learning (RL) algorithms has revealed a critical gap in the programming models and tools available to developers, leading to suboptimal implementations and a lack of standardization in RL software projects. This thesis proposes the design and implementation of a specialized programming language that provides higher-level abstractions tailored specifically for RL.

Full description

Authors:
Morillo Cervantes, Camilo Andrés
Resource type:
Undergraduate degree project
Publication date:
2025
Institution:
Universidad de los Andes
Repository:
Séneca: repositorio Uniandes
Language:
eng
OAI Identifier:
oai:repositorio.uniandes.edu.co:1992/75480
Online access:
https://hdl.handle.net/1992/75480
Keywords:
Reinforcement learning
Programming language
Abstraction
Ingeniería (Engineering)
Rights:
openAccess
License:
Attribution 4.0 International
id UNIANDES2_5f381356c7ded4b4bace813a29d89f6e
oai_identifier_str oai:repositorio.uniandes.edu.co:1992/75480
network_acronym_str UNIANDES2
network_name_str Séneca: repositorio Uniandes
repository_id_str
dc.title.eng.fl_str_mv A programming language for reinforcement learning
dc.creator.fl_str_mv Morillo Cervantes, Camilo Andrés
dc.contributor.advisor.none.fl_str_mv Cardozo Álvarez, Nicolás
dc.contributor.author.none.fl_str_mv Morillo Cervantes, Camilo Andrés
dc.subject.keyword.eng.fl_str_mv Reinforcement learning
Programming language
Abstraction
dc.subject.themes.spa.fl_str_mv Ingeniería
description The increasing complexity of reinforcement learning (RL) algorithms has revealed a critical gap in the programming models and tools available to developers, leading to suboptimal implementations and a lack of standardization in RL software projects. This thesis proposes the design and implementation of a specialized programming language that provides higher-level abstractions tailored specifically for RL. By addressing the representation of state and action spaces, the reward function, and the management of hyperparameters, our language aims to alleviate the burden on programmers, allowing them to concentrate on the intrinsic complexities of their algorithms rather than on the underlying details of RL. We develop language abstractions and data structures within the Racket programming environment to facilitate the effective expression of RL constructs. Additionally, a suite of test programs is created to evaluate the efficacy and usability of the proposed language. This work seeks to enhance the quality and accessibility of RL programming, fostering improved practices and innovations in the field.
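The abstract describes a language whose core abstractions are the state space, the action space, the reward function, and hyperparameter management. The thesis builds these abstractions in Racket; purely as an illustration of what such a declarative RL specification might look like, here is a hypothetical sketch in Python. Every name below (`RLSpec`, `q_learning`, the corridor example) is invented for illustration and does not come from the thesis.

```python
# Hypothetical illustration: an RL problem declared once as data
# (states, actions, reward, transition, hyperparameters), then handed
# to a generic solver. The thesis embeds its language in Racket; this
# Python version only sketches the style of abstraction described.
from dataclasses import dataclass, field
import random

@dataclass
class RLSpec:
    states: list            # explicit state space
    actions: list           # explicit action space
    transition: callable    # transition(state, action) -> next_state
    reward: callable        # reward(state, action, next_state) -> float
    hyperparams: dict = field(default_factory=dict)

def q_learning(spec, goal, episodes=500, max_steps=50):
    """Tabular Q-learning driven entirely by the declarative spec."""
    alpha = spec.hyperparams.get("alpha", 0.1)
    gamma = spec.hyperparams.get("gamma", 0.9)
    eps = spec.hyperparams.get("epsilon", 0.2)
    q = {(s, a): 0.0 for s in spec.states for a in spec.actions}
    starts = [s for s in spec.states if s != goal]
    for _ in range(episodes):
        s = random.choice(starts)
        for _ in range(max_steps):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(spec.actions)
            else:
                a = max(spec.actions, key=lambda act: q[(s, act)])
            s2 = spec.transition(s, a)
            r = spec.reward(s, a, s2)
            best_next = max(q[(s2, a2)] for a2 in spec.actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if s == goal:   # goal is terminal: end the episode
                break
    return q

# A one-dimensional corridor: states 0..4, move left/right,
# reward 1.0 for reaching state 4.
random.seed(0)
spec = RLSpec(
    states=list(range(5)),
    actions=["left", "right"],
    transition=lambda s, a: max(0, min(4, s + (1 if a == "right" else -1))),
    reward=lambda s, a, s2: 1.0 if s2 == 4 else 0.0,
    hyperparams={"alpha": 0.5, "gamma": 0.9, "epsilon": 0.2},
)
q = q_learning(spec, goal=4)
```

The point of the sketch is the separation of concerns: the problem definition (`RLSpec`) says nothing about the learning algorithm, so the same specification could be passed to different solvers while hyperparameters stay in one place.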
publishDate 2025
dc.date.accessioned.none.fl_str_mv 2025-01-20T14:11:43Z
dc.date.available.none.fl_str_mv 2025-01-20T14:11:43Z
dc.date.issued.none.fl_str_mv 2025-01-17
dc.type.none.fl_str_mv Trabajo de grado - Pregrado
dc.type.driver.none.fl_str_mv info:eu-repo/semantics/bachelorThesis
dc.type.version.none.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.coar.none.fl_str_mv http://purl.org/coar/resource_type/c_7a1f
dc.type.content.none.fl_str_mv Text
dc.type.redcol.none.fl_str_mv http://purl.org/redcol/resource_type/TP
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/1992/75480
dc.identifier.instname.none.fl_str_mv instname:Universidad de los Andes
dc.identifier.reponame.none.fl_str_mv reponame:Repositorio Institucional Séneca
dc.identifier.repourl.none.fl_str_mv repourl:https://repositorio.uniandes.edu.co/
dc.language.iso.none.fl_str_mv eng
dc.relation.references.none.fl_str_mv Nicolás Cardozo, Ivana Dusparic, and Christian Cabrera. Prevalence of code smells in reinforcement learning projects. In IEEE/ACM 2nd International Conference on AI Engineering – Software Engineering for AI (CAIN), 2023.
Lyn Dupré. BUGS in Writing: A Guide to Debugging Your Prose. Revised edition, 1998. ISBN 0-201-37921-X.
Damien Ernst and Arthur Louette. Introduction to reinforcement learning. In S. Feuerriegel, J. Hartmann, C. Janiesch, and P. Zschech, editors, pages 111–126, 2024.
Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, Shriram Krishnamurthi, Eli Barzilay, Jay McCarthy, and Sam Tobin-Hochstadt. The Racket manifesto. In 1st Summit on Advances in Programming Languages (SNAPL 2015). Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2015.
Joan Giner-Miguelez, Abel Gómez, and Jordi Cabot. A domain-specific language for describing machine learning datasets. Journal of Computer Languages, 76:101209, 2023.
Sagar Imambi, Kolla Bhanu Prakash, and G. R. Kanagachidambaresan. PyTorch. In Programming with TensorFlow: Solution for Edge Computing Applications, pages 87–104, 2021.
Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. RLlib: Abstractions for distributed reinforcement learning. In International Conference on Machine Learning, pages 3053–3062. PMLR, 2018.
Abhishek Nandy and Manisha Biswas. Reinforcement learning with Keras, TensorFlow, and ChainerRL. In Reinforcement Learning: With Open AI, TensorFlow and Keras Using Python, pages 129–153, 2018.
Kate Nussenbaum and Catherine A. Hartley. Reinforcement learning across development: What insights can we draw from a decade of research? Developmental Cognitive Neuroscience, 40:100733, 2019.
Bo Pang, Erik Nijkamp, and Ying Nian Wu. Deep learning with TensorFlow: A review. Journal of Educational and Behavioral Statistics, 45(2):227–248, 2020.
Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. Stable-Baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1–8, 2021.
William Strunk and E. B. White. The Elements of Style. Longman, fourth edition, 2000. ISBN 0-205-30902-X.
Mark Towers, Ariel Kwiatkowski, Jordan Terry, John U. Balis, Gianluca De Cola, Tristan Deleu, Manuel Goulão, Andreas Kallinteris, Markus Krimmel, Arjun KG, et al. Gymnasium: A standard interface for reinforcement learning environments. arXiv preprint arXiv:2407.17032, 2024.
dc.rights.en.fl_str_mv Attribution 4.0 International
dc.rights.uri.none.fl_str_mv http://creativecommons.org/licenses/by/4.0/
dc.rights.accessrights.none.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.coar.none.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.format.extent.none.fl_str_mv 33 pages
dc.format.mimetype.none.fl_str_mv application/pdf
dc.publisher.none.fl_str_mv Universidad de los Andes
dc.publisher.program.none.fl_str_mv Ingeniería de Sistemas y Computación
dc.publisher.faculty.none.fl_str_mv Facultad de Ingeniería
dc.publisher.department.none.fl_str_mv Departamento de Ingeniería de Sistemas y Computación
bitstream.url.fl_str_mv https://repositorio.uniandes.edu.co/bitstreams/9125cd43-b29b-4d5e-87d6-caaea8a3d0de/download
https://repositorio.uniandes.edu.co/bitstreams/578da201-def7-4dc2-a70c-dfd2e14d72b5/download
https://repositorio.uniandes.edu.co/bitstreams/d4abe52c-8e02-44af-ba16-d1c504f51b64/download
https://repositorio.uniandes.edu.co/bitstreams/55bc5a62-2331-4e2a-abcb-2cedde0a3ed7/download
https://repositorio.uniandes.edu.co/bitstreams/28e498dd-1823-448b-aef8-c4bb49436657/download
https://repositorio.uniandes.edu.co/bitstreams/942318c3-013d-4449-8dad-f9b0802b62f5/download
https://repositorio.uniandes.edu.co/bitstreams/81c9f051-45d8-4966-83bf-b194720a4a24/download
https://repositorio.uniandes.edu.co/bitstreams/4a8f136d-0b5a-41bf-8cc6-50e9fcc2ed7d/download
bitstream.checksum.fl_str_mv a3da7d993df25f34c1c86fa6f12cb209
84ab6f78b6f215f43579574a3766406b
0175ea4a2d4caec4bbcc37e300941108
ae9e573a68e7f92501b6913cc846c39f
1bf9b1c4074d9f27cc273178418e0709
c19ac71db4cb849b8ce8b71f92f8a3f7
f296c6efa77367b9a7afebc503a12629
e3db4ff7eeda924f49a609528991aafc
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositorio institucional Séneca
repository.mail.fl_str_mv adminrepositorio@uniandes.edu.co
_version_ 1828159279970910208