Adaptive fine-tuning of LLMs with QLoRA adapters for enhanced understanding in cooperative multi-agent scenarios

This work explores fine-tuning Large Language Models (LLMs) with QLoRA adapters to enhance performance in cooperative multi-agent scenarios. Using the Melting Pot framework, the approach integrates multiple indicators of collective welfare and agent comprehension into a unified signal that guides the selection of training examples. Fine-tuning the quantized Llama-3B models improved stability and performance, particularly in reward acquisition and equality maintenance. Although the results quantitatively support the positive effect of fine-tuning on collective well-being and cooperativity, training remains heavily dependent on the model's original state, which limits the spectrum of solutions and prevents agents from explicitly reasoning about the common good.
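
The record does not reproduce the thesis's selection code, so the following is only a minimal sketch of the idea in the abstract: several per-episode indicators of collective welfare and agent comprehension are folded into one scalar signal that ranks candidate training examples. The indicator names, the Gini-based equality term, the normalization, and the weights are all illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def rank_training_examples(episodes, weights=(0.5, 0.3, 0.2)):
    """Rank candidate episodes by a unified welfare/comprehension signal.

    Hypothetical sketch: each episode is assumed to be a dict with
    "per_agent_rewards" (list of floats) and "comprehension" (a score
    in [0, 1], e.g. the fraction of the agent's actions that were valid).
    """
    w_welfare, w_equality, w_comprehension = weights  # assumed weights
    scores = []
    for ep in episodes:
        rewards = np.asarray(ep["per_agent_rewards"], dtype=float)
        welfare = rewards.sum()  # collective welfare: total group reward
        # Equality as 1 - Gini coefficient of the per-agent rewards.
        diffs = np.abs(rewards[:, None] - rewards[None, :]).sum()
        gini = diffs / (2 * len(rewards) * rewards.sum() + 1e-8)
        equality = 1.0 - gini
        scores.append(w_welfare * welfare
                      + w_equality * equality
                      + w_comprehension * ep["comprehension"])
    # Highest unified signal first: these episodes become fine-tuning data.
    order = np.argsort(scores)[::-1]
    return [episodes[i] for i in order]
```

One design note: mixing a raw reward sum with [0, 1]-bounded terms only makes sense if rewards are first normalized per substrate; a real implementation would rescale each indicator before weighting.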

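For the QLoRA side, the references below cite Dettmers et al. (2023) and the Hugging Face PEFT library, so a standard 4-bit adapter setup is the likely shape of the pipeline. A minimal sketch, assuming a placeholder Llama checkpoint id and assumed LoRA hyperparameters (the record does not state the thesis's exact values):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with double quantization, as in QLoRA
# (Dettmers et al., 2023).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder model id: the abstract says "quantized Llama-3B models"
# without naming an exact checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter over the attention projections; r and alpha are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```

The base weights stay frozen in 4-bit, which is consistent with the abstract's observation that training remains strongly tied to the model's original state.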
Full description

Authors:
Gómez Barrera, Daniel Fernando
Resource type:
Undergraduate thesis (trabajo de grado de pregrado)
Publication date:
2024
Institution:
Universidad de los Andes
Repository:
Séneca: repositorio Uniandes
Language:
eng
OAI Identifier:
oai:repositorio.uniandes.edu.co:1992/74837
Online access:
https://hdl.handle.net/1992/74837
Keywords:
Artificial Intelligence
Cooperative AI
Multi-agent scenarios
Machine learning
Natural language processing
NLP
LLM
Large Language Models
Engineering (Ingeniería)
Rights
embargoedAccess
License
Attribution-ShareAlike 4.0 International (http://creativecommons.org/licenses/by-sa/4.0/)
Advisor:
Manrique Piramanrique, Rubén Francisco
Jury:
Manrique Piramanrique, Rubén Francisco
Research group / faculty:
Facultad de Ingeniería
Date accessioned:
2024-07-31
Date issued:
2024-07-30
Date accepted:
2024-07-31
Date available (embargo lifts):
2026-06-30
Version:
Accepted version (info:eu-repo/semantics/acceptedVersion)
Repository URL:
https://repositorio.uniandes.edu.co/
References:
Agapiou, J. P., Vezhnevets, A. S., Duéñez-Guzmán, E. A., Matyas, J., Mao, Y., Sunehag, P., et al. (2022). Melting Pot 2.0. Retrieved from https://arxiv.org/abs/2211.13746v6
AI@Meta. (2024). Llama 3 model card. Retrieved from https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md
Carroll, M., Shah, R., Ho, M. K., Griffiths, T., Seshia, S., Abbeel, P., & Dragan, A. (2019). On the utility of learning about humans for human-AI coordination. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 32). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2019/file/f5b1b89d98b7286673128a5fb112cb9a-Paper.pdf
Conitzer, V., & Oesterheld, C. (2023). Foundations of cooperative AI. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, pp. 15359-15367). AAAI Press. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/26791 doi: 10.1609/AAAI.V37I13.26791
Dafoe, A., Hughes, E., Bachrach, Y., Collins, T., McKee, K. R., Leibo, J. Z., . . . Graepel, T. (2020). Open problems in cooperative AI. Retrieved from https://arxiv.org/abs/2012.08630v1
Dettmers, T., Pagnoni, A., Holtzman, A., & Zettlemoyer, L. (2023). QLoRA: Efficient finetuning of quantized LLMs. Retrieved from https://arxiv.org/abs/2305.14314
Du, Y., Leibo, J. Z., Islam, U., Willis, R., & Sunehag, P. (2023). A review of cooperation in multi-agent learning. Retrieved from https://arxiv.org/abs/2312.05162v1
Gronauer, S., & Diepold, K. (2022). Multi-agent deep reinforcement learning: A survey. Artificial Intelligence Review, 55, 895-943. doi: 10.1007/s10462-021-09996-w
Heuillet, A., Couthouis, F., & Díaz-Rodríguez, N. (2021). Collective explainable AI: Explaining cooperative strategies and agent contribution in multiagent reinforcement learning with Shapley values.
Hong, S., Zhuge, M., Chen, J., Zheng, X., Cheng, Y., Zhang, C., . . . Schmidhuber, J. (2023). MetaGPT: Meta programming for a multi-agent collaborative framework. Retrieved from https://arxiv.org/abs/2308.00352v5
Hughes, E., Leibo, J. Z., Phillips, M., Tuyls, K., Dueñez-Guzman, E., Castañeda, A. G., . . . Graepel, T. (2018). Inequity aversion improves cooperation in intertemporal social dilemmas. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 31). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2018/file/7fea637fd6d02b8f0adf6f7dc36aed93-Paper.pdf
Mangrulkar, S., Gugger, S., Debut, L., Belkada, Y., Paul, S., & Bossan, B. (2022). PEFT: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft
Mosquera, M., Pinzon, J. S., Rios, M., Fonseca, Y., Giraldo, L. F., Quijano, N., & Manrique, R. (2024). Can LLM-augmented autonomous agents cooperate? An evaluation of their cooperative capabilities through Melting Pot. Retrieved from https://arxiv.org/abs/2403.11381
Panait, L., & Luke, S. (2005). Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11, 387-434. doi: 10.1007/s10458-005-2631-2
Park, J. S., O'Brien, J., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. Association for Computing Machinery. Retrieved from https://arxiv.org/abs/2304.03442v2 doi: 10.1145/3586183.3606763
Radke, D., & Tilbury, K. (2023). Learning to learn group alignment: A self-tuning credo framework with multiagent teams.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., . . . Polosukhin, I. (2023). Attention is all you need. Retrieved from https://arxiv.org/abs/1706.03762
Zhang, C., Yang, K., Hu, S., Wang, Z., Li, G., Sun, Y., . . . Yang, Y. (2023). ProAgent: Building proactive cooperative agents with large language models. Retrieved from https://arxiv.org/abs/2308.11339v3
Extent:
33 pages
Format:
application/pdf
Publisher:
Universidad de los Andes
Program:
Ingeniería de Sistemas y Computación
Faculty:
Facultad de Ingeniería
Department:
Departamento de Ingeniería de Sistemas y Computación