Natural Language Inference (NLI) via LLMs.

Natural Language Inference (NLI) is a fundamental task in Natural Language Processing that aims to determine whether a hypothesis can be inferred from a given premise. Although numerous experiments have evaluated this task with various language models, most of these efforts have focused on English, leaving Spanish relatively unexplored. This thesis evaluates the state of the art of the NLI task using Large Language Models (LLMs) for Spanish, implementing prompting strategies to maximize LLM performance on the task. LLM performance on NLI is determined for Few-Shot and Zero-Shot scenarios. The Spanish corpus was tested with GPT-4o-Mini, which achieved 60% accuracy on a human-validated subset of the corpus; from these results it was concluded that LLMs struggle to identify the Entailment label and show a preference for the Reasoning label over the others.

Author:
Pérez Terán, Nicolás
Resource type:
Undergraduate degree project
Publication date:
2025
Institution:
Universidad de los Andes
Repository:
Séneca: Uniandes institutional repository
Language:
eng
OAI Identifier:
oai:repositorio.uniandes.edu.co:1992/75670
Online access:
https://hdl.handle.net/1992/75670
Keywords:
Recognizing Textual Entailment
Natural Language Processing
AI
Research
Natural Language Inference
Large Language Models
Human Validation
Engineering
Rights
License
Attribution-ShareAlike 4.0 International
Advisor:
Manrique Piramanrique, Rubén Francisco
Description:
Natural Language Inference (NLI) is a fundamental task in Natural Language Processing that aims to determine whether a hypothesis can be inferred from a given premise. Although numerous experiments have evaluated this task with various language models, most of these efforts have focused on English, leaving Spanish relatively unexplored. This thesis evaluates the state of the art of the NLI task using Large Language Models (LLMs) for Spanish, implementing prompting strategies to maximize LLM performance on the task. LLM performance on NLI is determined for Few-Shot and Zero-Shot scenarios. The Spanish corpus was tested with GPT-4o-Mini, which achieved 60% accuracy on a human-validated subset of the corpus; from these results it was concluded that LLMs struggle to identify the Entailment label and show a preference for the Reasoning label over the others.
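To make the evaluation setup concrete, the following is a minimal sketch of how Zero-Shot and Few-Shot NLI prompts for Spanish premise/hypothesis pairs might be built and how a free-form model answer can be mapped back onto a label. The exact label set, prompt wording, and helper names here are illustrative assumptions, not the thesis's actual prompts.

```python
# Illustrative sketch only: the label set and prompt wording are assumptions,
# not the prompts used in the thesis.
LABELS = ["Entailment", "Contradiction", "Reasoning", "Neutral"]

def build_prompt(premise: str, hypothesis: str, examples=None) -> str:
    """Zero-Shot when `examples` is None; Few-Shot when demonstrations are given."""
    lines = [
        "Decide si la hipótesis se infiere de la premisa.",
        f"Responde con una de: {', '.join(LABELS)}.",
    ]
    # Few-Shot demonstrations are prepended in the same premise/hypothesis format.
    for ex_premise, ex_hypothesis, ex_label in (examples or []):
        lines += [f"Premisa: {ex_premise}",
                  f"Hipótesis: {ex_hypothesis}",
                  f"Etiqueta: {ex_label}"]
    lines += [f"Premisa: {premise}", f"Hipótesis: {hypothesis}", "Etiqueta:"]
    return "\n".join(lines)

def parse_label(answer: str) -> str:
    """Map a free-form model answer onto the first label it mentions."""
    lowered = answer.lower()
    for label in LABELS:
        if label.lower() in lowered:
            return label
    return "Neutral"  # fallback when no known label is recognized

zero_shot = build_prompt("Todos los gatos duermen.", "Mi gato duerme.")
few_shot = build_prompt(
    "Todos los gatos duermen.", "Mi gato duerme.",
    examples=[("Llueve mucho.", "El suelo está mojado.", "Entailment")],
)
```

The prompt string would then be sent to the model of choice (GPT-4o-Mini in the thesis) and the completion passed through `parse_label`; accuracy is the fraction of pairs whose parsed label matches the gold label.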
Date accessioned:
2025-01-27
Date issued:
2025-01-16
Date available:
2026-01-27
License URL:
http://creativecommons.org/licenses/by-sa/4.0/
Extent:
81 pages
Format:
application/pdf
Publisher:
Universidad de los Andes
Program:
Ingeniería de Sistemas y Computación
Faculty:
Facultad de Ingeniería
Department:
Departamento de Ingeniería de Sistemas y Computación