Método basado en aprendizaje automático para la calificación de ensayos cortos en inglés de una muestra de estudiantes de bachillerato

Illustrations, diagrams

Authors:
Bofill Barrera, Joan Gabriel
Resource type:
Trabajo de grado - Maestría (Master's thesis)
Publication date:
2024
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/86173
Online access:
https://repositorio.unal.edu.co/handle/unal/86173
https://repositorio.unal.edu.co/
Keywords:
000 - Computer science, information and general works::004 - Data processing, computer science
370 - Education::373 - Secondary education
NLP (PLN)
Supervised learning
Transformers
Ensemble of models
SVR
Automatic essay grading
Kaggle contest
Evaluation methods
Data processing
Rights
openAccess
License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Translated title: Machine learning based method for scoring short English essays from a high school student sample
Advisor: Niño Vásquez, Luis Fernando
Referee: León Guzmán, Elizabeth
Research group: Laboratorio de Investigación en Sistemas Inteligentes (LISI)
Date accessioned/available: 2024-05-28
Date issued: 2024
Type: Trabajo de grado - Maestría (Master's thesis), accepted version
dc.relation.references.spa.fl_str_mv P. Kline, The New Psychometrics: Science, Psychology and Measurement. Routledge, 1 ed., 1999.
T. N. Fitria, “Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay.,” ELT Forum: Journal of English Language Teaching, vol. 12, no. 1, pp. 44–58, 2023.
E. B. Page, “Grading Essays by Computer: Progress Report. Proceedings of the 1966 Invitational Conference on Testing Problems.,” Princeton, N.J. Educational Testing Service, pp. 87–100, 1967.
E. Page, “The use of the computer in analyzing student essays,” Int Rev Educ, pp. 210–225, 1968.
K. L. Gwet, “Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters,” Advanced Analytics, LLC, 2014.
M. D. Shermis, “Contrasting State-of-the-Art in the Machine Scoring of Short-Form Constructed Responses,” Educational Assessment, vol. 20, no. 1, pp. 46–65, 2015.
A. Franklin, N. Rambis, M. M. Benner, P. Baffour, R. Holbrook, and S. Crossley, “Feedback Prize - English Language Learning,” Kaggle, 2022.
S. A. Crossley, K. Kyle, and D. S. Mcnamara, “To Aggregate or Not? Linguistic Features in Automatic Essay Scoring and Feedback Systems,” Grantee Submission, vol. 8, no. 1, 2015.
C. Ramineni and D. M. Williamson, “Automated essay scoring: Psychometric guidelines and practices,” Assessing Writing, pp. 25–39, 2013.
S. P. Balfour, “Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated PeerReview™,” Research & Practice in Assessment, pp. 40–48, 2013.
S. Cushing Weigle, “Validation of automated scores of TOEFL iBT tasks against non-test indicators of writing ability,” Language Testing, vol. 27, no. 3, pp. 335–353, 2010.
K. Taghipour, Robust trait-specific essay scoring using neural networks and density estimators. PhD thesis, National University of Singapore, Singapore, 2017.
H. Shi and V. Aryadoust, “Correction to: A systematic review of automated writing evaluation systems,” Education and Information Technologies, vol. 28, pp. 6189–6190, 5 2023.
P. C. Jackson, Toward human-level artificial intelligence: Representation and computation of meaning in natural language. Dover Publications, 11 2019.
E. Mayfield and C. P. Rosé, “LightSIDE,” in Handbook of Automated Essay Evaluation, Routledge, 1 ed., 2013.
M. Shermis and J. Burstein, Handbook of Automated Essay Evaluation: Current Applications and New Directions. Routledge, 1 ed., 2013.
S. Burrows, I. Gurevych, and B. Stein, “The Eras and Trends of Automatic Short Answer Grading,” Int J Artif Intell Educ, vol. 25, pp. 60–117, 2015.
K. Zupanc and Z. Bosnic, “Automated essay evaluation with semantic analysis,” Knowledge-Based Systems, vol. 120, pp. 118–132, 2017.
D. Yan, A. A. Rupp, and P. W. Foltz, Handbook of Automated Scoring; Theory into Practice. Chapman and Hall/CRC., 1 ed., 2020.
B. Kitchenham, O. Pearl Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, “Systematic literature reviews in software engineering – A systematic literature review,” Information and Software Technology, vol. 51, no. 1, pp. 7–15, 2009.
Y.-Y. Chen, C.-L. Liu, C.-H. Lee, and T.-H. Chang, “An unsupervised automated essay scoring system,” IEEE Intelligent Systems, vol. 25, no. 5, pp. 61–67, 2010.
Y. Wang, Z. Wei, Y. Zhou, and X. Huang, “Automatic essay scoring incorporating rating schema via reinforcement learning,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP, pp. 791–797, 2018.
C. Lu and M. Cutumisu, “Integrating Deep Learning into An Automated Feedback Generation System for Automated Essay Scoring,” International Educational Data Mining Society, 2021.
K. S. McCarthy, R. D. Roscoe, L. K. Allen, A. D. Likens, and D. S. McNamara, “Automated writing evaluation: Does spelling and grammar feedback support high-quality writing and revision?,” Assessing Writing, vol. 52, 4 2022.
A. Sharma and D. B. Jayagopi, “Automated grading of handwritten essays,” in Proceedings of International Conference on Frontiers in Handwriting Recognition, ICFHR, vol. 2018-August, pp. 279–284, 2018.
A. Vaswani, G. Brain, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention Is All You Need,” Neural Information Processing Systems, 2017.
T. Pedersen, S. Patwardhan, and J. Michelizzi, “WordNet::Similarity-Measuring the Relatedness of Concepts,” AAAI, vol. 4, pp. 25–29, 7 2004.
F. Dong and Y. Zhang, “Automatic Features for Essay Scoring – An Empirical Study,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, pp. 1072–1077, 11 2016.
Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “RoBERTa: A Robustly Optimized BERT Pretraining Approach,” arXiv preprint arXiv:1907.11692, 2019.
E. Mayfield and A. W. Black, “Should You Fine-Tune BERT for Automated Essay Scoring?,” Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, pp. 151–162, 7 2020.
D. Ramesh and S. K. Sanampudi, “An automated essay scoring systems: a systematic literature review,” Artificial Intelligence Review, vol. 55, pp. 2495–2527, 3 2022.
H. Yannakoudakis and R. Cummins, “Evaluating the performance of automated text scoring systems,” Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 213–223, 2015.
R. Bhatt, M. Patel, G. Srivastava, and V. Mago, “A Graph Based Approach to Automate Essay Evaluation,” in Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, vol. 2020-October, pp. 4379–4385, 2020.
Z. Ke and V. Ng, “Automated essay scoring: A survey of the state of the art,” in IJCAI International Joint Conference on Artificial Intelligence, vol. 2019-August, pp. 6300–6308, 2019.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” arXiv preprint arXiv:1810.04805, 2018.
D. Castro-Castro, R. Lannes-Losada, M. Maritxalar, I. Niebla, C. Pérez-Marqués, N. Álamo-Suárez, and A. Pons-Porrata, “A multilingual application for automated essay scoring,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 5290 LNAI, pp. 243–251, 2008.
P. U. Rodriguez, A. Jafari, and C. M. Ormerod, “Language models and Automated Essay Scoring,” ArXiv, 9 2019.
S. Ghannay, B. Favre, Y. Estève, and N. Camelin, “Word Embeddings Evaluation and Combination,” Proceedings of the Tenth International Conference on Language Resources and Evaluation, pp. 300–305, 5 2016.
M. Mars, “From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough,” Applied Sciences (Switzerland), vol. 12, 9 2022.
Y. Zhang, R. Jin, and Z.-H. Zhou, “Understanding bag-of-words model: a statistical framework,” International journal of machine learning and cybernetics, vol. 1, pp. 43–52, 12 2010.
K. W. CHURCH, “Word2Vec,” Natural Language Engineering, vol. 23, pp. 155–162, 1 2017.
J. Pennington, R. Socher, and C. D. Manning, “GloVe: Global Vectors for Word Representation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014.
V. Kumar and B. Subba, “A TfidfVectorizer and SVM based sentiment analysis framework for text data corpus,” in 2020 National Conference on Communications (NCC), pp. 1–6, IEEE, 2 2020.
Microsoft, “GitHub - Microsoft/LightGBM: Light Gradient Boosting Machine.”
G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, “LightGBM: A Highly Efficient Gradient Boosting Decision Tree,” 31st Conference on Neural Information Processing Systems (NIPS 2017), pp. 3149–3157, 2017.
T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, vol. 13-17-August-2016, pp. 785–794, Association for Computing Machinery, 8 2016.
J. J. Espinosa-Zúñiga, “Aplicación de algoritmos Random Forest y XGBoost en una base de solicitudes de tarjetas de crédito,” Ingeniería Investigación y Tecnología, vol. 21, no. 3, pp. 1–16, 2020.
C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, vol. 20, pp. 273–297, 1995.
M. Awad and R. Khanna, “Support Vector Regression,” in Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers, Apress, 2015.
M.-C. Popescu, V. E. Balas, L. Perescu-Popescu, and N. Mastorakis, “Multilayer Perceptron and Neural Networks,” WSEAS Transactions on Circuits and Systems, vol. 8, no. 7, pp. 579–588, 2009.
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language Models are Few-Shot Learners,” ArXiv, 5 2020.
T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. Von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, “Transformers: State-of-the-Art Natural Language Processing,” Association for Computational Linguistics, pp. 38–45, 2020.
P. He, X. Liu, J. Gao, and W. Chen, “DeBERTa: Decoding-enhanced BERT with Disentangled Attention,” conference paper at ICLR, 2021.
P. Zhang, “Longformer-based Automated Writing Assessment for English Language Learners (Stanford CS224N Custom Project),” tech. rep., Stanford, 2023.
K. K. Y. Chan, T. Bond, and Z. Yan, “Application of an Automated Essay Scoring engine to English writing assessment using Many-Facet Rasch Measurement,” Language Testing, vol. 40, pp. 61–85, 1 2023.
A. Mizumoto and M. Eguchi, “Exploring the potential of using an AI language model for automated essay scoring,” Research Methods in Applied Linguistics, vol. 2, 8 2023.
V. Mohan, M. J. Ilamathi, and M. Nithya, “Preprocessing Techniques for Text Mining-An Overview,” International Journal of Computer Science & Communication Networks, vol. 5, no. 1, pp. 7–16, 2015.
M. Siino, I. Tinnirello, and M. La Cascia, “Is text preprocessing still worth the time? A comparative survey on the influence of popular preprocessing methods on Transformers and traditional classifiers,” Information Systems, vol. 121, p. 102342, 3 2024.
S. A. Crossley, D. B. Allen, and D. S. McNamara, “Text readability and intuitive simplification: A comparison of readability formulas,” Reading in a Foreign Language, vol. 23, no. 1, pp. 84–101, 2011.
F. Scarselli and A. C. Tsoi, “Universal Approximation Using Feedforward Neural Networks: A Survey of Some Existing Methods, and Some New Results,” Neural Networks, vol. 11, no. 1, pp. 15–37, 1998.
P. He, J. Gao, and W. Chen, “DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing,” ArXiv, 11 2021.
D. Wu, S.-t. Xia, and Y. Wang, “Adversarial Weight Perturbation Helps Robust Generalization,” ArXiv, 4 2020.
H. Inoue, “Multi-Sample Dropout for Accelerated Training and Better Generalization,” ArXiv, 5 2019.
A. Stanciu, I. Cristescu, E. M. Ciuperca, and C. E. Cîrnu, “Using an ensemble of transformer-based models for automated writing evaluation of essays,” in 14th International Conference on Education and New Learning Technologies, (Palma, Spain), pp. 5276–5282, IATED, 7 2022.
H. Zhang, Y. Gong, Y. Shen, W. Li, J. Lv, N. Duan, and W. Chen, “Poolingformer: Long Document Modeling with Pooling Attention,” ArXiv, 2021.
A. Aziz, M. Akram Hossain, and A. Nowshed Chy, “CSECU-DSG at SemEval-2023 Task 4: Fine-tuning DeBERTa Transformer Model with Cross-fold Training and Multi-sample Dropout for Human Values Identification,” tech. rep., Department of Computer Science and Engineering, University of Chittagong, Chattogram, Bangladesh, 2023.
License: Attribution-NonCommercial 4.0 International, http://creativecommons.org/licenses/by-nc/4.0/
Access rights: openAccess (http://purl.org/coar/access_right/c_abf2)
Extent: vii, 61 pages
Format: application/pdf
Publisher: Universidad Nacional de Colombia
Program: Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
Faculty: Facultad de Ingeniería
Place: Bogotá, Colombia
Branch: Universidad Nacional de Colombia - Sede Bogotá
Full text: https://repositorio.unal.edu.co/bitstream/unal/86173/4/1032469305.2024.pdf
Repository: Repositorio Institucional Universidad Nacional de Colombia
Abstract: This work addresses the challenge of automatically grading argumentative essays in English written by high school students who are learning English as a second language. The general objective is to implement an automatic method based on supervised learning that solves this task for six indicators simultaneously (Cohesion, Syntax, Vocabulary, Grammar, Phraseology, and Conventions), each rated on a scale from 1 to 5. To achieve this, a descriptive analysis of the data is conducted, preprocessing procedures are applied, and relevant features are extracted. Different strategies, representation techniques, and models are explored, from classic approaches to the currently best-performing ones, evaluating the performance of each iteration against the human ratings. The best-performing method is then presented: it is based mainly on DeBERTa V3 Large, to which several techniques are applied to improve its performance, and it is combined with an SVR regressor that takes as features the concatenated embeddings of each text from 10 different pretrained models used without fine-tuning. With this strategy, the results come close to the human ratings, with a root mean square error (RMSE) of 0.45 across all indicators. (Text taken from the source.)

Degree: Magíster en Ingeniería - Ingeniería de Sistemas y Computación. Research line: Sistemas inteligentes.
La Cascia, “Is text preprocessing still worth the time? A comparative survey on the influence of popular preprocessing methods on Transformers and traditional classifiers,” Information Systems, vol. 121, p. 102342, 3 2024.S. A. Crossley, D. B. Allen, and J. S. Danielle McNamara, “Text readability and intuitive simplification: A comparison of readability formulas,” Reading in a foreign language, vol. 23, no. 1, pp. 84–101, 2011.F. Scarselli and A. C. Tsoi, “Universal Approximation Using Feedforward Neural Networks: A Survey of Some Existing Methods, and Some New Results,” Neural Networks, vol. 11, no. 1, pp. 15–37, 1998.P. He, J. Gao, and W. Chen, “DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing,” ArXiv, 11 2021.D. Wu, S.-t. Xia, and Y. Wang, “Adversarial Weight Perturbation Helps Robust Generalization,” ArXiv, 4 2020.H. Inoue, “Multi-Sample Dropout for Accelerated Training and Better Generalization,” ArXiv, 5 2019.A. Stanciu, I. Cristescu, E. M. Ciuperca, and C. E. Cˆırnu, “Using an ensemble of transformer-based models for automated writing evaluation of essays,” in 14th International Conference on Education and New Learning Technologies, (Palma, Spain), pp. 5276–5282, IATED, 7 2022.H. Zhang, Y. Gong, Y. Shen, W. Li, J. Lv, N. Duan, and W. Chen, “Poolingformer: Long Document Modeling with Pooling Attention,” ArXiv, 2021.A. Aziz, M. Akram Hossain, and A. Nowshed Chy, “CSECU-DSG at SemEval-2023 Task 4: Fine-tuning DeBERTa Transformer Model with Cross-fold Training and Multisample Dropout for Human Values Identification,” tech. rep., Department of Computer Science and Engineering University of Chittagong, Chattogram, Bangladesh, 2023.E. Mayfield and A. W. Black, “Should You Fine-Tune BERT for Automated Essay Scoring?,” Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, pp. 
151–162, 7 2020.EstudiantesInvestigadoresMaestrosPúblico generalLICENSElicense.txtlicense.txttext/plain; charset=utf-85879https://repositorio.unal.edu.co/bitstream/unal/86173/3/license.txteb34b1cf90b7e1103fc9dfd26be24b4aMD53ORIGINAL1032469305.2024.pdf1032469305.2024.pdfTesis de Maestría en Ingeniería - Ingeniería de Sistemas y Computaciónapplication/pdf2089722https://repositorio.unal.edu.co/bitstream/unal/86173/4/1032469305.2024.pdf01c50e2289445d13b5bde26ac84ff4fdMD54THUMBNAIL1032469305.2024.pdf.jpg1032469305.2024.pdf.jpgGenerated Thumbnailimage/jpeg5011https://repositorio.unal.edu.co/bitstream/unal/86173/5/1032469305.2024.pdf.jpg68ebc4cf77fb4d55d3c97838b915f4e2MD55unal/86173oai:repositorio.unal.edu.co:unal/861732024-05-28 23:04:21.074Repositorio Institucional Universidad Nacional de Colombiarepositorio_nal@unal.edu.coUEFSVEUgMS4gVMOJUk1JTk9TIERFIExBIExJQ0VOQ0lBIFBBUkEgUFVCTElDQUNJw5NOIERFIE9CUkFTIEVOIEVMIFJFUE9TSVRPUklPIElOU1RJVFVDSU9OQUwgVU5BTC4KCkxvcyBhdXRvcmVzIHkvbyB0aXR1bGFyZXMgZGUgbG9zIGRlcmVjaG9zIHBhdHJpbW9uaWFsZXMgZGUgYXV0b3IsIGNvbmZpZXJlbiBhIGxhIFVuaXZlcnNpZGFkIE5hY2lvbmFsIGRlIENvbG9tYmlhIHVuYSBsaWNlbmNpYSBubyBleGNsdXNpdmEsIGxpbWl0YWRhIHkgZ3JhdHVpdGEgc29icmUgbGEgb2JyYSBxdWUgc2UgaW50ZWdyYSBlbiBlbCBSZXBvc2l0b3JpbyBJbnN0aXR1Y2lvbmFsLCBiYWpvIGxvcyBzaWd1aWVudGVzIHTDqXJtaW5vczoKCgphKQlMb3MgYXV0b3JlcyB5L28gbG9zIHRpdHVsYXJlcyBkZSBsb3MgZGVyZWNob3MgcGF0cmltb25pYWxlcyBkZSBhdXRvciBzb2JyZSBsYSBvYnJhIGNvbmZpZXJlbiBhIGxhIFVuaXZlcnNpZGFkIE5hY2lvbmFsIGRlIENvbG9tYmlhIHVuYSBsaWNlbmNpYSBubyBleGNsdXNpdmEgcGFyYSByZWFsaXphciBsb3Mgc2lndWllbnRlcyBhY3RvcyBzb2JyZSBsYSBvYnJhOiBpKSByZXByb2R1Y2lyIGxhIG9icmEgZGUgbWFuZXJhIGRpZ2l0YWwsIHBlcm1hbmVudGUgbyB0ZW1wb3JhbCwgaW5jbHV5ZW5kbyBlbCBhbG1hY2VuYW1pZW50byBlbGVjdHLDs25pY28sIGFzw60gY29tbyBjb252ZXJ0aXIgZWwgZG9jdW1lbnRvIGVuIGVsIGN1YWwgc2UgZW5jdWVudHJhIGNvbnRlbmlkYSBsYSBvYnJhIGEgY3VhbHF1aWVyIG1lZGlvIG8gZm9ybWF0byBleGlzdGVudGUgYSBsYSBmZWNoYSBkZSBsYSBzdXNjcmlwY2nDs24gZGUgbGEgcHJlc2VudGUgbGljZW5jaWEsIHkgaWkpIGNvbXVuaWNhciBhbCBww7pibGljbyBsYSBvYnJhIH